Voice Interface vs Visual Interface

[Image: the three wise monkeys - see no evil, hear no evil, speak no evil. Credit: https://infusions.deviantart.com/]

And will ‘Smell no evil’ and ‘Taste no evil’ be the next two monkeys?

We’ll come back to the monkeys later.

The duel between voice and visual interfaces basically comes down to the age-old question of visual communication versus audio communication.

Before we get into it, let’s go back in time. Which came first? How did humans first communicate? Was it through visuals such as cave paintings (which are basically icons and graphic symbols), or through noises (grunts and a basic spoken language)?

“Most biologists, linguists, and anthropologists believe language occurred well before cave art. The gene believed to be responsible for language acquisition, the FOXP2 gene, has been found in Neanderthals” — Brian Collins

Ref: https://www.quora.com/What-appeared-first-cave-art-visual-language-or-speech-verbal-language

However, with that noted:

“Language need not have started in a spoken modality; sign language may have been the original language (e.g., Corballis 2002). The presence of speech supports the presence of language, but not vice versa.” (Johansson, 2013, p. 56)

So, it is most likely that the first human-to-human communications were a mixture of visual sign language accompanied by basic vocal noises.

THE PRESENT

The rise of the internet, texting, and then smartphones has led to our heavy reliance on visual interfaces. Huge investments have been made in these areas, with most UX design focussing on driving seamless, visual, interactive experiences.

All this is about to change with the introduction of digital voice technology. Enter Google Home et al. and the virtual assistants, all now focused on voice. This shift is leading companies to ask whether they should continue to invest in the visual interface, or whether budgets should shift to voice.

Gartner estimates that:

30% of all searches will be conducted without a screen by 2021

https://www.mediapost.com/publications/article/291913/gartner-predicts-30-of-searches-without-a-screen.htm

Surely much more focus needs to be placed on voice as the new interface.

This leads us to bigger questions:

  1. How does designing UX and UI for sound/voice change our role and the tools we use?
  2. How prepared for the shift is the UX industry?

The basic user-centric design principles and methodology will remain, but with a shift of emphasis from visual representations to language- and sound-based ones.
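To make that concrete, here is a minimal, hypothetical sketch in Python of what this shift can look like in code: where a visual interface maps taps and clicks on screen regions to actions, a voice interface maps utterance patterns to intents and slots. The patterns and intent names below are invented purely for illustration.

```python
import re

# Hypothetical utterance patterns mapped to intent names (illustrative only).
INTENTS = {
    r"\b(turn|switch) on the (?P<device>\w+)\b": "device_on",
    r"\b(turn|switch) off the (?P<device>\w+)\b": "device_off",
    r"\bwhat('s| is) the weather\b": "weather_query",
}

def resolve_intent(utterance):
    """Return (intent_name, slots) for the first matching pattern, else None."""
    for pattern, intent in INTENTS.items():
        match = re.search(pattern, utterance.lower())
        if match:
            return intent, match.groupdict()
    # No match: a voice UI has no visible affordances to fall back on,
    # so the design problem becomes crafting a graceful spoken re-prompt.
    return None

print(resolve_intent("Please turn on the lights"))
# -> ('device_on', {'device': 'lights'})
```

Note the design consequence in the fall-through branch: with no screen, error handling has to be spoken, which is exactly the kind of shift of emphasis described above.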

What effect will chatting to a machine have on language? We’re all well aware of the effects texting and instant messaging have had! LOL

If this trend is anything to go by, will we need to develop a sound version of icons and emojis? An audio version of shorthand?
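Research into auditory display already has a name for this: ‘earcons’, short, structured audio cues that play the role icons play on screen. As a toy sketch, using only the Python standard library (the frequencies, motifs, and file names are arbitrary choices of mine), two short tone sequences could stand in for ‘success’ and ‘error’:

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second, mono

def write_earcon(path, freqs, note_len=0.15):
    """Write a sequence of short sine tones to a WAV file."""
    frames = bytearray()
    n = int(SAMPLE_RATE * note_len)
    for freq in freqs:
        for i in range(n):
            # Short fade in/out on each note to avoid audible clicks.
            envelope = min(1.0, min(i, n - i) / (0.1 * n))
            sample = 0.5 * envelope * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            frames += struct.pack("<h", int(sample * 32767))
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)   # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))

# A rising motif for success, a falling one for failure (arbitrary choices).
write_earcon("success.wav", [660, 880])
write_earcon("error.wav", [440, 330])
```

Just as icon sets converged on shared visual conventions, an audio shorthand would need equally consistent sonic conventions: rising for success, falling for failure, and so on.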

What will the sound version of apps be? But let’s not get into the current debate on the death of apps!

Too many questions, sorry. Let’s get back on track!

THE OTHER MONKEYS

Of course it’s not only voice that needs to be considered. In the past few years, dramatic technical advances have made interfaces more and more immersive, utilising more human senses:

  • Vision - 360 video, VR and AR (especially with the iPhone’s AR capabilities)
  • Sound - Watson, Google Home, Alexa etc.
  • Touch (touchscreen) - OK, this one is a bit of a stretch and doesn’t even have a monkey.

Still, with this increased utilisation of the senses, how long will it be before the other two, taste and smell, get in on the action?

You could say crude versions of this are already happening, with VR experiences like Marriott Hotels’ pop-up Innovation Lab, which includes tasting and smelling.

http://www.eventmarketer.com/article/marriott-mobile-innovation-lab/

But this is just the beginning. What could happen next is the really exciting bit.

THE FUTURE

We should be mindful that the comparison between voice and visual may ‘soon’ be a thing of the past. Indeed, even the discussion around the competing senses may be irrelevant as brain interfaces enter the market. Hooking up people’s brains to a computer is nothing new, and rudimentary versions have been around for a while.

Brain-Computer Interfaces Are Already Here

And now Mr Musk has entered the fray by funding ‘medical research’ startup Neuralink, which is developing neural laces.

His goal? To merge the brain and AI.

These ultra-thin pieces of mesh are inserted into the brain through a tiny needle. They then unravel and literally mesh with the brain tissue, eventually being absorbed as part of it. In theory, this will allow the brain to connect wirelessly to a computer. In the future we may be able to communicate with a computer (and therefore with each other) through our thoughts.

According to Robert Scoble, brain interfaces will be the fifth major wave of user interfaces: first the command line, then graphical user interfaces, then touch, then AR and VR.

We humans have always used a mix of our senses to interact and communicate with each other. The same is true of interacting with technology, and it always will be. But as technology progresses and systems plug into our brains, will thought also have to be included as a new sense?

In the future, if we can plug straight into the processing areas of the brain, could we fool it into experiencing artificially generated sense triggers (à la The Matrix)? That would cut out the middlemen of eyes, ears, nerve endings, noses, and taste buds, bringing into play the full set of our senses, including smell and taste.

The medical benefits could be huge. Not just the accessibility possibilities it opens up for people with mobility, vision, and hearing impairments, but bionic eyes, ears, and limbs wired into the brain, as the line between biological and digital intelligence blurs. This could lead to treatments for disabilities and degenerative neurological diseases; some, like Parkinson’s, are already being helped by chip implants emitting electronic signals.

Brain-computer interfaces throw up even more interesting design questions. Even with a direct link, will some iteration of the external visual interface screen remain? Or will we see the visual interface inside our heads too? A perfect version of VR, or an AR version with mixed realities experienced simultaneously?

Will we need to see/hear confirmation that our intent has been actioned, or will we simply just be able to feel/know it has?

And what about VC? Not the medal, but Virtual Consciousness? Will future brain research be able to create a Virtual Consciousness? Virtual Subconsciousness? Virtual Personalities? Ok, I’ll stop there.

Of course, there are some pretty big problems to be solved and a lot of work to be done before we wake up to a telepathic world, communicating through thoughts. Our brains are very complicated and delicate organs.

Are people even going to want an implant inserted into their head just to upgrade their personal software?

This article seems to have turned into a list of questions rather than answers! So, in keeping with the theme, I’ll end on one. If brain-computer interfaces are just a matter of time, how do we even begin to get our heads around designing for a thought-based interface?

Now there’s a challenge.
