What I learned about voice tech at Voice Summit 2018
A few weeks ago, I attended Voice Summit 2018, the largest voice tech event in the world. It was a fantastic event that had over 2500 attendees, 200 speakers, and major sponsors like Amazon, Prudential, Microsoft, Audible and more. The inaugural event gave me a multidimensional perspective of the current state of Voice tech (e.g. voice assistants, chatbots, etc.) and a glimpse into its future.
Speakers at the event covered a range of topics, including natural language conversation analysis, AI and machine learning, product management, microphone technology, ethics, accessibility, healthcare applications, and much more.
Here are some interesting things I took away.
Conversation (or language) is advanced human technology.
One of the challenges with the current generation of voice/chatbot interfaces is that they lack the flexibility to handle the wide range of responses an average adult can.
A seemingly simple question like, “What would you like to eat?” can actually draw a wide range of responses such as:
- Sushi.
- Italian food.
- Something healthy.
- I want it delivered.
- I don’t know… Anything.
People constantly learn new information that they will later draw on in conversation with others. As a result, conversations can branch infinitely into any subject. The ability to respond to those unexpected branches is what separates an average adult's conversational ability from the current generation of voice tech.
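To make the brittleness concrete, here is a toy keyword-based intent matcher for the "What would you like to eat?" question above. Everything in it is illustrative (the function names and cuisine list are my own invention, not any real assistant's API), but it shows how open-ended answers like "I don't know… Anything." fall straight through to a fallback:

```python
# Toy keyword-based "intent matching", purely illustrative.
# Rigid matching handles direct answers but not open-ended branches.

KNOWN_CUISINES = {"sushi", "italian", "thai", "mexican"}

def match_food_intent(utterance: str) -> str:
    """Return a recognized cuisine, or a fallback when the reply
    branches somewhere the designer didn't anticipate."""
    words = utterance.lower().replace(".", " ").split()
    for word in words:
        if word in KNOWN_CUISINES:
            return word
    return "fallback"

print(match_food_intent("Sushi."))                     # → sushi
print(match_food_intent("Italian food."))              # → italian
print(match_food_intent("I don't know... Anything."))  # → fallback
```

Replies like "Something healthy" or "I want it delivered" are perfectly reasonable answers to the question, yet a matcher like this can only re-prompt. Handling them requires the kind of learned, contextual language understanding discussed next.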
AI & Machine Learning is the driver of next-gen Voice Assistants and Chatbots.
So how do we develop voice tech that can handle such complexity? We teach our tech to “learn” and “apply” on its own.
It’s no surprise that Google and Amazon are at the forefront of AI, Machine Learning, and Voice Assistants. They have massive amounts of data that Google Assistant and Amazon Alexa can draw from to “learn” and “apply” to conversations.
As Google and Amazon teach their voice tech, it’s likely that they will eventually serve as platform infrastructure that everyone can build upon. It’s already begun with Google Assistant’s smart-speaker partnerships and Alexa Skills development.
The ethics of AI are attracting a lot of discussion.
Despite news headlines, conscious AI that can take over humanity probably won’t happen anytime soon. As a matter of fact, it seems that designers are prioritizing the better parts of humanity when creating voice/chatbot tech.
I had the opportunity to hear Lauren Lucchese, Head of AI Content Design at Capital One, speak about their chatbot, Eno. They designed Eno to be smart, helpful, and humane. Eno is gender neutral, likes cats, and doesn’t judge your financial history. I wonder how many actual humans could do that last one. Visit Capitalone.com for a closer look at how Eno converses.
Voice and Chatbot tech will bring the power of the internet to more people.
Many people face impediments to engaging with computers: some don’t want to learn how to use one, and some have physical disabilities (e.g. blindness) that make it difficult. Natural language interfaces (conversations) have the potential to overcome those impediments. As long as a person can converse, they’ll be able to harness the power and convenience of the internet.
Voice tech is growing FAST.
In a survey conducted by Voicebot and Voysis, a staggering 54.4 million of the US’ 252 million adults (roughly one in five) own smart speakers (voice-assistant-powered speakers).
And smart speaker ownership almost tripled between May 2017 and May 2018, according to a study from NPR and Edison Research.
Voice Tech is doing what technology has always done: helping people do more.
When I see the results of a study by Voicebot, RAIN Agency, and Pullstring showing how smart speakers aid our everyday lives, I see a productive future aided by smart, helpful tech that, literally and figuratively, speaks my language.
As we approach the next milestone in truly conversational voice-tech, I’m excited to see what lies beyond. Maybe walking, talking robots that we see in movies? I’d love to hear your ideas.