Voice recognition gives devices like smartphones, tablets, and computers the ability to “listen” to their users’ voices, whether for spoken commands or for dictating speech into text. Pretty amazing when you think about it; our devices act as assistants, “understanding” what we say and following our commands!

Siri is currently the most famous face of voice recognition, residing in Apple’s products. Since her grand debut in 2010, she’s won the hearts of young and old alike while taking the assistive technology (AT) world by storm. Siri gives some people with disabilities the power to use a smartphone and other devices as easily as everyone else. She is especially helpful for individuals with limited mobility, memory issues, learning disabilities, and more. Check out the video below to see a few of Siri’s voice commands.

 

http://youtu.be/U-JK49q8XW4

 

So, what’s wrong with Siri and other voice recognition platforms? Well, although she’s close to perfect, she has one flaw. Christopher Mims sums it up beautifully:

The problem isn’t voice recognition software per se, which is more accurate than ever. The problem is that voice recognition is still a challenging enough problem, computationally, that all the major consumer platforms that do it—whether built by Google, Apple or Microsoft with the new Xbox—must send a compressed recording of your voice to servers hundreds or thousands of miles away. There, computers more powerful than your phone or game console transform it into text or a command. It’s that round trip, especially on slower cellular connections, that makes voice recognition on most devices so slow.
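The round trip Mims describes can be sketched as a toy simulation. Everything here is an illustrative assumption (the function names, the stubbed transcription, and the delay values are invented for the sketch, not measurements from any real platform):

```python
import time

def compress_audio(raw_audio: bytes) -> bytes:
    # Pretend compression: shrink the recording before uploading it.
    return raw_audio[: len(raw_audio) // 4]

def cloud_transcribe(compressed: bytes, network_delay_s: float) -> str:
    # Simulate the trip to a remote server and back.
    time.sleep(network_delay_s)       # upload
    text = "navigate home"            # server-side recognition (stubbed)
    time.sleep(network_delay_s)       # download
    return text

def recognize(raw_audio: bytes, network_delay_s: float):
    # Total latency = compression + upload + recognition + download.
    start = time.monotonic()
    text = cloud_transcribe(compress_audio(raw_audio), network_delay_s)
    return text, time.monotonic() - start

text, fast = recognize(b"\x00" * 4000, network_delay_s=0.01)  # good WiFi
_, slow = recognize(b"\x00" * 4000, network_delay_s=0.2)      # slow cellular
print(text, slow > fast)
```

Even with the server-side recognition stubbed out to nothing, the slow-connection run takes far longer: the network round trip, not the recognition itself, dominates.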

Where does that bring us? Intel recently revealed the Intel Edison, a 400MHz computer board that fits into an SD card. The Intel Edison will be the core of many of Intel’s newly announced smart products, and a huge component in Siri’s competition. Intel partnered with an undisclosed third party to develop voice recognition software that runs on an Intel mobile processor and does not require a trip to the cloud and back to do its job.

(Enter Jarvis, stage right.) Jarvis is the prototype born from Intel’s mash-up with its unnamed partner. (And yes, he’s named after Iron Man’s personal assistant, J.A.R.V.I.S.) He is a headset that pairs with the user’s phone to act as an assistant that can both listen to commands and respond in his own voice. Jarvis is also more of a listener and conversationalist: he listens all the time and is rumored to respond to more questions and commands than Siri, and in a more “human” way.

Jarvis works with and without the cloud. Let’s say you’re in the middle of the woods on a beautiful hike; it’s just you and nature, no distractions. You realize this peace-and-quiet thing isn’t for you. Time to turn around! No WiFi or data connection? No problem! Jarvis can navigate you back home without any connectivity at all, and with NO delay! One annoying downfall of current voice recognition software that relies on the cloud is the delay that occurs during use, especially when connectivity to the cloud is limited. You could say Siri needs to get her head out of the clouds! (Har-har-har!)
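Jarvis’s with-or-without-the-cloud behavior amounts to a simple fallback policy: use the remote servers when a connection exists, and stay on the device when it doesn’t. Here is a minimal sketch of that policy with hypothetical stub recognizers (neither reflects Intel’s actual implementation):

```python
def cloud_transcribe(audio: bytes) -> str:
    # Stub for server-side recognition: more compute, but needs a connection.
    return "navigate home"

def local_transcribe(audio: bytes) -> str:
    # Stub for on-device recognition: runs on the local processor, no round trip.
    return "navigate home"

def recognize(audio: bytes, online: bool) -> str:
    # Prefer the cloud when a connection exists; otherwise stay on-device.
    if online:
        return cloud_transcribe(audio)
    return local_transcribe(audio)

# Deep in the woods with no signal, recognition still works locally.
print(recognize(b"...", online=False))
```

The point of the sketch is that the offline path never touches the network, so there is no round-trip delay to wait out.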

How does the world of AT fit into this picture? In many ways! Jarvis is just the beginning of where voice recognition is headed. With greater flexibility and continued advancements, voice recognition could become more personalized to the user: understanding their specific voice better, getting to know them and their environment on a more personal level, and combining various environmental controls into one. Jarvis could become closer to an assistant or companion than a piece of software, and I can’t wait for that day.
