Echo! Can You Understand Me Now?

by Dagmar Munn

Last week, I stepped onto my soapbox (mine has special safety handrails) and voiced my opinion about an issue that’s been bugging me for several years. It’s a problem I share with about 7.5 million other people who have trouble using their voices and cannot get voice-activated devices to understand their commands.

Miscommunication

My voice problem is dysarthria, a common symptom of ALS. At first, it made for a few funny moments. I’d speak a command to my Echo Show and laugh at the mismatch of its reply. That novelty quickly wore off.

Frustrated with the one-sided game and the device’s penchant for ignoring me altogether, I was ready to pitch it into the garbage bin. But an invitation to join Project Euphonia, sponsored by Google to help teach its software how to better understand me, saved the day.

Over the past year, I logged into my weekly sessions, read aloud a random series of sentences shown on the screen, and felt satisfied that I was part of the solution. As a side benefit, I believe that reading out loud is helping me practice good speech habits.

An invitation

Back to last week: I opened an email from a reporter at The Wall Street Journal asking to interview me for a story about how technology companies are working to train their voice assistants to better understand people with dysarthria and other nonstandard speech.

I said yes, of course, and we quickly set up a date and time for a call.

That’s when I felt anxiety and a fear of failing welling up. Would she understand me? Or would I come across as garbled, sounding like the mismatched words my Echo Show spoke back to me?

From past experience, I’ve learned that my success at communicating goes way up when I can add facial expressions and body language to the conversation. Too often on a phone call, I hear the other person answer with a generic “uh-huh,” and I know they really didn’t understand me at all.

So, I suggested we do the interview online, with cameras on. She agreed.

Talking together

On the day of the interview, I sat down, did a few body stretches and some deep breathing exercises, and crossed my fingers.

Suddenly, a friendly face was smiling back at me, all the way from the U.K. We said hello, and I don’t remember much else. The 30 minutes flew by as the front of my brain babbled on. Occasionally, the back of my brain checked in with commands to breathe more slowly, enunciate each word, and use a little less body language, please.

We discussed many topics, especially focusing on how people with speech problems may be the group that benefits the most from voice recognition technology.

Devices and technology in the home that can control the lights, door locks, curtains, room temperature, and even Roombas have become essential for many people living with chronic diseases and related physical disabilities.

Having devices that understand our voice commands should be a given. Too many of us use the demoralizing workaround of programming the text-to-speech app on one device to “speak” the command to another device.

You can read the full article and my interview here.

Technology will always evolve, and if our future includes voice-activated devices, we want the technology companies to include us. Because we all want to live well while we live with ALS.

When Dagmar was diagnosed with ALS at the age of 59 in 2010, she tapped into her nearly 30 years of professional experience. She not only follows her own wellness and fitness advice but also inspires and teaches others to do the same. Dagmar is a patient columnist at BioNews, writing “Living Well with ALS.” In addition, she is one of the moderators for the ALS News Today Forum and writes a personal blog called “ALS and Wellness.” She lives in Arizona, where she enjoys finding humor in life’s situations and spends her free time pursuing creative projects in fiber arts.
