Google Seeks People With Down Syndrome To Help Train AIs To Understand Human Speech

The last half decade has ushered in the era of humans interacting with technology through speech, with Amazon’s Alexa, Apple’s Siri, and Google’s AI rapidly becoming ubiquitous elements of the human experience. But, while the migration from typing to voice has brought great convenience for some folks (and improved safety, in the case of people utilizing technology while driving), it has not delivered on its potential for the people who might otherwise stand to benefit the most from it: those of us with disabilities.

For people with Down Syndrome, for example, voice-based control of technology offers the promise of increased independence – and even of some new, potentially life-saving products. Yet, for this particular group of people, today’s voice-recognizing AIs pose serious problems, due to a combination of three factors:

1. Modern voice-recognition systems employ Artificial Intelligence (AI) and are taught to recognize speech by learning from collections of millions of recordings of people speaking; if few or none of those recordings come from people who speak with a particular, distinctive set of intonations, pronunciations, modulations, or other elements of vocalization, the AI is unlikely to learn to properly understand members of that group.

2. Because of several factors, including the fact that they frequently have smaller mouths and larger tongues than people without the condition, individuals with Down Syndrome often enunciate somewhat differently than do most of their neighbors.

3. The sample set that Google utilized to train its AI apparently contained relatively few speech samples produced by people with Down Syndrome – and, as a result, according to Google, its AI has proven unable to properly decode about one out of every three words spoken by someone with Down Syndrome (roughly a 33 percent word error rate; see the sketch after this list), rendering the technology effectively useless for a group of people who could otherwise stand to benefit tremendously from it.
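
The "one out of every three words" figure corresponds to a word error rate (WER) of roughly 33 percent, the standard metric used to evaluate speech recognizers. The minimal sketch below shows how that metric is computed; the example sentences are invented for illustration and are not drawn from Google's data.

```python
# Minimal sketch: word error rate (WER) = (substitutions + deletions + insertions)
# divided by the number of words in the reference transcript.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Compute WER between a reference transcript and a recognizer's output."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()

    # Levenshtein edit distance over words, via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


if __name__ == "__main__":
    reference = "turn on the kitchen lights please"       # what the speaker said
    hypothesis = "turn on the kitchen light police"       # what the AI heard
    print(f"WER: {word_error_rate(reference, hypothesis):.0%}")  # -> WER: 33%
```

A recognizer that misses two words out of every six, as in the toy example above, is wrong about one word in three – enough to make a voice interface unreliable for everyday commands.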

To address this issue, and as a step toward ensuring that people whose health conditions cause AIs to misunderstand them can utilize modern technology, Google is partnering with the Canadian Down Syndrome Society. Through an effort called Project Understood, Google hopes to obtain recordings of people with Down Syndrome reading simple phrases, and to use those recordings to help train its AI to understand the speech patterns common to those with Down Syndrome. The effort is an extension of Google’s own Project Euphonia, which seeks to improve computers’ ability to understand diverse speech patterns, including impaired speech, and which, earlier this year, began training AIs to recognize communication from people with the neurodegenerative condition ALS, commonly known as Lou Gehrig’s Disease.

So far, 300 people have contributed recordings to Project Understood – and, of course, the more people who do “lend their voices” to the effort, the better the AI will likely become at understanding people with Down Syndrome. Project Understood hopes to obtain at least 500 recordings from the public in the near future.

If you, or a loved one, have Down Syndrome, and wish to help with this effort, please visit the Project Understood website at https://projectunderstood.ca/.
