The McGurk effect is a perceptual phenomenon in the understanding of human speech. The essence of the effect is that visual cues play an important role in a listener’s understanding of the spoken word. People depend on mouth and facial shape, as well as hearing, to understand a speaker’s meaning; in the absence of this information, miscommunication is more likely. The effect was first documented by researchers Harry McGurk and John MacDonald in 1976, and it is sometimes called the McGurk-MacDonald effect.
The McGurk effect was apparent to some people long before it was actually named. Deaf and hard-of-hearing people have been communicating by lip reading for centuries. An experienced lip reader picks up most of a speaker’s meaning by watching mouth and facial movements. Speech therapists and researchers studying this phenomenon soon realized it applied to individuals with normal hearing as well. That is, all people unconsciously practice a form of lip reading in the course of everyday conversation.
The McGurk effect is easily observed. In a normal conversation, the listener watches the speaker’s face. If the listener looks away, it takes greater concentration to follow the speaker’s words, and some sentences may need to be repeated. When the speaker’s face is visible, the listener will depend on facial and mouth movements, as well as the context and intent of the speaker, to fully understand the speech. This concept, universal to all spoken languages, is well known to the world’s deaf community and plays a role in sign language as well.
Researchers can demonstrate the McGurk effect more thoroughly with computer speech simulators. These programs project an image of a human face that is coordinated with pre-programmed spoken phrases. Unlike human speakers, they can also be programmed to show the mouth shape of a different sound than the one being spoken. When this happens, listeners are likely to perceive a third sound altogether; in the classic example, an audio “ba” paired with a face mouthing “ga” is heard as “da.” This has been observed even when listeners know what the simulator is doing, suggesting the McGurk effect is deeply rooted in human perception rather than conscious judgment.
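The mismatched pairings researchers use can be summarized as a simple mapping from an audio/visual syllable pair to the syllable listeners report hearing. The sketch below is a toy illustration, not a model of perception; the two fusion pairs come from McGurk and MacDonald’s 1976 report, and the fallback behavior is an assumption for the example.

```python
# Toy lookup of classic McGurk fusion percepts.
# The (audio, visual) -> perceived mappings are illustrative;
# real responses vary across listeners and stimuli.
FUSION_PERCEPTS = {
    ("ba", "ga"): "da",   # the original McGurk & MacDonald (1976) pairing
    ("pa", "ka"): "ta",   # analogous voiceless pairing from the same study
}

def perceived(audio: str, visual: str) -> str:
    """Return the percept for an audio/visual syllable pair.

    Congruent pairs are heard as spoken; the mismatched pairs above
    fuse into a third syllable; any other pair defaults to the audio
    (an assumption made for this sketch).
    """
    if audio == visual:
        return audio
    return FUSION_PERCEPTS.get((audio, visual), audio)

print(perceived("ba", "ba"))  # congruent: "ba"
print(perceived("ba", "ga"))  # McGurk fusion: "da"
```

Running the mismatched case shows the key point: neither the audio nor the visual syllable is reported; the listener “hears” a third one.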
The McGurk effect has been the subject of study by computer specialists working on voice-recognition software. It seems likely that to fully comprehend the vast nuances of human speech, such programs must take the McGurk effect into account. Programs in development will use a small camera to observe a person’s facial movements as he or she utters a command. The computer will then integrate this information with the recorded sound for a more accurate understanding of the command being spoken.
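One simple way such a program could combine the two channels is late fusion: score the candidate commands separately from the audio signal and from the lip movements, then take a weighted combination of the two score sets. The command labels, scores, and weighting below are entirely hypothetical, chosen only to illustrate the idea that visual evidence can change which command wins.

```python
# Hypothetical late fusion of audio and visual recognizer scores.
# All numbers are made-up posteriors over candidate commands.
audio_scores  = {"open": 0.45, "pause": 0.40, "close": 0.15}
visual_scores = {"open": 0.20, "pause": 0.70, "close": 0.10}

def fuse(audio, visual, audio_weight=0.6):
    """Weighted average of the two score dictionaries, per command."""
    w = audio_weight
    return {cmd: w * audio[cmd] + (1 - w) * visual[cmd] for cmd in audio}

fused = fuse(audio_scores, visual_scores)
best = max(fused, key=fused.get)
print(best, round(fused[best], 3))  # the audio alone favored "open"
```

Here the audio channel alone would pick "open", but the strong visual evidence for "pause" tips the fused decision the other way, mirroring how human listeners integrate lip shape with sound.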