New lip-reading technology to catch inaudible sound


Scientists from the University of East Anglia (UEA) have developed a new lip-reading technology that can help in solving crimes and provide communication assistance for people with hearing and speech impairments.

The visual speech recognition technology, created by Dr Helen L. Bear and professor Richard Harvey, can be applied "any place where the audio isn't good enough to determine what people are saying."

Unique problems in determining speech arise when sound isn't available, such as in CCTV footage, or when the audio is inadequate and there are no clues to the context of a conversation.

“We are still learning the science of visual speech and what it is people need to know to create a fool-proof recognition model for lip-reading, but this classification system improves upon previous lip-reading methods by using a novel training method for the classifiers,” Dr Bear explained.

Potentially, a robust lip-reading system could be applied in a number of situations, from criminal investigations to entertainment. Lip-reading has been used to pinpoint words footballers have shouted in heated moments on the pitch, but it is likely to be of most practical use in situations where there are high levels of noise, such as in cars or aircraft cockpits.

"Such a system could be adapted for use for a range of purposes like for people with hearing or speech impairments. Alternatively, a good lip-reading machine could be part of an audio-visual recognition system,” Dr Bear added.

Lip-reading is one of the most challenging problems in artificial intelligence, so it's great to make progress on one of the trickier aspects, "which is how to train machines to recognise the appearance and shape of human lips," Professor Harvey noted.

The findings were scheduled to be presented at the International Conference on Acoustics, Speech and Signal Processing (ICASSP) in Shanghai on Friday. The paper was published in the Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing 2016.
