LipNet, lipreading program, University of Oxford, Oxford, United Kingdom


LipNet: How easy do you think lipreading is?

Published on Nov 4, 2016

This work was carried out at the University of Oxford Computer Science Department by Yannis Assael, Brendan Shillingford, Prof Shimon Whiteson and Prof Nando de Freitas. We thank Google DeepMind, CIFAR, and NVIDIA for financial support. We also thank the University of Sheffield and Jon Barker, Martin Cooke, Stuart Cunningham and Xu Shao for the GRID corpus dataset; Aine Jackson, Brittany Klug and Samantha Pugh for helping us measure the experienced-lipreader baseline; Mitko Sabev for his phonetics guidance; Odysseas Votsis for his video production help; and Alex Graves and Oiwi Parker Jones for helpful comments.

LipNet performs lipreading using machine learning, aiming to help those who are hard of hearing and to revolutionise speech recognition.

Abstract:
Lipreading is the task of decoding text from the movement of a speaker's mouth. Traditional approaches separated the problem into two stages: designing or learning visual features, and prediction. More recent deep lipreading approaches are end-to-end trainable (Wand et al., 2016; Chung & Zisserman, 2016a). All existing works, however, perform only word classification, not sentence-level sequence prediction. Studies have shown that human lipreading performance increases for longer words (Easton & Basala, 1982), indicating the importance of features capturing temporal context in an ambiguous communication channel. Motivated by this observation, we present LipNet, a model that maps a variable-length sequence of video frames to text, making use of spatiotemporal convolutions, an LSTM recurrent network, and the connectionist temporal classification loss, trained entirely end-to-end. To the best of our knowledge, LipNet is the first lipreading model to operate at the sentence level, using a single end-to-end speaker-independent deep model to simultaneously learn spatiotemporal visual features and a sequence model. On the GRID corpus, LipNet achieves 93.4% accuracy, outperforming experienced human lipreaders and the previous 79.6% state-of-the-art accuracy.
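For readers curious how the pieces named in the abstract fit together, here is a minimal PyTorch sketch of that style of pipeline: spatiotemporal (3D) convolutions over the video frames, a recurrent network for temporal context, and the connectionist temporal classification (CTC) loss. The class name, layer sizes, and input dimensions below are illustrative assumptions, not the paper's exact configuration.

# Minimal sketch of a LipNet-style pipeline (assumed PyTorch; sizes illustrative).
import torch
import torch.nn as nn

class LipReader(nn.Module):
    def __init__(self, vocab_size=28):  # e.g. 26 letters + space + CTC blank
        super().__init__()
        # 3D convolutions capture motion across time as well as space.
        self.conv = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),  # pool spatially, keep time
            nn.Conv3d(32, 64, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
        )
        # A recurrent network models temporal context over the conv features.
        self.rnn = nn.LSTM(input_size=64 * 25 * 12, hidden_size=256,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 256, vocab_size)

    def forward(self, x):
        # x: (batch, channels=3, time, height, width), e.g. mouth-region crops
        feats = self.conv(x)                      # (B, C, T, H, W)
        b, c, t, h, w = feats.shape
        feats = feats.permute(0, 2, 1, 3, 4).reshape(b, t, c * h * w)
        out, _ = self.rnn(feats)                  # (B, T, 2*hidden)
        return self.fc(out).log_softmax(-1)       # per-frame char log-probs

model = LipReader()
frames = torch.randn(2, 3, 75, 100, 50)          # 2 clips of 75 frames each
log_probs = model(frames).permute(1, 0, 2)       # CTC expects (T, B, V)
targets = torch.randint(1, 28, (2, 20))          # dummy character indices
loss = nn.CTCLoss(blank=0)(log_probs, targets,
                           input_lengths=torch.full((2,), 75),
                           target_lengths=torch.full((2,), 20))

The CTC loss is what makes end-to-end sentence-level training possible here: it marginalises over all alignments between the per-frame character predictions and the target text, so no frame-level labels are required.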
 
Article "Oxford University’s lip-reading AI is more accurate than humans, but still has a way to go"

by Dave Gershgorn
November 7, 2016

Article "Can deep learning help solve lip reading?"
New research paper shows AI easily beating humans, but there's still lots of work to be done

by James Vincent
November 7, 2016

Article "Is no secret safe? Lipreading robot proves MORE accurate than a human in deciphering speech"
LipNet could match videos with known sentences with 93.4% accuracy
The AI software uses a neural network to work out what it is seeing
It was trained with over 29,000 videos of volunteers giving commands
Researchers say it could be used in a range of applications, including silent dictation and improved hearing aids

by Ryan O'Hare
November 9, 2016
 

LipNet in autonomous vehicles | CES 2017

Published on Jan 6, 2017

LipNet performs lipreading using machine learning, aiming to help those who are hard of hearing and to revolutionise speech recognition.
 