Page 1 of 2
Results 1 to 10 of 13

Thread: Miscellaneous

  1. #1


    Speech Recognition Breakthrough for the Spoken, Translated Word

    Published on Nov 8, 2012

    Chief Research Officer Rick Rashid demonstrates a speech recognition breakthrough: machine translation that converts his spoken English into computer-generated spoken Chinese. The system is built on deep neural networks and significantly reduces errors in both spoken and written translation.
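The demo chains three stages: recognize the English speech, translate the text, then synthesize Chinese audio. A minimal sketch of that cascade, with hypothetical stub functions standing in for the real recognizer, translator, and synthesizer (none of these names come from the talk):

```python
# Hypothetical stubs illustrating the speech-to-speech cascade; the real
# system uses deep-neural-network models at each stage.

def recognize(audio):
    """Speech -> English text (stub)."""
    return "hello"

def translate(text):
    """English -> Chinese text (stub with a one-entry dictionary)."""
    return {"hello": "你好"}.get(text, text)

def synthesize(text):
    """Text -> spoken output (stub; returns a placeholder token)."""
    return f"<audio:{text}>"

def speech_to_speech(audio):
    # The cascade: errors in early stages propagate to later ones, which is
    # why reducing recognition error improves the whole pipeline.
    return synthesize(translate(recognize(audio)))
```

The composition makes the key engineering point visible: recognition errors feed directly into translation, so the neural-network gains in the first stage compound through the pipeline.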

  2. #2

    Natural Language Interaction with ISA Handling and Consequence Reasoning

    Published on Aug 3, 2015

    Here the DIARC architecture controls a Nao robot in a simple natural-language interaction. The robot obeys both literal and non-literal commands, and it appropriately rejects commands based on reasoning about action effects. In this case, it rejects commands that could result in harm to itself (walking into solid objects).
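The rejection behaviour amounts to predicting an action's effects and refusing when a predicted effect is harmful. A toy sketch of that decision rule (not the actual DIARC code; the command names and effect model are invented for illustration):

```python
# Toy consequence-reasoning sketch: predict effects, reject harmful ones.

def predict_effects(command, world):
    """Hypothetical effect model: walking toward a solid object causes a collision."""
    if command == "walk forward" and world.get("obstacle_ahead"):
        return {"collision"}
    return set()

def decide(command, world, harmful=frozenset({"collision"})):
    """Accept the command unless a predicted effect is in the harmful set."""
    bad = predict_effects(command, world) & harmful
    if bad:
        return "reject: action could cause harm ({})".format(", ".join(sorted(bad)))
    return "accept"
```

For example, `decide("walk forward", {"obstacle_ahead": True})` rejects, while the same command with a clear path is accepted.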

  3. #3

    Mato voice recognition

    Published on May 28, 2016

    This is my first test of the MOVI voice recognition (and speech) shield for the Arduino. It will be used on "Little Friend" on

  4. #4

  5. #5
    Article "Smartphone speech recognition can write text messages three times faster than human typing"
    Smartphone speech recognition software is not only three times faster than human typing, it's also more accurate. The researchers hope the finding spurs the development of innovative applications of speech recognition technology.

    by Bjorn Carey
    August 24, 2016

  6. #6

    Lip reading sentences in the wild

    Published on Nov 17, 2016

    Lip Reading Sentences in the Wild - Joon Son Chung, Andrew Senior, Oriol Vinyals, Andrew Zisserman


    The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem – unconstrained natural language sentences, and in-the-wild videos.

    Our key contributions are: (1) a ‘Watch, Listen, Attend and Spell’ (WLAS) network that learns to transcribe videos of mouth motion to characters; (2) a curriculum learning strategy to accelerate training and to reduce overfitting; (3) a ‘Lip Reading Sentences’ (LRS) dataset for visual speech recognition, consisting of over 100,000 natural sentences from British television.

    The WLAS model trained on the LRS dataset surpasses the performance of all previous work on standard lip reading benchmark datasets, often by a significant margin. This lip reading performance beats a professional lip reader on videos from BBC television, and we also demonstrate that visual information helps to improve speech recognition performance even when the audio is available.
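The "Attend" step in WLAS weights the encoded input frames by their relevance to the character being emitted. A minimal dot-product attention in NumPy, shown as an assumed form for illustration (the paper's exact scoring function and network details differ):

```python
import numpy as np

def attend(query, encoder_states):
    """Dot-product attention: weight encoder time steps by similarity to the query.

    query:           (d,)  decoder state for the current output character
    encoder_states:  (T, d) one encoded vector per input frame
    """
    scores = encoder_states @ query            # (T,) similarity per time step
    weights = np.exp(scores - scores.max())    # numerically stable softmax
    weights /= weights.sum()                   # weights sum to 1 over time
    context = weights @ encoder_states         # (d,) weighted sum of frames
    return weights, context
```

At each decoding step the context vector summarises the frames most relevant to the next character, which is what lets the model transcribe mouth motion to text one character at a time.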

  7. #7
    Article "Learning words from pictures"
    System correlates recorded speech with images, could lead to fully automated speech recognition.

    by Larry Hardesty
    December 6, 2016

  8. #8

    How to make a simple TensorFlow speech recognizer

    Published on Dec 9, 2016

    In this video, we'll make a super simple speech recognizer in 20 lines of Python using the TensorFlow machine learning library. I go over the history of speech recognition research, then explain (and rap about) how we can build our own speech recognition system using the power of deep learning.
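The video builds its recognizer with TensorFlow; the shape of any such system is the same: turn audio into spectrogram features, then classify them. A framework-free sketch of that pipeline using NumPy, with a nearest-neighbour lookup standing in for the neural network (the feature sizes and helper names here are illustrative, not from the video):

```python
import numpy as np

def spectrogram(signal, frame=128, hop=64):
    """Magnitude spectrogram: short overlapping frames, FFT magnitude per frame."""
    frames = [signal[i:i + frame] for i in range(0, len(signal) - frame + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def nearest_label(spec, examples):
    """1-nearest-neighbour over flattened spectrograms (a stand-in for the
    trained network: real systems learn this mapping instead of memorising it)."""
    flat = spec.ravel()
    dists = {label: np.linalg.norm(flat - s.ravel()) for label, s in examples.items()}
    return min(dists, key=dists.get)
```

Swapping the nearest-neighbour step for a trained network is exactly the part TensorFlow handles in the video; the feature-extraction front end stays much the same.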

  9. #9

  10. #10

    An AI has learned how to pick a single voice out of a crowd

    Published on Oct 25, 2017

    The artificial intelligence can solve the "cocktail party problem" with 90 percent accuracy and will soon be installed in public places.
    Article "An AI has learned how to pick a single voice out of a crowd"

    by Richard Gray
    October 24, 2017
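One classic way to pick a voice out of a mixture is time-frequency masking: keep the spectrogram cells where the target speaker dominates and zero out the rest. A toy binary-mask sketch in NumPy, shown as an illustration of the general idea rather than the system in the article (which learns its separation with a neural network):

```python
import numpy as np

def separate(mix_spec, target_spec, interference_spec):
    """Toy binary-mask separation.

    Keep time-frequency cells of the mixture where a model of the target
    speaker is louder than a model of the interference; zero the rest.
    All inputs are magnitude spectrograms of the same shape.
    """
    mask = target_spec > interference_spec   # boolean ideal binary mask
    return mix_spec * mask                   # suppress interference-dominated cells
```

In a learned system, a network predicts the mask directly from the mixture; the masking step itself is the same elementwise multiply.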

