
Thread: Miscellaneous

  1. #1




    Artificial Intelligence in Video Games

    Uploaded on Apr 1, 2010

    Video game developer and Mad Doc Software founder Ian Davis lectures on interactive multimedia, game engineering, and character development in video games.

    Hosted by Metropolitan College Department of Computer Science on November 28, 2007.

  2. #2


    Flappy Bird Bot - A Robot that Plays Flappy Bird!

    Published on Feb 21, 2014

    "Flappy Bird Bot" is a robot that is designed to play flappy bird, a 2013 mobile game that is notorious for its difficulty level.
    It was done by two engineers from Xi'an in China. They spent about 4 days to finish the prototype.

  3. #3


    Towards knowledge transfer between robots: Computers teach each other Pac-Man

    Published on Mar 27, 2014

    PULLMAN, Wash. - Researchers in Washington State University's School of Electrical Engineering and Computer Science have developed a method to allow a computer to give advice and teach skills to another computer in a way that mimics how a real teacher and student might interact.

    For more about this story, see the article "Knowledge transfer: Computers teach each other Pac-Man" (March 27, 2014). A rough sketch of the teacher-student idea is included below.
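
    The story describes a teacher-student setup in which one learning agent gives advice to another. Below is a rough, hypothetical sketch of that general idea (action advising with a limited budget), written here for illustration only; it is not the researchers' code, and every class and function name is made up.

```python
# Toy sketch of teacher-student "action advising" (illustrative only; this is
# not the WSU researchers' code, and every name here is hypothetical).
# An experienced agent corrects a novice's action choices, spending from a
# limited advice budget, while the novice keeps learning from its own
# experience with ordinary Q-learning.
import random

ACTIONS = ["up", "down", "left", "right"]

class QAgent:
    """Minimal tabular Q-learning agent over discrete states and actions."""
    def __init__(self, epsilon=0.1, alpha=0.5, gamma=0.9):
        self.q = {}              # (state, action) -> estimated value
        self.epsilon = epsilon   # exploration rate
        self.alpha = alpha       # learning rate
        self.gamma = gamma       # discount factor

    def value(self, state, action):
        return self.q.get((state, action), 0.0)

    def best_action(self, state):
        return max(ACTIONS, key=lambda a: self.value(state, a))

    def choose(self, state):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)   # explore
        return self.best_action(state)      # exploit

    def learn(self, state, action, reward, next_state):
        best_next = max(self.value(next_state, a) for a in ACTIONS)
        old = self.value(state, action)
        self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)

class Teacher:
    """Wraps an already-trained agent and overrides the student while budget lasts."""
    def __init__(self, trained_agent, budget=100):
        self.agent = trained_agent
        self.budget = budget

    def advise(self, state, student_choice):
        suggestion = self.agent.best_action(state)
        if self.budget > 0 and suggestion != student_choice:
            self.budget -= 1
            return suggestion       # spend advice to override the student
        return student_choice       # otherwise the student acts on its own
```

    Capping the budget is one common way to keep the teacher from simply playing the game for the student; the student still has to learn the task from its own rewards.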

  4. #4


    Top 10 video games with the best AI

    Published on Jul 26, 2015

    Someday computers may rise up and kill us all, so it's best to suck up to the most intelligent of them right now. For this list we're looking for examples of virtual intelligence, whether used as friend or foe, that stand tall in the games we love. We're focusing strictly on dynamic and reactive events, where the AI can show its strengths on its own terms, so no scripted sequences here (a toy illustration of that distinction follows below). Also, and perhaps most importantly, we're not saying the AI in these games is perfect…
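
    Since the list leans on the difference between scripted sequences and reactive AI, here is a tiny made-up illustration of that distinction (nothing below comes from the video): a scripted enemy replays a fixed sequence no matter what, while a reactive one picks its behaviour from what it currently observes.

```python
# Hypothetical illustration of "scripted" versus "reactive" game AI.
# None of this comes from the video; it only shows the distinction.

SCRIPTED_PATROL = ["walk_left", "walk_left", "turn", "walk_right", "walk_right", "turn"]

def scripted_guard(tick):
    """Plays back a fixed sequence regardless of the game state."""
    return SCRIPTED_PATROL[tick % len(SCRIPTED_PATROL)]

def reactive_guard(state):
    """Chooses an action from current observations (a tiny state machine)."""
    if state["player_visible"] and state["distance"] < 2:
        return "attack"
    if state["player_visible"]:
        return "chase"
    if state["heard_noise"]:
        return "investigate"
    return "patrol"

# The reactive guard responds to the player; the scripted one cannot.
print(scripted_guard(tick=3))  # walk_right
print(reactive_guard({"player_visible": True, "distance": 5, "heard_noise": False}))  # chase
```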

  5. #5


    Computer teaches itself to play games - BBC News

    Published on Mar 1, 2015

    Researchers say they have developed an artificial intelligence system that has taught itself how to win 1980s computer games. The computer program, which is inspired by the human brain, learned how to play 49 classic Atari games. In half, it was better than a professional human player. Google's DeepMind team said this was the first time a system had learned how to master a wide range of complex tasks.
    The study is published in the journal Nature.
    Dr Demis Hassabis, DeepMind's vice-president of engineering, showed the BBC's Pallab Ghosh how the AI had taught itself to excel at the classic Breakout.
    Article "Google's AI Masters Space Invaders (But It Still Stinks at Pac-Man)"

    by Tom Simonite
    February 25, 2015



    Google DeepMind's Deep Q-learning playing Atari Breakout

    Published on Mar 7, 2015

    Google DeepMind implemented an artificial intelligence program using deep reinforcement learning that plays Atari games and improves itself to a superhuman level. It is capable of playing many Atari games and uses a combination of deep artificial neural networks and reinforcement learning. After the team presented their initial results with the algorithm, Google almost immediately acquired the company for several hundred million dollars, hence the name Google DeepMind. Please enjoy the footage and let me know if you have any questions regarding deep learning! (A minimal sketch of the deep Q-learning setup is included at the end of this post.)
    Article "Researchers say this is the most impressive act of artificial intelligence they've ever seen"

    by Jennifer Welsh
    November 13, 2015
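
    For anyone wondering what the combination of neural networks and reinforcement learning looks like in code, below is a minimal sketch of the deep Q-learning recipe: a network that estimates action values, an experience-replay buffer, epsilon-greedy play, and Bellman-target updates against a periodically synced target network. It is illustrative only, not DeepMind's code; RandomEnv is a made-up placeholder standing in for an Atari emulator.

```python
# Minimal deep Q-learning (DQN) sketch in PyTorch -- illustrative only, not
# DeepMind's code.  RandomEnv is a hypothetical stand-in for an Atari emulator.
import random
from collections import deque

import torch
import torch.nn as nn

class RandomEnv:
    """Placeholder environment: 4-dim observations, 2 actions, random rewards."""
    def reset(self):
        return torch.randn(4)
    def step(self, action):
        return torch.randn(4), random.random(), random.random() < 0.05  # obs, reward, done

class QNet(nn.Module):
    """Small fully connected network estimating a value for each action."""
    def __init__(self, obs_dim=4, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, n_actions))
    def forward(self, x):
        return self.net(x)

env, q_net, target_net = RandomEnv(), QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay, gamma, epsilon = deque(maxlen=10_000), 0.99, 0.1

obs = env.reset()
for step in range(1000):
    # Epsilon-greedy: mostly take the action the network currently rates best.
    if random.random() < epsilon:
        action = random.randrange(2)
    else:
        with torch.no_grad():
            action = q_net(obs).argmax().item()

    next_obs, reward, done = env.step(action)
    replay.append((obs, action, reward, next_obs, done))
    obs = env.reset() if done else next_obs

    if len(replay) >= 64:
        # Learn from a random minibatch of past transitions (experience replay).
        s, a, r, s2, d = zip(*random.sample(replay, 64))
        s, s2 = torch.stack(s), torch.stack(s2)
        a = torch.tensor(a)
        r = torch.tensor(r, dtype=torch.float32)
        d = torch.tensor(d, dtype=torch.float32)

        # Bellman target: reward plus discounted value of the best next action.
        with torch.no_grad():
            target = r + gamma * (1 - d) * target_net(s2).max(dim=1).values
        predicted = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        loss = nn.functional.mse_loss(predicted, target)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    if step % 200 == 0:
        target_net.load_state_dict(q_net.state_dict())  # periodic target sync
```

    In the real system the network is a convolutional net reading raw screen pixels and training runs for millions of frames; the skeleton above only shows how the pieces fit together.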
