Page 1 of 2
Results 1 to 10 of 11

Thread: Miscellaneous

  1. #1




    Artificial Intelligence in Video Games

    Uploaded on Apr 1, 2010

    Video game developer and Mad Doc Software founder Ian Davis lectures on interactive multimedia, game engineering, and development of characters in video games.

    Hosted by Metropolitan College Department of Computer Science on November 28, 2007.

  2. #2


    Flappy Bird Bot - A Robot that Plays Flappy Bird!

    Published on Feb 21, 2014

    "Flappy Bird Bot" is a robot that is designed to play flappy bird, a 2013 mobile game that is notorious for its difficulty level.
    It was done by two engineers from Xi'an in China. They spent about 4 days to finish the prototype.

  3. #3


    Towards knowledge transfer between robots: Computers teach each other Pac-Man

    Published on Mar 27, 2014

    PULLMAN, Wash. - Researchers in Washington State University's School of Electrical Engineering and Computer Science have developed a method to allow a computer to give advice and teach skills to another computer in a way that mimics how a real teacher and student might interact.

    For more about this story, see the article "Knowledge transfer: Computers teach each other Pac-Man," March 27, 2014.
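The core idea in the WSU work is a teacher agent advising a student agent on which action to take, under a limited advice budget. Below is a minimal, hypothetical sketch of that action-advising pattern: the environment (a one-dimensional corridor), the Q-learning settings, and all names are illustrative inventions, not the researchers' code.

```python
import random

random.seed(0)

# Toy illustration of action advising: a trained "teacher" suggests
# actions to a "student" Q-learner while an advice budget lasts.
ACTIONS = [-1, +1]          # move left / right along a 1-D corridor
GOAL, START, SIZE = 9, 0, 10

def q_learn(episodes, teacher_q=None, budget=0):
    """Tabular Q-learning; optionally follow teacher advice while budget lasts."""
    q = {(s, a): 0.0 for s in range(SIZE) for a in ACTIONS}
    for _ in range(episodes):
        s = START
        for _ in range(50):
            if teacher_q is not None and budget > 0:
                a = max(ACTIONS, key=lambda a: teacher_q[(s, a)])  # take advice
                budget -= 1
            elif random.random() < 0.2:
                a = random.choice(ACTIONS)                         # explore
            else:
                a = max(ACTIONS, key=lambda a: q[(s, a)])          # exploit
            s2 = min(max(s + a, 0), SIZE - 1)
            r = 1.0 if s2 == GOAL else -0.01                       # step penalty, goal reward
            q[(s, a)] += 0.5 * (r + 0.9 * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s = s2
            if s == GOAL:
                break
    return q

teacher = q_learn(200)                                 # teacher learns alone
student = q_learn(20, teacher_q=teacher, budget=100)   # student learns with advice
```

With advice, the student reaches the goal from its very first episodes instead of wandering, which is the "real teacher and student" dynamic the article describes.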

  4. #4


    Top 10 video games with the best AI

    Published on Jul 26, 2015

    Someday computers may rise up and kill us all, so it’s best to suck up to the most intelligent of them right now. For this list we’re looking for examples of virtual intelligence, whether used as friend or foe, that stand tall in the games we love. We're focusing strictly on dynamic and reactive events, where the AI can show its strengths on its own terms, so no scripted sequences here. Also, and perhaps most importantly, we’re not saying the AI in these games is perfect…

  5. #5


    Computer teaches itself to play games - BBC News

    Published on Mar 1, 2015

    Researchers say they have developed an artificial intelligence system that has taught itself how to win 1980s computer games. The computer program, which is inspired by the human brain, learned how to play 49 classic Atari games. In half, it was better than a professional human player. Google's DeepMind team said this was the first time a system had learned how to master a wide range of complex tasks.
    The study is published in the journal Nature.
    Dr Demis Hassabis, DeepMind's vice-president of engineering, showed the BBC's Pallab Ghosh how the AI had taught itself to excel at the classic Breakout.
    Article "Google's AI Masters Space Invaders (But It Still Stinks at Pac-Man)"

    by Tom Simonite
    February 25, 2015



    Google DeepMind's Deep Q-learning playing Atari Breakout

    Published on Mar 7, 2015

    Google DeepMind implemented an artificial intelligence program using deep reinforcement learning that plays Atari games and improves itself to a superhuman level. It is capable of playing many Atari games and uses a combination of deep artificial neural networks and reinforcement learning. After DeepMind presented its initial results with the algorithm, Google almost immediately acquired the company for several hundred million dollars, hence the name Google DeepMind. Please enjoy the footage and let me know if you have any questions regarding deep learning!
    Article "Researchers say this is the most impressive act of artificial intelligence they've ever seen"

    by Jennifer Welsh
    November 13, 2015

  6. #6

  7. #7


    Top 10 most helpful A.I. companions in video games

    Published on May 21, 2016

  8. #8


    Pong AI with policy gradients

    Published on May 28, 2016

    Trained for ~8000 episodes, each episode = ~30 games. Updates were done in batches of 10 episodes, so ~800 updates total. Policy network is a 2-layer neural net connected to raw pixels, with 200 hidden units. Trained with RMSProp and learning rate 1e-4. The final agent does not beat the hard-coded AI consistently, but holds its own. Should be trained longer, with ConvNets, and on GPU.

    This is the Atari 2600 version of Pong, using OpenAI Gym.
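The setup described above (2-layer net, 200 hidden units, RMSProp) can be sketched with the REINFORCE gradient at its core. A toy task stands in for Pong pixels here: observe a noisy vector and pick UP/DOWN, rewarded +1 for matching the sign of the first component. The task, the raised learning rate, and the episode count are assumptions so the toy converges quickly; only the architecture and optimizer mirror the description.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 4, 200                                # "pixel" dims, hidden units
W1 = rng.standard_normal((H, D)) / np.sqrt(D)
W2 = rng.standard_normal(H) / np.sqrt(H)
rms1, rms2 = np.zeros_like(W1), np.zeros_like(W2)
lr, decay = 1e-3, 0.99                       # lr raised from 1e-4 for this toy task

def policy(x):
    h = np.maximum(0, W1 @ x)                # ReLU hidden layer
    p = 1 / (1 + np.exp(-(W2 @ h)))          # P(action = UP)
    return p, h

for episode in range(5000):
    x = rng.standard_normal(D)
    p, h = policy(x)
    a = 1 if rng.random() < p else 0                    # sample UP (1) / DOWN (0)
    r = 1.0 if a == (1 if x[0] > 0 else 0) else -1.0    # reward: match sign of cue x[0]
    # REINFORCE: gradient of log-prob of the sampled action, scaled by reward
    dlogp = (a - p) * r
    gW2 = dlogp * h
    gW1 = np.outer(dlogp * W2 * (h > 0), x)
    for W, g, rms in ((W1, gW1, rms1), (W2, gW2, rms2)):
        rms[:] = decay * rms + (1 - decay) * g * g      # RMSProp accumulator
        W += lr * g / (np.sqrt(rms) + 1e-5)             # gradient *ascent* on reward
```

In the real Pong agent the same update runs over whole episodes with discounted returns as the reward signal; batching updates over 10 episodes, as the post describes, just accumulates these gradients before applying them.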

  9. #9

  10. #10


    Cargo-Bot
    July 5, 2013

    The object of Cargo-Bot is to write programs that control a robotic arm to move, sort, and stack colored crates. The computer language is a simple instruction set consisting of squares that tell the arm which direction to move, and whether or not to perform an action based on the color of the crate. You write the programs by dragging and dropping the instruction squares into a sequence that causes the arm to perform the assigned task. You can also write programs that execute other programs you've written. (This is important because each program has space for just 8 squares, so you need to be able to write efficient code to complete the challenges.)

    The challenges start out easy but become maddeningly difficult as you progress. With subroutines, if-then statements, and plenty of opportunities to practice debugging, it's a good way to get kids to think like a programmer. You can also record a video of your program in action and share it to YouTube.
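To make the "squares plus subroutines" model concrete, here is a hypothetical mini-interpreter for a Cargo-Bot-style language. The instruction names, the world model, and the fuel-based recursion limit are all inventions for illustration, not the game's actual implementation:

```python
# Each instruction square is (color_guard, op): the op runs only if the
# arm is holding a crate of that color (None = run unconditionally).
# Ops: "left"/"right" move the arm, "grab" picks up or drops a crate,
# and any other name calls another program (the game's subroutines;
# the real Cargo-Bot caps each program at 8 squares).

def run(programs, stacks, name="p1", arm=0, held=None, fuel=200):
    """Execute program `name` over crate `stacks`; fuel bounds recursion."""
    for cond, op in programs[name]:
        if fuel <= 0:
            break
        if cond is not None and cond != held:      # color guard on held crate
            continue
        if op == "left":
            arm = max(arm - 1, 0)
        elif op == "right":
            arm = min(arm + 1, len(stacks) - 1)
        elif op == "grab":                         # pick up or drop a crate
            if held is None and stacks[arm]:
                held = stacks[arm].pop()
            elif held is not None:
                stacks[arm].append(held)
                held = None
        else:                                      # call another program
            arm, held, fuel = run(programs, stacks, op, arm, held, fuel - 1)
        fuel -= 1
    return arm, held, fuel

# Move the top crate one pile to the right: grab, right, grab.
stacks = [["yellow"], []]
programs = {"p1": [(None, "grab"), (None, "right"), (None, "grab")]}
run(programs, stacks)
```

The three-square example leaves the yellow crate on the second pile; longer puzzles would chain programs together the way the description explains, trading square count for subroutine calls.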


