Australian Centre for Robotic Vision, Brisbane, Queensland, Australia


Learning to play Breakout with Baxter

Published on Dec 4, 2016

Paper: "A Robustness Analysis of Deep Q Networks"

by Adam W. Tow, Sareh Shirazi, Jürgen Leitner, Niko Sünderhauf, Michael Milford, Ben Upcroft

Abstract: Deep Q Networks (DQNs) are a class of deep reinforcement learning algorithms that have been shown
to be particularly adept at learning a variety of tasks with minimal priors. Specifically, DQN
agents have been shown to learn a variety of Atari 2600 video games using only raw images
of the game screen and the game score.
To leverage DQNs in real-world robotics applications, we must first understand how robust
these networks are to the perceptual noise common to all robotics domains. In this paper,
we present an analysis of the robustness of Deep Q Networks to various types of perceptual
noise (changing brightness, Gaussian blur, salt-and-pepper noise, distractors). We present a
benchmark example that involves playing the game Breakout through a webcam-and-screen
environment, as humans do. We present a simple training approach that improves the performance
maintained when a DQN agent trained in simulation is transferred to the real world
(36% vs. 1% maintained performance; see Table 1). We also evaluate DQN agents trained
under a variety of simulation environments to report, for the first time, how DQNs cope with
the perceptual noise common to real-world robotic applications.
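
The noise types listed in the abstract are all simple image operations. Below is a minimal sketch of how such perturbations might be applied to a game frame; it assumes frames arrive as grayscale NumPy arrays with values in [0, 1], and the function name (perturb_frame) and all parameter values are illustrative choices, not the settings reported in the paper.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def perturb_frame(frame, rng, brightness_delta=0.2, blur_sigma=1.0,
                      sp_fraction=0.05, patch=8):
        """Apply the four noise types from the paper to one frame.
        All defaults here are illustrative, not the authors' settings."""
        out = frame.astype(np.float32)

        # Changing brightness: shift all intensities by a random offset.
        out = out + rng.uniform(-brightness_delta, brightness_delta)

        # Gaussian blur: low-pass filtering, as from a defocused webcam.
        out = gaussian_filter(out, sigma=blur_sigma)

        # Salt-and-pepper noise: force a random fraction of pixels to 0 or 1.
        mask = rng.random(out.shape) < sp_fraction
        out[mask] = rng.integers(0, 2, size=int(mask.sum())).astype(np.float32)

        # Distractor: overwrite a small region with a random patch, a crude
        # stand-in for the distractor objects used in the paper.
        y = rng.integers(0, out.shape[0] - patch)
        x = rng.integers(0, out.shape[1] - patch)
        out[y:y + patch, x:x + patch] = rng.random((patch, patch), dtype=np.float32)

        return np.clip(out, 0.0, 1.0)

    rng = np.random.default_rng(0)
    frame = rng.random((84, 84), dtype=np.float32)  # stand-in for a game screen
    noisy = perturb_frame(frame, rng)

Training an agent on frames perturbed this way, rather than on clean simulator output, is in the spirit of the simple training approach the abstract credits with improving sim-to-real transfer.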
 