Human-level control through deep reinforcement learning.

International Journal of Swarm Intelligence and Evolutionary Computation

Author(s): Mnih V, Kavukcuoglu K, Silver D, Rusu AA, Veness J, et al.

Abstract

The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters.
This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks. This article was published in Nature and referenced in International Journal of Swarm Intelligence and Evolutionary Computation.
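At the heart of the deep Q-network is the temporal-difference Q-learning update, with a deep neural network standing in for a lookup table over states. The update itself can be illustrated with a minimal tabular sketch; the toy chain environment, reward scheme, and hyperparameters below are illustrative assumptions for exposition, not details from the paper:

```python
import random

# Toy chain environment: states 0..4, actions 0 (left) / 1 (right).
# Reaching state 4 yields reward 1 and ends the episode.
N_STATES, GOAL = 5, 4
ACTIONS = [0, 1]

def step(state, action):
    """Return (next_state, reward, done) for the toy chain."""
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    done = nxt == GOAL
    return nxt, (1.0 if done else 0.0), done

def train(episodes=500, lr=0.1, gamma=0.9, eps=0.3, seed=0):
    """Tabular Q-learning with the same TD target the DQN regresses toward:
       Q(s,a) <- Q(s,a) + lr * (r + gamma * max_a' Q(s',a') - Q(s,a))"""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy exploration, greedy exploitation otherwise.
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[s][act])
            s2, r, done = step(s, a)
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += lr * (target - Q[s][a])
            s = s2
    return Q

Q = train()
# The learned greedy policy moves right in every non-terminal state.
policy = [max(ACTIONS, key=lambda act: Q[s][act]) for s in range(GOAL)]
print(policy)  # -> [1, 1, 1, 1]
```

The DQN replaces the table `Q[s][a]` with a convolutional network Q(s, a; θ) trained by gradient descent on the squared TD error, which is what lets the same update scale from this five-state chain to raw Atari pixels.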

