
Chapter

Title

Heterogeneous Team Deep Q-Learning in Low-Dimensional Multi-Agent Environments

Authors

[ 1 ] Instytut Informatyki, Wydział Informatyki, Politechnika Poznańska

Year of publication

2016

Chapter type

paper

Publication language

English

Abstract

Deep Q-Learning is an effective reinforcement learning method, which has recently obtained human-level performance for a set of Atari 2600 games. Remarkably, the system was trained on the high-dimensional raw visual data. Is Deep Q-Learning equally valid for problems involving a low-dimensional state space? To answer this question, we evaluate the components of Deep Q-Learning (deep architecture, experience replay, target network freezing, and meta-state) on a Keepaway soccer problem, where the state is described only by 13 variables. The results indicate that although experience replay indeed improves the agent performance, target network freezing and meta-state slow down the learning process. Moreover, the deep architecture does not help for this task since a rather shallow network with just two hidden layers worked the best. By selecting the best settings, and employing heterogeneous team learning, we were able to outperform all previous methods applied to Keepaway soccer using a fraction of the runner-up's computational expense. These results extend our understanding of the Deep Q-Learning effectiveness for low-dimensional reinforcement learning tasks.

Pages (from - to)

201 - 208

DOI

10.1109/CIG.2016.7860413

URL

https://ieeexplore.ieee.org/document/7860413

Book

Computational Intelligence & Games 2016, Santorini, Greece, 20-23 September 2016: conference proceedings

Presented on

Conference on Computational Intelligence and Games, CIG 2016, 20-23.09.2016, Santorini, Greece

Publication indexed in

WoS (15)