Computer Science Department
School of Computer Science, Carnegie Mellon University
CMU-CS-03-107
William Uther, Manuela Veloso
January 2003
Note from Authors:
This manuscript was originally submitted for publication in April 1997.
Corrections were never completed and the paper was not published.
However, a copy was placed on the web and a number of people have
referenced the work from there. This technical report is now being
published for ease of reference. With the exception of the addition of
a title page, the work is unmodified from the 1997 original.
Keywords: Reinforcement learning, Markov games, adversarial
reinforcement learning
Reinforcement Learning has been used for a number of years in
single-agent environments. This article reports on our investigation
of Reinforcement Learning techniques in a multi-agent and adversarial
environment with continuous observable state information. We introduce
a new framework, two-player hexagonal grid soccer, in which to evaluate
algorithms. We then compare the performance of several single-agent
Reinforcement Learning techniques in that environment. These are
further compared to a previously developed adversarial Reinforcement
Learning algorithm designed for Markov games. Building upon these
efforts, we introduce new algorithms to handle the multi-agent,
the adversarial, and the continuous-valued aspects of the domain.
We introduce a technique for modelling the opponent in an adversarial
game, an extension to Prioritized Sweeping that allows generalization
of learnt knowledge over neighboring states in the domain, and an
extension to the U Tree generalizing algorithm that allows it to
handle continuous state spaces.
Extensive empirical evaluation is conducted in the grid soccer domain.
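As background for the single-agent Reinforcement Learning techniques
compared in this report, the sketch below shows a standard tabular
Q-learning update on a toy one-dimensional grid task. The environment,
parameter values, and names used here are illustrative assumptions for
exposition only; they are not the hexagonal grid soccer domain or the
experimental setup evaluated in the paper.

import random
from collections import defaultdict

# Illustrative sketch only: tabular Q-learning on a toy 1-D grid, not the
# hexagonal grid soccer domain of this report. All names and constants
# below are assumptions made for the example.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # assumed learning rate, discount, exploration
ACTIONS = (-1, 0, +1)                   # move left, stay, move right
GOAL, SIZE = 5, 10

def step(state, action):
    """Deterministic toy transition: reward 1 for reaching the goal cell."""
    next_state = min(max(state + action, 0), SIZE - 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

Q = defaultdict(float)

def choose_action(state):
    """Epsilon-greedy selection over tabular Q-values, breaking ties randomly."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for episode in range(500):
    state, done = 0, False
    for _ in range(200):                # cap episode length
        action = choose_action(state)
        next_state, reward, done = step(state, action)
        # One-step Q-learning backup toward the greedy value of the next state.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state
        if done:
            break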