Anytime Learning and Adaptation of Structured Fuzzy Behaviors
Andrea Bonarini
Journal:
Adaptive Behavior
Date:
1997
Abstract:
We present an approach to support effective learning and adaptation of
behaviors for autonomous agents with reinforcement learning algorithms.
These methods can identify control systems that optimize a reinforcement
program, which is usually a straightforward representation of the designer's
goals. Reinforcement learning algorithms are usually too slow to be applied in
real time on embodied agents, although they provide a suitable way to
represent the desired behavior. We have tackled three aspects of this problem:
the speed of the algorithm, the learning procedure, and the control system
architecture. The learning algorithm we have developed includes features to
speed up learning, such as niche-based learning, and a representation of the
control modules in terms of fuzzy rules that reduces the search space and
improves robustness to noisy data. Our learning procedure exploits
methodologies such as learning from easy missions and transfer of policy from
simpler environments to the more complex. The architecture of our control
system is layered and modular, so that each module has a low complexity and
can be learned in a short time. The composition of the actions proposed by the
modules is either learned or predefined. Finally, we adopt an anytime learning
approach to improve the quality of the control system on-line and to adapt it to
dynamic environments.

The experiments we present in this article concern learning to reach another
moving agent in a real, dynamic environment that includes nontrivial situations,
such as when the moving target is faster than the agent or is hidden by
obstacles.
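The abstract describes control modules expressed as fuzzy rules whose proposed actions are composed into a single command. As a rough illustration of that idea (not the paper's implementation; the membership functions, rule set, and steering values below are invented for the example), a fuzzy controller can weight each rule's action by its degree of activation and defuzzify with a weighted average:

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def compose(proposals):
    """Compose (strength, action) proposals by weighted average (defuzzification)."""
    total = sum(s for s, _ in proposals)
    if total == 0:
        return 0.0
    return sum(s * a for s, a in proposals) / total

def controller(target_angle):
    """Two illustrative rules: 'target left -> steer left', 'target right -> steer right'."""
    left = triangular(target_angle, -90, -45, 0)   # degree of "target is to the left"
    right = triangular(target_angle, 0, 45, 90)    # degree of "target is to the right"
    proposals = [(left, -30.0), (right, 30.0)]     # each rule's proposed steering action
    return compose(proposals)
```

Because each rule covers only a fuzzy region of the input space, rules can be learned and reinforced locally (in the spirit of the niche-based learning the abstract mentions) while the composition yields smooth overall behavior.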