Koichiro Morihiro, Teijiro Isokawa, Nobuyuki Matsui, Haruhiko Nishimura
Proceedings of the SICE Annual Conference, pp. 2395-2399, December 1, 2005
In reinforcement learning, exploration, a process of trial and error, plays an important role. A uniform pseudorandom number generator is commonly used as the source for exploration. However, it is known that a chaotic source also provides a random-like sequence, much as a stochastic source does. Applying this random-like feature of deterministic chaos to exploration, we previously found that a deterministic chaotic exploration generator based on the logistic map gives better performance than a stochastic random exploration generator in a nonstationary shortcut maze problem. In this research, to verify this performance difference, we examine target capturing as another nonstationary task. The simulation result in this task confirms the result of our previous work. © 2005 SICE.
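The idea of the chaotic generator can be illustrated with a minimal sketch: the logistic map x_{n+1} = r·x_n·(1 − x_n) at r = 4.0 produces a deterministic but random-like sequence in (0, 1), which can drive ε-greedy action selection in place of a pseudorandom generator. This is an illustrative reconstruction, not the authors' implementation; the function names, the starting seed, and the ε-greedy scheme are assumptions.

```python
def logistic_map(x0=0.3141592, r=4.0):
    """Yield a chaotic sequence from the logistic map x_{n+1} = r*x*(1-x).
    At r = 4.0 the map is fully chaotic and its output looks random-like."""
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

def epsilon_greedy(q_values, epsilon, noise):
    """Pick an action: explore with probability epsilon, drawing values
    in [0, 1) from `noise` (here, the chaotic generator)."""
    if next(noise) < epsilon:
        # Scale a second chaotic draw onto the action indices.
        return int(next(noise) * len(q_values)) % len(q_values)
    return max(range(len(q_values)), key=lambda a: q_values[a])

# Hypothetical Q-values for a 3-action task; action 1 is greedy.
gen = logistic_map()
q = [0.1, 0.5, 0.2]
actions = [epsilon_greedy(q, 0.2, gen) for _ in range(1000)]
```

Note that the sequence is fully deterministic: rerunning with the same seed x0 reproduces the same exploration behavior, unlike a stochastically seeded generator.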