Low-Rank Representation of Reinforcement Learning Policies

Bogdan Mazoure
Thang Doan
Tianyu Li
Vladimir Makarenkov
Joelle Pineau
Doina Precup
Guillaume Rabusseau

Abstract

We propose a general framework for policy representation in reinforcement learning tasks. The framework finds a low-dimensional embedding of the policy in a reproducing kernel Hilbert space (RKHS). The use of RKHS-based methods allows us to derive strong theoretical guarantees on the expected return of the reconstructed policy. Such guarantees are typically lacking in black-box models, but they are highly desirable in tasks requiring stability and convergence guarantees. We conduct several experiments on classic RL domains. The results confirm that policies can be robustly represented in a low-dimensional space while the embedded policy incurs almost no decrease in returns.
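To give a concrete intuition for the low-rank representation idea, the following is a minimal sketch (not the paper's method): a tabular policy stored as a |S| x |A| matrix of action probabilities, compressed with a truncated SVD as a simple finite-dimensional analogue of the RKHS embedding described above. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: a tabular policy as a |S| x |A| matrix of
# action probabilities, compressed via truncated SVD (a simple low-rank
# analogue of the RKHS embedding; not the paper's actual construction).
rng = np.random.default_rng(0)
n_states, n_actions, rank = 50, 4, 2

# Build a policy whose rows are convex combinations of `rank` base
# behaviours, so the matrix is exactly rank-`rank` by construction.
base = rng.dirichlet(np.ones(n_actions), size=rank)    # rank x |A|
weights = rng.dirichlet(np.ones(rank), size=n_states)  # |S| x rank
policy = weights @ base                                # |S| x |A|, rows sum to 1

# Keep only the top-`rank` singular directions: the low-dimensional
# embedding is (U[:, :rank] * s[:rank]); Vt[:rank] reconstructs actions.
U, s, Vt = np.linalg.svd(policy, full_matrices=False)
policy_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Reconstruction error is near zero because the policy is low-rank,
# so the embedded policy would induce (almost) identical returns.
err = np.max(np.abs(policy - policy_hat))
print(err)
```

When the true policy is only approximately low-rank, `err` measures how far the reconstructed policy can drift from the original, which is the quantity the paper's RKHS-based guarantees bound in terms of expected return.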
