
Combining Reinforcement Learning and Tensor Networks, with an Application to Dynamical Large Deviations

Gillman, Edward; Rose, Dominic C; Garrahan, Juan P

Abstract

We present a framework to integrate tensor network (TN) methods with reinforcement learning (RL) for solving dynamical optimisation tasks. We consider the RL actor-critic method, a model-free approach for solving RL problems, and introduce TNs as the approximators for its policy and value functions. Our "actor-critic with tensor networks" (ACTeN) method is especially well suited to problems with large and factorisable state and action spaces. As an illustration of the applicability of ACTeN, we solve the exponentially hard task of sampling rare trajectories in two paradigmatic stochastic models, the East model of glasses and the asymmetric simple exclusion process (ASEP), the latter being particularly challenging to other methods due to the absence of detailed balance. With substantial potential for further integration with the vast array of existing RL methods, the approach introduced here is promising both for applications in physics and for multi-agent RL problems more generally.
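The abstract's central ingredient is a policy over a large, factorisable action space represented by a tensor network. As a minimal, illustrative sketch (not the authors' code), the following Python/NumPy snippet parameterises such a policy with a matrix product state, a common one-dimensional tensor network; all names, tensor shapes, and the softmax readout are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 6  # number of lattice sites (factors of the state/action space)
d = 2  # local dimension (e.g. spin up/down)
D = 4  # MPS bond dimension

# One rank-3 tensor per site: (left bond, physical index, right bond).
# Boundary bonds have dimension 1, so the full contraction is a scalar.
cores = [rng.normal(scale=0.1,
                    size=(1 if i == 0 else D, d, 1 if i == N - 1 else D))
         for i in range(N)]

def mps_output(config):
    """Contract the MPS on a configuration (tuple of N local indices)."""
    vec = np.ones((1,))
    for core, s in zip(cores, config):
        vec = vec @ core[:, s, :]  # absorb one site's matrix
    return vec.item()              # final shape (1,) -> scalar

def policy_probs(configs):
    """Softmax over MPS outputs: a log-linear policy over candidate actions."""
    logits = np.array([mps_output(c) for c in configs])
    w = np.exp(logits - logits.max())  # stabilised softmax
    return w / w.sum()

# Example: distribution over all single-spin-flip actions from a state.
state = (0, 1, 0, 0, 1, 0)
actions = [tuple(s ^ (i == j) for j, s in enumerate(state)) for i in range(N)]
print(policy_probs(actions))
```

In an actor-critic scheme of the kind the abstract describes, a second such tensor network would approximate the value function, and both sets of tensors would be updated from sampled trajectories via policy-gradient and temporal-difference estimates; the details of the authors' training procedure are in the paper itself.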

Citation

Gillman, E., Rose, D. C., & Garrahan, J. P. (in press). Combining Reinforcement Learning and Tensor Networks, with an Application to Dynamical Large Deviations. Physical Review Letters.

Journal Article Type: Article
Acceptance Date: Apr 4, 2024
Deposit Date: Apr 5, 2024
Publicly Available Date: Apr 5, 2024
Journal: Physical Review Letters
Print ISSN: 0031-9007
Electronic ISSN: 1079-7114
Publisher: American Physical Society
Peer Reviewed: Yes
Public URL: https://nottingham-repository.worktribe.com/output/33293313
