Authors: Cipollone, Roberto; Ronca, Alessandro; Jonsson, Anders; Sadegh Talebi, Mohammad
Date accessioned: 2025-01-27
Date available: 2025-01-27
Date issued: 2023
Citation: Cipollone R, Ronca A, Jonsson A, Talebi MS. Provably efficient offline reinforcement learning in regular decision processes. In: Oh A, Naumann T, Globerson A, Saenko K, Hardt M, Levine S, editors. Advances in Neural Information Processing Systems 36 (NeurIPS 2023); 2023 Dec 10-16; New Orleans.
Handle: http://hdl.handle.net/10230/69308
Abstract: This paper deals with offline (or batch) Reinforcement Learning (RL) in episodic Regular Decision Processes (RDPs). RDPs are the subclass of Non-Markov Decision Processes in which the dependency on the history of past events can be captured by a finite-state automaton. We consider a setting where the automaton underlying the RDP is unknown, and a learner strives to learn a near-optimal policy from pre-collected data, in the form of non-Markov sequences of observations, without further exploration. We present RegORL, an algorithm that suitably combines automata learning techniques with state-of-the-art algorithms for offline RL in MDPs. RegORL has a modular design that allows the use of any off-the-shelf offline RL algorithm for MDPs. We report a non-asymptotic high-probability sample complexity bound for RegORL to yield an ε-optimal policy, which brings out a notion of concentrability relevant for RDPs. Furthermore, we present a sample complexity lower bound for offline RL in RDPs. To the best of our knowledge, this is the first work presenting a provably efficient algorithm for offline learning in RDPs.
Format: application/pdf
Language: eng
Rights: Copyright © 2024 by the individual authors and the Neural Information Processing Systems Foundation Inc. All rights reserved. This is the accepted manuscript version of the paper. The final version is available online from the Neural Information Processing Systems Foundation at: https://proceedings.neurips.cc/paper_files/paper/2023/hash/7bf3e93543a612b75b6373178ba1faa4-Abstract-Conference.html
Title: Provably efficient offline reinforcement learning in regular decision processes
Type: info:eu-repo/semantics/conferenceObject
DOI: https://doi.org/10.48550/arXiv.2412.19194
Keywords: Reinforcement Learning; Regular decision processes; Non-Markov decision processes; RegORL
Access rights: info:eu-repo/semantics/openAccess
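
Note: the abstract describes a modular two-stage pipeline (learn a finite-state automaton from the non-Markov traces, then hand the relabelled, now-Markov dataset to any off-the-shelf offline RL routine). The sketch below only illustrates that modular structure under strong simplifying assumptions; it is not the authors' RegORL implementation, and all function names (learn_automaton, relabel, offline_rl) and the trivial single-state automaton are hypothetical placeholders.

```python
# Minimal sketch of the modular pipeline suggested by the abstract (assumed, not the paper's code):
# (1) learn a finite-state automaton from offline non-Markov episodes,
# (2) relabel the data with automaton states so it becomes Markov,
# (3) run any off-the-shelf offline RL routine on the relabelled dataset.
from collections import defaultdict
from typing import Dict, List, Tuple

Episode = List[Tuple[str, str, float]]  # (observation, action, reward) per step


def learn_automaton(episodes: List[Episode]) -> Dict:
    """Placeholder for the automata-learning step: a real method would infer the
    RDP's automaton from the traces; here we return a trivial one-state automaton."""
    return {"initial": 0, "delta": lambda q, obs, act: 0}


def relabel(episodes: List[Episode], automaton: Dict) -> List[Tuple[int, str, float, int]]:
    """Track the automaton state along each episode to produce Markov
    (state, action, reward, next_state) transitions."""
    transitions = []
    for ep in episodes:
        q = automaton["initial"]
        for obs, act, rew in ep:
            q_next = automaton["delta"](q, obs, act)
            transitions.append((q, act, rew, q_next))
            q = q_next
    return transitions


def offline_rl(transitions) -> Dict[int, str]:
    """Placeholder offline RL step: per state, pick the action with the highest
    average observed immediate reward (stand-in for any off-the-shelf offline MDP solver)."""
    totals, counts = defaultdict(float), defaultdict(int)
    for s, a, r, _ in transitions:
        totals[(s, a)] += r
        counts[(s, a)] += 1
    policy: Dict[int, str] = {}
    for (s, a), n in counts.items():
        best = policy.get(s)
        if best is None or totals[(s, a)] / n > totals[(s, best)] / counts[(s, best)]:
            policy[s] = a
    return policy


if __name__ == "__main__":
    data: List[Episode] = [[("o0", "a", 0.0), ("o1", "b", 1.0)],
                           [("o0", "b", 0.0), ("o2", "a", 0.5)]]
    aut = learn_automaton(data)
    print(offline_rl(relabel(data, aut)))
```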