Induction and exploitation of subgoal automata for reinforcement learning
- dc.contributor.author Furelos Blanco, Daniel
- dc.contributor.author Law, Mark
- dc.contributor.author Jonsson, Anders, 1973-
- dc.contributor.author Broda, Krysia
- dc.contributor.author Russo, Alessandra
- dc.date.accessioned 2021-04-28T06:37:54Z
- dc.date.available 2021-04-28T06:37:54Z
- dc.date.issued 2021
- dc.description.abstract In this paper we present ISA, an approach for learning and exploiting subgoals in episodic reinforcement learning (RL) tasks. ISA interleaves reinforcement learning with the induction of a subgoal automaton, an automaton whose edges are labeled by the task’s subgoals expressed as propositional logic formulas over a set of high-level events. A subgoal automaton also contains two special states: a state indicating the successful completion of the task, and a state indicating that the task has finished without succeeding. A state-of-the-art inductive logic programming system is used to learn a subgoal automaton that covers the traces of high-level events observed by the RL agent. When the currently exploited automaton does not correctly recognize a trace, the automaton learner induces a new automaton that covers that trace. The interleaving process guarantees the induction of automata with the minimum number of states, and applies a symmetry breaking mechanism to shrink the search space whilst remaining complete. We evaluate ISA in several gridworld and continuous state space problems using different RL algorithms that leverage the automaton structures. We provide an in-depth empirical analysis of the automaton learning performance in terms of the traces, the symmetry breaking and specific restrictions imposed on the final learnable automaton. For each class of RL problem, we show that the learned automata can be successfully exploited to learn policies that reach the goal, achieving an average reward comparable to the case where automata are not learned but handcrafted and given beforehand.
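- dc.description.note The subgoal automaton described in the abstract can be sketched as a small state machine whose edges fire on propositional conditions over high-level events, with dedicated accepting and rejecting states. The sketch below is purely illustrative (class and state names are hypothetical, and formulas are stood in for by callables over event sets); it is not the paper's ISA implementation.

```python
# Illustrative sketch of a subgoal automaton; not the paper's ISA system.
# Edges are labeled by conditions over a set of observed high-level events,
# standing in for propositional logic formulas over those events.

class SubgoalAutomaton:
    def __init__(self, initial, accept, reject):
        self.initial = initial  # initial state
        self.accept = accept    # special state: task completed successfully
        self.reject = reject    # special state: task finished unsuccessfully
        self.transitions = {}   # state -> list of (formula, next_state)

    def add_edge(self, src, formula, dst):
        self.transitions.setdefault(src, []).append((formula, dst))

    def run(self, trace):
        # Process a trace (a sequence of event sets) and return the final state.
        state = self.initial
        for events in trace:
            for formula, dst in self.transitions.get(state, []):
                if formula(events):
                    state = dst
                    break  # take the first edge whose formula is satisfied
        return state

# Hypothetical task: pick up a key, then open a door; stepping on lava fails.
a = SubgoalAutomaton(initial="u0", accept="acc", reject="rej")
a.add_edge("u0", lambda e: "lava" in e, "rej")
a.add_edge("u0", lambda e: "key" in e, "u1")
a.add_edge("u1", lambda e: "lava" in e, "rej")
a.add_edge("u1", lambda e: "door" in e, "acc")

print(a.run([{"key"}, set(), {"door"}]))  # -> acc
print(a.run([{"lava"}]))                  # -> rej
```

In the paper, such automata are not handcrafted as above but induced by an ILP system from the traces the RL agent observes.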
- dc.description.sponsorship The authors would like to thank the anonymous reviewers for their helpful comments and suggestions. Anders Jonsson is partially supported by the Spanish grants PCIN-2017-082 and PID2019-108141GB-I00.
- dc.format.mimetype application/pdf
- dc.identifier.citation Furelos-Blanco D, Law M, Jonsson A, Broda K, Russo A. Induction and exploitation of subgoal automata for reinforcement learning. J Artif Intell Res. 2021;70:1031-1116. DOI: 10.1613/jair.1.12372
- dc.identifier.doi http://dx.doi.org/10.1613/jair.1.12372
- dc.identifier.issn 1943-5037
- dc.identifier.uri http://hdl.handle.net/10230/47236
- dc.language.iso eng
- dc.publisher AI Access Foundation
- dc.relation.ispartof Journal of Artificial Intelligence Research. 2021;70:1031-1116
- dc.relation.projectID info:eu-repo/grantAgreement/ES/2PE/PID2019-108141GB-I00
- dc.relation.projectID info:eu-repo/grantAgreement/ES/2PE/PCIN-2017-082
- dc.rights © AI Access Foundation. The original article was published in Journal of Artificial Intelligence Research and can be found at http://dx.doi.org/10.1613/jair.1.12372
- dc.rights.accessRights info:eu-repo/semantics/openAccess
- dc.subject.keyword Reinforcement learning
- dc.subject.keyword Inductive logic programming
- dc.subject.keyword Logic programming
- dc.title Induction and exploitation of subgoal automata for reinforcement learning
- dc.type info:eu-repo/semantics/article
- dc.type.version info:eu-repo/semantics/publishedVersion