Authors: Furelos Blanco, Daniel; Law, Mark; Jonsson, Anders; Broda, Krysia; Russo, Alessandra
Date accessioned: 2025-01-27
Date available: 2025-01-27
Date issued: 2023
Citation: Furelos-Blanco D, Law M, Jonsson A, Broda K, Russo A. Hierarchies of reward machines. In: Krause A, Brunskill E, Cho K, Engelhardt B, Sabato S, Scarlett J, editors. Proceedings of the 40th International Conference on Machine Learning, PMLR; 2023 Jul 23-29; Honolulu, Hawaii, USA. San Diego; 2023. p. 10494-541.
Handle: http://hdl.handle.net/10230/69309
Abstract: Reward machines (RMs) are a recent formalism for representing the reward function of a reinforcement learning task as a finite-state machine whose edges encode the task's subgoals using high-level events. The structure of RMs enables the decomposition of a task into simpler, independently solvable subtasks that help tackle long-horizon and/or sparse-reward tasks. We propose a formalism for further abstracting the subtask structure by endowing an RM with the ability to call other RMs, thus composing a hierarchy of RMs (HRM). We exploit HRMs by treating each call to an RM as an independently solvable subtask using the options framework, and describe a curriculum-based method for learning HRMs from traces observed by the agent. Our experiments reveal that exploiting a handcrafted HRM leads to faster convergence than a flat HRM, and that learning an HRM is feasible in cases where learning its equivalent flat representation is not.
Format: application/pdf
Language: eng
Rights: Copyright 2023 by the author(s).
Title: Hierarchies of reward machines
Type: info:eu-repo/semantics/conferenceObject
DOI: https://doi.org/10.48550/arXiv.2205.15752
Keywords: Reward machines; Hierarchies
Access: info:eu-repo/semantics/openAccess
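The abstract describes a reward machine as a finite-state machine whose edges are labeled by high-level events and emit rewards. The following is a minimal illustrative sketch of that idea, not the authors' implementation; the state names, events, and reward values are invented for the example.

```python
# Illustrative sketch of a reward machine (RM): a finite-state machine whose
# edges, labeled by high-level events, encode subgoals and emit rewards.
# All names and values here are hypothetical, not from the paper's code.

class RewardMachine:
    def __init__(self, initial, final, transitions):
        self.initial = initial            # initial RM state
        self.final = final                # accepting (task-complete) state
        # transitions: {(state, event): (next_state, reward)}
        self.transitions = transitions

    def run(self, events):
        """Consume a trace of high-level events; return final state and total reward."""
        state, total = self.initial, 0.0
        for e in events:
            # Events with no outgoing edge leave the RM state unchanged.
            state, r = self.transitions.get((state, e), (state, 0.0))
            total += r
        return state, total

# A toy "pick up key, then open door" task: the reward is sparse, paid only
# when the subgoals are completed in order.
rm = RewardMachine(
    initial="u0",
    final="u2",
    transitions={
        ("u0", "key"): ("u1", 0.0),   # subgoal 1: pick up the key
        ("u1", "door"): ("u2", 1.0),  # subgoal 2: open the door
    },
)
state, reward = rm.run(["key", "door"])   # reaches "u2" with reward 1.0
```

The structure makes the decomposition discussed in the abstract explicit: each edge is a subtask, and the paper's hierarchical extension would additionally let an edge call another RM rather than a single event.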