Fast active learning for pure exploration in reinforcement learning


  • dc.contributor.author Ménard, Pierre
  • dc.contributor.author Darwiche Domingues, Omar
  • dc.contributor.author Kaufmann, Emilie
  • dc.contributor.author Jonsson, Anders
  • dc.contributor.author Leurent, Édouard
  • dc.contributor.author Valko, Michal
  • dc.date.accessioned 2025-01-27T13:54:52Z
  • dc.date.available 2025-01-27T13:54:52Z
  • dc.date.issued 2021
  • dc.description.abstract Realistic environments often provide agents with very limited feedback. When the environment is initially unknown, feedback can be completely absent at the beginning, and the agent may first choose to devote all of its effort to exploring efficiently. Exploration remains a challenge: it has been addressed with many hand-tuned heuristics of varying generality on one side, and with a few theoretically-backed exploration strategies on the other. Many of them are incarnated by intrinsic motivation and, in particular, by exploration bonuses. A common choice is a 1/√n bonus, where n is the number of times the state-action pair in question has been visited. We show that, surprisingly, for the pure-exploration objective of reward-free exploration, bonuses that scale with 1/n bring faster learning rates, improving the known upper bounds with respect to the dependence on the horizon H. Furthermore, we show that with an improved analysis of the stopping time, we can improve the sample complexity by a factor of H in the best-policy identification setting, another pure-exploration objective in which the environment provides rewards but the agent is not penalized for its behavior during the exploration phase. (The two bonus scalings are contrasted in a brief illustrative sketch after this record.)
  • dc.description.sponsorship The research presented was supported by the European CHIST-ERA project DELTA, the French Ministry of Higher Education and Research, the Nord-Pas-de-Calais Regional Council, and the French National Research Agency project BOLD (ANR-19-CE23-0026-04). Anders Jonsson is partially supported by the Spanish grants PID2019-108141GB-I00 and PCIN-2017-082. Pierre Ménard is supported by the SFI Sachsen-Anhalt for the project RE-BCI ZS/2019/10/102024 by the Investitionsbank Sachsen-Anhalt.
  • dc.format.mimetype application/pdf
  • dc.identifier.citation Ménard P, Darwiche Domingues O, Jonsson A, Kaufmann E, Leurent E, Valko M. Fast active learning for pure exploration in reinforcement learning. In: Meila M, Zhang T, editors. Proceedings of the 38th International Conference on Machine Learning, PMLR; 2021 Jul 18-24; Virtual. San Diego, CA; 2021. p. 7599-7608.
  • dc.identifier.doi https://doi.org/10.48550/arXiv.2007.13442
  • dc.identifier.issn 2640-3498
  • dc.identifier.uri http://hdl.handle.net/10230/69310
  • dc.language.iso eng
  • dc.publisher PMLR
  • dc.relation.projectID info:eu-repo/grantAgreement/ES/2PE/PID2019-108141GB-I00
  • dc.rights Copyright 2021 by the author(s).
  • dc.rights.accessRights info:eu-repo/semantics/openAccess
  • dc.subject.keyword Fast active learning
  • dc.subject.keyword Reinforcement learning
  • dc.title Fast active learning for pure exploration in reinforcement learning
  • dc.type info:eu-repo/semantics/conferenceObject
  • dc.type.version info:eu-repo/semantics/publishedVersion
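
The abstract contrasts the classical 1/√n exploration bonus with the 1/n scaling studied in the paper. Below is a minimal Python sketch of that comparison for a count-based tabular setting; it is illustrative only, not the authors' algorithm, and the function names (`bonus_sqrt`, `bonus_fast`) and the confidence term `beta` are assumptions made for this example.

```python
import numpy as np

# Minimal sketch (not the paper's algorithm): count-based exploration
# bonuses as a function of the visit count n of a state-action pair.
# `beta` stands in for a generic log-confidence term; its exact form
# here is an assumption, not taken from the paper.

def bonus_sqrt(n: int, beta: float = 1.0) -> float:
    """Classical UCB-style bonus scaling as 1/sqrt(n)."""
    return float(np.sqrt(beta / max(n, 1)))

def bonus_fast(n: int, beta: float = 1.0) -> float:
    """Bonus scaling as 1/n, the faster rate the paper studies
    for the reward-free exploration objective."""
    return beta / max(n, 1)

if __name__ == "__main__":
    # Print the two bonuses side by side as the visit count grows.
    for n in (1, 10, 100, 1000):
        print(f"n={n:5d}  1/sqrt(n) bonus={bonus_sqrt(n):.4f}  "
              f"1/n bonus={bonus_fast(n):.4f}")
```

Printing the two values side by side makes the difference in decay rate concrete: the 1/n bonus shrinks much faster in the visit count, and this faster scaling is what the paper shows yields improved learning rates, with better dependence on the horizon H, for reward-free exploration.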