Welcome to the UPF Digital Repository

Solving Montezuma's Revenge with Planning and Reinforcement Learning


dc.contributor.author Garriga Alonso, Adrià
dc.date.accessioned 2017-04-21T11:13:21Z
dc.date.available 2017-04-21T11:13:21Z
dc.date.issued 2017-04-21
dc.identifier.uri http://hdl.handle.net/10230/30867
dc.description Bachelor's thesis in Computer Science
dc.description Tutor: Anders Jonsson
dc.description.abstract Traditionally, methods for solving Sequential Decision Processes (SDPs) have performed poorly on problems with sparse feedback. Both planning and reinforcement learning, the two main families of methods for solving SDPs, struggle in this setting. With the rise to prominence of the Arcade Learning Environment (ALE) in the broader sequential decision process research community, one SDP featuring sparse feedback has become well known: the Atari game Montezuma's Revenge. In this particular game, the large body of prior knowledge that a human player already possesses, and uses to find rewards, cannot be matched by blind exploration in a realistic amount of time. We apply planning and reinforcement learning approaches, combined with domain knowledge, to enable an agent to obtain better scores in this game. We hope that these domain-specific algorithms can inspire better approaches to solving SDPs with sparse feedback in general.
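To illustrate the sparse-feedback problem the abstract describes, here is a minimal toy sketch (not from the thesis itself): a one-dimensional chain environment where reward arrives only at the far end. A blindly exploring random policy rarely reaches the goal within a step budget, while a policy encoding trivial domain knowledge ("always move right", a hypothetical stand-in for the thesis's domain-knowledge approaches) always does.

```python
import random

def run_episode(chain_length, policy, max_steps=100):
    """Walk a 1-D chain starting at 0; reward 1 only at the far end, else 0."""
    pos = 0
    for _ in range(max_steps):
        pos += 1 if policy(pos) == "right" else -1
        pos = max(pos, 0)  # reflecting barrier at the start
        if pos == chain_length:
            return 1  # sparse reward: only the goal state pays off
    return 0

random.seed(0)
random_policy = lambda pos: random.choice(["left", "right"])   # blind exploration
informed_policy = lambda pos: "right"                          # domain knowledge

episodes = 1000
random_hits = sum(run_episode(20, random_policy) for _ in range(episodes))
informed_hits = sum(run_episode(20, informed_policy) for _ in range(episodes))
print(random_hits, informed_hits)  # blind exploration finds the reward far less often
```

The gap between the two success counts grows quickly with chain length, which is the essence of why sparse-reward SDPs like Montezuma's Revenge defeat undirected exploration.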
dc.format.mimetype application/pdf
dc.language.iso eng
dc.rights Attribution-NonCommercial-NoDerivs 3.0 Spain
dc.rights.uri http://creativecommons.org/licenses/by-nc-nd/3.0/es/
dc.subject.other Artificial intelligence
dc.title Solving Montezuma's Revenge with Planning and Reinforcement Learning
dc.type info:eu-repo/semantics/bachelorThesis
dc.rights.accessRights info:eu-repo/semantics/openAccess
