Efficient algorithms for linearly solvable Markov decision processes

Description

  • Abstract

    In recent years, Reinforcement Learning (RL) has emerged as a powerful paradigm for sequential decision making under uncertainty. Within this framework, Markov Decision Processes (MDPs) serve as a fundamental model, defining the dynamics of state transitions and rewards. However, traditional RL algorithms, such as Q-Learning, often struggle with large or continuous state spaces due to computational complexity. Linearly-solvable Markov Decision Processes (LMDPs) offer a promising alternative: their structure makes the Bellman equation linear in the exponentially transformed value function, enabling efficient planning and value function approximation. The focus of this work is on evaluating and benchmarking state-of-the-art RL models against algorithms for continuous MDPs that leverage LMDPs, such as Z-Learning (see the illustrative sketch following this record). The aim is to improve the performance and scalability of these algorithms in larger and more intricate domains. We investigate efficient methods for optimal action selection and value function approximation within the linear framework. To enable a fair comparison with traditional MDP-based RL, we develop methods for embedding MDPs into LMDPs and vice versa, which allows us to benchmark both families of algorithms on the same problems. Furthermore, our research rigorously explores the factors that influence the learning behavior of algorithms in the context of linearly-solvable MDPs. In particular, we analyze the impact of different exploration strategies and their effectiveness across diverse scenarios. Through these analyses, our study contributes insights into the optimization and enhancement of reinforcement learning algorithms.
  • Description

    Tutor: Anders Johnson
    Bachelor's thesis in Mathematical Engineering in Data Science
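Illustrative sketch (not part of the thesis record): the abstract refers to Z-Learning, the model-free method for LMDPs in which the desirability function z(x) = exp(-v(x)) satisfies a linear fixed-point equation, z(x) = exp(-q(x)) E_p[z(x')], under the passive dynamics p. The following minimal tabular sketch shows the standard Z-learning update and how the optimal controlled transition probabilities can be recovered; the function names, sampling interface, and step size are assumptions for illustration only and do not reflect the thesis's implementation.

    import numpy as np

    def z_learning(samples, n_states, alpha=0.1):
        """Tabular Z-learning sketch (illustrative, not the thesis code).

        `samples` is assumed to be an iterable of (x, cost, x_next)
        transitions observed under the passive dynamics of a finite LMDP.
        """
        z = np.ones(n_states)  # desirability z(x) = exp(-v(x))
        for x, cost, x_next in samples:
            # Stochastic approximation of z(x) = exp(-q(x)) * E_p[ z(x') ]
            z[x] = (1.0 - alpha) * z[x] + alpha * np.exp(-cost) * z[x_next]
        return z

    def controlled_dynamics(z, passive_p):
        """Optimal controlled transitions u*(x'|x) proportional to p(x'|x) z(x')."""
        u = passive_p * z[np.newaxis, :]
        return u / u.sum(axis=1, keepdims=True)

The key point the sketch illustrates is that the update involves no maximization over actions: it is linear in z, which is what makes LMDP-based methods attractive in large or continuous state spaces compared with Q-Learning-style updates.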