Goal recognition over POMDPs: inferring the intention of a POMDP agent

Citation

  • Ramírez M, Geffner H. Goal recognition over POMDPs: inferring the intention of a POMDP agent. In: Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence; July 16-22, 2011; Barcelona. Menlo Park, California: AAAI Press; 2011. p. 2009-2014.

Description

  • Abstract

    Plan recognition is the problem of inferring the goals and plans of an agent from partial observations of her behavior. Recently, it has been shown that the problem can be formulated and solved using planners, reducing plan recognition to plan generation. In this work, we extend this model-based approach to plan recognition to the POMDP setting, where actions are stochastic and states are partially observable. The task is to infer a probability distribution over the possible goals of an agent whose behavior results from a POMDP model. The POMDP model is shared between agent and observer except for the true goal of the agent that is hidden to the observer. The observations are action sequences O that may contain gaps as some or even most of the actions done by the agent may not be observed. We show that the posterior goal distribution P(G|O) can be computed from the value function V_G(b) over beliefs b generated by the POMDP planner for each possible goal G. Some extensions of the basic framework are discussed, and a number of experiments are reported.
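The abstract's central computation, obtaining the posterior P(G|O) from the per-goal value functions V_G(b), amounts to a Bayes-rule combination of per-goal likelihoods with a prior over goals. The sketch below is illustrative only: it assumes a Boltzmann-style likelihood P(O|G) proportional to exp(beta * V_G(b)) over the values a POMDP planner returns for the belief reached under the observations O, which is a common modeling choice but not necessarily the exact likelihood derived in the paper; the function name and inputs are hypothetical.

```python
import math

def goal_posterior(value_given_goal, prior=None, beta=1.0):
    """Bayes posterior over goals from per-goal values (illustrative sketch).

    value_given_goal: dict mapping goal G -> V_G(b), the value a POMDP planner
        (solving for goal G) assigns to the belief reached after the observed
        action sequence O.
    prior: optional dict of prior probabilities P(G); uniform if None.
    beta: temperature of the assumed Boltzmann likelihood
        P(O|G) proportional to exp(beta * V_G(b)).
    """
    goals = list(value_given_goal)
    if prior is None:
        prior = {g: 1.0 / len(goals) for g in goals}
    # Subtract the maximum value before exponentiating for numerical stability.
    vmax = max(value_given_goal.values())
    unnorm = {g: math.exp(beta * (value_given_goal[g] - vmax)) * prior[g]
              for g in goals}
    z = sum(unnorm.values())
    return {g: w / z for g, w in unnorm.items()}

# Example: three candidate goals with (hypothetical) planner values.
print(goal_posterior({"A": -12.0, "B": -7.5, "C": -9.1}))
```

Subtracting the maximum value before exponentiating leaves the normalized posterior unchanged while avoiding overflow when the values are large in magnitude.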