Hilbert space embeddings of POMDPs

Yu Nishiyama, Abdeslam Boularias, Arthur Gretton, Kenji Fukumizu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

23 Scopus citations

Abstract

A nonparametric approach to policy learning for POMDPs is proposed. The approach represents distributions over the states, observations, and actions as embeddings in feature spaces, which are reproducing kernel Hilbert spaces. Distributions over states given the observations are obtained by applying the kernel Bayes' rule to these distribution embeddings. Policies and value functions are defined on the feature space over states, which leads to a feature space expression for the Bellman equation. Value iteration may then be used to estimate the optimal value function and associated policy. Experimental results confirm that the correct policy is learned using the feature space representation.
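The core idea of representing a belief over states as a weighted sum of feature maps can be illustrated with a minimal sketch. The snippet below uses a plain kernel conditional mean embedding as a simplified stand-in for the full kernel Bayes' rule in the paper; the toy 1-D state/observation model, the RBF bandwidth, the regularization constant, and the reward function are all hypothetical choices for illustration, not the paper's experimental setup.

```python
import numpy as np

def rbf_gram(X, Y, sigma=0.2):
    """Gaussian (RBF) kernel Gram matrix between 1-D sample sets X and Y."""
    d2 = (X[:, None] - Y[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma**2))

# Toy joint samples (s_i, o_i): hidden states with noisy observations.
rng = np.random.default_rng(0)
n = 200
states = rng.uniform(-1.0, 1.0, n)            # hidden states s_i
obs = states + 0.1 * rng.normal(size=n)       # observations o_i = s_i + noise

# Conditional mean embedding of P(S | O = o) in the state feature space:
#   mu_{S|o} = sum_i w_i(o) phi(s_i),  w(o) = (G_O + lam * n * I)^{-1} k_O(o)
G_O = rbf_gram(obs, obs)
lam = 1e-3
W_inv = np.linalg.solve(G_O + lam * n * np.eye(n), np.eye(n))

def belief_weights(o):
    """Weights w(o) so that the embedded belief is sum_i w_i phi(s_i)."""
    k_o = rbf_gram(obs, np.array([o]))[:, 0]
    return W_inv @ k_o

# A value (or reward) function defined on states can be evaluated against
# the embedded belief as a weighted sum over the state samples:
#   E[r(S) | o]  ~=  sum_i w_i(o) r(s_i)
def reward(s):
    return -s**2  # hypothetical reward, peaked at s = 0

w = belief_weights(0.0)
est = w @ reward(states)   # estimated expected reward given observation o = 0
```

Since the true posterior given o = 0 concentrates near s = 0, the estimate should be a small negative number; in the paper's method, the same weighted-sample representation is what the feature-space Bellman backup and value iteration operate on.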

Original language: English (US)
Title of host publication: Uncertainty in Artificial Intelligence - Proceedings of the 28th Conference, UAI 2012
Pages: 644-653
Number of pages: 10
State: Published - 2012
Externally published: Yes
Event: 28th Conference on Uncertainty in Artificial Intelligence, UAI 2012 - Catalina Island, CA, United States
Duration: Aug 15 2012 - Aug 17 2012

Publication series

Name: Uncertainty in Artificial Intelligence - Proceedings of the 28th Conference, UAI 2012

Other

Other: 28th Conference on Uncertainty in Artificial Intelligence, UAI 2012
Country/Territory: United States
City: Catalina Island, CA
Period: 8/15/12 - 8/17/12

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
