Planning with Markov decision processes : an AI perspective
- Responsibility
- Mausam and Andrey Kolobov.
- Imprint
- Cham, Switzerland : Springer, ©2012.
- Physical description
- 1 online resource (xvi, 194 pages) : illustrations
- Series
- Synthesis lectures on artificial intelligence and machine learning, 1939-4616 ; # 17
Description
Creators/Contributors
- Author/Creator
- Mausam.
- Contributor
- Kolobov, Andrey.
Contents/Summary
- Bibliography
- Includes bibliographical references (pages 163-185) and index.
- Contents
- Introduction -- MDPs -- Fundamental Algorithms -- Heuristic Search Algorithms -- Symbolic Algorithms -- Approximation Algorithms -- Advanced Notes.
- (source: Nielsen Book Data)
- Publisher's summary
Markov Decision Processes (MDPs) are widely popular in Artificial Intelligence for modeling sequential decision-making scenarios with probabilistic dynamics. They are the framework of choice when designing an intelligent agent that needs to act for long periods of time in an environment where its actions could have uncertain outcomes. MDPs are actively researched in two related subareas of AI, probabilistic planning and reinforcement learning. Probabilistic planning assumes known models for the agent's goals and domain dynamics, and focuses on determining how the agent should behave to achieve its objectives. On the other hand, reinforcement learning additionally learns these models based on the feedback the agent gets from the environment. This book provides a concise introduction to the use of MDPs for solving probabilistic planning problems, with an emphasis on the algorithmic perspective. It covers the whole spectrum of the field, from the basics to state-of-the-art optimal and approximation algorithms. We first describe the theoretical foundations of MDPs and the fundamental solution techniques for them. We then discuss modern optimal algorithms based on heuristic search and the use of structured representations. A major focus of the book is on the numerous approximation schemes for MDPs that have been developed in the AI literature. These include determinization-based approaches, sampling techniques, heuristic functions, dimensionality reduction, and hierarchical representations. Finally, we briefly introduce several extensions of the standard MDP classes that model and solve even more complex planning problems.
- (source: Nielsen Book Data)
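The "fundamental solution techniques" mentioned in the summary center on dynamic programming, most famously value iteration, which repeatedly applies Bellman backups until the value function converges. The sketch below is a minimal, self-contained illustration in Python; the toy states, actions, transition probabilities, and rewards are invented for this example and are not taken from the book.

```python
# Minimal value-iteration sketch for a toy discounted MDP.
# The MDP below (states s0, s1, goal; actions "stay", "go") is hypothetical.

# transitions[s][a] is a list of (probability, next_state, reward) outcomes.
transitions = {
    "s0":   {"stay": [(1.0, "s0", 0.0)],
             "go":   [(0.8, "s1", 0.0), (0.2, "s0", 0.0)]},
    "s1":   {"stay": [(1.0, "s1", 0.0)],
             "go":   [(0.9, "goal", 1.0), (0.1, "s1", 0.0)]},
    "goal": {"stay": [(1.0, "goal", 0.0)]},   # absorbing goal state
}
gamma = 0.95       # discount factor
epsilon = 1e-6     # convergence threshold on the Bellman residual

# Initialize the value function to zero for every state.
V = {s: 0.0 for s in transitions}

while True:
    residual = 0.0
    for s, actions in transitions.items():
        # Bellman backup: best expected one-step return over all actions.
        q_values = [
            sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
            for outcomes in actions.values()
        ]
        new_v = max(q_values)
        residual = max(residual, abs(new_v - V[s]))
        V[s] = new_v
    if residual < epsilon:
        break

# Extract a greedy policy from the converged value function.
policy = {
    s: max(actions, key=lambda a: sum(p * (r + gamma * V[s2])
                                      for p, s2, r in actions[a]))
    for s, actions in transitions.items()
}
print(V)
print(policy)
```

The heuristic search and symbolic algorithms surveyed in the book refine this same backup, for example by restricting it to states reachable from the initial state or by operating on structured (factored) representations rather than a flat state table.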
Subjects
- Subjects
- Artificial intelligence > Mathematical models.
- Markov processes.
- Intelligence artificielle > Modèles mathématiques.
- Processus de Markov.
- COMPUTERS > Enterprise Applications > Business Intelligence Tools.
- COMPUTERS > Intelligence (AI) & Semantics.
- MDP
- AI planning
- probabilistic planning
- uncertainty in AI
- sequential decision making under uncertainty
- reinforcement learning
Bibliographic information
- Publication date
- 2012
- Series
- Synthesis lectures on artificial intelligence and machine learning, 1939-4616 ; # 17
- Note
- Part of: Synthesis digital library of engineering and computer science.
- Referenced in
- Compendex
- INSPEC
- Google Scholar
- Google Book Search
- ISBN
- 9781608458875 (electronic bk.)
- 1608458873 (electronic bk.)
- 9781608458868 (pbk.)
- 9783031015595 (electronic bk.)
- 3031015592 (electronic bk.)
- DOI
- 10.2200/S00426ED1V01Y201206AIM017
- 10.1007/978-3-031-01559-5