Chairs: C. Cirstea, F. Gadducci, H. Schlingloff Past Chairmen: M. Roggenbach, L. Schröder, T. Mossakowski, J. Fiadeiro, P. Mosses, H.-J. Kreowski
Fri, 15 March 2019 at 12:20 pm in Prague, Czech Republic
Joint work with: Sebastian Arming, Ezio Bartocci, Krishnendu Chatterjee, Joost-Pieter Katoen
Abstract: Markov decision processes (MDPs) are a well-studied class of models for planning and decision making in the presence of uncertainty. They are closely related to probabilistic systems and hence form the basis of probabilistic verification. Unlike in verification, the main goal in AI for such MDPs is to synthesise strategies that optimise a given objective, e.g. reachability. This is relatively easy for MDPs, but an open and interesting problem for parametric MDPs, in which some of the transition probabilities are unknown parameters. Some work has already been done on synthesising schedulers that optimise the maximal reachability probability. In our work we study different notions of optimality and focus in particular on synthesising expectation-optimal schedulers. My talk will be a gentle introduction to these topics and a brief presentation of our results so far.
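The abstract notes that synthesising reachability-optimal schedulers is relatively easy for (non-parametric) MDPs. A minimal sketch of the standard value-iteration approach, on a small hypothetical four-state MDP (all state and action names are illustrative, not from the talk):

```python
def max_reachability(mdp, goal, eps=1e-9):
    """Iterate the Bellman optimality operator for maximal reachability:
    V(s) = max_a sum_t P(t | s, a) * V(t), with V(goal) fixed to 1.
    Returns the value per state and a greedy memoryless scheduler."""
    values = {s: (1.0 if s == goal else 0.0) for s in mdp}
    while True:
        new, sched = {}, {}
        for s, actions in mdp.items():
            if s == goal:
                new[s] = 1.0
                continue
            # Pick the action maximising the expected successor value.
            best_a, best_v = None, 0.0
            for a, dist in actions.items():
                v = sum(p * values[t] for t, p in dist.items())
                if v > best_v:
                    best_a, best_v = a, v
            new[s], sched[s] = best_v, best_a
        if max(abs(new[s] - values[s]) for s in mdp) < eps:
            return new, sched
        values = new

# Hypothetical MDP: each state maps actions to distributions over successors.
mdp = {
    "s0": {"a": {"goal": 0.5, "sink": 0.5}, "b": {"s1": 1.0}},
    "s1": {"a": {"goal": 0.9, "sink": 0.1}},
    "goal": {"stay": {"goal": 1.0}},
    "sink": {"stay": {"sink": 1.0}},
}
values, sched = max_reachability(mdp, "goal")
# From s0 it is better to move to s1 first (0.9) than to gamble directly (0.5).
```

For a parametric MDP, where some of the probabilities above would be unknown parameters rather than concrete numbers, this fixed-point computation no longer applies directly, which is what makes the synthesis problem the talk addresses non-trivial.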
Slides