
EEMCS EPrints Service

Extending Markov Automata with State and Action Rewards

Guck, D. and Timmer, M. and Blom, S.C.C. (2014) Extending Markov Automata with State and Action Rewards. In: Proceedings of the 12th Workshop on Quantitative Aspects of Programming Languages and Systems (QAPL 2014), 12-13 April 2014, Grenoble, France. Inria. ISBN not assigned

Full text available as: 133 Kb (Open Access)



This presentation introduces the Markov Reward Automaton (MRA), an extension of the Markov automaton that allows the modelling of systems incorporating rewards in addition to nondeterminism, discrete probabilistic choice and continuous stochastic timing. Our models support both rewards that are acquired instantaneously when taking certain transitions (action rewards) and rewards that accumulate based on the duration for which certain conditions hold (state rewards). In addition to introducing the MRA model, we extend the process-algebraic language MAPA to easily specify MRAs. We also provide algorithms for computing the expected reward until reaching a given set of goal states, as well as the long-run average reward. We extended the MAMA tool chain (consisting of the tools SCOOP and IMCA) to implement the reward extension of MAPA and these algorithms.
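The abstract's notion of "expected reward until reaching a goal set" with both state and action rewards can be illustrated on a simplified discrete-time model. The sketch below is not the MAMA/IMCA implementation (which handles nondeterminism and continuous stochastic timing); it is a minimal value-iteration example for a plain Markov chain, and all names and the toy model are hypothetical.

```python
# Illustrative sketch: expected accumulated reward until reaching a goal set
# in a discrete-time Markov chain, combining state rewards (paid once per
# step spent in a state) and action/transition rewards (paid when a
# transition is taken). Not the paper's algorithm; a simplified analogue.

def expected_reward(P, state_rew, act_rew, goals, iters=10_000, tol=1e-12):
    """P[s] is a list of (prob, target) pairs for state s.
    state_rew[s] is the reward paid on leaving non-goal state s.
    act_rew[(s, t)] is the reward for taking the transition s -> t.
    Returns E, where E[s] is the expected total reward before
    entering the goal set."""
    n = len(P)
    E = [0.0] * n
    for _ in range(iters):
        new = [0.0] * n
        for s in range(n):
            if s in goals:
                continue  # goal states accumulate no further reward
            acc = state_rew[s]  # state reward for this step
            for p, t in P[s]:
                acc += p * (act_rew.get((s, t), 0.0) + E[t])
            new[s] = acc
        if max(abs(a - b) for a, b in zip(new, E)) < tol:
            return new
        E = new
    return E

# Toy model: from state 0, reach goal state 1 with probability 0.5
# (earning an action reward of 2.0), otherwise stay in state 0.
P = [[(0.5, 1), (0.5, 0)], [(1.0, 1)]]
E = expected_reward(P, state_rew=[1.0, 0.0],
                    act_rew={(0, 1): 2.0}, goals={1})
```

In the toy model, E[0] satisfies E[0] = 1 + 0.5·2 + 0.5·E[0], giving E[0] = 4: the expected number of steps (2) in state reward plus the one-off action reward on the transition to the goal.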

Item Type: Conference or Workshop Paper (Extended Abstract, Talk)
Research Group: EWI-FMT: Formal Methods and Tools
Research Program: CTIT-DSN: Dependable Systems and Networks
Research Projects: SYRUP: SYmbolic RedUction of Probabilistic Models; ArRangeer: smARt Railroad maintenance eNGinEERing with stochastic model checking; ROCKS: RigorOus dependability analysis using model ChecKing techniques for Stochastic systems
Uncontrolled Keywords: Markov automata, rewards, process algebra, expected reward, long-run average
ID Code: 24693
Deposited On: 26 May 2014
