Guck, D., Timmer, M., Hatefi, H., Ruijters, E.J.J. and Stoelinga, M.I.A. (2014) Modelling and analysis of Markov reward automata (extended version). Technical Report TR-CTIT-14-06, Centre for Telematics and Information Technology, University of Twente, Enschede. ISSN 1381-3625
Costs and rewards are important ingredients for cyber-physical systems, modelling critical aspects like energy consumption, task completion, repair costs, and memory usage. This paper introduces Markov reward automata, an extension of Markov automata that allows the modelling of systems incorporating rewards (or costs) in addition to nondeterminism, discrete probabilistic choice and continuous stochastic timing. Rewards come in two flavours: action rewards, acquired instantaneously when taking a transition; and state rewards, acquired while residing in a state. We present algorithms to optimise three reward functions: the expected accumulative reward until a goal is reached; the expected accumulative reward until a certain time bound; and the long-run average reward. We have implemented these algorithms in the SCOOP/IMCA tool chain and show their feasibility via several case studies.
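To give a flavour of the first reward function, the sketch below computes the expected accumulated reward until a goal state is reached, by value iteration on a small discrete-time Markov chain with state rewards. This is an illustrative simplification only: the hypothetical model and numbers are not from the paper, and it omits the nondeterminism, continuous stochastic timing and action rewards that the Markov reward automata algorithms actually handle.

```python
# Hypothetical example: expected accumulated state reward until the goal,
# computed by value iteration. Satisfies E(s) = r(s) + sum_t P(s,t) * E(t),
# with E fixed at 0 in the goal state.

# States 0 and 1 are transient; state 2 is the (absorbing) goal.
P = {0: {0: 0.5, 1: 0.5},
     1: {0: 0.2, 2: 0.8}}
reward = {0: 1.0, 1: 2.0}      # state reward earned per step before the goal

E = {0: 0.0, 1: 0.0, 2: 0.0}   # expected reward-to-goal; goal stays at 0
for _ in range(1000):
    E = {s: reward.get(s, 0.0)
            + sum(p * E[t] for t, p in P.get(s, {}).items())
         for s in E}

print(round(E[0], 4))  # → 5.0
```

Here the fixed-point equations can be solved by hand (E(1) = 2 + 0.2·E(0), E(0) = 2 + E(1), giving E(0) = 5), which matches the iteration's limit; for the continuous-time, nondeterministic setting of the paper, the corresponding optimisation is substantially more involved.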