Efficient simulation techniques for stochastic model checking.
PhD thesis, University of Twente.
CTIT Ph.D.-thesis series No. 13-281
Official URL: http://dx.doi.org/10.3990/1.9789036535861
In this thesis, we focus on methods for speeding up computer simulations of stochastic models. We are motivated by real-world applications in which corporations formulate service requirements as an upper bound on a probability of failure. If one wants to check whether a complex system model satisfies such a requirement, computer simulation is often the method of choice. We aim to aid engineers during the design phase, so a question of both practical and mathematical relevance is how the runtime of the simulation can be minimised.
We focus on two settings in which a speed-up can be achieved. First, when the probability of failure is low, as is typical in a highly reliable system, the time before the first failure is observed can be impractically large. Our research involves importance sampling: we simulate under a different probability measure in which failure is more likely, which typically decreases the variance of the estimator. To keep the estimator unbiased, we compensate for the change of measure using the likelihood ratio, i.e., the Radon-Nikodym derivative of the original measure with respect to the new one. If done correctly, the gains can be huge. However, if the new probability measure is ill-suited to the problem setting, the negative consequences can be similarly profound (infinite variance is even possible). In our work, we have extended an importance sampling technique with good performance (i.e., proven to have bounded relative error), previously applicable only in restricted settings, to the broad model class of stochastic (Markovian) Petri nets. We have also proposed methods to alleviate two well-known problems from the rare-event simulation literature: the occurrence of so-called high-probability cycles and the applicability to large time horizons. For the first we use a method based on Dijkstra’s algorithm; for the second we use renewal theory.
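To illustrate the change-of-measure idea in its simplest form (this sketch is our own illustration, not taken from the thesis; the distribution, threshold, and function names are illustrative choices), the following estimates a small exceedance probability for an exponential random variable. Sampling from a heavier distribution makes the rare event common, and each hit is reweighted by the likelihood ratio so the estimator remains unbiased:

```python
import math
import random

def importance_sampling(n, c=4.0, lam=0.25):
    """Estimate p = P(X > c) for X ~ Exp(1) by sampling from Exp(lam),
    with lam < 1 so that large values ('failures') are far more likely.
    Each hit is weighted by the likelihood ratio f(x)/g(x), which keeps
    the estimator unbiased under the new measure."""
    total = 0.0
    for _ in range(n):
        x = random.expovariate(lam)          # draw from the new measure
        if x > c:
            # likelihood ratio: e^(-x) / (lam * e^(-lam*x))
            total += math.exp(-x) / (lam * math.exp(-lam * x))
    return total / n

if __name__ == "__main__":
    random.seed(1)
    print("true p :", math.exp(-4.0))        # P(Exp(1) > 4) = e^(-4)
    print("IS est.:", importance_sampling(100_000))
```

With a poorly chosen sampling distribution (e.g. lam much smaller than needed), the weights become highly skewed and the variance can blow up, which is the failure mode the abstract warns about.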
Second, the number of simulation runs needed is often overestimated. As a solution, we use sequential hypothesis testing, which allows us to stop as soon as we can say whether the service requirement is satisfied. This area has seen much recent interest from the model checking community, but some of the techniques used are not always perfectly understood. In our research we have compared the techniques implemented in the most popular model checking tools, identified several common pitfalls, and proposed a method that provably avoids them. In particular, we have proposed a new test for which we bounded the probability of an incorrect conclusion using martingale theory.
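The classical example of a sequential test is Wald's sequential probability ratio test (SPRT); the sketch below is a generic SPRT for a Bernoulli failure probability, not the thesis's own martingale-based test, and the parameter names and thresholds are illustrative. It stops as soon as the accumulated evidence crosses a boundary, rather than after a pre-set number of runs:

```python
import math

def sprt(sample, p0, p1, alpha=0.05, beta=0.05, max_runs=1_000_000):
    """Wald's sequential probability ratio test for a Bernoulli failure
    probability p: H0: p = p0 versus H1: p = p1 (with p1 < p0).
    Performs one simulation run at a time and stops as soon as the
    accumulated log-likelihood ratio crosses a decision boundary."""
    lower = math.log(beta / (1 - alpha))     # crossing -> accept H0
    upper = math.log((1 - beta) / alpha)     # crossing -> accept H1
    llr = 0.0
    for n in range(1, max_runs + 1):
        if sample():                         # one run; True means failure
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n                   # evidence that p <= p1
        if llr <= lower:
            return "H0", n                   # evidence that p >= p0
    return "undecided", max_runs

if __name__ == "__main__":
    import random
    random.seed(2)
    # true failure probability 0.01; indifference region [p1, p0] = [0.02, 0.05]
    decision, runs = sprt(lambda: random.random() < 0.01, p0=0.05, p1=0.02)
    print(decision, "after", runs, "runs")
```

The expected run length depends on how far the true p lies from the indifference region: the clearer the case, the sooner the test stops, which is exactly the source of the speed-up over a fixed sample size.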
|Item Type:||PhD Thesis|
|Supervisors:||Haverkort, B.R.H.M. and Boucherie, R.J.|
|Assistant Supervisors:||de Boer, P.T. and Scheinhardt, W.R.W.|
|Research Group:||EWI-DACS: Design and Analysis of Communication Systems, EWI-SOR: Stochastic Operations Research|
|Research Program:||CTIT-DSN: Dependable Systems and Networks|
|Research Project:||McSTORES: Model Checking Stochastic Systems using Rare Event Simulation|
|Uncontrolled Keywords:||Rare event simulation, Importance sampling, Statistical model checking, Dependable systems|
|Deposited On:||15 December 2013|