Result: Concurrent algorithms and data structures for model checking

Title:
Concurrent algorithms and data structures for model checking
Authors:
van de Pol, J.
Contributors:
Fokkink, Wan (ed.); van Glabbeek, Rob (ed.)
Source:
van de Pol, J 2019, Concurrent algorithms and data structures for model checking. in W Fokkink & R van Glabbeek (eds), 30th International Conference on Concurrency Theory, CONCUR 2019, 4, Dagstuhl Publishing, Leibniz International Proceedings in Informatics, LIPIcs, vol. 140, 30th International Conference on Concurrency Theory, CONCUR 2019, Amsterdam, Netherlands, 27/08/2019. https://doi.org/10.4230/LIPIcs.CONCUR.2019.4
Publisher Information:
Dagstuhl, 2019.
Publication Year:
2019
Document Type:
Conference object
Language:
English
DOI:
10.4230/LIPIcs.CONCUR.2019.4
Accession Number:
edsair.dedup.wf.002..ae6e3a4561e7ee4b3d0c2652632d78e3
Database:
OpenAIRE

Further Information

Model checking is a successful method for checking properties on the state space of concurrent, reactive systems. Since it is based on exhaustive search, scaling this method to industrial systems has been a challenge since its conception. Research has focused on clever data structures and algorithms, to reduce the size of the state space or its representation; smart search heuristics, to reveal potential bugs and counterexamples early; and high-performance computing, to deploy the brute-force processing power of clusters of compute servers. The main challenge is to combine these approaches: brute force alone (when implemented carefully) can bring a linear speedup in the number of processors. This is great, since it reduces model-checking times from days to minutes. On the other hand, proper algorithms and data structures can lead to exponential gains. Therefore, the parallelization bonus is only real if we manage to speed up clever algorithms. There are some obstacles, though: many linear-time graph algorithms depend on a depth-first exploration order, which is hard to parallelize. Examples include the detection of strongly connected components (SCCs) and the nested depth-first search (NDFS) algorithm; both are used in model checking LTL properties. Symbolic representations, like binary decision diagrams (BDDs), reduce model checking to “pointer-chasing”, leading to irregular memory-access patterns. This poses severe challenges to achieving actual speedup on (clusters of) modern multi-core computer architectures. This talk presents some of the solutions found over the last 10 years, which led to the high-performance model checker LTSmin [2]. These include parallel NDFS (based on the PhD thesis of Alfons Laarman [3]), the parallel detection of SCCs with concurrent union-find (based on the PhD thesis of Vincent Bloemen [1]), and concurrent BDDs (based on the PhD thesis of Tom van Dijk [4]). Finally, I will sketch a perspective on moving forward from high-performance model checking to high-performance synthesis algorithms. Examples include parameter synthesis for stochastic and timed systems, and strategy synthesis for (stochastic and timed) games.
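
To make the two technical obstacles above concrete, here are two brief sketches in C++. They are illustrations written for this summary, not code from LTSmin or the cited theses; the class names, the state-index graph representation, and the index-based linking rule in the second sketch are choices made here. The first sketch shows the basic sequential NDFS scheme for accepting-cycle detection, whose reliance on depth-first post-order is what resists parallelization:

#include <cstddef>
#include <vector>

// Sequential nested depth-first search (NDFS) for accepting-cycle detection,
// in its basic two-colour form. Multi-core variants differ substantially;
// this only illustrates the reliance on DFS order.
struct NdfsChecker {
    std::vector<std::vector<std::size_t>> succ;  // successor lists per state
    std::vector<bool> accepting;                 // Buchi acceptance per state
    std::vector<bool> cyan, blue, red;           // colour sets
    bool cycle_found = false;

    explicit NdfsChecker(std::size_t n)
        : succ(n), accepting(n, false),
          cyan(n, false), blue(n, false), red(n, false) {}

    void dfs_red(std::size_t s) {
        red[s] = true;
        for (std::size_t t : succ[s]) {
            if (cyan[t]) { cycle_found = true; return; }  // reached the blue stack: accepting cycle
            if (!red[t]) dfs_red(t);
            if (cycle_found) return;
        }
    }

    void dfs_blue(std::size_t s) {
        cyan[s] = true;                                   // s is on the blue DFS stack
        for (std::size_t t : succ[s]) {
            if (!blue[t] && !cyan[t]) dfs_blue(t);
            if (cycle_found) return;
        }
        if (accepting[s]) dfs_red(s);                     // nested search, started in post-order
        cyan[s] = false;
        blue[s] = true;
    }

    bool has_accepting_cycle(std::size_t init) {
        dfs_blue(init);
        return cycle_found;
    }
};

The correctness of the nested (red) search hinges on it being launched in the post-order of the blue search, which is exactly what a naive parallelization breaks.

The second sketch is a minimal lock-free union-find using atomic compare-and-swap, the kind of shared structure underlying the concurrent SCC algorithm. Roughly speaking, Bloemen's actual data structure adds per-set bookkeeping, such as the set's states and the workers visiting them, which is omitted here:

#include <atomic>
#include <cstddef>
#include <utility>
#include <vector>

// Minimal lock-free union-find with path halving, linking roots by index order.
class ConcurrentUnionFind {
public:
    explicit ConcurrentUnionFind(std::size_t n) : parent(n) {
        for (std::size_t i = 0; i < n; ++i) parent[i].store(i);
    }

    // Return the representative (root) of x, shortening the path as we go.
    std::size_t find(std::size_t x) {
        while (true) {
            std::size_t p = parent[x].load();
            std::size_t gp = parent[p].load();
            if (p == gp) return p;                     // p is a root
            parent[x].compare_exchange_weak(p, gp);    // path halving; harmless if it races
            x = gp;
        }
    }

    // Merge the sets of a and b; returns true if they were previously distinct.
    bool unite(std::size_t a, std::size_t b) {
        while (true) {
            std::size_t ra = find(a);
            std::size_t rb = find(b);
            if (ra == rb) return false;
            if (ra < rb) std::swap(ra, rb);            // always hook the larger index under the smaller
            std::size_t expected = ra;
            if (parent[ra].compare_exchange_strong(expected, rb)) return true;
            // CAS failed: ra gained a parent concurrently; retry with fresh roots.
        }
    }

private:
    std::vector<std::atomic<std::size_t>> parent;
};

Roughly, each worker calls unite on states it discovers to lie on a common cycle, so the states of an SCC gradually converge into a single set shared by all workers.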