Goals
========================================

The Model Checking Contest (MCC) is a yearly event that assesses existing
verification tools for concurrent systems on a set of models (i.e., benchmarks)
proposed by the scientific community. All tools are compared on the same
benchmarks and using the same computing platform, so that a fair comparison can
be made, contrary to most scientific publications, in which different
benchmarks are executed on different platforms.

Another goal of the Model Checking Contest is to draw conclusions about the
respective efficiency of verification techniques for Petri nets (decision
diagrams, partial orders, symmetries, etc.) depending on the particular
characteristics of the models under analysis. Through this feedback on tool
efficiency, we aim to identify which techniques can best tackle a given
class of models.

Finally, the Model Checking Contest seeks to be a friendly place where
developers meet, exchange, and collaborate to enhance their verification tools.

The Model Checking Contest is organized in three steps:

- the Call for Models,
- the present Call for Tools,
- and the Contest itself.

NOVELTY IN 2019
========================================

The MCC will take place jointly with the TOOLympics 2019, which will gather a
dozen established competitions to celebrate their achievements in the field of
Tools and Algorithms for the Construction and Analysis of Systems (TACAS).

The MCC will be part of the TOOLympics as one of the participating
competitions, in Prague, April 6-7, 2019. Consequently, MCC participants
should pay close attention to the deadlines in the calls, as they occur very
early for the 2019 edition.

Participating tools are invited to contact the MCC organizers to elaborate a
common paper providing short, consistent descriptions of the participating
tools. This paper will be part of a dedicated "special 25th anniversary"
proceedings (published by Springer), with papers coming from the various
competitions of the TOOLympics.

IMPORTANT: Participating tools for 2019 are invited to provide up to 2 pages
(LNCS style) describing the submitted tool and the novelty to be evaluated in
the MCC'2019, by January 7, 2019 at the latest. This contribution will be
integrated into a common paper describing the MCC'2019, to be included in the
25th anniversary proceedings of TACAS. Developers of registered tools will be
contacted with more details on the contents of this paper.

Call for Tools
========================================

For the 2019 edition, we kindly invite the developers of verification tools
for concurrent systems to participate in the MCC. Each tool will be assessed
both on the accumulated collection of MCC models (the "known" models, see
http://mcc.lip6.fr/models.php) and on the new models selected during the 2019
edition (the "surprise" models, see http://mcc.lip6.fr/cfm.php).

The benchmarks on which tools will be assessed are colored Petri nets and/or
P/T nets. Some P/T nets are provided with additional information giving a
hierarchical decomposition into sequential machines (these models are called
Nested-Unit Petri nets; see http://mcc.lip6.fr/nupn.php for more
information): tools may wish to exploit this information to increase
performance and scalability.

Each tool may compete in one or more categories of problems, such as
reachability analysis, evaluation of CTL formulas, evaluation of LTL
formulas, etc.

Tools have to be submitted in binary form. Each submitted tool will be run
by the MCC organizers in a virtual machine (typically configured with up to 4
cores, 16 GB of RAM, and a time confinement of 60 minutes per run, i.e., per
instance of a model). Last year, more than 1500 days of CPU time were
invested in the MCC. The contest relies on BenchKit
(https://github.com/fkordon/BenchKit), a dedicated execution environment for
monitoring the execution of processes and gathering data.

By submitting a tool, you explicitly allow the organizers of the Model Checking
Contest to publish this tool in binary form on the MCC web site, so that
experiments can be reproduced by others after the contest. Detailed information
is available at http://mcc.lip6.fr/rules.php.

Note: to submit a tool, it is not required to have submitted any model to the
MCC Call for Models. However, it is strongly recommended to pre-register your
tool using the dedicated form (http://mcc.lip6.fr/registration.php) before
February 1, 2019. You will then be kept informed of how the contest is
proceeding. The sooner, the better.

IMPORTANT: based on discussions between the organizers and the tool
developers, the 2019 edition introduces some changes to increase the accuracy
of the contest. Please have a close look at the submission manual, which
includes these changes. The following changes may have an impact on you:

- Models now embed information about known properties in their model forms
  (when available). The way this information is described is presented here.
  You may, for example, check whether a model is known to be «safe» or not.
- The default virtual machine is now split into two disk images:
  mcc2019.vmdk is the bootable one, which you update with your tool; it
  mounts mcc2019-input.vmdk in read-only mode, which contains the models and
  formulas for the contest.
- The grammar for formulas has not changed, so the formulas for 2019 will be
  published with the surprise models.
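The changes above mean a tool can read models and known-property information
from the mounted input disk at run time. As a minimal sketch (the directory
and file names below are assumptions for illustration, not the official MCC
layout, which is specified in the submission manual), a launch script could
consult such a property flag before choosing an algorithm; here the mounted
input disk is mocked with a temporary directory so the sketch runs anywhere:

```shell
# Hypothetical sketch: mock the read-only input disk (in the contest,
# mcc2019-input.vmdk is mounted by the VM) with a temporary directory.
INPUT_DIR=$(mktemp -d)
MODEL_DIR="$INPUT_DIR/MyModel-PT-001"        # hypothetical model instance name
mkdir -p "$MODEL_DIR"
echo "safe: true" > "$MODEL_DIR/model-properties.txt"   # assumed form file name

# A tool could branch on a known property, e.g. pick a 1-safe-specific
# algorithm when the model form says the net is safe.
if grep -q '^safe: true' "$MODEL_DIR/model-properties.txt" 2>/dev/null; then
    echo "model is flagged safe"             # prints "model is flagged safe"
else
    echo "no safety information available"
fi
```

The same pattern extends to any other property published in the model forms:
read the flag once at start-up, then select the verification strategy
accordingly.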

Important Dates
========================================

Oct. 26, 2018: publication of the updated 2019 contest rules at
        http://mcc.lip6.fr/pdf/rules.pdf
Dec. 5, 2018: publication of the Tool Submission Kit, available
        from http://mcc.lip6.fr/archives/ToolSubmissionKit.tar.gz
Jan. 7, 2019: deadline for tool pre-registration. If you plan to submit a tool
        to the contest, please fill in the pre-registration form available
        from http://mcc.lip6.fr/registration.php
Feb. 13, 2019: deadline for tool submission
Feb. 28, 2019: early feedback to tool submitters, following the preliminary
        qualification runs, which are performed using a few small instances
        of the "known" models
April 6-7, 2019: official announcement of the MCC'2019 results during the
        TOOLympics 2019 at TACAS'19 (Prague, Czech Republic)

Committees
========================================

General Chairs

- Didier Buchs - Univ. Geneva, Switzerland
- Fabrice Kordon - Sorbonne Université, France

Execution Monitoring Board

- Francis Hulin-Hubard - CNRS and Sorbonne Université, France
- Fabrice Kordon - Sorbonne Université, France

Model Board

- Hubert Garavel - Inria/LIG, France
- Lom Messan Hillah - Univ. Paris Nanterre, France
- Fabrice Kordon - Sorbonne Université, France

Formula Board

- Loïg Jezequel - Univ. Nantes, France
- Emmanuel Paviot-Adet - Univ. Paris 5, France
- César Rodríguez - Univ. Paris 13, France

--------------------------------------------------------------------------------------
Fabrice Kordon          
Sorbonne Université
Campus Pierre & Marie Curie
LIP6/MoVe, Bureau 26/25-212
4 place Jussieu, 75252 Paris Cedex 05
http://lip6.fr/Fabrice.Kordon/