++++++++++++++++++ FME2001: Review reports for paper 67 +++++++++++++++++++++

Dear author(s): this file was extracted automatically from a very large
mailbox. In case you find any problem concerning mail encodings (or any
other kind of anomaly disturbing your understanding of the reviews) please
email any of the PC co-chairs (pamela@research.att.com, jno@di.uminho.pt).

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

PAPER NUMBER: 67
CATEGORY: 1
TITLE: Model Checking Safety Critical Systems
AUTHOR(S): C. Bernardeschi, Alessandro Fantechi, S. Gnesi

--------------------------------------------------------------------------------

1. Briefly SUMMARIZE the paper (2-3 lines):

The paper argues that safety-critical systems are the next candidates for the
successful industrial use of model-checking techniques. This is based on the
observation that the state space of such systems can be reduced via a
reorganisation which is easy to deal with.

2. RELEVANCE: Please provide a rating of the paper's relevance to the FME
Symposium, using the scale:
   0 = Out of scope
   1 = Marginal interest
   2 = Minority interest
   3 = Majority interest
   4 = Outstanding interest

Numeric Rating: 2

Please comment on your rating:

The authors' argument is good and novel in its presentation, but it is simply
not enough to solve the state-explosion problem for real systems entirely. So,
just as the authors suggest that decomposition and abstraction are only
partially successful, I believe that their proposal is also a partial
contribution that could be added to domain-specific applications.

3. OVERALL RECOMMENDATION: Please provide a rating of the paper's
acceptability, using the scale:
   1 = Strong reject
   2 = Weak reject
   3 = Could go either way
   4 = Weak accept
   5 = Strong accept

Numeric Rating: 3

NB: There should be a correlation between the two ratings above.

4. CONFIDENCE LEVEL: Please provide a rating of your expertise in the area
addressed by the paper, using the scale:
   1 = Know virtually nothing about this area
   2 = Not too knowledgeable, but I know a bit
   3 = Know a moderate amount, about average I'd say
   4 = Not my main area of work, but I know a lot about it
   5 = My area of work, I know it extremely well

Numeric Rating: 3

NB: PC members are responsible for ensuring that 1 is not used here.

5. ORIGINALITY. What is NEW and SIGNIFICANT in the work reported here?

Comment: The importance of the work lies in its focus on particular
characteristics of safety-critical systems which can help when applying model
checking. It is new in the way it points out a specific treatment of
safety-critical systems, but there exist other works which carry this result
implicitly (see Item 12).

6. How WORTHWHILE is the goal of the authors?

Comment: A critical problem with the application of model checking to
industrial systems is the state-space explosion. The authors show that this
can be mitigated by a particular characteristic observed in safety-critical
systems, specifically in railway interlocking, namely by reorganising the
redundant parts of the system.

7. How well is this goal EXPLAINED and JUSTIFIED?

Comment: Considering that their proposal potentially reduces the state space,
the goal is well explained and justified. However, I am not sure that this
result implies a revolutionary step towards the acceptance of model checking
in the safety-critical industry. Thus, the overall justification is still
partial in my view.

8. TECHNICAL QUALITY. Are the technical parts (definitions, statements,
specifications, proofs, algorithms, etc.) SOUND?
Comment: The whole paper is very informal. The one part which is more formal
is sound, although no proof is given.

9. APPLICABILITY. If the work is primarily theoretical or conceptual, are the
implications in terms of applicability adequately justified? If the paper is
about a new formal technique, are satisfactory arguments presented in favor of
the new technique? If a methodology is proposed, is it sound? If experimental
results are presented, is their relevance justified?

Comment: As explained in Item 7, I think that the result is interesting but
not the definitive point needed to argue that safety-critical systems are the
next candidates to share the success of model checking. It is rather an
orthogonal effort which can be applied to deal with larger systems.

10. PRESENTATION: Describe the QUALITY of the writing, including suggestions
for changes where appropriate.

Comment: In general, the paper is well written. There is only a small problem
in the 4th paragraph of Section 4.1, where ``global'' is written as
``gloabal''.

11. Were there any formatting or mechanical problems with this paper?: No
    Are the figures and length acceptable?: Yes
    Are the references correct?: Reference 13 has the word ``Verification''
    missing its first i.

12. OTHER COMMENTS you believe would be useful to the author(s), including
pointers to missing relevant work:

A more formal presentation and pointers to related formal work are missing.
For example: Roscoe, A.W. and Brookes, S.D., "Deadlock analysis in networks of
communicating processes", Distributed Computing (1991) 4:209-230,
Springer-Verlag.

+++++++++++++++++++++ End of FME 2001 Paper Review Report ++++++++++++++++++++++

PAPER NUMBER: 67
CATEGORY: 1
TITLE: Model Checking Safety Critical Systems
AUTHOR(S): C. Bernardeschi, Alessandro Fantechi, S. Gnesi

--------------------------------------------------------------------------------

1. Briefly SUMMARIZE the paper (2-3 lines):

The paper gives an overview of key formal methods that can potentially be used
for verifying safety-critical systems, and the reasons why they have not been
very successful in penetrating the safety-critical industry. It then shows how
redundancy, which is typical of systems in this domain, can be exploited to
reduce the state space.

2. RELEVANCE: Please provide a rating of the paper's relevance to the FME
Symposium, using the scale:
   0 = Out of scope
   1 = Marginal interest
   2 = Minority interest
   3 = Majority interest
   4 = Outstanding interest

Numeric Rating: 3

Please comment on your rating:

3. OVERALL RECOMMENDATION: Please provide a rating of the paper's
acceptability, using the scale:
   1 = Strong reject
   2 = Weak reject
   3 = Could go either way
   4 = Weak accept
   5 = Strong accept

Numeric Rating: 2

NB: There should be a correlation between the two ratings above.

4. CONFIDENCE LEVEL: Please provide a rating of your expertise in the area
addressed by the paper, using the scale:
   1 = Know virtually nothing about this area
   2 = Not too knowledgeable, but I know a bit
   3 = Know a moderate amount, about average I'd say
   4 = Not my main area of work, but I know a lot about it
   5 = My area of work, I know it extremely well

Numeric Rating: 4

NB: PC members are responsible for ensuring that 1 is not used here.

5. ORIGINALITY. What is NEW and SIGNIFICANT in the work reported here?

Comment: Exploiting redundancy in safety-critical systems to reduce the state
space is, as far as I know, new.

6. How WORTHWHILE is the goal of the authors?

Comment: It is not clear what their goal is.
On the one hand they make very broad statements about model checking of
safety-critical systems (including in the title); on the other hand their
contribution to the matter consists merely of offering one more optimization
that might be helpful in some cases (and there are no experiments to back it
up).

7. How well is this goal EXPLAINED and JUSTIFIED?

Comment: See above.

8. TECHNICAL QUALITY. Are the technical parts (definitions, statements,
specifications, proofs, algorithms, etc.) SOUND?

Comment: The paper hardly has any 'formal' arguments or definitions. The
introduction includes several technical mistakes and unclear statements (see
below).

9. APPLICABILITY. If the work is primarily theoretical or conceptual, are the
implications in terms of applicability adequately justified? If the paper is
about a new formal technique, are satisfactory arguments presented in favor of
the new technique? If a methodology is proposed, is it sound? If experimental
results are presented, is their relevance justified?

Comment: It all comes back to the structure, title and introduction of the
paper. These promise to help in verifying safety-critical systems, but only a
single untested suggestion is given, which by itself does not justify the
promise.

10. PRESENTATION: Describe the QUALITY of the writing, including suggestions
for changes where appropriate.

Comment:

1. Writing: the paper should be carefully edited, because it includes many
mistakes in English as well as incomprehensible sentences, especially in the
first half of the paper (Sections 1-3). I will mention some of these mistakes
below.

2. Structure: the paper could be worthwhile if Section 4.1 became its centre,
with a title such as 'Exploiting redundancy in model checking of safety
critical systems'. This means that almost the entire introduction is
superfluous. You can assume that people who are interested in the exploitation
of redundancy know about theorem proving and model checking. I'm not sure that
this is the right place for such a broad introduction to theorem proving,
model-theoretic approaches, explicit vs. symbolic model checking, etc., nor is
it the place to discuss the usage of model checking in different industries,
unless it supports your arguments later (the only argument it supports is that
state explosion is still a problem; even if your readers do not know this, it
can be stated explicitly without all the other irrelevant details). You can
replace the introduction with a short paragraph saying (what is common
knowledge) that every effort to reduce the state space can assist automatic
model checking of these kinds of systems and hence increase the usage of model
checking in these industries. As for Section 4.1 itself, it should be extended
further (a toy sketch of the kind of symmetry-based reduction at stake is
given after this report). The best way to show that exploiting redundancy has
a strong effect is by showing experimental results, if you can. Furthermore,
you explicitly say at the bottom of Section 4.1 that the modelling of fault
occurrences breaks the similarity of replicas. Could it be that experiments
with real designs will show that this is a crucial factor making the use of
redundancy for decreasing the state space impossible, or hardly worthwhile?
This is the only way to check the trade-off between more fault scenarios and a
smaller state space (with regard to your comment that only a restricted set of
fault scenarios should be checked).

3. Detailed comments: I mention only a few here; as noted above, the first
half should be rewritten entirely.
page | par      | line | comment
================================
1    | abstract | 4    | 'statement first' -> 'statement by first'
1    | abstract | 8    | 'experience of' -> 'experience with'
1    | abstract | -1   | 'cases studies' -> 'case studies'
1    | 1        | 2-4  | 'should promise ...' -> 'should guarantee that a computer ...'; note that here you refer only to safety properties (no liveness)?
1    | 3        | 6    | 'an high' -> 'a high'
1    | 3        | 8    | 'ensure that the correctness of implementations' -> 'ensure the correctness of the implementation'. In this paragraph you mention test case derivation after 'high level of confidence of the correctness of the system'; note that you need test cases only if you fail to fully apply formal verification.
2    | 1        | 2    | 'maturity of tool support' -> 'maturity of the tool and support'
2    | 2        | 4    | what is 'innovation cost'?
2    | 4        | 2    | 'categories,' -> 'categories:'
2    | 4        | 3    | 'modelled' -> 'modeled'
2    | 4        | 3    | beginning from 'the system state is modelled in terms of set theoretical ...': this is not clear, and probably not correct either. Systems are not necessarily modeled this way, so you should clarify what you mean, also in the rest of this sentence.
2    | 6        | 2    | 'satisfiability' -> I think here you mean 'validity'
2    | 6        | 8-   | the state explosion problem may arise for the control path, on a single sub-system, too; in any case it is worst-case exponential in the number of variables.
2    | 6        | 12   | by 'traditional' I assume you mean 'explicit model checking'; you should say this explicitly. Is there a reason why you mention this?
2    | 7        | 1    | remove ',' after 'techniques'
3    | 2        |      | The two interpretations which you present for the term 'model checking' are false. The second (non-exhaustive) one is not even considered a formal method: it is plain old testing and simulation, i.e. informal methods. In the first definition you include almost all algorithmic formal methods: equivalence checking, preorder checking, etc. The term 'model checking' normally refers to checking that a transition system is a model of (satisfies) some formula (the property); the term 'model' is taken from mathematical logic and has nothing to do with the fact that we check 'a model of the system'. You can use the preface of Clarke, Grumberg & Peled's book 'Model Checking'. (The standard formulation is recalled after this report.)
4    | 3        | 6    | 'require a large investment in people' -> 'require a lot of manual work'
4    | 3        | -1   | 'attacked' -> 'handled'
4    | 3        | 8    | 'an heavy' -> 'a heavy'
4    |          |      | The distinction you make between the industries should actually be a distinction between types of systems. It is correct that once the system includes a continuous part, model checking is harder, but all the industries which you mention also have systems based on discrete modeling. The fact is that there is some positive experience with model checking in aeronautics (e.g. British Aerospace), automotive (BMW, Siemens, which produces some components for this industry), etc. You might want to mention here the 'StateMate Verification Environment' introduced at CAV00 (W. Damm), which is aimed exactly at this market.
6    | 3        | 1    | what is 'relative corrective actions'?
7    | 2        | 5    | 'making difficult' -> 'making it difficult'

11. Were there any formatting or mechanical problems with this paper?:
    Are the figures and length acceptable?:
    Are the references correct?: Note that references 9, 11 and 14 are the
    same!

12. OTHER COMMENTS you believe would be useful to the author(s), including
pointers to missing relevant work:

+++++++++++++++++++++ End of FME 2001 Paper Review Report ++++++++++++++++++++++
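As a point of reference for the detailed comment above on page 3, paragraph 2:
the standard formulation it appeals to can be written as follows (textbook
notation, not taken from the paper under review). A Kripke structure
M = (S, S_0, R, L) satisfies a temporal-logic property \varphi when

    M \models \varphi \;\iff\; \forall s_0 \in S_0 .\; (M, s_0) \models \varphi

and, for a state described by n variables with finite domains D_1, ..., D_n,

    |S| \;\le\; \prod_{i=1}^{n} |D_i| \;=\; 2^n \quad \text{for Boolean variables},

which is the sense in which explicit state enumeration is worst-case
exponential in the number of variables.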
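As a toy illustration of the kind of reduction that point 2 of item 10 above
asks to be demonstrated experimentally, the following sketch (Python, purely
illustrative; the three-replica model, its local states and all names are
assumptions of this sketch, not the authors' construction) explores a small
replicated system once with and once without collapsing global states that
differ only by a permutation of identical replicas:

    N_REPLICAS = 3
    LOCAL_STATES = ("idle", "voting", "failed")   # toy local behaviour of one replica

    def successors(state):
        # Toy interleaving semantics: any single replica advances to its next local state.
        succs = []
        for i, local in enumerate(state):
            nxt = LOCAL_STATES[(LOCAL_STATES.index(local) + 1) % len(LOCAL_STATES)]
            succs.append(state[:i] + (nxt,) + state[i + 1:])
        return succs

    def canonical(state):
        # Replicas are interchangeable, so global states differing only by a
        # permutation of replicas are equivalent; sorting the local states
        # picks one representative per equivalence class.
        return tuple(sorted(state))

    def count_reachable(initial, reduce_symmetry):
        rep = canonical if reduce_symmetry else (lambda s: s)
        seen = {rep(initial)}
        frontier = [rep(initial)]
        while frontier:
            for succ in successors(frontier.pop()):
                r = rep(succ)
                if r not in seen:
                    seen.add(r)
                    frontier.append(r)
        return len(seen)

    init = ("idle",) * N_REPLICAS
    print("full state space:   ", count_reachable(init, reduce_symmetry=False))  # 27
    print("reduced state space:", count_reachable(init, reduce_symmetry=True))   # 10

For three interchangeable replicas with three local states each, this collapses
the 27 reachable global states to 10 equivalence classes; whether a comparable
saving survives once the modelling of fault occurrences breaks the similarity
of the replicas is exactly what experiments on real designs would have to show.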
PAPER NUMBER: 67
CATEGORY: 1
TITLE: Model Checking Safety Critical Systems
AUTHOR(S): C. Bernardeschi, Alessandro Fantechi, S. Gnesi

--------------------------------------------------------------------------------

1. Briefly SUMMARIZE the paper (2-3 lines):

This paper presents an argument that some of the characteristics of
safety-critical computing systems make them particularly suited for model
checking. A discussion of state-space management in redundant systems and in
Byzantine fault-tolerant systems supports the claim.

2. RELEVANCE: Please provide a rating of the paper's relevance to the FME
Symposium, using the scale:
   0 = Out of scope
   1 = Marginal interest
   2 = Minority interest
   3 = Majority interest
   4 = Outstanding interest

Numeric Rating: 3

Please comment on your rating:

Many FME participants will be acquainted with the development of
high-integrity and safety-critical systems.

3. OVERALL RECOMMENDATION: Please provide a rating of the paper's
acceptability, using the scale:
   1 = Strong reject
   2 = Weak reject
   3 = Could go either way
   4 = Weak accept
   5 = Strong accept

Numeric Rating: 1

NB: There should be a correlation between the two ratings above.

4. CONFIDENCE LEVEL: Please provide a rating of your expertise in the area
addressed by the paper, using the scale:
   1 = Know virtually nothing about this area
   2 = Not too knowledgeable, but I know a bit
   3 = Know a moderate amount, about average I'd say
   4 = Not my main area of work, but I know a lot about it
   5 = My area of work, I know it extremely well

Numeric Rating: 4

NB: PC members are responsible for ensuring that 1 is not used here.

5. ORIGINALITY. What is NEW and SIGNIFICANT in the work reported here?

Comment: The basic suggestion in the paper is that safety-critical systems
utilising certain strategies for fault tolerance are likely to be suited to
model checking on the basis of regularities in their structure. The suggestion
is not very original or surprising, but I am not aware of a paper on the
subject.

6. How WORTHWHILE is the goal of the authors?

Comment: I am not sure what the goal of the authors is. Perhaps it is to
encourage researchers to look at safety-critical systems for model-checking
applications. In that case, the goal is worthwhile.

7. How well is this goal EXPLAINED and JUSTIFIED?

Comment: The purpose of the paper is not explained very explicitly. It is
simply said that the purpose of the paper is to argue that safety-critical
systems will be the next application area for model checking.

8. TECHNICAL QUALITY. Are the technical parts (definitions, statements,
specifications, proofs, algorithms, etc.) SOUND?

Comment: The paper is short and written at an intuitive level. There is only
very limited technical content, with little discussion about, for example, the
range of suitable formalisms for representing classes of fault-tolerance
problems. There is little hard mathematical evidence to read in the paper.

9. APPLICABILITY. If the work is primarily theoretical or conceptual, are the
implications in terms of applicability adequately justified? If the paper is
about a new formal technique, are satisfactory arguments presented in favor of
the new technique? If a methodology is proposed, is it sound? If experimental
results are presented, is their relevance justified?

Comment: The paper is not really reporting a particular technical advance.

10. PRESENTATION: Describe the QUALITY of the writing, including suggestions
for changes where appropriate.

Comment: Generally literate. Would benefit from further copy-editing.
11. Were there any formatting or mechanical problems with this paper?: No
    Are the figures and length acceptable?: OK
    Are the references correct?: Yes

12. OTHER COMMENTS you believe would be useful to the author(s), including
pointers to missing relevant work:

I think that few would argue with the paper's thesis that fault tolerance is a
promising area for research into model checking, but you need a clearer focus
to the paper. I would urge revising the paper by adding considerably more
technical detail about the model-checking solution to, for example, the
verification problem of a Byzantine fault-tolerant clock synchronisation
algorithm (e.g. the simple fault-tolerant average; a minimal sketch of its
convergence function is given after this report). Such an example would be
simple enough to allow a comparison of verification strategies and to keep the
reader's attention. In addition, you need some real experimental evidence on
more substantial examples, and you need to develop some model-checking
strategies that can be applied in these cases.

The paper seems to be about fault tolerance more than safety per se. Is there
something special about safety-critical systems in this regard? What about
security- or mission-critical systems? Certainly in the security-critical
systems area, model checking already plays an important role (see, for
example, the work on FDR model checking of CSP models of secure systems in the
FM'99 proceedings). There is also extensive work on proof verification in
areas such as Byzantine agreement.

+++++++++++++++++++++ End of FME 2001 Paper Review Report ++++++++++++++++++++++
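The simple fault-tolerant average suggested in item 12 above is small enough to
state directly. The following sketch (Python; the function name, parameters and
example readings are illustrative assumptions, not taken from the paper under
review) shows the convergence function such a case study would be analysing:
each node collects all clock readings, discards the f smallest and f largest,
and adopts the average of the rest.

    def fault_tolerant_average(readings, f):
        # Discard the f smallest and f largest clock readings, average the rest.
        # With n >= 3f + 1 readings, of which at most f come from Byzantine
        # (arbitrarily faulty) clocks, every value kept after trimming lies
        # within the range of the correct readings, which is what makes
        # repeated resynchronisation rounds converge.
        n = len(readings)
        if n < 3 * f + 1:
            raise ValueError("need at least 3f + 1 readings to mask f faults")
        trimmed = sorted(readings)[f : n - f]
        return sum(trimmed) / len(trimmed)

    # Example: four clocks, one of them (1000.0) Byzantine; with f = 1 the
    # outlier is discarded and the correct clocks determine the new value.
    print(fault_tolerant_average([9.5, 10.0, 10.25, 1000.0], f=1))   # -> 10.125

A model-checking formulation of this would let the f faulty clocks behave
arbitrarily and check that the correct clocks nevertheless remain within a
bounded skew of one another, which is the kind of small, comparable case study
the report asks for.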