++++++++++++++++++ FME2001: - Review reports for paper 25 +++++++++++++++++++++
- Dear author(s): this file was extracted automatically from a very large
- mailbox. In case you find any problem concerning mail encodings (or any
- other kind of anomaly disturbing your understanding of the reviews) please
- email any of the PC co-chairs (pamela@research.att.com, jno@di.uminho.pt).
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

PAPER NUMBER: 25
CATEGORY: 2
TITLE: Coverage Directed Generation of System-Level Test Cases for the
Validation of a DSP System
AUTHOR(S): Laurent Arditi, Hédi Boufaïed, Arnaud Cavanié, Vincent Stehlé

--------------------------------------------------------------------------------

1. Briefly SUMMARIZE the paper (2-3 lines):

The authors present a complete methodology for validating digital circuits
through testing. Test generation is automated thanks to the use of formal
methods.

2. RELEVANCE: Please provide a rating of the paper's relevance to the FME
Symposium, using the scale:

   0 = Out of scope
   1 = Marginal interest
   2 = Minority interest
   3 = Majority interest
   4 = Outstanding interest

Numeric Rating: 3

Please comment your rating:

The methodology presented combines formal methods with manual effort. It is
an improvement over a previous, fully manual methodology. The contribution of
the formal method is underlined by a comparison of the two processes on a
large example.

3. OVERALL RECOMMENDATION: Please provide a rating of the paper's
acceptability, using the scale:

   1 = Strong reject
   2 = Weak reject
   3 = Could go either way
   4 = Weak accept
   5 = Strong accept

Numeric Rating: 4

NB: There should be a correlation between the two rates above.

4.
CONFIDENCE LEVEL: Please provide a rating of your expertise in the area
addressed by the paper, using the scale:

   1 = Know virtually nothing about this area
   2 = Not too knowledgeable, but I know a bit
   3 = Know a moderate amount, about average I'd say
   4 = Not my main area of work, but I know a lot about it
   5 = My area of work, I know it extremely well

Numeric Rating: 3

NB: PC members are responsible for ensuring that 1 is not used here.

5. ORIGINALITY. What is NEW and SIGNIFICANT in the work reported here?

Comment: Generation of tests for hardware *at the system level* (instead of
at the module level).

6. How WORTHWHILE is the goal of the authors?

Comment: Testing is important and expensive, since it is practically the only
validation method. It is important to find ways to automate testing, so that
the reliability of the system (hardware or software) is improved. What is
interesting here is that the methodology details each step and where the
manual effort should be spent.

7. How well is this goal EXPLAINED and JUSTIFIED?

Comment: The introduction of the formal method is proposed as an improvement
of an existing manual one. The justification relies on the comparison of the
two methods.

8. TECHNICAL QUALITY. Are the technical parts (definitions, statements,
specifications, proofs, algorithms, etc.) SOUND?

Comment: Seems OK.

9. APPLICABILITY. If the work is primarily theoretical or conceptual, are the
implications in terms of applicability adequately justified? If the paper is
about a new formal technique, are satisfactory arguments presented in favor
of the new technique? If a methodology is proposed, is it sound? If
experimental results are presented, is their relevance justified?

Comment: The proposed methodology relies on both a manual and an automated
process. It seems sound. The applicability is illustrated with a large
experiment. The results clearly show an improvement when using formal
methods.
The article seems detailed enough that the methodology could be applied
again.

10. PRESENTATION: Describe the QUALITY of the writing, including suggestions
for changes where appropriate.

Comment: English is not my mother tongue, so I will not comment on the
grammar or the spelling.

Page 2, section 2.2, end of paragraph 2. "For each cycle, all output signals
are equivalent on both C and VHDL..." This sentence is an assertion. But the
two models are developed in parallel, so a priori they may not be consistent.
In fact, you wrote that some errors were discovered in each model (end of
page 15). So I wonder if the models are *really* equivalent.

Page 4, section 2.4. The structure of the paragraph is surprising. It begins
with the sentence "Instead of developing test benches and a huge set of test
patterns, we have developed..." If one does not know what "test patterns"
are, one will not understand the first paragraph. The transition between the
1st and the 2nd paragraph is not clear.

Page 6, section 3.4, after the second itemized list. You write "Since we know
these properties are false..." I do not see why you know that (why you are so
sure those properties are false). The environment constraints or a
specification error may be such that an output is never activated... I would
prefer "When these properties are false, ..."

Page 8, section 3.7, paragraph 2. "Another way to do that is to dump the
sequence T' of inputs at M's boundary while running P". Sorry, I did not
understand that point.

You may say that "BDD" stands for Binary Decision Diagram.

11. Were there any formatting or mechanical problems with this paper?: No

Are the figures and length acceptable?: Yes

Are the references correct?: Yes. Please specify the LNCS volume number for
reference 8 (LNCS 1102, I think).

12.
OTHER COMMENTS you believe would be useful to the author(s), including
pointers to missing relevant work:

In the software testing area, there is some work on automated black-box test
generation for Esterel programs. It might be interesting for your future
work:

@INPROCEEDINGS{puchol97,
  AUTHOR    = {Jagadeesan, L.J. and Porter, A. and Puchol, C. and
               Ramming, J.C. and Votta, L.},
  TITLE     = {{Specification-based Testing of Reactive Software:
               Tools and Experiments}},
  BOOKTITLE = {19th International Conference on Software Engineering},
  YEAR      = {1997}
}

+++++++++++++++++++++ End of FME 2001 Paper Review Report ++++++++++++++++++++++

PAPER NUMBER: 25
CATEGORY: 2
TITLE: Coverage Directed Generation of System-Level Test Cases for the
Validation of a DSP System
AUTHOR(S): Laurent Arditi, Hédi Boufaïed, Arnaud Cavanié, Vincent Stehlé

--------------------------------------------------------------------------------

1. Briefly SUMMARIZE the paper (2-3 lines):

A methodology for generating test cases is presented and is illustrated by
real case studies and real applications.

2. RELEVANCE: Please provide a rating of the paper's relevance to the FME
Symposium, using the scale:

   0 = Out of scope
   1 = Marginal interest
   2 = Minority interest
   3 = Majority interest
   4 = Outstanding interest

Numeric Rating: 3

Please comment your rating:

The paper is interesting for the very large community using tests.

3. OVERALL RECOMMENDATION: Please provide a rating of the paper's
acceptability, using the scale:

   1 = Strong reject
   2 = Weak reject
   3 = Could go either way
   4 = Weak accept
   5 = Strong accept

Numeric Rating: 4

NB: There should be a correlation between the two rates above.

4.
CONFIDENCE LEVEL: Please provide a rating of your expertise in the area
addressed by the paper, using the scale:

   1 = Know virtually nothing about this area
   2 = Not too knowledgeable, but I know a bit
   3 = Know a moderate amount, about average I'd say
   4 = Not my main area of work, but I know a lot about it
   5 = My area of work, I know it extremely well

Numeric Rating: 3

NB: PC members are responsible for ensuring that 1 is not used here.

5. ORIGINALITY. What is NEW and SIGNIFICANT in the work reported here?

Comment: The way the authors combine different techniques and the resulting
methodology. The work is also grounded in real applications.

6. How WORTHWHILE is the goal of the authors?

Comment: Very worthwhile.

7. How well is this goal EXPLAINED and JUSTIFIED?

Comment: The paper is well written and the methodology is well explained.

8. TECHNICAL QUALITY. Are the technical parts (definitions, statements,
specifications, proofs, algorithms, etc.) SOUND?

Comment: Yes.

9. APPLICABILITY. If the work is primarily theoretical or conceptual, are the
implications in terms of applicability adequately justified? If the paper is
about a new formal technique, are satisfactory arguments presented in favor
of the new technique? If a methodology is proposed, is it sound? If
experimental results are presented, is their relevance justified?

Comment: The work is applicable and has been applied.

10. PRESENTATION: Describe the QUALITY of the writing, including suggestions
for changes where appropriate.

Comment: Very good writing.

11. Were there any formatting or mechanical problems with this paper?: No

Are the figures and length acceptable?: Yes

Are the references correct?: Yes

12. OTHER COMMENTS you believe would be useful to the author(s), including
pointers to missing relevant work:

+++++++++++++++++++++ End of FME 2001 Paper Review Report ++++++++++++++++++++++