Recent Publications

Pinto JS, Cruz D, Areias S.  2010.  Contract-based slicing helps on safety reuse. Poster, International Conference on Program Comprehension - ICPC.

In this poster we describe work in progress aimed at using a variant of specification-based slicing to improve the reuse of annotated software components developed under the so-called design-by-contract approach. We call this variant contract-based because we use the annotations, more precisely the pre- and post-conditions, to slice programs both intra- and inter-procedurally. The idea, expressed in the poster, is to take the pre-condition of the reused annotated component as the slicing criterion and to slice the calling program backward. In this way, we can isolate the statements that influence the variables involved in the pre-condition and check whether or not it is preserved by that invocation.
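
The intra-procedural core of this idea can be sketched in a few lines of Python. The statement representation and the example program below are purely illustrative (they are not from the poster), and real contract-based slicing must also handle control flow and inter-procedural calls:

```python
# Toy backward slicer for straight-line code.
# Each statement is a pair (defined_var, used_vars); the slicing criterion
# is the set of variables appearing in the callee's pre-condition.

def backward_slice(statements, criterion):
    relevant = set(criterion)
    kept = []
    for defined, used in reversed(statements):
        if defined in relevant:
            kept.append((defined, used))     # statement influences the criterion
            relevant.discard(defined)
            relevant.update(used)            # its inputs become relevant too
    kept.reverse()
    return kept

# Program: a = 1; b = 2; c = a + 5; d = b * 2
program = [("a", set()), ("b", set()), ("c", {"a"}), ("d", {"b"})]
# Pre-condition of the called component mentions only c:
print(backward_slice(program, {"c"}))  # statements defining a and c survive
```

Everything that cannot affect the variables of the pre-condition is discarded, leaving only the statements whose preservation of the contract needs to be checked.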

Pinto JS, Henriques PR, Cruz D.  2010.  Contract-based slicing. 4th International Symposium on Leveraging Applications of Formal Methods, Verification and Validation - ISOLA.

In recent years, concern with the correctness of programs has led programmers to enrich their programs with annotations following the principles of design-by-contract, in order to guarantee their correct behaviour and to facilitate the reuse of verified components without having to reconstruct proofs of correctness.
In this paper we adapt the idea of specification-based slicing to the scope of (contract-based) program verification systems and behaviour specification languages. To this end, we introduce the notion of a contract-based slice of a program and show how any specification-based slicing algorithm can be used as the basis for a contract-based slicing algorithm.

Sousa SM, Pinto JS, Carvalho J, Carvalho A.  2010.  Model-checking temporal properties of real-time HTL programs. 4th International Symposium on Leveraging Applications of Formal Methods, Verification and Validation - ISOLA.

This paper describes a method for the formal verification of timed properties of HTL programs, supported by the automated translation tool HTL2XTA, which extracts from an HTL program (i) an Uppaal model and (ii) a set of properties stating the compliance of the model with certain automatically inferred temporal constraints. These can be manually extended with further temporal properties provided by the user. The paper presents the details of the proposed mechanisms as well as the results of our experimental validation.

Pinto JS, Henriques PR, Cruz D.  2010.  GamaSlicer: an online laboratory for program verification and analysis. Proceedings of the Tenth Workshop on Language Descriptions, Tools and Applications - LDTA.

In this paper we present the GamaSlicer tool, primarily a semantics-based program slicer that also offers formal verification (generation of verification conditions) and program visualization functionality. The tool allows users to obtain slices using a number of different families of slicing algorithms (precondition-based, postcondition-based, and specification-based), starting from a correct software component annotated with pre- and postconditions (contracts written in JML-annotated Java). Each family in turn contains algorithms of different precision (more precise algorithms being asymptotically slower). A novelty of our work at the theoretical level is the inclusion of a new, much more effective algorithm for specification-based slicing; other current work at this level is being progressively incorporated into the tool.

The tool also generates (in a step-by-step fashion) a set of verification conditions, as formulas written in the SMT-LIB language, which enables the use of different automatic SMT provers. This makes it possible to establish the correctness of the code with respect to its contracts.

Pinto JS, Henriques PR, Cruz D, Areias S.  2010.  GammaPolarSlicer. ComSIS - Computer Science and Information Systems.

In software development, it is often desirable to reuse existing software components. This has been recognized since 1968, when Douglas McIlroy of Bell Laboratories proposed basing the software industry on reuse. Despite failures in practice, many efforts have been made to make this idea successful. In this context, we address the problem of reusing annotated components as a rigorous way of assuring the quality of the application under construction. We introduce the concept of caller-based slicing as a way to certify that integrating an annotated component, with its contract, into a legacy system will preserve the behavior of the former. To complement these efforts and the benefits of slicing techniques, an efficient way to visualize the annotated components and their slices is also needed. To take full advantage of visualization, it is crucial to combine the visualization of control/data flow with the textual representation of the source code. To attain this objective, we extend the notions of System Dependence Graph and slicing criterion.

Saraiva JA, Fernandes JP, Monteiro M, Diniz P, Cardoso J.  2010.  A domain-specific aspect language for transforming MATLAB programs. Fifth Workshop on Domain-Specific Aspect Languages (DSAL).

Aspect-oriented programming enables software developers to augment programs with information out of the scope of the base language while not hampering the code readability and thus its portability. MATLAB is a popular modeling/programming language that can significantly benefit from aspect-oriented programming features.

Almeida JB, Barbosa MB, Pinto JS, Vieira B.  2010.  Deductive verification of cryptographic software. The 8th IEEE International Conference on Software Engineering and Formal Methods - SEFM.

We report on the application of an off-the-shelf verification platform to the software implementation of the RC4 stream cipher (as available in the OpenSSL library), and introduce a deductive verification technique based on self-composition for proving the absence of error propagation.

Silva A, Bonchi F, Bonsangue M, Rutten J.  2010.  Generalizing the powerset construction, coalgebraically. Proceedings of the IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). 8:272–283.

Coalgebra is an abstract framework for the uniform study of different kinds of dynamical systems. An endofunctor $F$ determines both the type of systems ($F$-coalgebras) and a notion of behavioral equivalence ($\sim_F$) amongst them. Many types of transition systems and their equivalences can be captured by a functor $F$. For example, for deterministic automata the derived equivalence is language equivalence, while for non-deterministic automata it is ordinary bisimilarity. The powerset construction is a standard method for converting a nondeterministic automaton into an equivalent deterministic one as far as language is concerned. In this paper, we lift the powerset construction on automata to the more general framework of coalgebras with structured state spaces. Examples of applications include partial Mealy machines, (structured) Moore automata, and Rabin probabilistic automata.
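
For deterministic automata, the concrete construction being generalized here is the familiar subset construction. As a point of reference, a minimal Python sketch of that base case (state names and the transition encoding are illustrative, not from the paper):

```python
from itertools import chain

def determinize(alphabet, delta, start, accepting):
    """Classic powerset construction: NFA -> DFA.
    delta maps (state, symbol) to a set of successor states."""
    start_set = frozenset([start])
    dfa_delta, seen, worklist = {}, {start_set}, [start_set]
    while worklist:
        current = worklist.pop()
        for a in alphabet:
            # The DFA successor is the union of all NFA successors.
            succ = frozenset(chain.from_iterable(
                delta.get((q, a), set()) for q in current))
            dfa_delta[(current, a)] = succ
            if succ not in seen:
                seen.add(succ)
                worklist.append(succ)
    dfa_accepting = {s for s in seen if s & accepting}
    return seen, dfa_delta, start_set, dfa_accepting

# NFA over {a, b} accepting words that end in "ab"
nfa = {("q0", "a"): {"q0", "q1"}, ("q0", "b"): {"q0"}, ("q1", "b"): {"q2"}}
states, trans, init, accept = determinize({"a", "b"}, nfa, "q0", {"q2"})
print(len(states))  # 3 reachable subset-states
```

The paper's contribution is to replace the specific powerset functor in this construction by arbitrary structured state spaces, recovering this algorithm as one instance.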

Moon Y-J, Silva A, Krause C, Arbab F.  2010.  A Compositional Semantics for Stochastic Reo Connectors. Proceedings of the 9th International Workshop on the Foundations of Coordination Languages and Software Architectures (FOCLASA '10). 30:93–107.

In this paper we present a compositional semantics for the channel-based coordination language Reo which enables the analysis of quality of service (QoS) properties of service compositions. For this purpose, we annotate Reo channels with stochastic delay rates and explicitly model data-arrival rates at the boundary of a connector, to capture its interaction with the services that comprise its environment. We propose Stochastic Reo automata as an extension of Reo automata, in order to compositionally derive a QoS-aware semantics for Reo. We further present a translation of Stochastic Reo automata to Continuous-Time Markov Chains (CTMCs). This translation enables us to use third-party CTMC verification tools to do an end-to-end performance analysis of service compositions.

Bonsangue M, Caltais G, Goriac E, Lucanu D, Rutten J, Silva A.  2010.  A decision procedure for bisimilarity of generalized regular expressions. Proceedings of the 13th Brazilian Symposium on Formal Methods (SBMF 2010). 6527:226–241.

A notion of generalized regular expressions for a large class of systems modelled as coalgebras, together with an analogue of Kleene's theorem and Kleene algebra, was recently proposed by a subset of the authors of this paper. Examples of the systems covered include infinite streams, deterministic automata and Mealy machines. In this paper, we present a tool in which the aforementioned expressions can be derived automatically, and a novel algorithm to decide whether two expressions are bisimilar. The procedure is implemented in the automatic theorem prover CIRC by reducing coinduction to an entailment relation between an algebraic specification and an appropriate set of equations.
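
For the special case of deterministic systems, the coinductive principle behind such a decision procedure amounts to exploring pairs of states and checking that no reachable pair disagrees on observable output. A minimal Python sketch of that idea (the automata and their encoding are illustrative; CIRC itself works on algebraic specifications, not this representation):

```python
from collections import deque

def bisimilar(d1, acc1, s1, d2, acc2, s2, alphabet):
    """Coinductive equivalence check for total deterministic automata:
    explore reachable state pairs; fail on any acceptance mismatch."""
    visited = {(s1, s2)}
    queue = deque([(s1, s2)])
    while queue:
        p, q = queue.popleft()
        if (p in acc1) != (q in acc2):
            return False                      # distinguishing pair found
        for a in alphabet:
            succ = (d1[(p, a)], d2[(q, a)])   # step both automata in lockstep
            if succ not in visited:
                visited.add(succ)
                queue.append(succ)
    return True                               # visited is a bisimulation

# Both automata accept words with an even number of a's,
# one via a 2-state cycle, the other via a 4-state cycle.
d1 = {(0, "a"): 1, (1, "a"): 0}
d2 = {(i, "a"): (i + 1) % 4 for i in range(4)}
print(bisimilar(d1, {0}, 0, d2, {0, 2}, 0, {"a"}))  # True
```

If the loop terminates without a mismatch, the set of visited pairs is itself a bisimulation witnessing the equivalence.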

Sözer H, Abreu R, Aksit M, Van Gemund AJC.  2010.  Increasing system availability with local recovery based on fault localization. 10th International Conference on Quality Software - QSIC. :276–281.

Because software systems cannot be tested exhaustively, they must cope with residual defects at run-time. Local recovery is an approach for recovering from errors in which only the defective parts of the system are recovered, while the other parts are kept operational. To be efficient, local recovery must be aware of which component is at fault. In this paper, we combine a fault localization technique (spectrum-based fault localization, SFL) with local recovery techniques to achieve fully autonomous fault detection, isolation, and recovery. A framework is used for decomposing the system into separate units that can be recovered in isolation, while SFL is used for monitoring the activities of these units and diagnosing the faulty one whenever an error is detected. We have applied our approach to MPlayer, a large open-source software system, and observed that SFL can increase system availability by 23.4% on average.

Cunha M, Paiva ACR, Ferreira HS, Abreu R.  2010.  PETTool: a pattern-based GUI testing tool. 2nd International Conference on Software Technology and Engineering - ICSTE. 1:V1–202.

Nowadays, graphical user interfaces (GUIs) are preferred over command-line interfaces for easing interaction with software applications. Despite recent advances in software testing, GUIs are still tested in a completely ad hoc, manual fashion, with little support from (industrial) testing tools. Automating the testing of GUIs poses additional challenges compared to command-line applications. This paper presents a (semi-automated) GUI testing approach that exploits knowledge of the common behaviour of a GUI: the most common patterns in a GUI are identified, and a suite of test cases is then automatically generated and executed. To validate our approach, we have run it against well-known web-based applications such as Gmail.

Abreu R, Gonzalez-Sanchez A, Van Gemund AJC.  2010.  Exploiting count spectra for bayesian fault localization. Proceedings of the 6th International Conference on Predictive Models in Software Engineering. :12.

Background: Automated diagnosis of software defects can drastically increase debugging efficiency, improving reliability and time-to-market. Current, low-cost, automatic fault diagnosis techniques, such as spectrum-based fault localization (SFL), merely use information on whether a component is involved in a passed/failed run or not. However, these approaches ignore information on component execution frequency, which can improve the accuracy of the diagnostic process. Aim: In this paper, we study the impact of exploiting component execution frequency on the diagnostic quality. Method: We present a reasoning-based SFL approach, dubbed Zoltar-C, that exploits not only component involvement but also their frequency, using an approximate, Bayesian approach to compute the probabilities of the diagnostic candidates. Zoltar-C is evaluated and compared to other well-known, low-cost techniques (such as Tarantula) using a set of programs available from the Software Infrastructure Repository. Results: Results show that, although theoretically Zoltar-C can be of added value, exploiting component frequency does not improve diagnostic accuracy on average. Conclusions: The major reason for this unexpected result is the highly biased sample of passing and failing tests provided with the programs under analysis. In particular, the ratio between passing and failing runs, which has a major impact on the probability computations, does not correspond to the false negative (failure) rates associated with the actually injected faults.
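
For context, the Tarantula baseline mentioned above ranks components from binary spectra only. A minimal Python sketch of that baseline (the spectra below are made-up illustrative data; Zoltar-C's Bayesian reasoning over count spectra is considerably more involved):

```python
def tarantula(spectra, outcomes):
    """spectra[i][c] = 1 if component c was involved in run i (binary spectrum);
    outcomes[i] = True if run i failed. Returns one suspiciousness per component."""
    n_comp = len(spectra[0])
    total_fail = sum(outcomes)
    total_pass = len(outcomes) - total_fail
    scores = []
    for c in range(n_comp):
        fail_c = sum(1 for row, failed in zip(spectra, outcomes) if failed and row[c])
        pass_c = sum(1 for row, failed in zip(spectra, outcomes) if not failed and row[c])
        fr = fail_c / total_fail if total_fail else 0.0
        pr = pass_c / total_pass if total_pass else 0.0
        # Components exercised mostly by failing runs score close to 1.
        scores.append(fr / (fr + pr) if fr + pr else 0.0)
    return scores

# 3 components, 3 runs; component 1 is only exercised by the failing run
spectra  = [[1, 0, 0], [1, 0, 1], [1, 1, 1]]
outcomes = [False, False, True]
print(tarantula(spectra, outcomes))  # component 1 ranks highest
```

Note how the ratio of passing to failing runs directly enters the score, which is exactly the bias the paper identifies in the benchmark programs.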

Clarke D, Muschevici R, Proença J, Schaefer I, Schlatte R.  2010.  Variability Modelling in the ABS Language. 9th International Symposium on Formal Methods for Components and Objects - FMCO. :204–224.

The HATS project aims at developing a model-centric methodology for the design, implementation and verification of highly configurable systems, such as software product lines, centred around the Abstract Behavioural Specification (ABS) modelling language. This article describes the variability modelling features of the ABS modelling framework. It consists of four languages: TVL, for describing feature models at a high level of abstraction; the Delta Modelling Language DML, for describing variability of the ‘code’ base in terms of delta modules; the Product Line Configuration Language CL, for linking feature models and delta modules together; and the Product Selection Language PSL, for describing a specific product to extract from a product line. Both formal semantics and examples of each language are presented.

Clarke D, Proença J.  2010.  Towards a Theory of Views for Feature Models. 14th International Conference on Software Product Lines - SPLC. :91–98.

Variability in a Software Product Line (SPL) is expressed in terms of a feature model. As software development efforts involve increasingly larger feature models, scalable techniques are required to manage their complexity. Furthermore, as many stakeholders have a vested interest in different aspects of a feature model, modularity techniques are required to independently express their views of the product line, abstracting away from unnecessary details. To address these issues of scalability and modularity, this paper introduces a theory of views for feature models, encompassing both view compatibility and view reconciliation for plugging together different views of a product line.