Recent Publications

Mendes J.  2011.  ClassSheet-driven spreadsheet environments. IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). :235–236.

Spreadsheet systems are well known and widely used in all kinds of business applications. They are used by tens of millions of people who create hundreds of millions of spreadsheets every day [1]. Spreadsheet systems are easy to use and very intuitive, allowing the aggregation of all types of data.

Madeira A, Faria JM, Martins M, Barbosa LS.  2011.  On requirements engineering for reactive systems: A formal methodology. 1ª Conferência Brasileira de Sistemas Embarcados Críticos.

This paper introduces a rigorous methodology for requirements specification of systems that react to external stimuli and consequently evolve through different operational modes, providing, in each of them, different functionalities. The proposed methodology proceeds in three stages, enriching a simple state machine with local algebraic specifications. It resorts to an expressive variant of hybrid logic which is later translated into first-order logic to allow for ample automatic tool support.

Martins M, Madeira A, Barbosa LS.  2011.  Reasoning about complex requirements in a uniform setting. Third International Congress on Tools for Teaching Logic - TICTTL.

The paper formulates HEQ, an institution for hybrid equational logic, to provide a uniform setting in which to express and reason about different sorts of properties of complex software. It is also shown how, through the definition of a suitable comorphism to FOL, this can be integrated into Hets, providing suitable tool support for teaching and research. The whole exercise was motivated by the need to unify, in a single undergraduate course in a Computer Science curriculum, the specification of data and behavioural constraints of reconfigurable systems.

Madeira A, Martins M, Barbosa LS.  2011.  Models as arrows: the role of dialgebras. Computability in Europe.

A large number of computational processes can suitably be described as a combination of construction, i.e. algebraic, and observation, i.e. coalgebraic, structures. This paper suggests dialgebras as a generic model in which such structures can be combined and proposes a small calculus of dialgebras including a wrapping combinator and sequential composition. To take good care of invariants in software design, the paper also discusses how dialgebras can be typed by predicates and proves that invariants are preserved through composition. This lays the foundations for a full calculus of invariant proof-obligation discharge for dialgebraic models.
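
For readers unfamiliar with the term, the following is standard background rather than the paper's specific calculus: a dialgebra types a model as a single arrow between two functor applications, generalising both algebras and coalgebras.

```latex
% For functors F, G : Set -> Set, an (F,G)-dialgebra on a carrier X
% is an arrow d : F(X) --> G(X).
% Taking G = Id recovers F-algebras (construction);
% taking F = Id recovers G-coalgebras (observation).
\[
  d : F(X) \longrightarrow G(X),
  \qquad
  \underbrace{F(X) \to X}_{\text{algebra}\ (G=\mathrm{Id})}
  \qquad
  \underbrace{X \to G(X)}_{\text{coalgebra}\ (F=\mathrm{Id})}
\]
```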

Ribeiro AN, Barbosa A, Gonçalves J, Costa A.  2011.  Integration of SIP protocol in Android media framework. International Conference on Computer as a Tool - EUROCON.

The transmission of multimedia content between mobile devices is increasingly an area of exploration. The evolution of mobile devices and of the networks that support them provides the potential to create more sophisticated and innovative services. The fact that the Android platform does not provide the SIP protocol in its architecture is a limiting factor for the development of new streaming applications. Throughout this paper the Android platform is presented, and more specifically the streaming protocols supported by the current streaming Media Framework. Based on the current Android Media Framework, this paper presents a possible architecture for the integration of the SIP protocol. This integration aims to overcome the limitations of the current Android platform and to improve the performance of current SIP applications, which is reflected in lower power consumption of the device.

Oliveira JN, Mu S.  2011.  Programming from Galois connections. RAMiCS - 12th International Conference on Relational and Algebraic Methods in Computer Science.

Problem statements often resort to superlatives, such as in “the longest such list”, which lead to specifications made of two parts: one defining a broad class of solutions (the easy part) and the other requesting one particular such solution, optimal in some sense (the hard part).
This report introduces a binary relational combinator which mirrors this linguistic structure and exploits its potential for calculating programs by optimization. This applies in particular to specifications written in the form of Galois connections, in which one of the adjoints delivers the optimal solution. The framework encompasses re-factoring of results previously developed by Bird and de Moor for greedy and dynamic programming, in a way which makes them less technically involved and therefore easier to understand and play with.
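
As a quick reminder of the underlying notion (standard background, not a result of the paper): a Galois connection between preorders is a pair of monotone maps f and g satisfying a single equivalence, and in specifications of the above shape one of the adjoints delivers the optimal solution. Whole division is a familiar arithmetic instance.

```latex
% Galois connection between preorders (A, \leq) and (B, \sqsubseteq):
% monotone f : A -> B and g : B -> A with  f a \sqsubseteq b  iff  a \leq g b.
% Example: for y > 0, whole division x \div y is the upper adjoint of
% multiplication by y; the "easy part" is z * y <= x, and the "hard part"
% (the largest such z) is delivered by the adjoint x \div y.
\[
  f\,a \sqsubseteq b \;\Leftrightarrow\; a \leq g\,b
  \qquad\text{e.g.}\qquad
  z \times y \leq x \;\Leftrightarrow\; z \leq x \div y \quad (y > 0)
\]
```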

Moon Y-J, Arbab F, Silva A, Stam A, Verhoef C.  2011.  Stochastic Reo: a case study. Proceedings of the 5th International Workshop on Harnessing Theories for Tool Support in Software (TTSS '11).

QoS analysis of coordinated distributed autonomous services is currently of interest in the area of service-oriented computing and calls for new technologies and supporting tools. In previous work, the first three authors have proposed a compositional automata model to provide semantics for stochastic Reo, a channel-based coordination language that supports the specification of QoS values (such as request arrivals or processing rates). Furthermore, translations from this automata model into stochastic models, such as continuous-time Markov chains (CTMCs) and interactive Markov chains (IMCs), have also been presented. Based on those results, we describe in this paper a case study of QoS analysis. We analyze a certain instance of the ASK system, an industrial software system for connecting people offering professional services to clients requiring those services. We develop a model of the ASK system using stochastic Reo. The distributions used in this model were obtained by applying statistical analysis techniques to the raw values extracted from the logs of an actual running ASK system. These distributions are used in the CTMC model derived for the ASK system to analyze and improve the performance of the system, under the assumption that the distributions are exponential. In practice, this is not always the case. Thus, we also carry out a simulation-based analysis using a Reo simulator that can deal with non-exponential distributions. Compared to the analysis of the derived CTMC model, the simulation is an approximation-based analysis, but it reveals valuable insight into the behavior of the system. The outcome of both analyses helps both the developers and the installations of the ASK system to improve its performance.

Silva A.  2011.  A Specification Language for Reo Connectors. Proceedings of the Fundamentals of Software Engineering - 4th IPM International Conference (FSEN 2011). 7141:368–376.

Recent approaches to component-based software engineering employ coordinating connectors to compose components into software systems. Reo is a model of component coordination, wherein complex connectors are constructed by composing various types of primitive channels. Reo automata are a simple and intuitive formal model of context-dependent connectors, which provide a compositional semantics for Reo.

In this paper, we study Reo automata from a coalgebraic perspective. This enables us to apply the recently developed coalgebraic theory of generalized regular expressions in order to derive a specification language, tailor-made for Reo automata, and sound and complete axiomatizations with respect to three distinct notions of equivalence: (coalgebraic) bisimilarity, the bisimulation notion studied in the original papers on Reo automata and trace equivalence. The obtained language is simple, but nonetheless expressive enough to specify all possible finite Reo automata. Moreover, it comes equipped with a Kleene-like theorem: we provide algorithms to translate expressions to Reo automata and, conversely, to compute an expression equivalent to a state in a Reo automaton.

Silva A, Sokolova A.  2011.  Sound and Complete Axiomatization of Trace Semantics for Probabilistic Systems. Proceedings of the Twenty-seventh Conference on the Mathematical Foundations of Programming Semantics (MFPS '11). 276:291–311.

Coalgebras provide a uniform framework to study dynamical systems, including several types of automata. In this paper, we make use of the coalgebraic view on systems to investigate, in a uniform way, under which conditions calculi that are sound and complete with respect to behavioral equivalence can be extended to a coarser coalgebraic language equivalence, which arises from a generalised powerset construction. We illustrate the framework with two examples: non-deterministic automata, for which we recover Rabinovich’s sound and complete calculus for language equivalence, and weighted automata, for which we present the first sound and complete calculus for weighted language equivalence.

Gonzalez-Sanchez A, Abreu R, Gross H-G, Van Gemund AJC.  2011.  An empirical study on the usage of testability information to fault localization in software. Proceedings of the 2011 ACM Symposium on Applied Computing. :1398–1403.

When failures occur during software testing, automated software fault localization helps to diagnose their root causes and identify the defective statements of a program to support debugging. Diagnosis is carried out by selecting test cases in such a way that their pass or fail information will narrow down the set of fault candidates and, eventually, pinpoint the root cause. An essential ingredient of effective and efficient fault localization is knowledge about the false negative rate of tests, which is related to the rate at which defective statements of a program will exhibit failures. In current fault localization processes, false negative rates are either ignored completely or merely estimated a posteriori as part of the diagnosis. In this paper, we study the reduction in diagnosis effort when false negative rates are known a priori. We deduce this information from testability, following the propagation-infection-execution (PIE) approach. Experiments with real programs suggest significant improvement in the diagnosis process, both in the single-fault and the multiple-fault cases. When compared to the next-best technique, PIE-based false negative rate information yields a fault localization effort reduction of up to 80% for systems with only one fault, and up to 60% for systems with multiple faults.
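
The following is a minimal illustrative sketch (not the paper's implementation) of how a priori false negative rates feed into single-fault Bayesian diagnosis: each pass/fail observation updates the posterior of every candidate statement, and a candidate's FNR is the probability that a test covering the fault nevertheless passes. All statement names, coverage sets and FNR values are hypothetical.

```python
# Single-fault Bayesian fault localization with per-statement false
# negative rates (FNR). Illustrative only: the program, coverage and
# FNR values below are invented.

def update(posterior, fnr, covered, failed):
    """One Bayesian update for a single test outcome.

    posterior : dict statement -> P(statement is the fault)
    fnr       : dict statement -> P(test passes although it covers the fault)
    covered   : set of statements executed by the test
    failed    : True if the test failed
    """
    new = {}
    for s, p in posterior.items():
        if s in covered:
            likelihood = (1.0 - fnr[s]) if failed else fnr[s]
        else:
            # A test that does not execute the fault cannot fail because of it.
            likelihood = 0.0 if failed else 1.0
        new[s] = p * likelihood
    total = sum(new.values()) or 1.0          # avoid division by zero
    return {s: p / total for s, p in new.items()}

# Hypothetical program with three candidate statements; the FNRs could
# come from a testability analysis such as PIE.
posterior = {"s1": 1 / 3, "s2": 1 / 3, "s3": 1 / 3}
fnr = {"s1": 0.1, "s2": 0.5, "s3": 0.9}
posterior = update(posterior, fnr, covered={"s1", "s2"}, failed=True)
posterior = update(posterior, fnr, covered={"s2", "s3"}, failed=False)
print(posterior)   # probability mass concentrates on the likelier candidates
```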

Gupta S, Van Gemund AJC, Abreu R.  2011.  Probabilistic Error Propagation Modeling in Logic Circuits. IEEE Fourth International Conference on Software Testing, Verification and Validation Workshops - ICSTW. :617–623.

A recent study has shown that accurate knowledge of the false negative rate (FNR) of tests can significantly improve the diagnostic accuracy of spectrum-based fault localization. To understand the principles behind FNR modeling, in this paper we study three error propagation probability (EPP) modeling approaches applied to a number of logic circuits from the 74XXX/ISCAS-85 benchmark suite. Monte Carlo simulations for randomly injected faults show that a deterministic approach that models gate behavior provides high accuracy (O(1%)), while probabilistic approaches that abstract from gate modeling generate higher prediction errors (O(10%)), which increase with the number of injected faults.
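
As a rough sketch of the Monte Carlo side of such a study (the two-gate circuit and the stuck-at fault below are invented for illustration): sample random inputs, evaluate the circuit with and without the injected fault, and estimate the error propagation probability as the fraction of inputs on which the outputs differ.

```python
import random

def circuit(a, b, c, stuck_and=None):
    """Tiny invented circuit: out = (a AND b) OR (b XOR c).
    Optionally force the AND gate's output to a stuck-at value."""
    g_and = (a & b) if stuck_and is None else stuck_and
    g_xor = b ^ c
    return g_and | g_xor

def estimate_epp(samples=100_000, stuck_and=0):
    """Monte Carlo estimate of the probability that the injected fault
    propagates to the primary output."""
    propagated = 0
    for _ in range(samples):
        a, b, c = (random.randint(0, 1) for _ in range(3))
        if circuit(a, b, c) != circuit(a, b, c, stuck_and=stuck_and):
            propagated += 1
    return propagated / samples

# The exact value for this toy circuit is 1/8 (only a=b=c=1 propagates).
print(f"estimated EPP of AND-output stuck-at-0: {estimate_epp():.3f}")
```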

Gonzalez-Sanchez A, Abreu R, Gross H-G, Van Gemund AJC.  2011.  Spectrum-based sequential diagnosis. Twenty-Fifth AAAI Conference on Artificial Intelligence 2011.

We present a spectrum-based, sequential software debugging approach, coined SEQUOIA, that greedily selects tests from a test suite to narrow down the set of diagnostic candidates with a minimum number of tests. SEQUOIA handles multiple faults, which can be intermittent, in polynomial time and space, thanks to a novel, approximate diagnostic entropy estimation approach which considers the subset of diagnoses that cover almost all Bayesian posterior probability mass. Synthetic experiments show that SEQUOIA achieves much better diagnostic uncertainty reduction than random test sequencing. Real programs, taken from the Software Infrastructure Repository, confirm SEQUOIA's better performance, with a test reduction of up to 80% compared to random test sequences.
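
A deliberately simplified sketch of greedy, entropy-driven test selection in the single-fault, non-intermittent case (SEQUOIA itself handles multiple, intermittent faults with an approximate entropy estimate; the candidates, tests and coverage below are hypothetical): at each step, pick the unused test whose expected posterior entropy over the diagnostic candidates is lowest.

```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_entropy(posterior, covered):
    """Expected entropy after running a test, assuming a single,
    non-intermittent fault: the test fails iff it covers the fault."""
    p_fail = sum(p for s, p in posterior.items() if s in covered)
    p_pass = 1.0 - p_fail
    h = 0.0
    for outcome_p, in_cov in ((p_fail, True), (p_pass, False)):
        if outcome_p == 0:
            continue
        conditional = [p / outcome_p for s, p in posterior.items()
                       if (s in covered) == in_cov]
        h += outcome_p * entropy(conditional)
    return h

def next_test(posterior, coverage, used):
    """Greedily choose the unused test minimising expected entropy."""
    return min((t for t in coverage if t not in used),
               key=lambda t: expected_entropy(posterior, coverage[t]))

# Hypothetical suite: uniform prior over four candidate statements.
posterior = {s: 0.25 for s in ("s1", "s2", "s3", "s4")}
coverage = {"t1": {"s1", "s2"}, "t2": {"s1"}, "t3": {"s1", "s2", "s3"}}
print(next_test(posterior, coverage, used=set()))   # -> 't1' (splits 2 vs 2)
```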

Gonzalez-Sanchez A, Abreu R, Gross H-G, Van Gemund AJC.  2011.  Prioritizing Tests for Fault Localization through Ambiguity Group Reduction. 26th IEEE/ACM International Conference on Automated Software Engineering (ASE 2011).

In practically all development processes, regression tests are used to detect the presence of faults after a modification. If faults are detected, a fault localization algorithm can be used to reduce the manual inspection cost. However, when test case prioritization is used to enhance the rate of fault detection of the test suite (e.g., via statement coverage), the diagnostic information gain per test is not optimal, which results in needless inspection cost during diagnosis. We present RAPTOR, a test prioritization algorithm for fault localization, based on reducing the similarity between statement execution patterns as testing progresses. Unlike previous diagnostic prioritization algorithms, RAPTOR does not require false negative information and is much less complex. Experimental results from the Software Infrastructure Repository's benchmarks show that RAPTOR is the best technique under realistic conditions, with average cost reductions of 40% with respect to the next best technique and negligible impact on fault detection capability.
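
A small sketch of the ambiguity-group idea (not RAPTOR's actual algorithm or data): statements with identical execution signatures over the tests run so far are indistinguishable for diagnosis, so the next test is chosen to split the current groups as much as possible. The coverage matrix below is hypothetical.

```python
from collections import defaultdict

def ambiguity_groups(statements, executed, coverage):
    """Group statements by their execution signature over the tests run so far."""
    groups = defaultdict(set)
    for s in statements:
        groups[tuple(s in coverage[t] for t in executed)].add(s)
    return list(groups.values())

def ambiguity(groups):
    """Number of indistinguishable statement pairs left; lower is better."""
    return sum(len(g) * (len(g) - 1) // 2 for g in groups)

def prioritize(statements, tests, coverage):
    """Greedy ordering: each step picks the test that most reduces ambiguity."""
    executed, remaining = [], set(tests)
    while remaining:
        best = min(remaining,
                   key=lambda t: ambiguity(
                       ambiguity_groups(statements, executed + [t], coverage)))
        remaining.discard(best)
        executed.append(best)
    return executed

# Hypothetical coverage matrix.
statements = {"s1", "s2", "s3", "s4"}
coverage = {"t1": {"s1", "s2"}, "t2": {"s1", "s2", "s3"}, "t3": {"s1"}}
print(prioritize(statements, list(coverage), coverage))   # 't1' comes first
```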

Riboira A, Abreu R, Rodrigues R.  2011.  An OpenGL-based Eclipse plug-in for visual debugging. Proceedings of the 1st Workshop on Developing Tools as Plug-ins - TOPI. :60–60.

Locating the components that are responsible for observed failures is the most expensive, error-prone phase in the software development life cycle. We present an Eclipse plug-in that aims to fill some of the gaps left by automatic debugging tools: the lack of a visualization tool that provides intuitive feedback about the defect distribution over the code base, and easy access to the faulty locations.

Muschevici R, Proença J, Clarke D.  2011.  Modular Modelling of Software Product Lines with Feature Nets. Proceedings of the 9th International Conference on Software Engineering and Formal Methods - SEFM. :318–333.

Formal modelling and verification are critical for managing the inherent complexity of systems with a high degree of variability, such as those designed following the software product line (SPL) paradigm. SPL models tend to be large: the number of products in an SPL can be exponential in the number of features. Modelling these systems poses two main challenges. Firstly, a modular modelling formalism that scales well is required. Secondly, the ability to analyse and verify complex models efficiently is key in order to ensure that all products behave correctly. The choice of a system modelling formalism that is both expressive and well-established is therefore crucial. In this paper we show how SPLs can be modelled in an incremental, modular fashion using a formal method based on Petri nets. We continue our work on Feature Petri Nets, a lightweight extension to Petri nets, by presenting a framework for modularly constructing Feature Petri Nets to model SPLs.
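
As a rough illustration of the underlying idea rather than the paper's formal definition, one can think of a feature-annotated Petri net as an ordinary place/transition net whose transitions additionally carry a guard over the selected feature set; a transition may only fire in products whose feature selection satisfies its guard. The net, places and features below are invented.

```python
class FeatureNet:
    """Toy place/transition net whose transitions carry a feature guard."""

    def __init__(self, marking):
        self.marking = dict(marking)     # place -> number of tokens
        self.transitions = {}            # name -> (pre, post, guard)

    def add_transition(self, name, pre, post, guard=lambda features: True):
        self.transitions[name] = (pre, post, guard)

    def enabled(self, name, features):
        pre, _, guard = self.transitions[name]
        return guard(features) and all(self.marking.get(p, 0) >= n
                                       for p, n in pre.items())

    def fire(self, name, features):
        if not self.enabled(name, features):
            raise ValueError(f"{name} is not enabled")
        pre, post, _ = self.transitions[name]
        for p, n in pre.items():
            self.marking[p] -= n
        for p, n in post.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Invented coffee-machine-style product line: the 'add_milk' step exists
# only in products that select the hypothetical 'Milk' feature.
net = FeatureNet({"paid": 1})
net.add_transition("brew", pre={"paid": 1}, post={"coffee": 1})
net.add_transition("add_milk", pre={"coffee": 1}, post={"white_coffee": 1},
                   guard=lambda features: "Milk" in features)

net.fire("brew", features={"Milk"})
print(net.enabled("add_milk", features={"Milk"}))   # True
print(net.enabled("add_milk", features=set()))      # False: guard fails
```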