Recent Publications

Taivan C, Andrade JM, José R, Silva B, Pinto H, Ribeiro AN.  2013.  Development Challenges in Web Apps for Public Displays. 7th International Conference on Ubiquitous Computing and Ambient Intelligence - UCAmI. 8276.

Digital public displays can have a key role in urban ubiquitous computing infrastructures, but they have not yet managed to fill this role. A key step in that direction would be the emergence of an application model for open display networks that would enable anyone to create applications for display infrastructures. In this work, we study the development of web-based applications for public displays. We report on our experience of application development for real-world public deployment and also on an experiment with external web developers to assess their ability to create such applications using our own development tools. The results show that the web-based app model can effectively be used in the context of public displays and that web developers are able to leverage their expertise to create this type of application.

Oliveira L, Ribeiro AN, Campos JC.  2013.  The Mobile Context Framework: providing context to mobile applications. The 27th International British Computer Society Human Computer Interaction Conference: The Internet of things - HCI. 8028.

The spread of mobile devices in modern societies has forced the industry to create software paradigms to meet the new challenges it faces. These challenges include the huge heterogeneity of devices and the rapid changes in users’ context. In this scenario, context becomes a key element, enabling mobile applications to be user-centric and adapt to user requirements. The Mobile Context Framework, proposed in this paper, is a contribution to solving some of these challenges. Using Web servers running on the devices, context data can be provided to web applications. Besides the framework’s architecture, a prototype is presented as proof of concept of the platform’s potential.

Ferreira JF, Huang Y, He G, Qin S, He J.  2013.  Deadline analysis of AUTOSAR OS periodic tasks in the presence of interrupts. 15th International Conference on Formal Engineering Methods - ICFEM.

AUTOSAR, the open and emerging global standard for automotive embedded systems, offers a timing protection mechanism to protect tasks from missing their deadlines. However, in practice, it is difficult to predict when a deadline is violated, because a task missing its deadline may be caused by unrelated tasks or by the presence of interrupts. In this paper, we propose an abstract formal model to represent AUTOSAR OS programs with timing protection. We are able to determine schedulability properties and to calculate constraints on the allowed time that interrupts can take for a given task in a given period. We implement our model in Mathematica and give a case study to illustrate the utility of our method. Based on the results, we believe that our work can help designers and implementors of AUTOSAR OS programs check whether their programs satisfy crucial timing properties.

Nunes A, Oliveira R, Pereira JO.  2013.  Ajitts: Adaptive just-in-time transaction scheduling. Distributed Applications and Interoperable Systems - DAIS. :57–70.

Distributed transaction processing has benefited greatly from optimistic concurrency control protocols, which avoid costly fine-grained synchronization. However, the performance of these protocols degrades significantly when the workload increases, namely, by leading to a substantial amount of aborted transactions due to concurrency conflicts. Our approach stems from the observation that the abort rate increases with the load, as already executed transactions queue for longer periods of time waiting for their turn to be certified and committed. We thus propose an adaptive algorithm for judiciously scheduling transactions to minimize the time during which these are vulnerable to being aborted by concurrent transactions, thereby reducing the overall abort rate. We do so by throttling transaction execution using an adaptive mechanism based on the locally known state of globally executing transactions, which includes out-of-order execution.
Our evaluation using traces from the industry standard TPC-E workload shows that the amount of aborted transactions can be kept bounded as system load increases, while at the same time fully utilizing system resources and thus scaling transaction processing throughput.

Jeannin J-B, Kozen D, Silva A.  2013.  Language Constructs for Non-Well-Founded Computations. 22nd European Symposium on Programming - ESOP. 7792:61–80.

In automata theory, a machine transitions from one state to the next when it reads an input symbol. It is common to also allow an automaton to transition without input, via an epsilon-transition. These epsilon-transitions are convenient, e.g., when one defines the composition of automata. However, they are not necessary, and can be eliminated. Such epsilon-elimination procedures have been studied separately for different types of automata, including non-deterministic and weighted automata. It has been noted by Hasuo that it is possible to give a coalgebraic account of epsilon-elimination for some automata using trace semantics (as defined by Hasuo, Jacobs and Sokolova). In this paper, we give a detailed description of the epsilon-elimination procedure via trace semantics (missing in the literature). We apply this framework to several types of automata, and explore its boundary. In particular, we show that it is possible (by careful choice of a monad) to define an epsilon-removal procedure for all weighted automata over the positive reals (and certain other semirings). Our definition extends the recent proposals by Sakarovitch and Lombardy for these semirings.
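As an illustrative aside, the classical epsilon-elimination procedure that the abstract generalizes can be sketched for plain non-deterministic automata. The dict-based encoding below is our own assumption, not the paper's coalgebraic formulation:

```python
def epsilon_closure(state, eps):
    """All states reachable from `state` via epsilon-transitions alone."""
    closure, stack = {state}, [state]
    while stack:
        s = stack.pop()
        for t in eps.get(s, ()):
            if t not in closure:
                closure.add(t)
                stack.append(t)
    return closure

def eliminate_epsilons(states, delta, eps, accepting):
    """Return epsilon-free transitions and an adjusted accepting set:
    each state inherits the labelled moves and acceptance of its closure."""
    new_delta, new_accepting = {}, set()
    for s in states:
        cl = epsilon_closure(s, eps)
        if cl & accepting:
            new_accepting.add(s)
        for c in cl:
            for sym, targets in delta.get(c, {}).items():
                new_delta.setdefault(s, {}).setdefault(sym, set()).update(targets)
    return new_delta, new_accepting
```

For example, an automaton with an epsilon-move from state 0 to state 1 and a labelled move 1 --a--> 2 gains the direct move 0 --a--> 2.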

Casanova P, Garlan D, Schmerl B, Abreu R.  2013.  Diagnosing architectural run-time failures. SEAMS - Proceedings of the 8th International Symposium on Software Engineering for Adaptive and Self-Managing Systems. :103–112.

Self-diagnosis is a fundamental capability of self-adaptive systems. In order to recover from faults, systems need to know which part is responsible for the incorrect behavior. In previous work we showed how to apply a design-time diagnosis technique at run time to identify faults at the architectural level of a system. Our contributions address three major shortcomings of our previous work: 1) we present an expressive, hierarchical language to describe system behavior that can be used to diagnose when a system is behaving differently from expectations; the hierarchical language facilitates mapping low-level system events to architecture-level events; 2) we provide an automatic way to determine how much data to collect before an accurate diagnosis can be produced; and 3) we develop a technique that allows the detection of correlated faults between components. Our results are validated experimentally by injecting several failures in a system and accurately diagnosing them using our algorithm.

Campos JC, Abreu R.  2013.  Encoding Test Requirements as Constraints for Test Suite Minimization. Tenth International Conference on Information Technology: New Generations - ITNG. :317–322.

Software (regression) testing is performed not only to detect errors as early as possible but also to guarantee that changes did not affect the system negatively. As test suites tend to grow over time, e.g., new test cases are added to test new features, (re-)executing the entire suite becomes prohibitive. We propose an approach, RZOLTAR, addressing this issue: it encodes the relation between a test case and its testing requirements (code statements in this paper) in a so-called coverage matrix; maps this matrix into a set of constraints; and computes a set of optimal solutions (maintaining the same coverage as the original suite) by leveraging a fast constraint solver. We show that RZOLTAR efficiently (0.68 seconds on average) finds a collection of test suites that significantly reduce the size of the original suite (61.12%), while the greedy approach finds only one solution, with a reduction of 56.58%, in 6.92 seconds on average.
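The core idea — treating the coverage matrix as a constraint problem whose solutions are minimal coverage-preserving suites — can be sketched with a toy exhaustive search. The coverage data below is hypothetical, and a real implementation such as RZOLTAR would hand the constraints to a fast solver rather than enumerate subsets:

```python
from itertools import combinations

# Hypothetical coverage matrix: each test case -> set of covered statements.
coverage = {
    "t1": {1, 2, 3},
    "t2": {3, 4},
    "t3": {1, 4},
    "t4": {2},
}
requirements = set().union(*coverage.values())

def minimal_covering_suites(coverage, requirements):
    """Enumerate all minimum-cardinality test subsets that preserve
    the full coverage of the original suite (brute force, for illustration)."""
    tests = sorted(coverage)
    for k in range(1, len(tests) + 1):
        solutions = [set(combo) for combo in combinations(tests, k)
                     if set().union(*(coverage[t] for t in combo)) == requirements]
        if solutions:  # first non-empty size class is optimal
            return solutions
    return []
```

On this toy matrix the approach returns a *collection* of equally small suites ({t1, t2} and {t1, t3}), mirroring the paper's point that constraint solving yields multiple optimal solutions where greedy heuristics return just one.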

Machado P, Campos JC, Abreu R.  2013.  MZoltar: automatic debugging of Android applications. Proceedings of the 2013 International Workshop on Software Development Lifecycle for Mobile - DeMobile. :9–16.

Automated diagnosis of errors and/or failures detected during software testing can greatly improve the efficiency of the debugging process, and thus help to make applications more reliable. In this paper, we propose an approach, dubbed MZoltar, offering dynamic analysis (namely, spectrum-based fault localization) of mobile apps that produces a diagnostic report to help identify potential defects quickly. The approach also offers a graphical representation of the diagnostic report, making it easier to understand. Our experimental results show that the approach requires low runtime overhead (5.75% on average), while the tester needs to inspect 5 components (statements in this paper) on average to find the fault.
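Spectrum-based fault localization, the technique MZoltar applies to mobile apps, ranks components by how strongly their execution correlates with failing tests. A minimal sketch using the Ochiai similarity coefficient (a common choice in this line of work; the paper's exact ranking formula is not reproduced here) over made-up spectra:

```python
from math import sqrt

# Hypothetical program spectra: one row per test, one column per component;
# 1 means the component was executed by that test.
spectrum = [
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 1],
]
errors = [1, 0, 1]  # 1 = the corresponding test failed

def ochiai(spectrum, errors):
    """Suspiciousness score per component: ef / sqrt(total_failed * (ef + ep)),
    where ef/ep count failed/passed tests that executed the component."""
    total_fail = sum(errors)
    scores = []
    for c in range(len(spectrum[0])):
        ef = sum(1 for row, e in zip(spectrum, errors) if row[c] and e)
        ep = sum(1 for row, e in zip(spectrum, errors) if row[c] and not e)
        denom = sqrt(total_fail * (ef + ep))
        scores.append(ef / denom if denom else 0.0)
    return scores
```

Here component 0, executed by both failing tests and no passing test, ranks highest — the diagnostic report would direct the tester there first.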

Cardoso N, Abreu R.  2013.  A Kernel Density Estimate-Based Approach to Component Goodness Modeling. AAAI - The Twenty-Seventh AAAI Conference on Artificial Intelligence.

Intermittent fault localization approaches account for the fact that faulty components may fail intermittently by considering a parameter (known as goodness) that quantifies the probability that faulty components may still exhibit correct behavior. Current state-of-the-art approaches (1) assume that this goodness probability is context independent and (2) do not provide means for integrating past diagnosis experience into the diagnostic mechanism. In this paper, we present a novel approach, coined Non-linear Feedback-based Goodness Estimate (NFGE), that uses kernel density estimations (KDE) to address such limitations. We evaluated the approach with both synthetic and real data, yielding lower estimation errors, thus increasing the diagnosis performance.
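For readers unfamiliar with the underlying tool, a kernel density estimate turns a sample of observed goodness values into a smooth density, so the estimate adapts to context rather than assuming a single fixed probability. The sketch below is a generic Gaussian KDE, not the NFGE estimator itself:

```python
from math import exp, pi, sqrt

def kde(samples, x, bandwidth=0.1):
    """Gaussian kernel density estimate at point x, built from observed
    samples (e.g. past goodness observations for a component)."""
    gaussian = lambda u: exp(-0.5 * u * u) / sqrt(2 * pi)
    return sum(gaussian((x - s) / bandwidth) for s in samples) / (len(samples) * bandwidth)
```

With past observations clustered around 0.85, the estimated density is much higher there than far from the data, which is how past diagnosis experience can feed back into the goodness estimate.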

Campos JC, Abreu R, Fraser G, d'Amorim M.  2013.  Entropy-based test generation for improved fault localization. IEEE/ACM 28th International Conference on Automated Software Engineering - ASE. :257–267.

Spectrum-based Bayesian reasoning can effectively rank candidate fault locations based on passing/failing test cases, but the diagnostic quality highly depends on the size and diversity of the underlying test suite. As test suites in practice often do not exhibit the necessary properties, we present a technique to extend existing test suites with new test cases that optimize the diagnostic quality. We apply probability theory concepts to guide test case generation using entropy, such that the amount of uncertainty in the diagnostic ranking is minimized. Our ENTBUG prototype extends the search-based test generation tool EVOSUITE to use entropy in the fitness function of its underlying genetic algorithm, and we applied it to seven real faults. Empirical results show that our approach reduces the entropy of the diagnostic ranking by 49% on average (compared to using the original test suite), leading to a 91% average reduction in the number of diagnosis candidates that need to be inspected to find the true fault.
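The quantity being minimized is the Shannon entropy of the distribution over diagnosis candidates: a decisive ranking concentrates probability mass on few candidates and so has low entropy. A minimal sketch (the fitness function in ENTBUG is built around this idea, but the code below is our own illustration):

```python
from math import log2

def diagnostic_entropy(probs):
    """Shannon entropy (in bits) of a distribution over diagnosis
    candidates; lower entropy means a more decisive diagnostic ranking."""
    return -sum(p * log2(p) for p in probs if p > 0)
```

Four equally likely candidates give 2 bits of uncertainty, while a single certain candidate gives 0 — a new test case is valuable to the extent that it moves the posterior toward the latter.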

Steimann F, Frenkel M, Abreu R.  2013.  Threats to the validity and value of empirical assessments of the accuracy of coverage-based fault locators. Proceedings of the 2013 International Symposium on Software Testing and Analysis (ISSTA). :314–324.

Resuming past work on coverage-based fault localization, we find that empirical assessments of its accuracy are subject to so many imponderables that they are of limited value. To improve on this situation, we have compiled a comprehensive list of threats to be considered when attempting such assessments in the future. In addition, we propose the establishment of theoretical lower and upper bounds of fault localization accuracy that depend on properties of the subject programs (including their test suites) only. We make a suggestion for a lower bound and show that well-known fault locators do not uniformly perform better.

Campos J, Abreu R.  2013.  Leveraging a Constraint Solver for Minimizing Test Suites. 13th International Conference on Quality Software - QSIC. :253–259.

Software (regression) testing is performed to detect errors as early as possible and to guarantee that changes did not affect the system negatively. As test suites tend to grow over time, (re-)executing the entire suite becomes prohibitive. We propose an approach, RZoltar, addressing this issue: it encodes the relation between a test case and its testing requirements (code statements in this paper) in a so-called coverage matrix, maps this matrix into a set of constraints, and computes a collection of optimal minimal sets (maintaining the same coverage as the original suite) by leveraging a fast constraint solver. We show that RZoltar efficiently (0.95 seconds on average) finds a collection of test suites that significantly reduce the size (by 64.88% on average) while maintaining the same fault detection as the initial suite, whereas the well-known greedy approach needs 11.23 seconds on average to find just one solution.

Gouveia C, Campos JC, Abreu R.  2013.  Using HTML5 visualizations in software fault localization. First IEEE Working Conference on Software Visualization - VISSOFT. :1–10.

Testing and debugging is the most expensive, error-prone phase in the software development life cycle. Automated software fault localization can drastically improve the efficiency of this phase, thus improving the overall quality of the software. Amongst the most well-known techniques, due to its efficiency and effectiveness, is spectrum-based fault localization. In this paper, we propose three dynamic graphical forms using HTML5 to display the diagnostic reports yielded by spectrum-based fault localization. The visualizations proposed, namely Sunburst, Vertical Partition, and Bubble Hierarchy, have been implemented within the GZOLTAR toolset, replacing previous and less-intuitive OpenGL-based visualizations. The GZOLTAR toolset is a plug-and-play plugin for the Eclipse IDE, easing world-wide adoption. Finally, we performed a user study with GZOLTAR and confirmed that the visualizations help to drastically reduce the time needed in debugging (e.g., all participants using the visualizations were able to pinpoint the fault, whereas of those using traditional methods only 35% found the fault). The group that used the visualizations took on average 9 minutes and 17 seconds less than the group that did not use them.

Perez A, Abreu R.  2013.  Cues for scent intensification in debugging. IEEE International Symposium on Software Reliability Engineering Workshops - ISSREW. :120–125.

Information foraging is a theory to understand how people search for information. In this theory, information scent is the perceived likelihood by the “predator” that a cue will lead to a “prey”. The better the cues, the better the information scent. In automatic debugging, it is the perceived likelihood that the diagnostic report leads to the cause of failures. In this paper, we detail a visualization, offered by the GZOLTAR toolset, that has the potential to provide better cues. By better we mean providing more information that leads to the fault than, e.g., the source code and code coverage information alone. The toolset provides a graphical display of the diagnostic reports yielded by well-known debugging techniques. From an information foraging point of view, we argue that the visualization is of added value while debugging. Finally, we report a user study confirming that GZOLTAR’s visualization provides better cues for pinpointing faults.

Proença J, Clarke D.  2013.  Interactive Interaction Constraints. Proceedings of the 15th International Conference on Coordination Models and Languages, Florence. :211–225.

Interaction constraints are an expressive formalism for describing coordination patterns, such as those underlying the coordination language Reo, that can be efficiently implemented using constraint satisfaction technologies such as SAT and SMT solvers. Existing implementations of interaction constraints interact with external components only in a very simple way: interaction occurs only between rounds of constraint satisfaction. What is missing is any means for the constraint solver to interact with the external world during constraint satisfaction. This paper introduces interactive interaction constraints which enable interaction during constraint satisfaction, and in turn increase the expressiveness of coordination languages based on interaction constraints by allowing a larger class of operations to be considered to occur atomically. We describe how interactive interaction constraints are implemented and detail a number of strategies for guiding constraint solvers. The benefit of interactive interaction constraints is illustrated using two examples, a hotel booking system and a system of transactions with compensations. From a general perspective, our work describes how to open up and exploit constraint solvers as the basis of a coordination engine.
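To give a flavor of what an interaction constraint is, consider a Reo-style merger with two input ends and one output end: data flows on the output exactly when it flows on one (and only one) of the inputs. The brute-force enumeration below stands in for the SAT/SMT solving the paper builds on; the end names a, b, c and the encoding are our own, purely for illustration:

```python
from itertools import product

def merger_solutions():
    """All boolean flow assignments (a, b, c) satisfying a toy merger
    constraint: c fires iff exactly one of a, b fires, and a, b never
    fire together. A real engine would query a SAT/SMT solver instead."""
    sols = []
    for a, b, c in product([False, True], repeat=3):
        if c == (a != b) and not (a and b):
            sols.append((a, b, c))
    return sols
```

Each satisfying assignment describes one atomic round of data flow; interactive interaction constraints extend this picture by letting the solver interact with the outside world *during* the search for such an assignment, rather than only between rounds.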