Recent Publications

Machado J, Seabra E, Campos JC, Soares F, Leão C, Silva JF.  2007.  Simulation and Formal Verification of Industrial Systems Controllers. 19th International Congress of Mechanical Engineering - COBEM.

Safety control is one of the most important aspects studied internationally in the design and development of automated production systems, for social reasons (avoiding work accidents), economic reasons (reducing machine downtime, increasing productivity) and technological reasons (lower risk of damage to components). Researchers at the Engineering School of the University of Minho are also studying these aspects of safety control, using simulation and model-checking techniques in the development of Programmable Logic Controller (PLC) programs. The techniques currently used to guarantee the safety of automated production system control are simulation and formal verification. While simulation is faster to execute, it has the limitation of considering only some scenarios of system behaviour evolution. Formal verification has the advantage of testing all possible scenarios of system behaviour evolution but, sometimes, suffers from the time necessary to obtain verification results. This paper shows that it is possible, and desirable, to combine these two techniques in the analysis of PLC programs. With the simultaneous use of both techniques, the developed PLC programs are more robust and less subject to errors. It is desirable to use simulation before formal verification in the analysis of a system control program because, by simulating some possible system behaviours, it is possible to eliminate a set of program errors in a short time, which would not happen if those errors were detected only through formal verification. By combining the two techniques, the time necessary to obtain results through formal verification can be substantially reduced. For the analysis of a system control program by simulation and formal verification, Dymola is used for simulation (through the creation of system models in the Modelica language) and UPPAAL for formal verification (through the creation of system models as timed automata).

Machado J, Seabra E, Soares F, Campos JC.  2007.  A New Plant Modelling Approach For Formal Verification Purposes. 11th IFAC/IFORS/IMACS/IFIP Symposium on Large Scale Systems - Theory and Applications (LSS 2007).

This paper presents a new approach to plant modelling for the formal verification of real-time systems. A system composed of two tanks is used, where all its components are modelled by simple modules and all the interdependences of the system's modular models are presented. As an innovative aspect of the plant modelling, aimed at its use in formal verification tasks, the plant is modelled using the Dymola software and the Modelica programming language. The results obtained in simulation are used to define the plant models that are then used in the formal verification tasks with the model checker UPPAAL. The paper presents in more detail the part of this work related to formal verification, highlighting the plant modelling approach used.

Pinto H, José R, Campos JC.  2007.  An Interaction Model and Infrastructure for Localized Activities in Pervasive Computing Environments. IEEE International Conference on Pervasive Services 2007 (ICPS'07). :232-241.

This paper presents an interaction model for pervasive computing environments supporting localized activities, i.e., activities strongly associated with a specific physical environment. We are particularly interested in activities performed by occasional visitors to public spaces. This interaction model is characterized by an activity-centered approach to pervasive computing and is defined in a conceptual model inspired by Activity Theory. ActivitySpot, a software infrastructure implementing this conceptual model, is also presented. User interaction in ActivitySpot is based on simple, everyday pervasive computing devices, which facilitates usage learning and allows for a wide user population. ActivitySpot has supported the deployment of several pervasive computing solutions for localized activities. Our conceptual model has been evaluated by user studies run at different public spaces, and global results demonstrate the model's suitability to the targeted type of scenario.

Ribeiro AN, Campos JC, Martins F.  2007.  Integrating HCI into a UML based Software Engineering course. Proceedings HCI Educators 2007. :48-57.

Software Engineering (SE) and HCI (Human-Computer Interaction) are not the same age, do not have the same history, background or foundations, and never shared design principles or design models. The separation principle, by encouraging separate concerns and techniques to design the interactive and the computational layers of a software system - despite being absolutely correct from the standpoint of several crucial SE design principles, like modularity, separation of concerns, encapsulation and context independence - has sometimes been misjudged and mistakenly used. Therefore, instead of bridging the gap between the two separate designs, it helped widen that gap. However, the principle neither mentions nor imposes any restrictions on how the integration should be done. In the context of a software engineering course the authors have been involved with for some years, the need has arisen to provide students with HCI skills. Several attempts at integrating HCI into software engineering can be found in the literature. However, none seemed amenable to application in the context of the course, basically because none of them could be taught and learnt in a way that could easily be blended into the software engineering design process. We present a methodological process that we have been teaching that aims at shortening the gap that software engineering students face when trying to adapt SE techniques to the interactive layer.

Chatty S, Campos JC, Gonzalez M, Lepreux S, Nilsson E, Penichet V, den Bergh SV.  2007.  Processes: Working group report. Interactive Systems: Design, Specification and Verification - DSV-IS. 4323:262-264.

It has often been suggested that model-driven development of user interfaces amounted to producing models of user interfaces then using automatic code generation to obtain the final result. However, this may be seen as an extreme interpretation of the model-driven approach. There are examples where that approach is successful, including mobile computing and database management systems. But in many cases automatic generation may be either impossible or may limit the quality of the final interface.

Silva JC, Campos JC, Saraiva JA.  2007.  Combining Formal Methods and Functional Strategies Regarding the Reverse Engineering of Interactive Applications. Design, Specification and Verification: Interactive Systems - DSV-IS. 4323:137-150.

Graphical user interfaces (GUIs) make software easy to use by providing the user with visual controls. Therefore, the correctness of GUI code is essential to the correct execution of the overall software. Models can help in the evaluation of interactive applications by allowing designers to concentrate on their more important aspects. This paper describes our approach to reverse engineering an abstract model of a user interface directly from the GUI's legacy code. We also present results from a case study. These results are encouraging and give evidence that the goal of reverse engineering user interfaces can be met with more work on this technique.

Barbosa MB, Moss A, Page D.  2007.  Compiler Assisted Elliptic Curve Cryptography. On the Move to Meaningful Internet Systems - OTM. 4804:1785-1802.

Although cryptographic software implementation is often performed by expert programmers, the range of performance- and security-driven options, as well as more mundane software engineering issues, still makes it a challenge. The use of domain-specific language and compiler techniques to assist in the description and optimisation of cryptographic software is an interesting research challenge. Our results, which focus on Elliptic Curve Cryptography (ECC), show that a suitable language allows description of ECC-based software in a manner close to the original mathematics; the corresponding compiler allows automatic production of an executable whose performance is competitive with that of a hand-optimised implementation. Our work is set within the context of CACE, an ongoing EU-funded project on this general topic.

Barbosa MB, Farshim P.  2007.  Randomness Reuse: Extensions and Improvements. Proceedings of the 11th IMA International Conference - Cryptography and Coding. 4887:257-276.

We extend the generic framework of reproducibility for reuse of randomness in multi-recipient encryption schemes as proposed by Bellare et al. (PKC 2003). A new notion of weak reproducibility captures not only encryption schemes which are (fully) reproducible under the criteria given in the previous work, but also a class of efficient schemes which can only be used in the single message setting. In particular, we are able to capture the single message schemes suggested by Kurosawa (PKC 2002), which are more efficient than the direct adaptation of the multiple message schemes studied by Bellare et al. Our study of randomness reuse in key encapsulation mechanisms provides an additional argument for the relevance of these results: by taking advantage of our weak reproducibility notion, we are able to generalise and improve multi-recipient KEM constructions found in the literature. We also propose an efficient multi-recipient KEM provably secure in the standard model and conclude the paper by proposing a notion of direct reproducibility which enables tighter security reductions.

Correia A, Pereira JO, Rodrigues L, Carvalho N, Vilaça R, Oliveira R, Guedes S.  2007.  GORDA: An open architecture for database replication. Sixth IEEE International Symposium on Network Computing and Applications - NCA. :287-290.

Database replication has been a common feature in database management systems (DBMSs) for a long time. In particular, asynchronous or lazy propagation of updates provides a simple yet efficient way of increasing performance and data availability, and is widely available across the DBMS product spectrum. High-end systems additionally offer sophisticated conflict resolution and data propagation options, as well as synchronous replication based on distributed locking and two-phase commit protocols. This paper presents the GORDA architecture and programming interface (GAPI), which enables different replication strategies to be implemented once and deployed in multiple DBMSs. This is achieved by proposing a reflective interface to transaction processing, instead of relying on client interfaces or ad-hoc server extensions. The proposed approach is thus cost-effective, in enabling reuse of replication protocols or components in multiple DBMSs, as well as potentially efficient, as it allows close coupling with DBMS internals.

Leitão J, Pereira JO, Rodrigues L.  2007.  Epidemic broadcast trees. 26th IEEE Symposium on Reliable Distributed Systems - SRDS. :301-310.

There is an inherent trade-off between epidemic and deterministic tree-based broadcast primitives. Tree-based approaches have a small message complexity in steady state but are very fragile in the presence of faults. Gossip, or epidemic, protocols have a higher message complexity but also offer much higher resilience. This paper proposes an integrated broadcast scheme that combines both approaches. We use a low-cost scheme to build and maintain broadcast trees embedded in a gossip-based overlay. The protocol sends the message payload preferably via tree branches, but uses the remaining links of the gossip overlay for fast recovery and expedited tree healing. The experimental evaluation presented in the paper shows that our new strategy has a low overhead and that it is able to sustain a large number of faults while maintaining high reliability.
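The eager/lazy push idea behind this scheme can be sketched as follows. This is an illustrative Python toy, not the paper's implementation; class and method names are invented, and the abstract's recovery and tree-healing protocol is reduced to its bare mechanism: payloads travel on tree ("eager") links, duplicate deliveries demote a link to ID-only ("lazy") push, and a missing payload grafts a lazy link back into the tree.

```python
class PlumtreeNode:
    """Toy node combining a broadcast tree with a gossip overlay:
    payloads flow on eager (tree) links; lazy links carry only IDs."""

    def __init__(self, name):
        self.name = name
        self.eager = set()   # tree branches: full payload is forwarded
        self.lazy = set()    # remaining gossip links: only message IDs
        self.delivered = {}  # msg_id -> payload

    def link(self, other):
        # All overlay links start as eager; duplicates prune them to a tree.
        self.eager.add(other)
        other.eager.add(self)

    def broadcast(self, msg_id, payload):
        self.delivered[msg_id] = payload
        for peer in set(self.eager):
            peer.receive(msg_id, payload, self)

    def receive(self, msg_id, payload, sender):
        if msg_id in self.delivered:
            # Duplicate payload: demote the redundant link to lazy push.
            self.eager.discard(sender)
            self.lazy.add(sender)
            return
        self.delivered[msg_id] = payload
        for peer in set(self.eager) - {sender}:
            peer.receive(msg_id, payload, self)
        for peer in set(self.lazy) - {sender}:
            peer.i_have(msg_id, self)  # lazy push: advertise the ID only

    def i_have(self, msg_id, sender):
        if msg_id not in self.delivered:
            # Missing payload: recover it and graft the link into the tree.
            self.eager.add(sender)
            self.receive(msg_id, sender.delivered[msg_id], sender)
```

On a fully connected triangle, one broadcast delivers the message everywhere and prunes the redundant link on both ends, leaving a two-edge tree plus lazy links for repair.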

Leitão J, Pereira JO, Rodrigues L.  2007.  HyParView: A membership protocol for reliable gossip-based broadcast. The 37th Annual IEEE/IFIP International Conference on Dependable Systems and Networks - DSN. :419-428.

Gossip, or epidemic, protocols have emerged as a powerful strategy to implement highly scalable and resilient reliable broadcast primitives. For scalability reasons, each participant in a gossip protocol maintains a partial view of the system. The reliability of the gossip protocol depends upon some critical properties of these views, such as degree distribution and clustering coefficient. Several algorithms have been proposed to maintain partial views for gossip protocols. In this paper, we show that under a high number of faults, these algorithms take a long time to restore the desirable view properties. To address this problem, we present HyParView, a new membership protocol to support gossip-based broadcast that ensures high levels of reliability even in the presence of high rates of node failure. The HyParView protocol is based on a novel approach that relies on the use of two distinct partial views, which are maintained with different goals by different strategies.
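The two-view structure described in the abstract can be sketched as follows. This is a minimal illustrative Python sketch, not the HyParView protocol itself: the view sizes are invented placeholders, and the protocol's join, shuffle and symmetry-maintenance messages are omitted. It shows only the core idea of a small active view used for gossip, backed by a larger passive view from which failed neighbours are replaced.

```python
import random

class HyParViewNode:
    """Toy membership state: a small active view (gossip targets)
    plus a larger passive view kept as a reservoir of backups."""

    ACTIVE_SIZE = 5    # illustrative bounds; the paper derives sizes
    PASSIVE_SIZE = 30  # from the gossip fanout and network size

    def __init__(self, name):
        self.name = name
        self.active = set()   # peers we gossip with
        self.passive = set()  # known peers held in reserve

    def add_active(self, peer):
        if len(self.active) >= self.ACTIVE_SIZE:
            # Active view is full: demote a random member to passive.
            dropped = random.choice(tuple(self.active))
            self.active.discard(dropped)
            self.passive.add(dropped)
        self.active.add(peer)

    def on_failure(self, peer):
        # A failed active neighbour is replaced by promoting a
        # passive member, keeping the gossip degree stable.
        self.active.discard(peer)
        if self.passive:
            self.active.add(self.passive.pop())
```

The design point this illustrates: because replacements come from the locally known passive view, the node's gossip degree recovers immediately after a failure, rather than waiting for the overlay to converge.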

Cunha A, Pacheco H, Berdaguer P, Visser J.  2007.  Coupled Schema Transformation and Data Conversion For XML and SQL. Proceedings of the 9th International Symposium on Practical Aspects of Declarative Languages - PADL. 4354:290-304.

A two-level data transformation consists of a type-level transformation of a data format coupled with value-level transformations of data instances corresponding to that format. We have implemented a system for performing two-level transformations on XML schemas and their corresponding documents, and on SQL schemas and the databases that they describe. The core of the system consists of a combinator library for composing type-changing rewrite rules that preserve structural information and referential constraints. We discuss the implementation of the system’s core library, and of its SQL and XML front-ends in the functional language Haskell. We show how the system can be used to tackle various two-level transformation scenarios, such as XML schema evolution coupled with document migration, and hierarchical-relational data mappings that convert between XML documents and SQL databases.
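The notion of a two-level transformation can be illustrated as follows. This is a hedged Python sketch of the concept only, not the paper's Haskell combinator library: the `TwoLevelRule` type, the `rename_field` rule and the dictionary-based schema encoding are all invented for illustration. It pairs a type-level schema rewrite with the value-level migrations it induces, including a backward migration for round-tripping.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TwoLevelRule:
    """A type-level schema rewrite coupled with the value-level
    migrations it induces on instances of that schema."""
    transform_schema: Callable[[dict], dict]
    migrate: Callable[[dict], dict]    # old instance -> new instance
    unmigrate: Callable[[dict], dict]  # new instance -> old instance

def rename_field(old: str, new: str) -> TwoLevelRule:
    """Example rule: renaming a field changes the schema *and*
    determines how every conforming row/document must be converted."""
    def schema(s):
        return {new if k == old else k: v for k, v in s.items()}
    def fwd(row):
        return {new if k == old else k: v for k, v in row.items()}
    def bwd(row):
        return {old if k == new else k: v for k, v in row.items()}
    return TwoLevelRule(schema, fwd, bwd)

rule = rename_field("name", "full_name")
schema = {"name": "VARCHAR", "age": "INT"}
row = {"name": "Ada", "age": 36}
new_schema = rule.transform_schema(schema)  # type-level step
new_row = rule.migrate(row)                 # coupled value-level step
assert rule.unmigrate(new_row) == row       # round-trip preserved
```

Composing such rules end to end is what yields schema evolution coupled with document or database migration, as in the XML and SQL scenarios the abstract mentions.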

Cunha A, Visser J.  2007.  Strongly Typed Rewriting For Coupled Software Transformation. Proceedings of the 8th International Workshop on Rule-Based Programming - RULE. 174:17-34.

Coupled transformations occur in software evolution when multiple artifacts must be modified in such a way that they remain consistent with each other. An important example involves the coupled transformation of a data type, its instances, and the programs that consume or produce it. Previously, we have provided a formal treatment of transformation of the first two: data types and instances. The treatment involved the construction of type-safe, type-changing strategic rewrite systems. In this paper, we extend our treatment to the transformation of corresponding data processing programs. The key insight underlying the extension is that both data migration functions and data processors can be represented type-safely by a generalized abstract data type (GADT). These representations are then subjected to program calculation rules, harnessed in type-safe, type-preserving strategic rewrite systems. For ease of calculation, we use point-free representations and corresponding calculation rules. Thus, coupled transformations are carried out in two steps. First, a type-changing rewrite system is applied to a source type to obtain a target type together with (representations of) migration functions between source and target. Then, a type-preserving rewrite system is applied to the composition of a migration function and a data processor on the source (or target) type to obtain a data processor on the target (or source) type. All rewrites are type-safe.

Cunha A, Visser J.  2007.  Transformation of Structure-Shy Programs - Applied to XPath Queries and Strategic Functions. Proceedings of the Workshop on Partial Evaluation and Program Manipulation - PEPM. :11-20.

Various programming languages allow the construction of structure-shy programs. Such programs are defined generically for many different datatypes and only specify specific behavior for a few relevant subtypes. Typical examples are XML query languages that allow selection of subdocuments without exhaustively specifying intermediate element tags. Other examples are languages and libraries for polytypic or strategic functional programming and for adaptive object-oriented programming. In this paper, we present an algebraic approach to transformation of declarative structure-shy programs, in particular for strategic functions and XML queries. We formulate a rich set of algebraic laws, not just for transformation of structure-shy programs, but also for their conversion into structure-sensitive programs and vice versa. We show how subsets of these laws can be used to construct effective rewrite systems for specialization, generalization, and optimization of structure-shy programs. We present a type-safe encoding of these rewrite systems in Haskell, which itself uses strategic functional programming techniques.
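The contrast between structure-shy and structure-sensitive queries can be illustrated as follows. This is an illustrative Python sketch, not the paper's algebraic laws or Haskell encoding: `everywhere` and `titles` are invented names, and nested dictionaries/lists stand in for XML documents. The first query, in the spirit of an XPath `//title` selection, traverses any structure; the second is the structure-sensitive specialisation one can derive once the schema fixes where `title` elements occur.

```python
def everywhere(tag, doc):
    """Structure-shy query: collect every value under key `tag`,
    at any depth, without naming intermediate elements."""
    hits = []
    if isinstance(doc, dict):
        for key, value in doc.items():
            if key == tag:
                hits.append(value)
            hits.extend(everywhere(tag, value))
    elif isinstance(doc, list):
        for item in doc:
            hits.extend(everywhere(tag, item))
    return hits

def titles(catalogue):
    """Structure-sensitive specialisation of the same query, valid
    once the schema says `title` occurs only under `books` entries."""
    return [book["title"] for book in catalogue["books"]]

doc = {"books": [{"title": "TAPL", "year": 2002},
                 {"title": "SF", "year": 2018}]}
assert everywhere("title", doc) == titles(doc) == ["TAPL", "SF"]
```

Specialization trades the genericity of the first form for the efficiency and schema-awareness of the second; the paper's rewrite systems mechanise that conversion in both directions.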

Carvalho N, Pereira JO, Oliveira R, Rodrigues L.  2007.  Emergent structure in unstructured epidemic multicast. Proceedings of the 37th IEEE/IFIP International Conference on Dependable Systems and Networks (DSN).

In epidemic or gossip-based multicast protocols, each node simply relays each message to some random neighbors, such that all destinations receive it at least once with high probability. In sharp contrast, structured multicast protocols explicitly build and use a spanning tree to take advantage of efficient paths, and aim at having each message received exactly once. Unfortunately, when failures occur, the tree must be rebuilt. Gossiping thus provides simplicity and resilience at the expense of performance and resource efficiency. In this paper we propose a novel technique that exploits knowledge about the environment to schedule payload transmission when gossiping. The resulting protocol retains the desirable qualities of gossip, but approximates the performance of structured multicast. In some sense, instead of imposing structure by construction, we let it emerge from the operation of the gossip protocol. Experimental evaluation shows that this approach is effective even when knowledge about the environment is only approximate.