Publications

Silva JC, Campos JC, Saraiva JA.  2006.  Engenharia Reversa de Sistemas Interactivos Desenvolvidos em Java2/Swing. Interação 2006 - Actas da 2a. Conferência Nacional em Interação Pessoa-Máquina. :63-72. Abstract

The maintenance and evolution of interactive systems gives rise to significant problems that affect their efficiency and effectiveness. This work combines functional programming, strategic programming, code slicing, and models with formal semantics, in an attempt to strengthen the thesis that applying these methodologies and technologies, in conjunction with a source code reverse engineering process, significantly improves support for the maintenance of interactive systems.
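As a rough illustration of the strategic-programming ingredient mentioned in this abstract (the AST and the names below are invented for the example; this is not the authors' tool), a minimal Haskell sketch using the SYB library's generic queries:

```haskell
{-# LANGUAGE DeriveDataTypeable #-}
import Data.Generics (Data, Typeable, listify)

-- A toy fragment of an abstract syntax tree for Java/Swing code.
data Expr = Call String [Expr]   -- method call: name and arguments
          | Var String
          | Lit String
          deriving (Show, Data, Typeable)

-- One strategic query collects every method-call name at any depth,
-- with no per-constructor boilerplate.
callNames :: Expr -> [String]
callNames e = [n | Call n _ <- listify isCall e]
  where isCall (Call _ _) = True
        isCall _          = False

-- Reverse engineering the GUI layer could then start from, e.g.,
-- all listener registrations found in the source.
listenerCalls :: Expr -> [String]
listenerCalls = filter (== "addActionListener") . callNames
```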

Sousa N, Campos JC.  2006.  IVY Trace Visualiser. Interação 2006 - Actas da 2a. Conferência Nacional em Interação Pessoa-Máquina. :181-190. Abstract

In the context of the IVY project, a tool for the modelling and analysis of interactive systems has been under development, aimed at detecting potential usability problems early in the development of any interactive system. When a property under analysis does not hold, the tool attempts to produce a counter-example: a behaviour of the model that demonstrates the falsity of the property in question. These counter-examples, however, can reach considerable sizes, depending on the complexity of the model, which makes their analysis difficult. To ease that analysis, the architecture of the IVY tool includes an analysis support component. Through visual representations and analysis mechanisms, this component aims to make counter-examples easier to understand, clarifying which problem is being pointed out and what its possible solutions are. This paper presents the analysis component of the IVY tool, describing its architecture, the representations implemented, and the analysis mechanisms provided.

Barbosa MB, Farshim P.  2006.  Secure Cryptographic Workflow in the Standard Model. 7th International Conference on Cryptology in India - Progress in Cryptology - INDOCRYPT. 4329:379-393. Abstract

Following the work of Al-Riyami et al. we define the notion of key encapsulation mechanism supporting cryptographic workflow (WF-KEM) and prove a KEM-DEM composition theorem which extends the notion of hybrid encryption to cryptographic workflow. We then generically construct a WF-KEM from an identity-based encryption (IBE) scheme and a secret sharing scheme. Chosen ciphertext security is achieved using one-time signatures. Adding a public-key encryption scheme we are able to modify the construction to obtain escrow-freeness. We prove all our constructions secure in the standard model.
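The WF-KEM construction itself relies on identity-based encryption and secret sharing; the following is only a hedged sketch of the plain KEM-DEM hybrid composition that the paper's theorem extends (the record types and names are invented for illustration):

```haskell
import Data.ByteString (ByteString)

-- Abstract KEM and DEM interfaces ('m' supplies randomness).
data KEM m pk sk key enc = KEM
  { encap :: pk -> m (key, enc)        -- fresh session key + encapsulation
  , decap :: sk -> enc -> Maybe key }  -- recover the session key

data DEM key ct = DEM
  { demEnc :: key -> ByteString -> ct
  , demDec :: key -> ct -> Maybe ByteString }

-- Generic hybrid encryption: encrypt the payload under a fresh session
-- key and ship the key's encapsulation with the symmetric ciphertext.
hybridEnc :: Monad m => KEM m pk sk key enc -> DEM key ct
          -> pk -> ByteString -> m (enc, ct)
hybridEnc k d pk msg = do
  (key, c) <- encap k pk
  return (c, demEnc d key msg)

hybridDec :: KEM m pk sk key enc -> DEM key ct
          -> sk -> (enc, ct) -> Maybe ByteString
hybridDec k d sk (c, ct) = decap k sk c >>= \key -> demDec d key ct
```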

Pereira JO, Oliveira R, Rodrigues L.  2006.  Efficient epidemic multicast in heterogeneous networks. On The Move To Meaningful Internet Systems - OTM - International Workshop on Reliability in Decentralized Distributed Systems. 4278:1520-1529. Abstract

The scalability and resilience of epidemic multicast, also called probabilistic or gossip-based multicast, rests on its symmetry: Each participant node contributes the same share of bandwidth thus spreading the load and allowing for redundancy. On the other hand, the symmetry of gossiping means that it does not avoid nodes or links with less capacity. Unfortunately, one cannot naively avoid such symmetry without also endangering scalability and resilience. In this paper we point out how to break out of this dilemma, by lazily deferring message transmission according to a configurable policy. An experimental proof-of-concept illustrates the approach.
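A minimal sketch of the lazy-deferral idea (the data types and the policy shape are assumptions for illustration, not the paper's implementation):

```haskell
-- Each gossip target receives either the full payload (eager push) or
-- just an advertisement of the message id (lazy push); a lazy receiver
-- requests the payload only if it has not obtained it meanwhile.
data Gossip mid payload = Eager mid payload | Advert mid

-- A configurable policy decides, per target, whether to defer.
type DeferPolicy peer = peer -> Bool

gossipTo :: DeferPolicy peer -> peer -> mid -> payload -> Gossip mid payload
gossipTo defer target mid pay
  | defer target = Advert mid       -- constrained node or link: id only
  | otherwise    = Eager mid pay    -- full payload right away
```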

Carvalho N, Pereira JO, Rodrigues L.  2006.  Towards a generic group communication service. On The Move To Meaningful Internet Systems (OTM) International Symposium on Distributed Objects, Middleware, and Applications (DOA). 4276:1485-1502. Abstract

View synchronous group communication is a mature technology that greatly eases the development of reliable distributed applications by enforcing precise message delivery semantics, especially in the face of faults. It is therefore found at the core of multiple widely deployed and used middleware products. Although the implementation of a group communication system is a complex task, application developers may benefit from the fact that multiple group communication toolkits are currently available and supported. Unfortunately, each communication toolkit has a different interface, which differs from every other interface in subtle syntactic and semantic aspects. This hinders the design, implementation and maintenance of applications using group communication and forces developers to commit beforehand to a single toolkit, thus imposing a significant hurdle to portability. In this paper we propose jGCS, a generic group communication service for Java, that specifies an interface as well as minimum semantics that allow application portability. This interface accommodates existing group communication services, enabling implementation independence. Furthermore, it provides support for the latest state-of-the-art mechanisms that have been proposed to improve the performance of group-based applications. To support our claims, we present and experimentally evaluate implementations of jGCS for several major group communication systems, namely Appia, Spread/FlushSpread and JGroups, and describe the port of a large middleware product to jGCS.
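jGCS itself is a Java API; the type class below is only a conceptual rendering (with invented method names) of its central idea, namely that applications program against one toolkit-independent interface while the concrete group communication system is bound at configuration time:

```haskell
import Data.ByteString.Char8 (ByteString, pack)

-- A toolkit-independent facade: the instance (Appia, Spread/FlushSpread,
-- JGroups, ...) is chosen by configuration, not in application code.
class GroupSession s where
  joinGroup :: s -> String -> IO ()                 -- join a named group
  multicast :: s -> ByteString -> IO ()             -- send to all members
  onDeliver :: s -> (ByteString -> IO ()) -> IO ()  -- delivery callback

-- Application code stays toolkit-agnostic:
announce :: GroupSession s => s -> IO ()
announce session = multicast session (pack "node up")
```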

Cunha A, Oliveira JN, Visser J.  2006.  Type-safe Two-level Data Transformation. Proceedings of 14th International Symposium on Formal Methods - FM. 4085:284–299. Abstract

A two-level data transformation consists of a type-level transformation of a data format coupled with value-level transformations of data instances corresponding to that format. Examples of two-level data transformations include XML schema evolution coupled with document migration, and data mappings used for interoperability and persistence.
We provide a formal treatment of two-level data transformations that is type-safe in the sense that the well-formedness of the value-level transformations with respect to the type-level transformation is guarded by a strong type system. We rely on various techniques for generic functional programming to implement the formalization in Haskell.
The formalization addresses various two-level transformation scenarios, covering fully automated as well as user-driven transformations, and allowing transformations that are information-preserving or not. In each case, two-level transformations are disciplined by one-step transformation rules and type-level transformations induce value-level transformations. We demonstrate an example hierarchical-relational mapping and subsequent migration of relational data induced by hierarchical format evolution.
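In the spirit of the paper's Haskell encoding (the constructors below describe an invented toy universe of formats, not the paper's actual library), a one-step rule that couples a format change with the value-level migrations it induces:

```haskell
{-# LANGUAGE GADTs #-}

-- Value-level witnesses for a tiny universe of data formats.
data Type a where
  TInt    :: Type Int
  TString :: Type String
  TPair   :: Type a -> Type b -> Type (a, b)
  TList   :: Type a -> Type [a]

-- A type-level step packaged with the value-level transformations it
-- induces; the target format is hidden existentially.
data View a where
  View :: Type b -> (a -> b) -> (b -> a) -> View a

-- One-step rule: represent a list of pairs as a pair of lists
-- (a toy stand-in for a hierarchical-to-relational mapping step).
unzipRule :: Type [(a, b)] -> View [(a, b)]
unzipRule (TList (TPair ta tb)) =
  View (TPair (TList ta) (TList tb)) unzip (uncurry zip)
```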

Cunha A, Pinto JS, Proença J.  2006.  A Framework for Point-free Program Transformation. Selected Papers of the 17th International Workshop on Implementation and Application of Functional Languages - IFL. 4015:1–18. Abstract

The subject of this paper is functional program transformation in the so-called point-free style. By this we mean first translating programs to a form consisting only of categorically-inspired combinators, algebraic data types defined as fixed points of functors, and implicit recursion through the use of type-parameterized recursion patterns. This form is appropriate for reasoning about programs equationally, but difficult to actually use in practice for programming. In this paper we present a collection of libraries and tools developed at Minho with the aim of supporting the automatic conversion of programs to point-free (embedded in Haskell), their manipulation and rule-driven simplification, and the (limited) automatic application of fusion for program transformation.
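A small example of the style (standard Haskell, not the framework's own combinator names): the pointwise function below is first written point-free, and the banana-split law then fuses its two folds into a single traversal:

```haskell
import Control.Arrow ((&&&))

-- Pointwise definition:
sumCount :: [Int] -> (Int, Int)
sumCount xs = (sum xs, length xs)

-- Point-free form: a combinator expression open to equational rewriting.
sumCount' :: [Int] -> (Int, Int)
sumCount' = sum &&& length

-- After banana-split fusion: one traversal instead of two.
sumCount'' :: [Int] -> (Int, Int)
sumCount'' = foldr (\x (s, c) -> (s + x, c + 1)) (0, 0)
```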

Sousa AL, Correia A, Moura F, Pereira JO, Oliveira R.  2006.  Evaluating certification protocols in the partial database state machine. Proceedings of 1st International Conference on Availability, Reliability and Security (ARES). Abstract

Partial replication is an alluring technique to ensure the reliability of very large and geographically distributed databases while, at the same time, offering good performance. By correctly exploiting access locality, most transactions become confined to a small subset of the database replicas, thus reducing the processing, storage access and communication overhead associated with replication. The advantages of partial replication have, however, to be weighed against the added complexity that is required to manage it. In fact, if the chosen replica configuration prevents the local execution of transactions or if the overhead of consistency protocols offsets the savings of locality, potential gains cannot be realized. These issues are heavily dependent on the application used for evaluation and render simplistic benchmarks useless. In this paper, we present a detailed analysis of partial database state machine (PDBSM) replication by comparing alternative partial replication protocols with full replication. This is done using a realistic scenario based on a detailed network simulator and access patterns from an industry standard database benchmark. The results obtained allow us to identify the best configuration for typical online transaction processing applications.
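For context, a sketch of the generic certification test that DBSM-style protocols apply at commit time (partial replication additionally restricts which replicas hold the data and vote, which this sketch omits):

```haskell
import qualified Data.Set as Set

-- A committing transaction aborts if its read set intersects the write
-- set of any transaction that committed concurrently with it.
certify :: Ord k
        => Set.Set k    -- read set of the committing transaction
        -> [Set.Set k]  -- write sets of concurrently committed transactions
        -> Bool         -- True = commit, False = abort
certify readSet concurrentWrites =
  all (Set.null . Set.intersection readSet) concurrentWrites
```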

Oliveira R, Pereira JO, Correia A, Archibald E.  2006.  Revisiting 1-Copy equivalence in clustered databases. Proceedings of 21st ACM Symposium on Applied Computing (SAC). Abstract

Recently renewed interest in scalable database systems for shared nothing clusters has been supported by replication protocols based on group communication that are aimed at seamlessly extending the native consistency criteria of centralized database management systems. By using a read-one/write-all-available approach and avoiding the fine-grained synchronization associated with traditional distributed locking, one needs just a single distributed interaction step for each update transaction. Therefore the system can easily be scaled to a large number of replicas, especially with read-intensive loads typical of Web server support environments. In this paper we point out that 1-copy equivalence for causal consistency, which is subsumed by both serializability and snapshot isolation criteria, depends on basic session guarantees that are costly to ensure in clusters, especially in a multi-tier environment. We then point out a simple solution that guarantees causal consistency in the Database State Machine protocol and evaluate its performance, thus highlighting the cost of seamlessly providing common consistency criteria of centralized databases in a clustered environment.
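As a simplified illustration of the session guarantees at stake (a single scalar version stands in for whatever causality metadata an actual implementation carries):

```haskell
-- A client session remembers the version of its last observed commit;
-- a replica may serve that session's reads only after applying it,
-- otherwise reads could miss the client's own writes.
type Version = Int

canServeRead :: Version  -- latest version applied at this replica
             -> Version  -- version last observed by the client session
             -> Bool
canServeRead applied observed = applied >= observed
```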

Grov J, Soares L, Correia A, Pereira JO, Oliveira R, Pedone F.  2006.  A pragmatic protocol for database replication in interconnected clusters. Proceedings of 12th Pacific Rim International Symposium on Dependable Computing (PRDC). Abstract

Multi-master update everywhere database replication, as achieved by protocols based on group communication such as DBSM and Postgres-R, addresses both performance and availability. By scaling it to wide area networks, one could save costly bandwidth and avoid large round-trips to a distant master server. Also, by ensuring that updates are safely stored at a remote site within transaction boundaries, disaster recovery is guaranteed. Unfortunately, scaling existing cluster based replication protocols is troublesome. In this paper we present a database replication protocol based on group communication that targets interconnected clusters. In contrast with previous proposals, it uses a separate multicast group for each cluster and thus does not impose any additional requirements on group communication, easing implementation and deployment in a real setting. Nonetheless, the protocol ensures one-copy equivalence while allowing all sites to execute update transactions. Experimental evaluation using the workload of the industry standard TPC-C benchmark confirms the advantages of the approach.

Santo JE, Frade MJ, Pinto L.  2006.  Structural Proof Theory as Rewriting. Proceedings of 17th International Conference on Term Rewriting and Applications - RTA. :197-211. Abstract

The multiary version of the λ-calculus with generalized applications integrates smoothly both a fragment of sequent calculus and the system of natural deduction of von Plato. It is equipped with reduction rules (corresponding to cut-elimination/normalisation rules) and permutation rules, typical of sequent calculus and of natural deduction with generalised elimination rules. We argue that this system is a suitable tool for doing structural proof theory as rewriting. As an illustration, we investigate combinations of reduction and permutation rules and whether these combinations induce rewriting systems which are confluent and terminating. In some cases, the combination allows the simulation of non-terminating reduction sequences known from explicit substitution calculi. In other cases, we succeed in capturing interesting classes of derivations as the normal forms w.r.t. well-behaved combinations of rules. We identify six of these "combined" normal forms, among which are two classes, due to Herbelin and Mints, in bijection with normal, ordinary natural deductions. A computational explanation for the variety of "combined" normal forms is the existence of three ways of expressing multiple application in the calculus.
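For orientation, the shape of generalized application and its beta rule, as in the Joachimski-Matthes ΛJ calculus that this multiary system extends (the notation here is an assumption for illustration):

```latex
% From t : A -> B, u : A, and v : C under the assumption y : B,
% form the generalized application t(u, y.v) : C.  Beta then reads:
(\lambda x.\,t)(u,\; y.v) \;\longrightarrow\; v[\,y := t[\,x := u\,]\,]
```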

Jesus P, Moreno CB, Almeida PS.  2006.  ID Generation in Mobile Environments. Conference on Mobile and Ubiquitous Systems (CSMU). Abstract

This work focuses on the ID generation problem in mobile environments. We discuss the suitability of traditional mechanisms and techniques for generating IDs in mobile environments. Based on the "Birthday Problem", we derive formulas to evaluate ID trust, which is directly related to the number of entities in the system. Estimating the system size turns out to be the main difficulty of our approach. To deal with it, we develop a recursive scheme that still needs to be evaluated. Alternatively, we also design an aggregation algorithm to estimate the system size, whose results are currently being analysed.
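The underlying birthday-problem estimate (standard, stated here for orientation): the probability that n IDs drawn uniformly at random from a space of size N are pairwise distinct is

```latex
P_{\mathrm{unique}}(n, N)
  = \prod_{i=1}^{n-1}\left(1 - \frac{i}{N}\right)
  \;\approx\; e^{-n(n-1)/(2N)}
```

which is why an estimate of the system size n is needed to quantify ID trust.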

Silva A, Visser J.  2006.  Strong types for relational databases. Proceedings of the 2006 ACM SIGPLAN workshop on Haskell (Haskell '06). :25–36. Abstract

Haskell's type system with multi-parameter constructor classes and functional dependencies allows static (compile-time) computations to be expressed by logic programming on the level of types. This emergent capability has been exploited for instance to model arbitrary-length tuples (heterogeneous lists), extensible records, functions with variable-length argument lists, and (homogeneous) lists of statically fixed length (vectors). We explain how type-level programming can be exploited to define a strongly-typed model of relational databases and operations on them. In particular, we present a strongly typed embedding of a significant subset of SQL in Haskell. In this model, meta-data is represented by type-level entities that guard the semantic correctness of database operations at compile time. Apart from the standard relational database operations, such as selection and join, we model functional dependencies (among table attributes), normal forms, and operations for database transformation. We show how functional dependency information can be represented at the type level, and can be transported through operations. This means that type inference statically computes functional dependencies on the result from those on the arguments. Our model shows that Haskell can be used to design and prototype typed languages for designing, programming, and transforming relational databases.
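The flavour of type-level logic programming the paper builds on can be seen in this standard (not paper-specific) example, where a functional dependency turns a class into a type-level function:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances #-}

-- Peano naturals as types; addition is computed by instance resolution,
-- and the dependency 'a b -> c' lets the compiler infer the result type.
data Zero   = Zero
data Succ n = Succ n

class Add a b c | a b -> c where
  add :: a -> b -> c

instance Add Zero b b where
  add _ b = b

instance Add a b c => Add (Succ a) b (Succ c) where
  add (Succ a) b = Succ (add a b)
```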

Abreu R, Zoeteweij P, Van Gemund AJC.  2006.  An evaluation of similarity coefficients for software fault localization. 12th Pacific Rim International Symposium on Dependable Computing - PRDC. :39–46. Abstract

Automated diagnosis of software faults can improve the efficiency of the debugging process, and is therefore an important technique for the development of dependable software. In this paper we study different similarity coefficients that are applied in the context of a program spectral approach to software fault localization (single programming mistakes). The coefficients studied are taken from the systems diagnosis / automated debugging tools Pinpoint, Tarantula, and AMPLE, and from the molecular biology domain (the Ochiai coefficient). We evaluate these coefficients on the Siemens Suite of benchmark faults, and assess their effectiveness in terms of the position of the actual fault in the probability ranking of fault candidates produced by the diagnosis technique. Our experiments indicate that the Ochiai coefficient consistently outperforms the coefficients currently used by the tools mentioned. In terms of the amount of code that needs to be inspected, this coefficient improves on the next best technique by 5% on average, and by up to 30% in specific cases.
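The Ochiai coefficient in question, computed per statement from its program spectrum:

```latex
% a_ef / a_ep : failed / passed runs that execute the statement;
% a_nf : failed runs that do not execute it.
s_O = \frac{a_{ef}}{\sqrt{(a_{ef} + a_{nf})\,(a_{ef} + a_{ep})}}
```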

Ferreira JF, Sobral JL, Proença AJ.  2006.  JaSkel: A Java Skeleton-Based Framework for Structured Cluster and Grid Computing. Sixth IEEE International Symposium on Cluster Computing and the Grid (CCGRID 06). :301-304. Abstract

This paper presents JaSkel, a skeleton-based framework to develop parallel and grid applications. The framework provides a set of Java abstract classes as a skeleton catalogue, which implements recurring parallel interaction paradigms. This approach aims to improve code efficiency and portability. It also helps to structure scalable applications through the refinement and composition of skeletons. Evaluation results show that using the provided skeletons does contribute to improving both application development time and execution performance.
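Conceptually, a skeleton is just a reusable parallel pattern expressed as a higher-order operation; the Haskell one-liner below renders the classic "farm" (JaSkel's actual catalogue consists of Java abstract classes to be refined and composed, so this is only an analogy):

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)
import Control.DeepSeq (NFData)

-- The farm skeleton: one worker function applied to every task,
-- with tasks evaluated in parallel.
farm :: NFData b => (a -> b) -> [a] -> [b]
farm worker = parMap rdeepseq worker
```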