Recent Publications

Abreu R, Zoeteweij P, Van Gemund AJC. 2006. Program Spectra Analysis in Embedded Software: A Case Study. 12 pp.

Because of constraints imposed by the market, embedded software in consumer electronics is almost inevitably shipped with faults, and the goal is merely to reduce the inherent unreliability to an acceptable level before a product has to be released. Automatic fault diagnosis is a valuable tool for capturing software faults without extra effort spent on testing. Apart from serving as a debugging aid at design and integration time, fault diagnosis can help analyze problems during operation, which allows for more accurate system recovery. In this paper we discuss perspectives and limitations of applying a particular fault diagnosis technique, namely the analysis of program spectra, in the area of embedded software in consumer electronics devices. We illustrate these with our first experience with a test case from industry.
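
As a concrete illustration, the sketch below ranks program blocks by correlating block hit spectra with run outcomes. The Ochiai similarity coefficient used here comes from later spectrum-based fault localization work by the same authors; treat the data and the coefficient choice as illustrative assumptions, not this paper's exact method.

    import math

    # Hit spectra: rows are test runs, columns are program blocks;
    # spectra[i][j] == 1 if block j was exercised in run i.
    spectra = [
        [1, 1, 0],  # run 0 (passed)
        [0, 1, 1],  # run 1 (failed)
        [1, 0, 1],  # run 2 (passed)
        [1, 1, 1],  # run 3 (failed)
    ]
    errors = [0, 1, 0, 1]  # 1 if the corresponding run failed

    def ochiai(spectra, errors):
        """Score each block by how well its activity matches the error vector."""
        scores = []
        for j in range(len(spectra[0])):
            a11 = sum(1 for i, e in enumerate(errors) if e and spectra[i][j])
            a01 = sum(1 for i, e in enumerate(errors) if e and not spectra[i][j])
            a10 = sum(1 for i, e in enumerate(errors) if not e and spectra[i][j])
            denom = math.sqrt((a11 + a01) * (a11 + a10))
            scores.append(a11 / denom if denom else 0.0)
        return scores

    # Higher scores mark blocks more likely to host the fault.
    print(ochiai(spectra, errors))  # blocks 1 and 2 rank above block 0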

Moreno CB, Lopes N. 2004. B-Trees on P2P: Providing content indexing over DHT overlays. pp. 1-5.

The ability to search by content has been at the core of P2P data-sharing systems and is a fundamental tool in the modern Web. However, currently deployed P2P search technology still suffers from excessive centralization, abuse of network resources, or low accuracy. Efficient overlay structuring systems, like distributed hash tables (DHTs), provide adequate solutions to content location as long as unique identifiers are used. They cannot, however, directly support search without negative impacts on the load balance of data distribution among peer nodes. We will show that DHTs can be used as a base for efficient content indexing by building a B-Tree structure that coordinates the use of homogeneous-size blocks, compatible with the DHT load-balance assumptions. The remainder of the paper is dedicated to a discussion of some of the issues, problems, and possible solutions that need to be considered when building complex data structures on top of a peer-to-peer DHT layer.
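
A minimal sketch of the idea, assuming a put/get DHT interface (modelled here by a dictionary) and a fixed fan-out that keeps index blocks homogeneous in size; the node layout and names are hypothetical, not the report's concrete design.

    import hashlib, json

    # A dict stands in for the DHT's put/get interface; keys are hashes,
    # values are serialized B-tree nodes of bounded (homogeneous) size.
    dht = {}

    def dht_put(value):
        blob = json.dumps(value).encode()
        key = hashlib.sha1(blob).hexdigest()
        dht[key] = blob
        return key

    def dht_get(key):
        return json.loads(dht[key])

    ORDER = 4  # bounded fan-out keeps node (block) sizes homogeneous

    def make_leaf(entries):
        assert len(entries) <= ORDER
        return dht_put({"leaf": True, "entries": entries})

    def make_inner(seps, child_keys):
        return dht_put({"leaf": False, "seps": seps, "children": child_keys})

    def search(node_key, term):
        node = dht_get(node_key)
        if node["leaf"]:
            return [e for e in node["entries"] if e[0] == term]
        # descend to the child whose separator range covers the term
        for i, sep in enumerate(node["seps"]):
            if term < sep:
                return search(node["children"][i], term)
        return search(node["children"][-1], term)

    # Index two blocks of (term, document-id) pairs under one root.
    left = make_leaf([("apple", "doc1"), ("bolt", "doc2")])
    right = make_leaf([("cat", "doc3"), ("dog", "doc4")])
    root = make_inner(["cat"], [left, right])
    print(search(root, "dog"))  # -> [['dog', 'doc4']]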

Moreno CB, Moura F. 1997. Specification of convergent abstract data types for autonomous mobile computing. Distributed Systems Group, Minho University. 18 pp.

Traditional replica control mechanisms such as quorum consensus, primary replicas and other strong-consistency approaches are unable to provide a useful level of availability in unconstrained mobile environments. We define an environment that exploits pair-wise communication and allows autonomous creation and joining of replicas while ensuring eventual convergence. A set of composable components (ADTs) is formally specified using the SETS Calculus. These components can be used to build simple distributed applications that take advantage of peer-to-peer communication between mobile hosts.
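
A minimal sketch of one such convergent component, assuming a grow-only set whose join is set union; this illustrates the convergence idea, not the SETS Calculus specification itself.

    class GSet:
        """Grow-only set: operations only add, and merge is set union,
        so any order of pair-wise exchanges yields the same final state."""
        def __init__(self):
            self.items = set()

        def add(self, x):
            self.items.add(x)

        def merge(self, other):
            # join is union: commutative, associative, idempotent
            self.items |= other.items

    # Two replicas evolve independently on disconnected mobile hosts...
    a, b = GSet(), GSet()
    a.add("msg-1"); b.add("msg-2")
    # ...and converge after pair-wise synchronization in either direction.
    a.merge(b); b.merge(a)
    assert a.items == b.items == {"msg-1", "msg-2"}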

Couto R, Ribeiro AN, Campos JC. 2017. Pattern Based Software Development (PhD thesis).

Pereira V. 2016. A deductive verification tool for cryptographic software (MSc thesis).

Security is notoriously difficult to sell as a feature in software products. In addition to meeting a set of security requirements, cryptographic software has to be cheap, fast, and use few resources. The development of cryptographic software is an area with specific needs in terms of software development processes and tools. In this thesis we explore how formal techniques, namely deductive verification techniques, can be used to increase the guarantees that cryptographic software implementations indeed work as prescribed.
CAO (C and OCCAM) is a programming language specific to the domain of cryptography. Its control structures are similar to C, but it incorporates data types that deal directly with the needs of a programmer when translating specifications of cryptographic schemes (e.g., from scientific papers or standards) to the real world. The CAO language is supported by a compiler and an interpreter developed by HASLab in a sequence of research and development projects.
The CAOVerif tool was designed to allow deductive verification of programs written in CAO. This tool follows the same paradigm as other tools available for high-level programming languages, such as Frama-C, according to which a CAO program annotated with a specification is converted into an input program for the Jessie/Why3 tool-chain, where the specified properties are then analysed.
After the development of CAOVerif, a new tool specific to the domain of cryptography, named EasyCrypt, was developed. The objective of this project is to evaluate EasyCrypt as a potential backend for the CAOVerif tool, through the development of a prototype that demonstrates the advantages and disadvantages of this solution.
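
The sketch below only mimics the shape of this annotate-then-verify paradigm: a function carries a pre/postcondition specification, here checked at run time, whereas CAOVerif discharges such properties statically through Jessie/Why3. The contract helper and the example function are hypothetical.

    # CAOVerif proves annotated programs statically; this Python sketch only
    # mimics the *shape* of such annotations, with run-time checks, to show
    # what a specification attaches to code.
    def contract(pre, post):
        def wrap(f):
            def checked(*args):
                assert pre(*args), "precondition violated"
                r = f(*args)
                assert post(*args, r), "postcondition violated"
                return r
            return checked
        return wrap

    # e.g. modular exponentiation, a staple of cryptographic code:
    # requires: m >= 2 and e >= 0; ensures: result == (b ** e) % m
    @contract(pre=lambda b, e, m: m >= 2 and e >= 0,
              post=lambda b, e, m, r: r == pow(b, e) % m)
    def mod_exp(b, e, m):
        r = 1
        b %= m
        while e > 0:            # square-and-multiply
            if e & 1:
                r = (r * b) % m
            b = (b * b) % m
            e >>= 1
        return r

    print(mod_exp(5, 117, 19))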

Oliveira N. 2015. Architectural Reconfiguration of Interacting Services.

The exponential growth of information technology users and the rise of their expectations imposed a paradigmatic change in the way software systems are developed: from monolithic to modular, from centralised to distributed, from static to dynamic. Software systems are nowadays regarded as coordinated compositions of several computational blocks, distributed over different execution nodes, within flexible and dynamic architectures.
They are not flawless, though: execution nodes may fail, new requirements may become necessary, or the deployment environment may evolve in such a way that the system's quality-of-service measures become degraded. Reconfiguring, repairing, and adapting, preferably in a dynamic way, thus became relevant issues for the software architect.
But developing such systems right is still a challenge. In particular, current (formal) methods for characterising and analysing contextual changes and reconfiguration strategies fall behind the engineering needs.
This thesis formalises a framework, referred to as aris, for modelling and analysing architectural reconfigurations. The focus is set on the coordination layer, understood in the context of the Reo model, as it plays the key role in defining the behaviour of compositional systems. Therefore, it proposes a notion of a Coordination Pattern, as a graph-based model of the coordination layer; and of Reconfiguration Patterns, as parametric operations inducing topological changes in coordination patterns.
Properties of reconfigurations can be stated and evaluated from two different perspectives: behavioural and structural. The former compares the behavioural semantics of the reconfigured system based on whatever semantic model one associates to coordination patterns. The latter focuses on the graph topology of the coordination pattern; properties are expressed in a propositional hybrid logic, referring to the actual connectivity expressed in that graph.
To bring quality of service into the picture, the thesis also contributes a new semantic model for stochastic Reo, based on interactive Markov chains. This opens new possibilities for the analysis of both coordination patterns and reconfigurations, in particular for inspecting the effects of reconfigurations on the system's quality of service, or for determining reconfiguration triggers based on variations of the latter.
Another contribution of the thesis is the integration of aris in a monitoring strategy that enables self-adaptation and attempts to deliver it as a service in a cloud environment.
Tools are delivered to support aris; in particular, language-based technology to encode, transform, and analyse coordination and reconfiguration patterns, materialised in a dedicated editor.
All the above-mentioned contributions are assessed through a case study where a static system is reworked to support self-adaptation.
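
A rough sketch of these notions, assuming a coordination pattern represented as a labelled graph and a reconfiguration expressed as edge rewiring; the channel names and the reachability check (standing in for a hybrid-logic property) are illustrative only.

    # A coordination pattern as a labelled graph: nodes are ports, edges
    # are Reo-like channels. Names are illustrative, not the thesis's syntax.
    pattern = {
        "nodes": {"a", "b", "c"},
        "edges": {("a", "b", "sync"), ("b", "c", "fifo1")},
    }

    def remove_channel(p, edge):
        """A (simplified) reconfiguration pattern: a parametric operation
        inducing a topological change in the coordination pattern."""
        return {"nodes": p["nodes"], "edges": p["edges"] - {edge}}

    def add_channel(p, src, dst, kind):
        return {"nodes": p["nodes"] | {src, dst},
                "edges": p["edges"] | {(src, dst, kind)}}

    def reachable(p, src, dst):
        """A structural property over the graph topology, of the kind the
        thesis states in propositional hybrid logic."""
        seen, frontier = set(), {src}
        while frontier:
            n = frontier.pop()
            if n == dst:
                return True
            seen.add(n)
            frontier |= {t for (s, t, _) in p["edges"] if s == n and t not in seen}
        return False

    # Replace the buffered channel between b and c by a direct one.
    p2 = add_channel(remove_channel(pattern, ("b", "c", "fifo1")), "b", "c", "sync")
    assert reachable(p2, "a", "c")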

Gonçalves RC. 2015. Parallel Programming by Transformation.

The development of efficient software requires the selection of algorithms and optimizations tailored for each target hardware platform. Alternatively, performance portability may be obtained through the use of optimized libraries. However, currently all the invaluable knowledge used to build optimized libraries is lost during the development process, limiting its reuse by other developers when implementing new operations or porting the software to a new hardware platform. To answer these challenges, we propose a model-driven approach and framework to encode and systematize the domain knowledge used by experts when building optimized libraries and program implementations. This knowledge is encoded by relating the domain operations with their implementations, capturing the fundamental equivalences of the domain, and defining how programs can be transformed by refinement (adding more implementation details), optimization (removing inefficiencies), and extension (adding features). These transformations enable the incremental derivation of efficient and correct-by-construction program implementations from abstract program specifications. Additionally, we designed an interpretation mechanism to associate different kinds of behavior with domain knowledge, allowing developers to animate programs and predict their properties (such as performance costs) during their derivation. We developed a tool to support the proposed framework, ReFlO, which we use to illustrate how knowledge is encoded and used to incrementally (and mechanically) derive efficient parallel program implementations in different application domains. The proposed approach is an important step to make the process of developing optimized software more systematic, and therefore more understandable and reusable. The knowledge systematization is also the first step to enable the automation of the development process.
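
A toy rendition of derivation by transformation, assuming programs represented as nested terms and rewrite rules tagged as refinement or optimization; the rules and operations are invented for illustration and do not reflect ReFlO's actual models.

    # Programs are nested tuples (operation, args); rules rewrite abstract
    # operations into implementations (refinement) or drop inefficiencies
    # (optimization), deriving an implementation from a specification.

    def refine(term):
        """Replace an abstract operation by one concrete implementation."""
        op, *args = term
        args = [refine(a) if isinstance(a, tuple) else a for a in args]
        if op == "sort":                      # abstract operation ...
            return ("merge_sort", *args)      # ... refined to an algorithm
        return (op, *args)

    def optimize(term):
        """Remove an inefficiency exposed by earlier rewrites."""
        op, *args = term
        args = [optimize(a) if isinstance(a, tuple) else a for a in args]
        if (op == "merge_sort" and args
                and isinstance(args[0], tuple) and args[0][0] == "merge_sort"):
            return args[0]                    # sorting a sorted list is a no-op
        return (op, *args)

    spec = ("sort", ("sort", "xs"))
    impl = optimize(refine(spec))
    print(impl)  # ('merge_sort', 'xs'), derived step by step from the spec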

Paulo J. 2015. Dependable Decentralized Storage Management for Cloud Computing.

The volume of worldwide digital information is growing and will continue to grow at an impressive rate. Storage deduplication is accepted as a valuable technique for handling such data explosion: by eliminating unnecessary duplicate content from storage systems, both hardware and storage management costs can be reduced. Nowadays, this technique is applied to distinct storage types and is increasingly desired in cloud computing infrastructures, where a significant portion of worldwide data is stored. However, designing a deduplication system for cloud infrastructures is a complex task, as duplicates must be found and eliminated across a distributed cluster that supports virtual machines and applications with strict storage performance requirements. The core of this dissertation addresses precisely the challenges of deduplication in cloud infrastructures. We start by surveying and comparing the existing deduplication systems and the distinct storage environments targeted by them. This discussion is missing in the literature and is important for understanding the novel issues that must be addressed by cloud deduplication systems. Then, as our main contribution, we introduce our own deduplication system, which eliminates duplicates across virtual machine volumes in a distributed cloud infrastructure. Redundant content is found and removed in a cluster-wide fashion while having a negligible impact on the performance of applications using the deduplicated volumes. Our prototype is evaluated in a real distributed setting with a benchmark suited for deduplication systems, which is also a contribution of this dissertation.
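
The core mechanism can be sketched as a content-addressed chunk store: identical chunks hash to the same key and are kept once. The fixed chunk size and in-memory index below are simplifying assumptions; the dissertation's system additionally deals with distribution, performance isolation, and garbage collection.

    import hashlib

    store = {}      # digest -> chunk bytes (each unique chunk kept once)
    refcount = {}   # digest -> number of volume blocks pointing at it

    CHUNK = 4096

    def write_volume(data):
        """Split a volume image into fixed-size chunks, dedup, return a recipe."""
        recipe = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            d = hashlib.sha256(chunk).hexdigest()
            if d not in store:
                store[d] = chunk
            refcount[d] = refcount.get(d, 0) + 1
            recipe.append(d)
        return recipe

    def read_volume(recipe):
        return b"".join(store[d] for d in recipe)

    vm1 = b"A" * 8192 + b"B" * 4096
    vm2 = b"A" * 8192 + b"C" * 4096          # shares content with vm1
    r1, r2 = write_volume(vm1), write_volume(vm2)
    assert read_volume(r1) == vm1 and read_volume(r2) == vm2
    print(len(store), "unique chunks for", len(r1) + len(r2), "blocks")  # 3 for 6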

Maia F. 2015. Epidemic Store for Massive Scale Systems.

Considering the state-of-the-art systems for data management, it is observable that they exhibit two main frailties when deployed in a large-scale system. On one hand, coordination protocols used in traditional relational database management systems do not perform well when the system grows beyond tens of nodes. On the other hand, data management approaches that relax consistency guarantees, thus avoiding coordination, struggle with high levels of system churn. In this dissertation, we present a completely decentralized, coordination-free, scalable, and robust data store. Our design is aimed at environments with several thousands of nodes and high levels of churn. Offering the currently ubiquitous key-value data structures and programming interfaces, we describe how to overcome the challenges raised by the need to distribute data (essential for load balancing), to replicate data (the crux of fault tolerance), and to route requests (key to performability). Alongside the design of our data store, we make several contributions in the context of distributed systems slicing. We propose a novel slicing protocol that overcomes state-of-the-art limitations. Additionally, we propose a novel epidemic algorithm for scalable and decentralized organization of system nodes into groups. This algorithm is used as an alternative to slicing at the core of our system. It organizes nodes into groups of parameterizable size without the need for nodes to know the system size. The contributions made on slicing protocols and the proposed group construction protocol are independent of the design of the data store. They are generic and can also be used as building blocks for other applications.
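
As a stand-in for the placement logic such a store needs, the sketch below groups nodes with a consistent-hashing ring; the dissertation builds groups epidemically, by gossip and without any node knowing the system size, so take the ring and the GROUP_SIZE parameter as illustrative assumptions.

    import hashlib
    from bisect import bisect_right

    GROUP_SIZE = 3  # parameterizable group size (replication degree)

    def h(s):
        return int(hashlib.sha1(s.encode()).hexdigest(), 16)

    class Ring:
        def __init__(self, nodes):
            self.ring = sorted((h(n), n) for n in nodes)

        def group(self, key):
            """The GROUP_SIZE nodes clockwise from the key's position:
            one coordinator plus replicas, giving fault tolerance."""
            i = bisect_right(self.ring, (h(key), ""))
            return [self.ring[(i + j) % len(self.ring)][1]
                    for j in range(GROUP_SIZE)]

    ring = Ring([f"node-{i}" for i in range(10)])
    print(ring.group("user:42"))   # e.g. ['node-7', 'node-1', 'node-4']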

Neves F, Pereira JO, Vilaça R. 2015. Análise de Desempenho e Otimização do Apache HBase para Dados Relacionais [Performance Analysis and Optimization of Apache HBase for Relational Data].

The popularity that NoSQL systems have been gaining means they are constantly subjected to performance analyses and optimizations. The ability of these systems to scale better than traditional relational databases motivated the migration of countless applications to NoSQL systems, even when the flexible data structure they provide is not taken advantage of. However, querying such structured data carries an additional cost that derives from the flexibility of NoSQL systems.

This work demonstrates some performance limitations of Apache HBase, and proposes and evaluates Prepared Scan, an operation that aims to take advantage of the application's knowledge of the data structure, thus reducing the cost associated with querying structured data.
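
For context, the standard HBase client already lets an application narrow a scan with its schema knowledge through column projection, as in the sketch below (using the happybase Thrift client; host, table, and column names are hypothetical). Prepared Scan, the paper's contribution, pushes this schema knowledge further and is not part of stock HBase, so only the baseline projection is shown.

    import happybase  # Python client for HBase's Thrift gateway

    # A client with a fixed, relational-style schema fetches only the
    # columns it needs instead of whole rows.
    connection = happybase.Connection("hbase-host")
    table = connection.table("customers")

    for row_key, cols in table.scan(
            row_start=b"c0001",
            row_stop=b"c1000",
            columns=[b"info:name", b"info:city"]):  # known schema, projected
        print(row_key, cols[b"info:name"], cols[b"info:city"])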

Martins P. 2014. Embedding Attribute Grammars and their Extensions using Functional Zippers.

Attribute grammars are a formalism for expressing complex program analysis and transformation algorithms, which typically require multiple traversals of the abstract trees that represent programs. Attribute grammars have been extended with mechanisms that allow reference, higher-order, and circular attributes. These extensions enable the implementation of complex and modular language-processing computations.
In this work we embed attribute grammars and their extensions in a simple and elegant way, through a technique called zippers. In our technique, language specifications are defined as a set of independent, first-class components that can easily be composed into powerful language-processing environments.
We also developed techniques that describe bidirectional transformations between grammars. We define methods for specifying transformations that, through fully automatic mechanisms, are inverted and extended, generating attribute grammars that specify our bidirectional environment.
With this technique, several examples of language specification and processing were implemented, some of which are defined and explained in this document. We also created and developed a domain-specific language using our technique, and integrated it into a portal that allows program analyses to be created, fully configured to serve the particular requirements of each user.
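
A minimal sketch of the zipper idea in Python (the thesis works in Haskell): the focus plus a path of contexts lets attribute computations move freely up and down the tree. The tree shape and attribute names are illustrative.

    # A tree zipper: a focused subtree plus the path of contexts back to
    # the root. Attribute computations become functions over the zipper.
    def down(z, i):
        (tag, kids), path = z
        return kids[i], path + [(tag, kids, i)]

    def up(z):
        node, path = z
        return (path[-1][0], path[-1][1]), path[:-1]

    def depth(z):
        """An 'inherited' attribute: how far the focus is from the root."""
        return len(z[1])

    def size(z):
        """A 'synthesized' attribute: nodes in the subtree under focus."""
        (tag, kids), _ = z
        return 1 + sum(size((k, [])) for k in kids)

    # Nodes are (tag, children) pairs; leaves have no children.
    tree = ("let", [("decl", [("x", [])]), ("use", [("x", [])])])
    z = (tree, [])                    # zipper focused at the root
    z = down(z, 1)                    # move focus to the 'use' subtree
    print(depth(z), size(z))          # 1 2
    z = up(z)                         # back at the root
    print(depth(z), size(z))          # 0 5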