Recent Publications

Bernardeschi C, Lettieri G, Martini L, Masci P.  2005.  A Space-Aware Bytecode Verifier for Java Cards. Electronic Notes in Theoretical Computer Science. 141:237–254.

Bytecode verification is a key link in the security chain of the Java platform. This feature is optional in many embedded devices because the memory requirements of the verification process are too high. In this paper we propose a verification algorithm that markedly reduces memory use by performing the verification in multiple specialized passes. The algorithm reduces the type-encoding space by operating on different abstractions of the domain of types. Experimental results show that this bytecode verification can be performed directly on small-memory systems.
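
As a rough illustration of the idea only (not the algorithm from the paper), the Haskell sketch below runs one verification pass over a tiny, made-up instruction set using a coarse abstraction of the type domain that merely separates references from numeric values; finer abstractions would be handled by later passes over the same structure. All names and the instruction set are illustrative assumptions.

```haskell
-- Minimal sketch (not the paper's algorithm): one pass of abstract
-- interpretation over a straight-line bytecode fragment, using a coarse
-- abstraction of the type domain so each stack slot needs little space.
data Instr = PushInt | PushRef | AddInt | UseRef
  deriving Show

-- Pass 1 abstraction: only distinguish references from numeric values.
data Kind = Num | Ref deriving (Eq, Show)

checkKinds :: [Instr] -> Maybe [Kind]
checkKinds = foldl step (Just [])
  where
    step Nothing   _ = Nothing
    step (Just st) i = case (i, st) of
      (PushInt, _)            -> Just (Num : st)
      (PushRef, _)            -> Just (Ref : st)
      (AddInt, Num : Num : r) -> Just (Num : r)
      (UseRef, Ref : r)       -> Just r
      _                       -> Nothing   -- type error at this abstraction

-- A later pass could rerun the same loop with a finer abstraction
-- (e.g. distinguishing classes among Ref), reusing the structure above.
main :: IO ()
main = print (checkKinds [PushInt, PushInt, AddInt, PushRef, UseRef])
```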

Almeida JB, Almeida PS, Moreno CB.  2004.  Bounded version vectors. Distributed Computing - Lecture Notes in Computer Science. 3274:102–116.

Version vectors play a central role in update tracking in optimistic distributed systems, allowing the detection of obsolete or inconsistent versions of replicated data. Version vectors do not have a bounded representation: they are based on integer counters that grow indefinitely as updates occur. Existing approaches to this problem are scarce; the mechanisms proposed are either unbounded or operate only under specific settings. This paper examines version vectors as a mechanism for data causality tracking and clarifies their role with respect to vector clocks. It then introduces bounded stamps and proves them to be a correct alternative to integer counters in version vectors. The resulting mechanism, bounded version vectors, represents the first bounded solution to data causality tracking between replicas subject to local updates and pairwise symmetrical synchronization.
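
For context, here is a minimal Haskell sketch of the classical, unbounded mechanism the paper starts from: version vectors as maps from replica identifiers to integer counters, with local update, pairwise merge and a dominance test. The bounded stamps contributed by the paper are not reproduced here, and the names below are illustrative.

```haskell
import qualified Data.Map.Strict as M

-- The classical, unbounded mechanism: replica identifiers mapped to counters.
type Replica = String
type VersionVector = M.Map Replica Int

-- A local update at replica r bumps r's counter.
update :: Replica -> VersionVector -> VersionVector
update r = M.insertWith (+) r 1

-- Pairwise synchronization takes the pointwise maximum.
merge :: VersionVector -> VersionVector -> VersionVector
merge = M.unionWith max

-- v1 `dominates` v2 when v2 carries no update unknown to v1.
dominates :: VersionVector -> VersionVector -> Bool
dominates v1 v2 = and [ M.findWithDefault 0 r v1 >= n | (r, n) <- M.toList v2 ]

main :: IO ()
main = do
  let a = update "A" M.empty             -- update at replica A
      b = update "B" M.empty             -- concurrent update at replica B
  print (dominates a b, dominates b a)   -- (False,False): concurrent versions
  print (merge a b `dominates` a)        -- True: the merged version subsumes a
```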

Barthe G, Frade MJ, Giménez E, Pinto L, Uustalu T.  2004.  Type-based termination of recursive definitions. Mathematical Structures in Computer Science. 14:97-141.

This paper introduces "lambda-hat", a simply typed lambda calculus supporting inductive types and recursive function definitions whose termination is ensured by types. The system is shown to enjoy subject reduction and strong normalisation of typable terms, and to be stronger than a related system "lambda-G" in which termination is ensured by a syntactic guard condition. The system can, at will, be extended to also support coinductive types and corecursive function definitions.
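
As a loose Haskell analogy of the idea (not the lambda-hat calculus itself), the sketch below contrasts a general recursive definition, whose termination is not evident from its type, with the same computation expressed through a standard fold, where the recursion pattern is fixed by the scheme rather than checked by a separate syntactic guard. Both definitions are illustrative assumptions, not examples from the paper.

```haskell
-- Loose analogy only: general recursion vs. recursion through a scheme.

-- General recursion: nothing in the type says this terminates.
sumDown :: Int -> Int
sumDown n = if n <= 0 then 0 else n + sumDown (n - 1)

-- The same computation as a fold over an explicitly built structure:
-- the recursion pattern is fixed by foldr, not by an ad-hoc guard check.
sumDown' :: Int -> Int
sumDown' n = foldr (+) 0 [1 .. n]

main :: IO ()
main = print (sumDown 10, sumDown' 10)   -- (55,55)
```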

Barbosa M, Barbosa LS.  2004.  A Relational Model for Component Interconnection. Journal of Universal Computer Science. 10:808–823.

The basic motivation of component-based development is to replace conventional programming by the composition of reusable off-the-shelf units, externally coordinated through a network of connecting devices, to achieve a common goal. This paper introduces a new relational model for software connectors and discusses some preliminary work on its implementation in HASKELL. The proposed model adopts a coordination point of view in order to deal with components’ temporal and spatial decoupling and, therefore, to provide support for looser levels of inter-component dependency and effective external control.
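
A minimal Haskell sketch in the spirit of the paper, with names and connector examples that are assumptions of the sketch rather than the paper's definitions: connectors as binary relations represented by finite sets of pairs, with relational composition used to plug connectors together.

```haskell
import qualified Data.Set as S

-- Illustrative only: connectors as finite binary relations.
type Rel a b = S.Set (a, b)

-- Relational composition: plug the output side of r into the input side of s.
comp :: (Ord a, Eq b, Ord c) => Rel a b -> Rel b c -> Rel a c
comp r s = S.fromList
  [ (a, c) | (a, b) <- S.toList r, (b', c) <- S.toList s, b == b' ]

-- A synchronous channel just copies data; a lossy variant drops some of it.
sync :: Ord a => [a] -> Rel a a
sync xs = S.fromList [ (x, x) | x <- xs ]

lossy :: Ord a => (a -> Bool) -> [a] -> Rel a a
lossy keep xs = S.fromList [ (x, x) | x <- xs, keep x ]

main :: IO ()
main = print (comp (sync [1 .. 5 :: Int]) (lossy even [1 .. 5]))
  -- only the even values make it through the composite connector
```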

Moreno CB, Lopes N.  2003.  Towards peer-to-peer content indexing. Operating Systems Review. 37:90–96.

Distributed Hash Tables are the core technology in a significant share of system designs for peer-to-peer information sharing. Typically, a location mechanism is provided and object identifiers act as keys in the index of object locations. When introducing a search mechanism, where single words are used as keys, the cardinality of a key's image is driven by word popularity, and most present designs are unable to load-balance the index among the nodes. We present two contributions: a design that allows participating nodes to load-balance the indexing of popular keys and avoid content hot-spots on single nodes, and a distributed mechanism for probabilistic filtering of popular keys (with low search relevance) that paves the way for scalable full content indexing.
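
The sketch below illustrates the load-balancing idea only, under made-up names, a toy hash and a toy popularity rule: a popular word is indexed under several derived keys so that no single node holds its entire posting list; reads would query all partitions, writes would pick one. The paper's actual mechanism, including its probabilistic filtering, is not reproduced.

```haskell
type NodeId = Int

-- Toy hash, just for the sketch; a real deployment would use the DHT's own hash.
toyHash :: String -> Int
toyHash = foldl (\h c -> h * 31 + fromEnum c) 17

-- Toy popularity rule; the paper estimates popularity in a distributed,
-- probabilistic way rather than from a fixed list.
partitionsFor :: String -> Int
partitionsFor w = if w `elem` ["the", "and", "music"] then 8 else 1

-- A popular word is indexed under several derived keys, one per partition,
-- so its posting list is spread over several nodes instead of one hot-spot.
dhtKeys :: String -> [Int]
dhtKeys w = [ toyHash (w ++ "#" ++ show i) | i <- [0 .. partitionsFor w - 1] ]

responsibleNodes :: Int -> String -> [NodeId]
responsibleNodes nNodes w = [ k `mod` nNodes | k <- dhtKeys w ]

main :: IO ()
main = do
  print (responsibleNodes 16 "haskell")  -- unpopular word: a single key
  print (responsibleNodes 16 "music")    -- popular word: spread over 8 keys
```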

Pereira JO, Rodrigues L, Oliveira R.  2003.  Semantically reliable multicast: Definition, implementation, and performance evaluation. IEEE Transactions on Computers. 52:150-165.

Semantic reliability is a novel correctness criterion for multicast protocols based on the concept of message obsolescence: a message becomes obsolete when its content or purpose is superseded by a subsequent message. By exploiting obsolescence, a reliable multicast protocol may drop irrelevant messages to free additional buffer space for new messages. This makes the multicast protocol more resilient to transient performance perturbations of group members, thus improving throughput stability. This paper describes our experience in developing a suite of semantically reliable protocols. It summarizes the motivation, definition, and algorithmic issues and presents performance figures obtained with a running implementation. The data obtained experimentally is compared with analytic and simulation models. This comparison allows us to confirm the validity of these models and the usefulness of the approach. Finally, the paper reports on the application of our prototype to distributed multiplayer games.
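
A minimal sketch of the obsolescence idea only (not the protocol in the paper): a bounded sender buffer where a newly enqueued message first purges any buffered message it supersedes, as in the multiplayer-game setting where a later position update for an object makes earlier ones irrelevant. All names below are illustrative assumptions.

```haskell
-- Illustrative only: obsolescence-aware buffering, not the paper's protocol.
data Msg = Msg { objectId :: Int, payload :: String } deriving Show

-- m1 is made obsolete by m2 when both concern the same object.
obsoletedBy :: Msg -> Msg -> Bool
m1 `obsoletedBy` m2 = objectId m1 == objectId m2

-- Enqueue a message, first dropping anything it supersedes; only if the
-- buffer is still full does the sender have to block or shed load.
enqueue :: Int -> Msg -> [Msg] -> Either String [Msg]
enqueue capacity m buf
  | length buf' < capacity = Right (buf' ++ [m])
  | otherwise              = Left "buffer full even after purging obsolete messages"
  where buf' = filter (not . (`obsoletedBy` m)) buf

main :: IO ()
main = do
  let buf0 = [Msg 1 "pos (0,0)", Msg 2 "pos (5,5)"]
  print (enqueue 2 (Msg 1 "pos (1,0)") buf0)  -- supersedes the update for object 1
```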

Cunha A.  2003.  Automatic Visualization of Recursion Trees: a Case Study on Generic Programming. Electronic Notes in Theoretical Computer Science. 86(3):70-84.

Although the principles behind generic programming are already well understood, this style of programming is not widespread and examples of applications are rarely found in the literature. This paper addresses this shortage by presenting a new method, based on generic programming, to automatically visualize recursion trees of functions written in Haskell. Crucial to our solution is the fact that almost any function definition can be automatically factorized into the composition of a fold after an unfold of some intermediate data structure that models its recursion tree. By combining this technique with an existing tool for graphical debugging, and by extensively using Generic Haskell, we achieve a rather concise and elegant solution to this problem.
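
The factorization this method relies on can be illustrated with plain Haskell (the paper works generically, using Generic Haskell): a recursive function split into an unfold that builds its recursion tree followed by a fold that consumes it. The Fibonacci example below is an illustration, not taken from the paper.

```haskell
-- Sketch of the fold-after-unfold factorization, for the Fibonacci function.
data Tree = Leaf Int | Node Tree Tree deriving Show

-- unfold: build the recursion tree of fib for a given argument.
unfoldFib :: Int -> Tree
unfoldFib n
  | n < 2     = Leaf n
  | otherwise = Node (unfoldFib (n - 1)) (unfoldFib (n - 2))

-- fold: consume the recursion tree to obtain the result.
foldFib :: Tree -> Int
foldFib (Leaf n)   = n
foldFib (Node l r) = foldFib l + foldFib r

-- The original function is recovered as fold . unfold (a hylomorphism).
fib :: Int -> Int
fib = foldFib . unfoldFib

main :: IO ()
main = do
  print (fib 10)        -- 55
  print (unfoldFib 4)   -- the recursion tree a visualizer would draw
```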

Barbosa LS, Oliveira JN.  2003.  State-based Components Made Generic. Electronic Notes in Theoretical Computer Science. 82:39-56.

Genericity is a topic which is not sufficiently developed in state-based systems modelling, mainly due to a myriad of approaches and behaviour models which lack unification. This paper adopts coalgebra theory to propose a generic notion of a state-based software component, and an associated calculus, by quantifying over behavioural models specified as strong monads. This leads to the pointfree, calculational reasoning style which is typical of the so-called Bird-Meertens school.
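
A rough Haskell rendering of the central notion, with names that are assumptions of the sketch rather than the paper's: a state-based component over a behaviour monad m maps a state and an input to an m-computation of an output and a next state; choosing m = Maybe yields partial components, m = [] nondeterministic ones.

```haskell
-- Illustrative sketch: a component is a coalgebra parametrized by a monad m.
newtype Component m s i o = Component { runC :: s -> i -> m (o, s) }

data StackIn = Push Int | Pop

-- Partial behaviour: choosing m = Maybe lets Pop fail on the empty stack.
stack :: Component Maybe [Int] StackIn Int
stack = Component step
  where
    step st       (Push n) = Just (n, n : st)   -- echo the pushed value
    step (x : st) Pop      = Just (x, st)
    step []       Pop      = Nothing            -- failure, absorbed by Maybe

-- Choosing m = [] instead would model nondeterministic components.
main :: IO ()
main = print (runC stack [1, 2, 3] Pop)         -- Just (1,[2,3])
```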

Barbosa LS.  2003.  Towards a Calculus of State-based Software Components. Journal of Universal Computer Science. 9:891-909.

This paper introduces a calculus of state-based software components modelled as concrete coalgebras for some Set endofunctors, with specified initial conditions. The calculus is parametrized by a notion of behaviour, introduced as a strong (usually commutative) monad. The proposed component model and calculus are illustrated through the characterisation of a particular class of components, classified as separable, which includes the ones arising in the so-called model-oriented approach to systems design.
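
To give the flavour of such a calculus, the sketch below defines one illustrative combinator, sequential composition of components, once and for all behaviour monads. The Component type and every name here are assumptions of the sketch, not the paper's definitions.

```haskell
-- Illustrative sketch; Component and pipe are not the paper's names.
newtype Component m s i o = Component { runC :: s -> i -> m (o, s) }

-- Sequential composition: feed the first component's output to the second;
-- the compound state is the pair of the two states. Works for any monad m.
pipe :: Monad m => Component m s1 i b -> Component m s2 b o
     -> Component m (s1, s2) i o
pipe c1 c2 = Component step
  where
    step (s1, s2) i = do
      (b, s1') <- runC c1 s1 i
      (o, s2') <- runC c2 s2 b
      return (o, (s1', s2'))

-- Toy instances: a counter feeding a parity checker, with m = Maybe.
counter :: Component Maybe Int () Int
counter = Component (\n () -> Just (n + 1, n + 1))

parity :: Component Maybe () Int Bool
parity = Component (\() n -> Just (even n, ()))

main :: IO ()
main = print (runC (pipe counter parity) (0, ()) ())   -- Just (False,(1,()))
```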

Oliveira JN.  2001.  Bagatelle in C arranged for VDM SoLo. Journal of Universal Computer Science. 7:754-781.

This paper sketches a reverse engineering discipline which combines formal and semi-formal methods. Central to the former is denotational semantics, expressed in the ISO/IEC 13817-1 standard specification language (VDM-SL). This is strengthened with the algebra of programming, which is applied in “reverse order” so as to reconstruct formal specifications from legacy code. The latter include code slicing, a “shortcut” which trims down the complexity of handling the formal semantics of all program variables at the same time.
A key point of the approach is its constructive style. Reverse calculations go as far as absorbing auxiliary variables, introducing mutual recursion (if applicable) and reversing semantic denotations into standard generic programming schemata such as cata/paramorphisms.
The approach is illustrated for a small piece of code already studied in the code-slicing literature: Kernighan and Ritchie’s word count C programming “bagatelle”.
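
For flavour, the sketch below shows the kind of target such a reverse calculation aims at: the word-count counters written as a single fold over the input, instead of the imperative loop of the C original. It is an illustration of the schema under assumed names, not the paper's derived specification.

```haskell
-- Illustrative only: wc-style counters (characters, words, lines) as one fold.
data WcState = WcState { chars :: Int, words_ :: Int, lines_ :: Int, inWord :: Bool }

wc :: String -> WcState
wc = foldl step (WcState 0 0 0 False)
  where
    step st c = st { chars  = chars st + 1
                   , lines_ = lines_ st + (if c == '\n' then 1 else 0)
                   , words_ = words_ st + (if startsWord then 1 else 0)
                   , inWord = not isSpace' }
      where isSpace'   = c `elem` " \t\n"
            startsWord = not isSpace' && not (inWord st)

main :: IO ()
main = do
  let WcState c w l _ = wc "hello world\nfoo\n"
  print (l, w, c)   -- (2,3,16)
```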

Barbosa LS.  2001.  Process Calculi à la Bird-Meertens. Electronic Notes in Theoretical Computer Science. 44:47-66.

This paper is an attempt to apply the reasoning principles and calculational style underlying the so-called Bird-Meertens formalism to the design of process calculi, parametrized by a behaviour model. In particular, basically equational and pointfree proofs of process properties are given, relying on the universal characterisation of anamorphisms and therefore avoiding the explicit construction of bisimulations. The developed calculi can be directly implemented in a functional language supporting coinductive types, which provides a convenient way to prototype processes and assess alternative design decisions.
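
The closing remark about implementation can be illustrated in Haskell, whose lazy data types play the role of coinductive types: the sketch builds a simple process (an infinite output stream) by an anamorphism from a seed. The process and all names are illustrative assumptions.

```haskell
-- Illustrative only: a process built by an anamorphism over a lazy stream.
data Stream a = Cons a (Stream a)

-- Anamorphism for streams: build a stream from a seed and a step function.
unfoldS :: (s -> (a, s)) -> s -> Stream a
unfoldS step s = let (a, s') = step s in Cons a (unfoldS step s')

-- A simple "ticker" process emitting successive natural numbers.
ticker :: Stream Int
ticker = unfoldS (\n -> (n, n + 1)) 0

-- Observe a finite prefix of the process behaviour.
takeS :: Int -> Stream a -> [a]
takeS 0 _          = []
takeS n (Cons a s) = a : takeS (n - 1) s

main :: IO ()
main = print (takeS 5 ticker)   -- [0,1,2,3,4]
```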

Campos JC, Harrison M.  2001.  Model checking interactor specifications. Automated Software Engineering. 8:275-310.

Recent accounts of accidents draw attention to “automation surprises” that arise in safety-critical systems. An automation surprise can occur when a system behaves differently from the expectations of the operator. Interface mode changes are one class of such surprises that have significant impact on the safety of a dynamic interactive system. They may take place implicitly as a result of other system action. Formal specifications of interactive systems provide an opportunity to analyse problems that arise in such systems. In this paper we consider the role that an interactor-based specification has as a partial model of an interactive system so that mode consequences can be checked early in the design process. We show how interactor specifications can be translated into the SMV model checker input language and how we can use such specifications in conjunction with the model checker to analyse the potential for mode confusion in a realistic case. Our final aim is to develop a general-purpose methodology for the automated analysis of interactive systems. This verification process can be useful in raising questions that have to be addressed in a broader context of analysis.

Doherty G, Campos JC, Harrison M.  2000.  Representational Reasoning and Verification. Formal Aspects of Computing. 12(4):260-277.

Formal approaches to the design of interactive systems rely on reasoning about properties of the system at a very high level of abstraction. Specifications to support such an approach typically provide little scope for reasoning about presentations and the representation of information in the presentation. In contrast, psychological theories such as distributed cognition place a strong emphasis on the role of representations, and their perception by the user, in the cognitive process. However, the post-hoc techniques for the observation and analysis of existing systems which have developed out of the theory do not help us in addressing such issues at the design stage. In this paper we show how a formalisation can be used to investigate the representational aspects of an interface. Our goal is to provide a framework to help identify and resolve potential problems with the representation of information, and to support understanding of representational issues in design. We present a model for linking properties at the abstract and perceptual levels, and illustrate its use in a case study of a flight deck instrument. There is a widespread consensus that proper tool support is a prerequisite for the adoption of formal techniques, but the use of such tools can have a profound effect on the process itself. In order to explore this issue, we apply a higher-order logic theorem prover to the analysis.

Denvir T, Oliveira JN, Plat N.  2000.  The Cash-Point (ATM) Problem. Formal Aspects of Computing. 12:211–215.

This paper provides a description and summary of the solutions submitted to a competition in formal specification, which was held during FM’99 in Toulouse, September 1999.

Moreno CB, Moura F.  1999.  Using structural characteristics for autonomous operation. Operating Systems Review. 33:90–96.

The majority of current mobile computing systems operate either in conjunction with a central network by some form of weak connectivity or tend to operate in total isolation and perform sporadic synchronization with a backup or a central network. These configurations miss an additional and very useful pattern of operation --- mobile-to-mobile interaction. Recent mobile devices have the capacity for direct communication among them, but this option is essentially neglected by the application software.

In order to address this pattern of operation we believe that there is a need to support re-usable peer-to-peer synchronization mechanisms that both respect data ownership and enable some level of state reconciliation.

Naming this operation pattern autonomous operation, we can observe that this pattern is already found in many legacy applications deployed in distributed systems. For example, personal information managers, Mail/News readers and Web browsers often store persistent state in local files, but tacitly assume a single copy. Noticing that these separate copies are in fact replicas of a distributed entity leads to the creation of semantically knowledgeable file synchronizers that strive to restore a unified state from these replicas.

Evolution from static distributed systems to mobile platforms raises a demand for applications that not only are adapted to user mobility but take advantage of it. It is clear that despite continuous improvements on connectivity support for mobile environments, the cost and coverage limits still imply a major share of disconnected operation. When connectivity does exist it usually interposes wide area networks between communication peers, when one party is on the road, leading to lower channel quality. On the other hand, user mobility is likely to lead to normally unforeseen physical proximity of the user's mobile computer with other mobile or fixed systems. This occurrence is likely to increase as the installed population of mobile devices increases.

In this work we show that, without imposing restrictions on availability, which is a crucial factor for personal applications, it is possible to enable some data sharing among autonomous mobile applications. This sharing would take advantage of any pairwise encounters of replica holders.

To determine the level of sharing that is compatible with permanent availability, we model general purpose data types that provide the necessary reconciliation guarantees. These guarantees are obtained by placing restrictions on the allowed behavior in order to avoid the occurrence of conflicting concurrent operations that would prevent reconciliation. Among other uses, these data types should help to identify sharable segments of data in classes of applications that traditionally support no sharing at all, and to identify which parts of the state can be effectively shared.

In the next section we present some examples of sharable data that motivate the modelling of a more generic and higher-level description. This description is presented in the third section together with a framework of convergent components. Section four builds on this framework and gives a general presentation of a Java implementation for a component hierarchy. Before presenting the conclusions we show how these tools were used to build a merger for pairs of bookmark files, giving some insight on how to combine the components to create concrete applications.
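
As an illustration of the kind of reconciliation guarantee discussed above (the paper's component hierarchy is in Java and considerably richer), the Haskell sketch below shows one convergent data type: a set that only grows, for which merging any two replicas by union is conflict-free and order-insensitive, in the spirit of the bookmark-merger example. All names and the sample data are assumptions of the sketch.

```haskell
import qualified Data.Set as S

-- Illustrative only: a grow-only set, where reconciliation never conflicts.
type GrowSet a = S.Set a

insert :: Ord a => a -> GrowSet a -> GrowSet a
insert = S.insert

-- Pairwise synchronization of two replicas: union is commutative,
-- associative and idempotent, so any encounter order converges.
reconcile :: Ord a => GrowSet a -> GrowSet a -> GrowSet a
reconcile = S.union

main :: IO ()
main = do
  let onLaptop = foldr insert S.empty ["news.bbc.co.uk", "haskell.org"]
      onPhone  = foldr insert S.empty ["haskell.org", "wiki.di.uminho.pt"]
  print (reconcile onLaptop onPhone)   -- both replicas converge to this set
```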