Recent Publications

Moreno CB, Jin D, He D, Liu D.  2010.  Genetic algorithm with local search for community mining in complex networks. 22nd International Conference on Tools with Artificial Intelligence - ICTAI. 1:105–112.

Detecting communities in complex networks has attracted considerable attention in several application domains. Targeting this problem, a local search based genetic algorithm (GALS) employing a graph-based representation (LAR) is proposed in this work. The core of GALS is a local search based mutation technique. To overcome the drawbacks of existing mutation methods, the concept of a marginal gene is introduced and, by analysing the modularity function, an effective and efficient mutation method combined with a local search strategy based on this concept is derived. Moreover, percolation theory on ER random graphs is employed to further clarify the effectiveness of the LAR representation; a Markov random walk based method is adopted to produce an accurate and diverse initial population; and a graph-based mechanism significantly reduces the solution space of GALS. GALS has been tested on both computer-generated and real-world networks and compared with competitive community mining algorithms. Experimental results show that it is highly effective and efficient at discovering community structure.
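GALS optimises Newman's modularity function; as a generic illustration of what that objective measures (not the paper's implementation), here is a minimal Python sketch of modularity for a given partition:

```python
def modularity(adj, communities):
    """Newman's modularity Q for an undirected graph.

    adj:         {node: set of neighbours} adjacency structure
    communities: {node: community id} partition being evaluated
    """
    m = sum(len(nbrs) for nbrs in adj.values()) / 2  # number of edges
    q = 0.0
    for i in adj:
        for j in adj:
            if communities[i] != communities[j]:
                continue  # only same-community pairs contribute
            a_ij = 1.0 if j in adj[i] else 0.0
            q += a_ij - len(adj[i]) * len(adj[j]) / (2 * m)
    return q / (2 * m)

# Two triangles joined by a single bridge edge; splitting them into
# their natural communities yields Q = 5/14, roughly 0.357.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(round(modularity(adj, part), 3))  # 0.357
```

Community mining algorithms such as GALS search the space of partitions for one that maximises this quantity.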

Jesus P, Moreno CB, Almeida PS.  2010.  Fault-Tolerant Aggregation for Dynamic Networks. 29th Symposium on Reliable Distributed Systems (SRDS). :37–43.

Data aggregation is a fundamental building block of modern distributed systems. Averaging-based approaches, commonly designated gossip-based, are an important class of aggregation algorithms as they allow all nodes to produce a result, converge to any required accuracy, and work independently of the network topology. However, existing approaches exhibit many dependability issues when used in faulty and dynamic environments. This paper extends our own technique, Flow Updating, which is immune to message loss, to operate in dynamic networks, improving its fault tolerance characteristics. Experimental results show that the new version of Flow Updating vastly outperforms previous averaging algorithms: it self-adapts to churn without requiring periodic restarts, and supports node crashes and high levels of message loss.
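As a hedged illustration of the averaging-based (gossip) class the paper refers to (not Flow Updating itself), a minimal pairwise-averaging sketch: at each step two random nodes replace their values with the pair's mean, which conserves the global sum and drives every node towards the true average.

```python
import random

def gossip_average(values, rounds=2000, seed=42):
    """Pairwise averaging gossip (illustrative): repeatedly pick two
    random nodes and give both the mean of their values.  The sum of
    all values is conserved, so every node converges to the average."""
    vals = list(values)
    rng = random.Random(seed)
    for _ in range(rounds):
        i, j = rng.sample(range(len(vals)), 2)
        vals[i] = vals[j] = (vals[i] + vals[j]) / 2
    return vals

# One node starts with 10, the rest with 0: all converge towards 2.0.
vals = gossip_average([10.0, 0.0, 0.0, 0.0, 0.0])
print(max(abs(v - 2.0) for v in vals) < 1e-3)
```

Flow Updating improves on this basic scheme by exchanging flows along links instead of overwriting values, which is what makes it immune to message loss.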

Borges P, Machado J, Villani E, Campos JC.  2010.  From SFC Specification to C Programming Language on the Context of Aerospace Systems Control. Workshop on Intelligent Control Systems. :46-51.

Aerospace systems software is developed with precautions to avoid dangerous situations. The controllers of these systems are usually critical embedded real-time controllers, and the corresponding software is developed in the C programming language. This paper is set in the context of developing embedded critical real-time software for aerospace applications, based on formalisms commonly used in the industrial automation field. More precisely, the proposed approach consists in translating an SFC specification into C programming language code, taking into account the behaviour of the controller device on which the specification will be implemented. An illustrative case study is presented at the end of the paper to facilitate the understanding of the proposed approach.

Silva JL, Ribeiro O, Fernandes JP, Campos JC, Harrison M.  2010.  The APEX framework: prototyping of ubiquitous environments based on Petri nets. Human-Centred Software Engineering - HCSE. 6409:6-21.

The user experience of ubiquitous environments is a determining factor in their success. The characteristics of such systems must be explored as early as possible to anticipate potential user problems, and to reduce the cost of redesign. However, the development of early prototypes to be evaluated in the target environment can be disruptive to the ongoing system and therefore unacceptable. This paper reports on an ongoing effort to explore how model-based rapid prototyping of ubiquitous environments might be used to avoid actual deployment while still enabling users to interact with a representation of the system. The paper describes APEX, a framework that brings together an existing 3D Application Server with CPN Tools. APEX-based prototypes enable users to navigate a virtual world simulation of the envisaged ubiquitous environment. The APEX architecture and the proposed CPN-based modelling approach are described. An example illustrates their use.

Silva JC, Silva CE, Campos JC, Saraiva JA.  2010.  GUI Behavior from Source Code Analysis. 4a. Conferência de Grupo Português de Computação Gráfica. :81-88.

When developing interactive applications, the correctness of graphical user interface (GUI) code is essential. GUIs are critical components of today's software, and contemporary software tools do not provide enough support for ensuring the quality of GUI code. GUIsurfer, a GUI reverse engineering tool, enables the evaluation of behavioral properties of user interfaces. It performs static analysis of GUI code, generating state machines that can help in the evaluation of interactive applications. This paper describes the design, the software architecture, and the use of GUIsurfer through an example. The tool is easily re-targetable, and support is available for Java/Swing and WxHaskell. The paper sets the ground for a generalization effort to consider rich internet applications, exploring GWT, a user interface programming toolkit for web applications.

Silva JL, Ribeiro O, Fernandes JP, Campos JC, Harrison M.  2010.  Prototipagem rápida de ambientes ubíquos. 4a. Conferência de Grupo Português de Computação Gráfica. :121-128.

The user experience of ubiquitous environments is a determining factor in their success. The characteristics of such systems must be explored as early as possible to anticipate potential usability problems and to reduce redesign costs. However, the early development of prototypes to be evaluated in the target environment can be disruptive and become unacceptable. Prototyping ubiquitous environments can help by providing indications of how users will react to them. This paper describes APEX, a rapid prototyping framework for ubiquitous environments that brings together CPN Tools with an existing 3D application server. Prototypes developed with APEX allow users to navigate a virtual world and experience many of the features of the proposed design. The APEX architecture and the CPN-based modelling approach are described. An example illustrates the approach.

Borges P, Villani E, Machado J, Campos JC, F. J.  2010.  Abordagem Sistemática para o Controlo Seguro de Sistemas aeroespaciais. XIV International Congress on Project Engineering. :10p.

The formal verification of the behaviour of real-time systems is a complex task, for several reasons. There is a large body of work on the formal verification of real-time systems by model checking, and several tools have been developed for this purpose. One of the most complex problems in the analysis of real-time controllers is the translation of the controllers' programming languages into formal languages, for instance timed finite automata, so that they can then be formally verified with existing model checkers. If the methodology used to develop the programs is well designed and well understood, this task can be greatly simplified. On the other hand, a large proportion of real-time systems (in particular the embedded systems we intend to study) is programmed in the C language. This paper aims to establish a methodology for generating C code from the SFC specification formalism, taking into account the formal verification of desired behavioural properties of the system, using the model checking technique and the UPPAAL model checker.

Silva JC, Silva CE, Gonçalo R, Saraiva JA, Campos JC.  2010.  The GUISurfer tool: towards a language independent approach to reverse engineering GUI code. Proceedings of the 2nd ACM SIGCHI Symposium on Engineering interactive computing systems. :181-186.

Graphical user interfaces (GUIs) are critical components of today's software, and developers are dedicating an increasingly large portion of code to implementing them. Given their increased importance, the correctness of GUI code is becoming essential. This paper describes the latest results in the development of GUISurfer, a tool to reverse engineer the GUI layer of interactive computing systems. The ultimate goal of the tool is to enable the analysis of interactive systems from their source code.

Freire L, Arezes P, Campos JC.  2010.  Princípios de Ergonomia e Design discutidos através de plataformas utilizadas para e-learning. Occupational Safety and Hygiene (SHO 2010). :256-260.

The main goal of this paper is to present a literature review of the principles of Ergonomics applied to tools used for e-learning. It is known that errors in the application of these principles impose cognitive overload on the users of such platforms, so the paper seeks to address the topic comprehensively. Accordingly, the first part of the paper presents the ergonomic principles most often cited as foundations of Cognitive and Informational Ergonomics, followed by usability guidelines for information systems that can support academic interactions. The second part discusses observations of collaborative learning experiences between teachers and students in research and outreach projects. The methodology follows a qualitative research design, whose goal was to understand the interactions experienced virtually by users of systems intended to support the construction of academic knowledge. The results of this investigation provide an additional way of analysing Ergonomics-related issues, so that information systems always accommodate the different profiles of their users.

Barbosa MB, Pinto JS, Filliâtre JC, Vieira B.  2010.  A Deductive Verification Platform for Cryptographic Software. Proceedings of the 4th International Workshop on Foundations and Techniques for Open Source Software Certification - OpenCert. 33.

In this paper we describe a deductive verification platform for the CAO language. CAO is a domain-specific language for cryptography. We show that this language presents interesting challenges for formal verification, not only in the rich mathematical type system that it introduces, but also in the cryptography-oriented language constructions that it offers. We describe how we tackle these problems, and also demonstrate that, by relying on the Jessie plug-in included in the Frama-C framework, the development time of such a complex verification tool could be greatly reduced. We base our presentation on real-world examples of CAO code, extracted from the open-source code of the NaCl cryptographic library, and illustrate how various cryptography-relevant security properties can be verified.

Barbosa MB, Farshim P.  2010.  Strong Knowledge Extractors for Public-Key Encryption Schemes. 15th Australasian Conference on Information Security and Privacy - ACISP. 6168:164-181.

Completely non-malleable encryption schemes resist attacks which allow an adversary to tamper with both ciphertexts and public keys. In this paper we introduce two extractor-based properties that allow us to gain insight into the design of such schemes and to go beyond known feasibility results in this area. We formalise strong plaintext awareness and secret key awareness and prove their suitability in realising these goals. Strong plaintext awareness imposes that it is infeasible to construct a ciphertext under any public key without knowing the underlying message. Secret key awareness requires it to be infeasible to produce a new public key without knowing a corresponding secret key.

Barbosa MB, Farshim P.  2010.  Relations among Notions of Complete Non-malleability: Indistinguishability Characterisation and Efficient Construction without Random Oracles. Proceedings of 15th Australasian Conference - ACISP. 6168:145-163.

We study relations among various notions of complete non-malleability, where an adversary can tamper with both ciphertexts and public keys, and ciphertext indistinguishability. We follow the pattern of relations previously established for standard non-malleability. To this end, we propose a more convenient and conceptually simpler indistinguishability-based security model to analyse completely non-malleable schemes. Our model is based on strong decryption oracles, which provide decryptions under arbitrarily chosen public keys. We give the first precise definition of a strong decryption oracle, pointing out the subtleties in the different approaches that can be taken. We construct the first efficient scheme that is fully secure against strong chosen-ciphertext attacks, and therefore completely non-malleable, without random oracles.

Carvalho N, Pereira JO.  2010.  Measuring Software Systems Scalability for Proactive Data Center Management. The 12th International Symposium on Distributed Objects, Middleware, and Applications - DOA.

The current trend of increasingly larger Web-based applications makes scalability the key challenge when developing, deploying, and maintaining data centers. At the same time, the migration to the cloud computing paradigm means that each data center hosts an increasingly complex mix of applications, from multiple owners and in constant evolution. Unfortunately, managing such data centers in a cost-effective manner requires that the scalability properties of the hosted workloads be accurately known, namely to proactively provision adequate resources and to plan the most economical placement of applications. Obviously, stopping each of them and running a custom benchmark to assess its scalability properties is not an option.
In this paper we address this challenge with a tool that measures software scalability with respect to CPU availability, towards being able to predict its behavior in the face of varying resources and an increasing workload. The tool does not depend on any particular application and relies only on Linux's SystemTap probing infrastructure. We validate the approach first using simulation and then on an actual system. The resulting better prediction of scalability properties should allow improved (self-)management practices.

Araújo M, Pereira JO.  2010.  Evaluating Data Freshness in Large Scale Replicated Databases. INForum – Simpósio de Informática.

There is nowadays an increasing need for database replication, as the construction of high performance, highly available, and large-scale applications depends on it to keep data synchronized across multiple servers. A particularly popular approach, used for instance by Facebook, is the MySQL open source database management system and its built-in asynchronous replication mechanism. The limitations imposed by MySQL on replication topologies mean that data has to go through a number of hops or that each server has to handle a large number of slaves. This is particularly worrisome when updates are accepted by multiple replicas and in large systems. It is, however, difficult to accurately evaluate the impact of replication on data freshness, since one has to compare observations at multiple servers while running a realistic workload and without disturbing the system under test. In this paper we address this problem by introducing a tool that can accurately measure replication delays for any workload, and then apply it to the industry-standard TPC-C benchmark. This allows us to draw interesting conclusions about the scalability properties of MySQL replication.
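The core of such a measurement reduces to comparing, per transaction, when the master committed it with when a replica applied it; a toy sketch with made-up millisecond timestamps (illustrative only, not the paper's tool):

```python
def replication_delays(commit_ms, apply_ms):
    """Per-transaction replication delay: the lag between a commit
    on the master and the apply of the same transaction on a replica,
    both given as millisecond timestamps in transaction order."""
    return [a - c for c, a in zip(commit_ms, apply_ms)]

master  = [0, 100, 200, 300]   # commit times observed at the master (ms)
replica = [40, 150, 270, 400]  # apply times observed at one slave (ms)
print(replication_delays(master, replica))  # [40, 50, 70, 100]
```

The hard part in practice, which the paper addresses, is obtaining these two timestamp streams accurately, for a realistic workload, without disturbing the system under test.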

Maia F, Inigo A-J, Ruiz-Fuertes M, Oliveira R.  2010.  Scalable Transactions in the Cloud: Partitioning Revisited. On the Move to Meaningful Internet Systems. 6427:785-797.

Cloud computing is becoming one of the most used paradigms to deploy highly available and scalable systems. These systems usually demand the management of huge amounts of data, which cannot be handled by traditional or replicated database systems as we know them. Recent solutions store data in special key-value structures, in an approach that commonly lacks the consistency provided by transactional guarantees, as it is traded for high scalability and availability. In order to ensure consistent access to the information, the use of transactions is required. However, it is well known that traditional replication protocols do not scale well in a cloud environment. Here we look at current proposals to deploy transactional systems in the cloud and propose a new system aimed at being a step forward in achieving this goal. We then focus on data partitioning and describe the key role it plays in achieving high scalability.
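As a generic illustration of the partitioning idea the paper revisits (not its actual protocol), hash partitioning lets any node locate the partition owning a key without coordination:

```python
import hashlib

def partition_for(key, n_partitions):
    """Deterministically map a row key to one of n_partitions by
    hashing it; every node computes the same answer independently."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % n_partitions

# Transactions touching keys in a single partition can be handled
# locally; only cross-partition transactions need coordination.
keys = ["user:1", "user:2", "order:17"]
print([partition_for(k, 4) for k in keys])
```

The choice of partitioning function and granularity determines how many transactions stay single-partition, which is what drives scalability in such systems.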