Variability in a Software Product Line (SPL) is expressed in terms of a feature model. As software development efforts involve increasingly larger feature models, scalable techniques are required to manage their complexity. Furthermore, as many stakeholders have a vested interest in different aspects of a feature model, modularity techniques are required to independently express their views of the product line, abstracting away from unnecessary details. To address these issues of scalability and modularity, this paper introduces a theory of views for feature models, encompassing both view compatibility and view reconciliation for plugging together different views of a product line.
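To make the notions concrete, here is a minimal Python sketch of view compatibility, assuming feature models are encoded as propositional constraints over feature selections; the encoding, the names, and the brute-force check are illustrative assumptions, not the paper's actual formalisation.

```python
# Hypothetical sketch of view compatibility for feature models; the paper's
# formal definitions may differ. A feature model is encoded as a feature set
# plus a propositional constraint over feature selections.
from itertools import product

def products(features, constraint):
    """Enumerate all valid products (feature selections) by brute force."""
    for bits in product([False, True], repeat=len(features)):
        selection = dict(zip(features, bits))
        if constraint(selection):
            yield {f for f, on in selection.items() if on}

def view(prods, visible):
    """Project each product onto the features visible in a stakeholder view."""
    return {frozenset(p & visible) for p in prods}

def compatible(view_a, view_b, shared):
    """Two views agree if they induce the same products on shared features."""
    return ({frozenset(p & shared) for p in view_a}
            == {frozenset(p & shared) for p in view_b})

features = ["gui", "cli", "logging"]
fm = lambda s: s["gui"] or s["cli"]          # at least one front end
prods = set(map(frozenset, products(features, fm)))
ui_view = view(prods, {"gui", "cli"})
ops_view = view(prods, {"cli", "logging"})
print(compatible(ui_view, ops_view, {"cli"}))  # True: views agree on "cli"
```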
In this paper, we present a framework aimed at simulating control logics specified in the higher-order logic of the Prototype Verification System (PVS). The framework offers a library of predefined modules, a method for composing more complex modules, and an event-driven simulation engine. A developer simulates the specified system by providing its input waveforms as functions from time to logic levels. Once the simulation experiments give sufficient confidence in the correctness of the specification, the model can serve as a basis for the formal verification of properties of interest. A simple case study from a nuclear power plant application is presented. This paper is a contribution to research aimed at improving the development process of safety-critical systems by integrating simulation and formal specification methods.
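As an illustration of the waveform idea, the following Python sketch mimics the style of the framework, which is actually specified in PVS higher-order logic: inputs are functions from time to logic levels, and modules are composed as functions over such waveforms. All names here are hypothetical.

```python
# Illustrative Python analogue of the paper's approach (the real framework
# lives in PVS): waveforms are functions from time to logic levels.
HI, LO = True, False

def clock(period):
    """Square-wave input: a function from (discrete) time to a logic level."""
    return lambda t: HI if (t // period) % 2 == 0 else LO

def and_gate(a, b):
    """Compose two waveforms into the waveform of their conjunction."""
    return lambda t: a(t) and b(t)

def simulate(wave, horizon):
    """Plain sampling loop; a real engine would schedule events instead."""
    return [(t, wave(t)) for t in range(horizon)]

out = and_gate(clock(2), clock(3))
for t, level in simulate(out, 8):
    print(t, "HI" if level else "LO")
```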
Giandomenico FD, Kwiatkowska M, Martinucci M, Masci P, Qu H. 2010. Dependability analysis and verification for CONNECTed systems. In Proceedings of the 4th International Conference on Leveraging Applications of Formal Methods, Verification, and Validation - Volume Part II, pp. 263–277.
The Connect project aims to enable the seamless composition of heterogeneous networked systems. In this context, Verification and Validation (V&V) techniques are sought to ensure that the Connected system satisfies dependability requirements. Stochastic model checking and state-based stochastic methods are two appealing V&V approaches to accomplish this task. In this paper, we report on the application of the two approaches in a typical Connect scenario. Specifically, we make clear (i) how the two approaches can be employed to enhance the confidence in the correctness of the analysis, and (ii) how the complementarity of these approaches can be fruitfully exploited to extend the analysis.
Wireless networks are starting to be populated by interconnected devices that exhibit remarkable hardware and software differences. This fact raises a number of questions on the applicability of available results on dependability-related aspects of communication protocols, since those results were obtained for wireless networks with homogeneous nodes. In this work, we study the impact of heterogeneous communication and computation capabilities of nodes on dependability aspects of diffusion protocols for wireless networks. We build a detailed stochastic model of the logic layers of the communication stack using the Stochastic Activity Networks (SAN) formalism. The model takes into account relevant real-world aspects of wireless communication, such as transitional regions, the capture effect, and heterogeneous node capabilities. Dependability-related metrics are evaluated with analytical solution techniques for small networks, while simulation is employed in the case of large networks.
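For intuition, a toy Monte Carlo estimate of delivery probability under heterogeneous node reliability might look like the sketch below; the paper itself builds a detailed SAN model solved analytically for small networks, so this is only an illustrative analogue with made-up parameters.

```python
# Hypothetical Monte Carlo sketch of a flooding/diffusion protocol with
# heterogeneous nodes; not the paper's SAN model.
import random

def diffuse(neighbors, p_recv, source, trials=1000):
    """Estimate per-node delivery probability of one flood from `source`.
    `p_recv[n]` models node-specific reception reliability (heterogeneity)."""
    hits = {n: 0 for n in neighbors}
    for _ in range(trials):
        reached, frontier = {source}, [source]
        while frontier:
            nxt = []
            for u in frontier:
                for v in neighbors[u]:
                    if v not in reached and random.random() < p_recv[v]:
                        reached.add(v)
                        nxt.append(v)
            frontier = nxt
        for n in reached:
            hits[n] += 1
    return {n: hits[n] / trials for n in neighbors}

net = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
print(diffuse(net, {0: 1.0, 1: 0.9, 2: 0.5, 3: 0.8}, source=0))
```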
The EU Future and Emerging Technologies (FET) Project CONNECT aims at dropping the heterogeneity barriers that prevent the eternality of networking systems through a revolutionary approach: to synthesise on-the-fly the CONNECTors via which networked systems communicate. The CONNECT approach, however, comes at risk from the standpoint of dependability, stressing the need for methods and tools that ensure resilience to faults, errors and malicious attacks of the dynamically CONNECTed system. We are investigating a comprehensive approach, which combines dependability analysis, security enforcement and trust assessment, and is centred around a lightweight adaptive monitoring framework. In this project paper, we give an overview of the research we are undertaking towards this objective and propose a unifying workflow process that encompasses all the CONNECT dependability/security/trust concepts and models.
Dynamic, evolving systems pose new challenges from the point of view of Quality of Service (QoS) analysis, calling for techniques able to combine traditional offline methods with new ones applied at run-time. Tracking the evolution and updating the assessment consistently with system evolution requires not only advanced analysis methods, but also appropriate metrics that are representative of QoS properties in the addressed context. The ongoing European project Connect addresses systems evolution and aims at bridging technological gaps arising from the heterogeneity of networked systems by synthesising interoperability connectors on the fly. Starting from this ambitious goal, in this paper we present a metrics framework whereby classical dependability/QoS metrics can be refined and combined to characterise Connect applications and to support their monitoring and analysis.
Spreadsheets can be viewed as a highly flexible end-user programming environment that enjoys widespread adoption. However, spreadsheets lack many of the structured programming concepts of regular programming paradigms. In particular, the absence of data structures may lead spreadsheet users to introduce redundancy, loss, or corruption of data during edit actions. In this paper, we demonstrate how implicit structural properties of spreadsheet data can be exploited to offer edit assistance to spreadsheet users. Our approach is based on the discovery of functional dependencies among data items, which allows automatic reconstruction of a relational database schema. From this schema, new formulas and visual objects are embedded into the spreadsheet to offer features for auto-completion, guarded deletion, and controlled insertion. Schema discovery and spreadsheet enhancement are carried out automatically in the background and do not disturb the normal user experience.
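The core discovery step can be pictured with a small Python sketch that checks single-column functional dependencies by brute force; the actual tool's mining is more sophisticated, so treat this as illustrative only.

```python
# Hedged sketch of functional dependency discovery over spreadsheet columns
# (single-attribute determinants only, checked by brute force).
def functional_dependencies(rows, headers):
    """Yield (X, Y) pairs where column X functionally determines column Y."""
    fds = []
    for i, x in enumerate(headers):
        for j, y in enumerate(headers):
            if i == j:
                continue
            mapping = {}
            if all(mapping.setdefault(r[i], r[j]) == r[j] for r in rows):
                fds.append((x, y))
    return fds

headers = ["order", "customer", "city"]
rows = [(1, "ann", "lisbon"), (2, "bob", "porto"), (3, "ann", "lisbon")]
print(functional_dependencies(rows, headers))
# -> "order" determines every column; "customer" -> "city" holds in this sample
```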
This paper presents techniques and tools to transform spreadsheets into relational databases and back. A set of data refinement rules is introduced to map a tabular datatype into a relational database schema. Having expressed the transformation between the two data models as data refinements, we obtain for free the functions that migrate the data. We use well-known relational database techniques to optimize and query the data. Because data refinements define bidirectional transformations, we can map such a database back to an optimized spreadsheet. We have implemented the data refinement rules and constructed tools to manipulate, optimize, and refactor Excel-like spreadsheets.
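A toy Python round-trip conveys the bidirectional flavour, assuming a single functional dependency customer -> city; the paper derives such to/from migration functions systematically from data refinement rules rather than writing them by hand, so the two functions below are illustrative assumptions.

```python
# Minimal bidirectional sketch: normalize a flat sheet into two relations
# under the assumed FD customer -> city, then flatten back losslessly.
def to_db(rows):
    """Split (order, customer, city) rows into orders and customers tables."""
    orders = [(o, c) for o, c, _ in rows]
    customers = {c: city for _, c, city in rows}
    return orders, customers

def from_db(orders, customers):
    """Rejoin the two relations back into the original flat spreadsheet."""
    return [(o, c, customers[c]) for o, c in orders]

rows = [(1, "ann", "lisbon"), (2, "bob", "porto"), (3, "ann", "lisbon")]
db = to_db(rows)
assert from_db(*db) == rows  # round-trip: database back to the spreadsheet
```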