2017, Contributo in atti di convegno, ENG
Tingting Hu (a), Ivan Cibrario Bertolotti (b), Nicolas Navet (a)
The ever-growing complexity of present-day software systems raises new and more stringent requirements on their availability, pushing designers to use sophisticated fault tolerance techniques far beyond the areas for which they were traditionally conceived, and bringing new challenges to both the modelling and implementation phases. In this paper, we propose a design pattern to model, in a domain-specific language, one of the prominent fault-tolerance techniques, namely N-version programming. It can be integrated seamlessly into existing applications to enhance their functional correctness while preserving their timing characteristics, in particular the sampling times. It is also designed to ease automatic code generation. A counterpart of the same framework is implemented in a lower-level programming language, for use when direct model execution is impractical, as on severely resource-limited embedded targets.
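As background on the technique the abstract names, N-version programming runs several independently developed implementations of the same computation and votes on their results. A minimal sketch, with hypothetical version functions (exact-match majority voting only; the paper's framework also preserves sampling times, which this sketch does not address):

```python
from collections import Counter

def n_version_execute(versions, inputs):
    """Run each independently developed version on the same inputs
    and return the majority result (simple exact-match voting)."""
    results = [v(*inputs) for v in versions]
    winner, count = Counter(results).most_common(1)[0]
    if count <= len(versions) // 2:
        raise RuntimeError("no majority among version results")
    return winner

# Three hypothetical "versions" of the same computation:
v1 = lambda x: x * x
v2 = lambda x: x ** 2
v3 = lambda x: x + x   # a faulty version, outvoted by the other two

print(n_version_execute([v1, v2, v3], (4,)))  # -> 16
```

A single faulty version is masked as long as a majority of versions agree.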
2013, Rapporto di ricerca (Research report), ITA
MARTELLI Massimo TONINI Emanuele GESSI Silvia
Model-based development, in the Matlab/Simulink/Stateflow environment, of the control logic to be implemented on the electronic control unit of a CNH combine harvester. Rapid prototyping through Model-in-the-Loop (MIL) simulations, in co-simulation with the AMESim environment.
2013, Contributo in atti di convegno, ENG
Manca M., Paternò F., Santoro C., Spano L. D.
This paper presents a set of tools to support multimodal adaptive Web applications. The contributions include a novel solution for generating multimodal interactive applications, which can be executed in any browser-enabled device, and run-time support for obtaining multimodal adaptations at various granularity levels, which can be specified through a language for adaptation rules. The architecture is able to exploit model-based user interface descriptions and adaptation rules in order to achieve adaptive behaviour that can be triggered by dynamic changes in the context of use. We also report on an example application and a user test concerning adaptation rules that dynamically change the application's multimodality.
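Adaptation rules of the kind the abstract describes are commonly structured as event-condition-action triples over the context of use. A minimal sketch, with all names and the rule itself hypothetical (the paper's actual rule language is not reproduced here):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AdaptationRule:
    event: str                        # context change that triggers the rule
    condition: Callable[[dict], bool] # guard over the context of use
    action: Callable[[dict], dict]    # transformation of the UI description

def apply_rules(rules, event, context, ui):
    """Apply every rule matching the event whose condition holds."""
    for r in rules:
        if r.event == event and r.condition(context):
            ui = r.action(ui)
    return ui

# Hypothetical rule: in a noisy environment, switch from vocal to graphical
noisy = AdaptationRule(
    event="noise_level_changed",
    condition=lambda ctx: ctx["db"] > 70,
    action=lambda ui: {**ui, "modality": "graphical"},
)

ui = apply_rules([noisy], "noise_level_changed", {"db": 80},
                 {"modality": "vocal"})
print(ui)  # -> {'modality': 'graphical'}
```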
2012, Contributo in atti di convegno, ENG
Spano L.D.; Cisternino A.; Paternò F.
The description of a gesture requires temporal analysis of values generated by input sensors and does not fit well with the observer pattern traditionally used by frameworks to handle user input. The current solution is to embed particular gesture-based interactions, such as pinch-to-zoom, into frameworks by notifying when a whole gesture is detected. This approach suffers from a lack of flexibility unless the programmer performs explicit temporal analysis of raw sensor data. This paper proposes a compositional, declarative meta-model for gesture definition based on Petri Nets. Basic traits are used as building blocks for defining gestures; each one notifies the change of a feature value. A complex gesture is defined by the composition of other sub-gestures using a set of operators. The user interface behaviour can be associated with the recognition of the whole gesture or with any of its subcomponents, addressing the problem of granularity for the notification events. The meta-model can be instantiated for different gesture recognition supports, and its definition has been validated through a proof-of-concept library. Sample applications have been developed supporting multitouch gestures on iOS and full-body gestures with Microsoft Kinect.
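The compositional idea, basic traits combined by operators into complex gestures, can be illustrated independently of the Petri-Net formalism. A minimal sketch over symbolic event lists, with hypothetical names and only a sequence operator (the paper's meta-model defines a richer operator set):

```python
class Gesture:
    """A gesture recognizer: recognize(events) returns the remaining
    events on success, or None if the gesture does not match."""
    def __init__(self, name, recognize):
        self.name = name
        self.recognize = recognize

def trait(event_name):
    # Basic building block: recognizes a single feature-change event.
    def rec(events):
        if events and events[0] == event_name:
            return events[1:]
        return None
    return Gesture(event_name, rec)

def seq(a, b):
    # Sequence operator: b must be recognized right after a.
    def rec(events):
        rest = a.recognize(events)
        return b.recognize(rest) if rest is not None else None
    return Gesture(f"{a.name};{b.name}", rec)

# A hypothetical "tap" gesture: touch-down followed by touch-up
tap = seq(trait("down"), trait("up"))
print(tap.recognize(["down", "up"]))  # -> [] (fully recognized)
```

Because `tap` is itself a `Gesture`, it can in turn be composed into larger gestures, which is the granularity property the abstract highlights.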
2010, Contributo in atti di convegno, ENG
Manca M.; Paternò F.
While multimodal interfaces are becoming increasingly used and supported, their development is still difficult and there is a lack of authoring tools for this purpose. The goal of this work is to discuss how multimodality can be specified in model-based languages and to apply such a solution to the composition of graphical and vocal interactions. In particular, we show how to provide structured support that aims to identify the most suitable solutions for modelling multimodality at various levels of detail. This is obtained using, amongst other techniques, the well-known CARE properties in the context of a model-based language able to support service-based applications and modern Web 2.0 interactions. The method is supported by an authoring environment, which provides some specific solutions that can be modified by designers to better suit their specific needs, and is able to generate implementations of multimodal interfaces in Web environments. An example of modelling a multimodal application and the corresponding, automatically generated, user interfaces is reported as well.
2009, Articolo in rivista, ENG
Paternò F.; Santoro C.; Spano L. D.
One important evolution in software applications is the spread of service-oriented architectures in ubiquitous environments. Such environments are characterized by a wide set of interactive devices, with interactive applications that exploit a number of functionalities developed beforehand and encapsulated in Web services. In this article, we discuss how a novel model-based UIDL can provide useful support both at design and runtime for these types of applications. Web service annotations can also be exploited for providing hints for user interface development at design time. At runtime the language is exploited to support dynamic generation of user interfaces adapted to the different devices at hand during the user interface migration process, which is particularly important in ubiquitous environments.
2008, Contributo in atti di convegno, ENG
Paternò F.; Santoro C.; Scorcia A.
While several solutions for adapting desktop user interfaces for mobile access have been proposed, there is still a lack of solutions able to automatically generate mobile versions taking semantic aspects into account. In this paper, we propose a general solution able to dynamically build logical descriptions of existing desktop Web site implementations, adapt the design to the target mobile device, and generate an implementation that preserves the original communication goals while taking into account the actual resources available on the target device. We describe the novel transformations supported by our new solution, show example applications, and report on initial user tests.
2007, Contributo in atti di convegno, ENG
Paternò F.
Nowadays, everyday life is becoming a multi-platform environment where people are surrounded by different types of devices through which they can connect to networks in different ways. Most of them are mobile personal devices carried by users moving freely about different environments populated by various other devices. Such environments raise many issues for designers and developers, such as the possibility of obtaining user interfaces able to adapt to the interaction resources of the available devices. The main learning objective is to gain knowledge and skills in methods and tools for the design of multi-device interfaces that can support designers and developers to address a number of issues raised by ubiquitous computing.
2007, Contributo in atti di convegno, ENG
Paternò F.; Santos I.
The need for support of multi-user interaction is growing in several application domains, including the Web. However, there is a lack of tools able to support designers and developers of multi-user, multi-device interactive applications. In this paper we present a proposal for this purpose describing how it can provide support at both design and run-time. The design and development process can start with task model descriptions and such logical information is used to generate interfaces adapted to the target platforms and mechanisms for their coordination at run-time.
2005, Contributo in atti di convegno, ENG
Bandelloni R.; Mori G.; Paternò F.
In this paper, we present a solution for dynamic generation of Web user interfaces that can dynamically migrate among different platforms. The solution is based on a migration/proxy server able to automatically convert a desktop service into a service accessible from a different platform, such as a mobile one. This solution can support new environments where users can freely move about and change interaction device while still continuing task performance and accessing the application in a usable manner.
2003, Rapporto tecnico, ENG
Chesta C.; Paternò F.; Santoro C.
One of the main challenges for designers and developers of interactive software systems is to address the ever-increasing availability of new types of interaction platforms. In this paper we present a solution that brings together software engineering and human-computer interaction concepts to address this issue. We then discuss the proposal on the basis of an evaluation carried out in a software development centre for mobile applications.
2002, Contributo in atti di convegno, ENG
Paternò F.; Santoro C.
The wide variety of devices currently available, which is bound to increase in the coming years, poses a number of issues for the design cycle of interactive software applications. Model-based approaches can provide useful support in addressing this new challenge. In this paper we present and discuss a model-based method for the design of nomadic applications, showing how the use of models can support their design. The aim is to enable each interaction device to support the appropriate tasks users expect to perform, and designers to develop the various device-specific application modules in a consistent manner.
1999, Contributo in atti di convegno, ENG
Paternò F.; Mancini C.
In this paper we discuss the design and implementation of hypermedia able to adapt to different types of usage. Our work is based on a method whose main elements are: a strong user involvement, the identification of different types of users, and the application of task models to support the design and development of hypermedia. Different task models are associated with different types of users. We show examples of the approach proposed taken from a case study where museum information is considered.
1999, Rapporto tecnico, ENG
Paternò F.; Santoro C.
In this paper we present a method that aims to integrate the use of formal techniques in the design process of interactive applications, with particular attention to those applications where both usability and safety are main concerns. The method is supported by a set of tools. We will also discuss how the resulting environment can be helpful in reasoning about multi-user interactions using the task model of an interactive application. Examples are provided from a case study that we have performed in the field of air traffic control.