Research Topics
[MA] Handling Uncertainty in Cyber-Physical Systems
Traditionally, one of the first steps of every software engineering process is the requirements analysis. The software engineers determine stakeholders, use cases and features. One important step is to ascertain all possible circumstances (uses, misuses, environmental factors, etc.) that could influence the proper functioning of the software, in order to address those circumstances during design and implementation. With the emergence of connected and autonomous embedded systems (Cyber-Physical Systems), software engineering faces major challenges. Because those systems operate in the real world, they must adapt their behavior to the dynamically changing requirements imposed by the current environmental situation. Software systems supporting this kind of context-aware dynamic behavior are called “Self-Adaptive Systems”. Although those systems are aware of the changing environment and of the need to adapt their behavior accordingly, several crucial problems remain unsolved.

Problem 1: The system developer cannot anticipate all possible situations the Self-Adaptive System has to cope with while still executing the desired task reliably. One example of such systems are service robots, which need to interact with humans (Human-Robot Interaction, HRI). Because communication is highly dependent on the actors involved, requirements engineering for each possible interaction participant becomes impossible. One strategy is to use machine learning algorithms to continuously adapt the interaction behavior of a robot to a given situation without having to model all situations beforehand. These algorithms learn the success of their actions based on a growing knowledge base. However, a major drawback of those mechanisms is that the conceptual model of the learning mechanism is translated into program code and is thus hidden within the system.
This low level of abstraction makes it hard for developers to predict, test and evolve those algorithms at design time, while runtime adaptation becomes almost impossible.

Problem 2: Self-Adaptive Systems cannot be certain about the correctness and completeness of information about the environment. Usually, sensors (e.g., camera, microphone) gather data (e.g., depth images). This data is then analyzed to derive environmental information (e.g., the number of people involved in an interaction). Afterwards, the derived information is used to check whether an adaptation of the system is necessary. In such scenarios many problems can occur: on the one hand, the sensor data as well as the inference process might be incorrect, resulting in an erroneous knowledge base; on the other hand, the knowledge about the environment might be incomplete because of missing sensors or inference rules. These problems lead to wrong adaptations and thus to wrong behavior of the Self-Adaptive System. HRI systems use sensor data and specialized interpretation mechanisms to derive a multi-faceted model of the human interacting with the robot. Because important information for the interaction (e.g., whether the human is angry) cannot be sensed directly, the data of several sensors (e.g., loudness, speaking rate, vocabulary) is fused to infer this information. When a Self-Adaptive System cannot deal with shades of certainty w.r.t. the integrity of its knowledge base, it is likely to behave erroneously.

Role-Based Design is a software design approach that uses role models to express dynamically varying services of one object as well as dynamically varying relationships between several objects. Roles are a well-suited foundation for modeling dynamically varying behavior in Self-Adaptive Systems, since managing roles (attaching/detaching as well as activating/deactivating them) changes the structure of the whole system and the behavior of individual objects.
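To illustrate this discrete role mechanism, the following minimal Python sketch (all class and method names are hypothetical and not taken from any existing role framework) shows how attaching and detaching roles changes the behavior a player object exposes:

```python
class Role:
    """Base class for roles; a role wraps a player and may override its behavior."""
    def __init__(self, player):
        self.player = player

class Greeter(Role):
    def greet(self):
        return "Hello!"

class FormalGreeter(Role):
    def greet(self):
        return "Good evening, how may I help you?"

class Robot:
    """A player object whose observable behavior depends on its attached roles."""
    def __init__(self):
        self._roles = []

    def attach(self, role_cls):
        self._roles.append(role_cls(self))

    def detach(self, role_cls):
        self._roles = [r for r in self._roles if not isinstance(r, role_cls)]

    def greet(self):
        # Delegate to the most recently attached role providing the behavior.
        for role in reversed(self._roles):
            if hasattr(role, "greet"):
                return role.greet()
        return "..."  # default (role-free) behavior

robot = Robot()
robot.attach(Greeter)
robot.attach(FormalGreeter)
print(robot.greet())   # behavior of the most recently attached role
robot.detach(FormalGreeter)
print(robot.greet())   # behavior falls back to the remaining role
```

Note that the "plays" relation here is strictly binary: a role is either in the list or it is not, which is exactly the limitation discussed below.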
An important aspect of Role-Based Design for Self-Adaptive Systems is a model that describes the circumstances under which roles are played by objects (role model binding) and when they are activated or deactivated, respectively. Several approaches have been proposed (e.g., state charts, storyboards), but none of them can treat potentially incorrect or incomplete knowledge. Thus, Role-Based Design is currently not capable of tackling Problem 1. Traditionally, the “plays” relationship between a role and its player is discrete (i.e., an object either plays a role or it does not). The same is true for the relationships between several roles within a role model. Self-Adaptive Systems that operate in the real world perform adaptation based on information inferred from potentially incorrect or incomplete sensor data. Because of imprecise or error-prone sensing, abstraction and reasoning, this information is likely to be partially incorrect. Consequently, changing the behavior of a system via roles that change the behavior of an object discretely may also lead to incorrect system behavior. Role-Based Design currently does not provide means to tackle Problem 2.

The aim of this work is to develop models that handle uncertainty using probabilistic techniques and that treat the correctness and completeness of information for role-based self-adaptation as a first-class citizen. Therefore, the tasks are to:

a. Develop a probabilistic automaton that describes under which environmental circumstances role models are integrated into a base system and when they are activated (PRoPAton: Probabilistic Role-Playing Automaton).

b. Extend the traditional Role-Based Design approach by continuous roles, whereby the “plays” relationship is no longer binary but numeric (e.g., “an object plays a role with a certainty of X%”). Those roles are called “Soft Roles”.

The PRoPAton makes it possible to model the role-playing relation based on uncertain and error-prone sensor data in a cyber-physical Self-Adaptive System.
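As a rough intuition for task a., the following Python sketch shows one possible shape of a probabilistic role-playing automaton: states correspond to role-model configurations, and transitions triggered by context events carry probabilities. The API, state names and probabilities are illustrative assumptions, not the actual PRoPAton design to be developed in this work.

```python
import random

class ProbabilisticRoleAutomaton:
    """Hypothetical sketch: states are role-model configurations; transitions
    carry probabilities derived from uncertain context observations."""
    def __init__(self, initial_state):
        self.state = initial_state
        # transitions[state][event] = list of (next_state, probability)
        self.transitions = {}

    def add_transition(self, state, event, next_state, prob):
        self.transitions.setdefault(state, {}) \
                        .setdefault(event, []) \
                        .append((next_state, prob))

    def step(self, event, rng=random.random):
        """Pick the successor state stochastically according to the
        probabilities attached to (current state, event)."""
        choices = self.transitions.get(self.state, {}).get(event, [])
        r, acc = rng(), 0.0
        for next_state, prob in choices:
            acc += prob
            if r < acc:
                self.state = next_state
                return self.state
        return self.state  # no matching transition: stay in the current state

# Example: bind an (invented) "Entertainer" role model when a person is
# detected, reflecting that the detection itself is only 90% reliable.
a = ProbabilisticRoleAutomaton("idle")
a.add_transition("idle", "person_detected", "entertainer_bound", 0.9)
a.add_transition("idle", "person_detected", "idle", 0.1)
print(a.step("person_detected", rng=lambda: 0.5))  # fixed rng for a deterministic demo
```

A real PRoPAton would additionally need parallel regions and a separation between binding and activating role models, as described below; this sketch only illustrates the probabilistic transition idea.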
Such a mechanism would enable the developer to model the structure of software systems w.r.t. dynamically varying environments with uncertain sensor data, tackling Problem 1. The model should describe the binding and activation of role models based on probabilities w.r.t. context information sensed by the application itself or by external sensors. The automaton should support parallelism in order to describe the states of the overall role-class model from different perspectives. The Soft Roles model allows the developer to define how the original behavior of an object is changed based on the probability that a certain set of roles is played. Such a model would enable the developer to model the behavior of software systems operating in varying environments with uncertain sensor information, tackling Problem 2. The model needs to describe how roles influence the behavior of an object w.r.t. the certainty with which they are played. This could be expressed using different implementations of the same role, whereby each implementation corresponds to a given range of certainty (e.g., 0-10%, 10-20%, etc.). Another possibility is the use of mathematical models that describe how the default behavior of an object is changed in a continuous rather than a discrete manner.
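The certainty-range variant of Soft Roles could be sketched as follows. This is a hypothetical illustration; the role name, intervals and behaviors are invented for the example, and a real design would have to define how such roles compose with a base object.

```python
class SoftRole:
    """Hypothetical Soft Role: the plays relation carries a certainty in [0, 1],
    and the role implementation is selected by certainty interval."""
    def __init__(self):
        # (lower_bound, upper_bound, implementation) per certainty interval
        self._impls = []

    def register(self, low, high, impl):
        self._impls.append((low, high, impl))

    def behave(self, certainty):
        # Intervals are half-open [low, high); the last interval also covers 1.0.
        for low, high, impl in self._impls:
            if low <= certainty < high or (high == 1.0 and certainty == 1.0):
                return impl()
        raise ValueError(f"no implementation covers certainty {certainty}")

# Example: an invented "AngryCustomer" role with graded de-escalation behavior,
# driven by the fused certainty that the interaction partner is angry.
angry = SoftRole()
angry.register(0.0, 0.3, lambda: "continue normally")
angry.register(0.3, 0.7, lambda: "speak more calmly")
angry.register(0.7, 1.0, lambda: "apologize and offer help")

print(angry.behave(0.85))  # -> "apologize and offer help"
```

The mathematical-model alternative mentioned above would instead replace the interval lookup with a continuous function of the certainty value, e.g. blending parameters of the default behavior.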
Supervisor: Christian Piechnick