Comparison of projects on industrial fault diagnosis
In several research projects, we study human-machine interaction in complex work settings. These projects have much in common: they all focus on how human operators detect, diagnose, and handle faults in industrial settings. Moreover, they all share the assumption that it is essential for operators to develop a thorough understanding of (a) the production process and (b) the inner workings of the human-machine system.
However, the projects also differ in important ways. To make these differences transparent, we will provide a brief overview of the projects and then contrast them in a table.
HyTec: Hypotheses during diagnostic problem solving in technical domains: Mental basis, process, and outcome (DFG, 2021-2024)
In HyTec, we investigate what mental models technicians and apprentices have of a technical system (a car or a packaging machine) and how these models affect their mental and practical activities during fault diagnosis. To assess their mental models, we have them draw concept maps and then test whether good maps predict high-quality hypotheses and successful diagnosis.
XAI-Dia: Explainable artificial intelligence for fault diagnosis: Impacts on human diagnostic processes and performance (DFG, 2022-2025)
In XAI-Dia, we ask how people deal with explainable artificial intelligence (XAI) and how this depends on whether the AI is correct and, if not, what types of error it makes. To generate this XAI, we use convolutional neural networks (CNNs) to analyse pictures of chocolate bars during quality control, and then implement XAI algorithms on top to make transparent which information the CNN used to compute its decision. We also test how these algorithms differ from human observers in terms of which image areas they attend to.
XRAISE: Explainable AI for railway safety evaluations (Eisenbahn-Bundesamt, 2024-2025)
The XRAISE project is similar to the XAI-Dia project but is carried out in the context of railway safety evaluations. We study how people use XAI to evaluate how reliably computer vision systems detect obstacles on train tracks. We vary whether images from different sensors actually contain obstacles and whether they are recognised correctly, what environmental conditions are present, which XAI algorithms are used, and how the results are presented.
MoCa-Dia: Modeling and visualising causal relations in process chains: How do multi-linear versus systemic approaches affect fault diagnosis? (DFG, 2024-2027)
In this project, we investigate how to model and visualise causal relations in process chains. When a fault occurs in a complex industrial plant, what information should an assistance system provide to operators so that they are able to understand why the fault occurred? Is it better to use (a) linear modelling methods that might be easier to understand but do not capture the actual complexity of the process, (b) systemic models that might be realistic but overwhelming, or (c) combinations of both approaches? The main difference from HyTec is that HyTec does not provide any intervention or assistance to problem solvers but merely studies their cognitive processes. The main difference from XAI-Dia and XRAISE is that those projects do not rely on causal relations but use purely data-driven ML and explain how its outputs were generated.
Here is an overview of the differences between the projects:
| | HyTec | XAI-Dia and XRAISE | MoCa-Dia |
|---|---|---|---|
| Aims of assistance | None | Understanding the algorithm (instead of the process) | Understanding the production process |
| Method of assistance | None | Highlighting of image areas that were used by the ML algorithm | Presentation of functional/causal relations |
| Model contents | Abstraction hierarchy | Data-driven ML and XAI | Functional/causal relations |
| Model visualisation | Participants' assumptions about functional structure of the technical system (concept maps) | Critical image areas (saliency maps) | Correct system functioning and faults (diagrams of functional/causal relations) |
| Comparison of modelling methods | None (only linear, expert-based master maps) | Different XAI algorithms | Multi-linear vs. systemic |
| Who generates the model visualisation? | Self-generated by apprentices (expert map is only used for data analysis) | Generated by algorithm, provided by AS (automatically generated images are used as intervention) | Generated by experts, provided by AS (expert map is used as intervention) |
| System boundaries | Single machine (car, packaging machine) | XAI-Dia: process chains (moulding and conveyor); XRAISE: railway tracks | Process chains (food processing and packaging machine) |
| Type of system comparison | Between domains (cars vs. packaging machines) | XAI-Dia: none; XRAISE: different environmental conditions | Within domain (different packaging lines) |
| Focus of system comparison | Cognitive requirements for diagnosticians (no assistance) | XAI-Dia: none; XRAISE: effects on human safety evaluations | System-dependent requirements for modelling and visualisation |
Notes: AS = assistance system, ML = machine learning, XAI = explainable artificial intelligence