CPEC
How must systems be designed, regulated, and put to use with respect to perspicuity in order to enable effective human oversight and control?
AI-based systems are increasingly deployed in domains where their use involves societal risk. Increasing system perspicuity can be a strategy to avoid, or at least significantly mitigate, such risks: greater perspicuity promises better systems, for example by enabling the development of more robust systems or by making important system properties and functionality verifiable. However, further possible risk-minimising effects of perspicuity are less well understood.
This project is devoted to a particularly important but especially complex perspicuity-related risk-minimisation strategy: human oversight. This strategy targets risks that cannot easily be avoided in advance. For example, although autonomous cyber-physical systems (CPS; e.g., drones) may operate with high precision in controlled environments, deploying them in real-world contexts can lead to life-threatening accidents in unforeseen corner cases. Similarly, AI-based systems are increasingly used to support decision-making in high-risk domains (e.g., management, medicine), where their outputs directly affect the fate of human beings.
Although AI-based decisions in such domains can exceed the accuracy of human experts, they are never perfect. They may, for instance, produce less accurate or even unfairly biased outcomes for certain minority groups, reflecting unacceptable system behaviour that poses risks to the stakeholders involved. This concerns both people who use such systems to support their decisions (e.g., biased system recommendations may lead to lawsuits) and people affected by those decisions or systems (e.g., job applications rejected due to faulty system decisions). Such risks may emerge at runtime, with underlying causes that trace back to design time, when opacity prevents the proper assessment and prediction of unintended side effects. They may also persist at inspection time, when opacity hinders the discovery of failure causes and thereby undermines, for example, the ability to appropriately attribute accountability for adverse outcomes.
To minimise such risks, ethical guidelines such as UNESCO's draft on AI ethics and the European High-Level Expert Group's Ethics Guidelines for Trustworthy AI advocate human control and oversight. Simultaneously, they emphasise the central role of perspicuity in promoting human oversight. Recently, legal frameworks such as the EU Commission's 2021 proposal for an AI Act and the German Autonomous Driving Act have incorporated calls for oversight, control, and perspicuity and cast them into concrete (proposed) regulations, emphasising human oversight as essential for ensuring safety and compliance with the law. Further regulatory requirements are emerging as the need to manage risks associated with AI-based systems grows.
While there seems to be global agreement in ethics guidelines and in proposed regulation that human oversight is crucial for minimising risk, and that perspicuity is a key enabler of human oversight, the exact interplay between the technical foundations of perspicuity and the legal obligations to achieve perspicuity remains unclear.
The Transregional Collaborative Research Centre 248 "Foundations of Perspicuous Software Systems" aims to enable comprehension in a cyber-physical world with the human in the loop. It was established in January 2019 and, after a successful extension proposal, is now in its second funding period, funded through 2026.
Funded by the DFG (Deutsche Forschungsgemeinschaft / German Research Foundation)