ScaDS.AI
Transparency as a prerequisite for trustworthy AI: legal framework conditions and challenges
Artificial intelligence and self-learning systems will accompany humanity into the future. In their AI strategies, the German government and the European Commission have set themselves the goal of developing AI systems that put people first. In its ethics guidelines, the European Commission's High-Level Expert Group on Artificial Intelligence identified fundamental imperatives and core requirements that AI must meet to be considered trustworthy, among them the principle of transparency.
The BMBF-funded project is embedded in ScaDS.AI (Center for Scalable Data Analytics and Artificial Intelligence) Dresden/Leipzig, which is being expanded into one of Germany's national AI centers as part of the German government's AI strategy. Other sub-projects develop new interdisciplinary methods of machine learning and artificial intelligence and address questions of privacy protection, minority protection, and the traceability of AI-driven decisions.
The IGETeM sub-project investigates what requirements the principle of transparency should entail and how it can be implemented in law. To this end, information-ethics requirements and their implications for legal regulation are being developed in an interdisciplinary workshop. The aim is to identify the ethical requirements for transparency regulation and make them usable for legal classification. Perspectives from technology communication, philosophy and theology of technology, psychology, and computer science are included. From a legal perspective, the focus lies on data protection, competition, and consumer protection law.
The kick-off event for the expanded ScaDS.AI took place in Leipzig on November 27, 2019.