Nov 10, 2023
Trust and Opacity in AI: Perspectives from Epistemology, Ethics, and Political Philosophy
Conference, November 16-17, 2023
Organizers: Rico Hauswald (TU Dresden), Martin Hähnel (University of Bremen), Kathi Beier (University of Bremen)
Location: TU Dresden, Bürogebäude Zellescher Weg (BZW), Zellescher Weg 17, 01069 Dresden, room SLUB Makerspace M2
Description
As artificial intelligence (AI) becomes part of our everyday lives, we are faced with the question of how to use it responsibly. In public discourse, this issue is often framed in terms of trust – for example, by asking whether, to what extent, and under what conditions trusting AI systems is appropriate. Against this background, the philosophical debates on practical, political, and epistemic trust that have been ongoing since the 1980s have recently gained momentum and been further developed within the philosophy of AI.
However, a number of fundamental questions remain unanswered. For example, some authors have argued that the concept of trust is interpersonal in nature and therefore entirely inapplicable to relationships with AI systems. According to these authors, AI systems cannot be “trusted” in the strict sense of the term, but can at best be “relied upon”. Other authors have disputed this assessment, arguing that at least certain kinds of trust can apply to relationships with AI technologies. Also controversial is the influence of AI’s notorious black-box character on its potential trustworthiness. While some authors consider AI systems to be trustworthy only to the extent that their internal processes can be made transparent and explainable, others point out that, after all, we do trust humans without being able to understand their cognitive processes. In the case of experts and epistemic authorities, we often do not even grasp the reasons and justifications they give. Another point of contention is the trustworthiness of the developers of innovative AI systems, i.e. the extent to which the trustworthiness of AI systems can be reduced to, and should be based on, trust in the developers themselves. In this context, the debate on “ethics by design” or “embedded ethics” seems to be crucial as it helps evaluate the various attempts currently being made to promote trust in AI by taking ethical principles and usability aspects into account.
The aim of this conference is to facilitate an exchange on these and related issues and to discuss the ethical as well as the political and epistemic dimensions of trust and opacity in AI systems. We would like to discuss questions such as:
- What are the emotional, psychological and normative preconditions of trust, and can they be meaningfully applied to AI systems or robots, or is speaking of “trust in AI” a category mistake?
- Is trust a value (perhaps a value in itself) that makes interaction with AI systems possible in the first place? What are the dangers and disadvantages of trusting AI technologies, and when is mistrust justified?
- Does AI need to be explainable in order to be trustworthy? If so, what exactly does “explainability” mean and how can it be established?
- When AI systems take over tasks from humans (e.g. as care robots), what are the similarities and differences in trusting them compared to trust in human actors?
- When AI systems are used as sources of information (e.g. in the form of diagnostic systems in medicine), is trust in them similar to or different from classical testimonial trust and epistemic trust in experts and epistemic authorities?
- How promising are ethics-by-design approaches, and what are the possibilities and limits of attempts to “embed” trust in AI systems?
- Do AI technologies (e.g. ChatGPT) contribute to the destruction of existing trust relationships (e.g. in schools, universities, etc.)? How should relationships of trust between humans and AI systems be structured to meet ethical norms and standards without undermining existing human-to-human trust relationships? What influence do political and legal regulatory processes have on these trust-building micro-processes?
Program (updated November 16, 2023)
Thursday, November 16, 2023
(SLUB / BZW, Zellescher Weg 17, room Makerspace M2)
10:30 – 11:00 | Arrival and Welcome |
11:00 – 12:30 | “Stakes and Understanding the Decisions of Artificial Intelligent Systems” |
12:30 – 14:00 | Lunch |
14:00 – 15:30 | “The Effects of Opacity on Trust: From Concepts to Measurements” Ori Freiman (McMaster University, Hamilton) |
15:30 – 16:00 | Coffee Break |
16:00 – 17:30 | “Trust and Opacity: Comparing AI Systems and Human Experts” Rico Hauswald (TU Dresden) |
18:30 | Dinner |
Friday, November 17, 2023
(SLUB / BZW, Zellescher Weg 17, room Makerspace M2)
10:30 – 11:00 | Arrival and Welcome |
11:00 – 12:30 | “Meaningful Human Control without Authority” Philip J. Nickel (Eindhoven University of Technology) |
12:30 – 14:00 | Lunch |
14:00 – 15:30 | “Codes and Agency” |
15:30 – 16:00 | Coffee Break |
16:00 – 17:30 | “Trust and Participation: The Transformation of the Public Sphere Through Automated Decision-making” Martin Baesler (University of Freiburg) |
18:30 | Dinner |
[Unfortunately, the talks by Christian Budnik, Juan M. Durán, Andreas Kaminski, Karoline Reinhardt, and Ines Schröder had to be canceled.]
Registration
Participation is free, but please register by sending an email to: rico.hauswald@tu-dresden.de
Travel
Here you can find useful information on how to reach Dresden by train, plane, or car: https://www.dresden-convention.com/en/dresden/destination-dresden/getting-to-dresden
Dresden's main train station is about a 20-minute walk, or roughly 15 minutes by public transport, from TU Dresden, where the conference will take place.