SOCIALBRIDGES 7: TRUST: SOCIAL BRIDGE BETWEEN HUMANS AND TECHNOLOGY
When: 21-22 September 2023.
Where: Live on YouTube
21 September
- 10:00-10:15 INTRODUCTION and WELCOME
Merle Fairhurst, CeTI, Chair of Acoustics and Haptics, TU Dresden
- 10:15-11:00 Allowing for Complexity in Rethinking Trust in Automation: The Three Stages of Trust Framework and Current Challenges for the Field.
Johannes Kraus, Ulm University, Department of Human Factors
Abstract: Starting from a consideration of the manifold of variables and psychological processes that play a role in the formation and calibration of trust in automated and intelligent technology, this talk presents the Three Stages of Trust framework for the investigation of trust in automated technology. The framework integrates former conceptualizations of trust in automation with theorizing from other domains of trust research. Building on this, findings from studies that applied the Three Stages of Trust framework to human-robot interaction and automated driving are discussed, supporting some of the framework's key propositions. These serve as a basis for discussing current challenges for investigating trust in automation and a research agenda to enhance the theoretical underpinnings of trust in automation. In line with the underlying goal of this conference, one key challenge is the exchange and harmonization with other disciplines in which trust is investigated.
Bio: Johannes Kraus is a postdoctoral researcher at the Human Factors department of Ulm University and head of the subject area "human-robot interaction". He received his doctoral degree from Ulm University in 2020 and his M.Sc. in Psychology from the University of Mannheim in 2013. His research interests lie in decision processes in the interaction with automated systems, especially automated vehicles and robots. He focuses on trust processes and the role of user personality and attitudes.
- 11:00-11:30 Self-explaining autonomous systems for better human-machine collaboration
Verena Klös, CeTI, TU Dresden
Abstract: If humans have to rely on autonomous systems to accomplish a task, they need to understand the behavior of the systems to some extent. In particular, they need to understand what the system can do and where its limitations are. Furthermore, if the system behavior differs from what a user expects, he or she may feel uncomfortable and lose trust. The user's expectations come from his or her mental model of the system. Explanations can help to align this mental model and, thus, improve human-machine collaboration. However, for explanations to be useful, the timing, the format, the context and the included information are also relevant. If a system is aware of these factors and can give such explanations, we call it self-explainable (able to explain itself).
Bio: Verena Klös is a junior professor for Tactile Computing at TU Dresden. She received her PhD from TU Berlin in 2020. In her PhD thesis, she developed a framework for safe, intelligent and explainable self-adaptive systems. Currently, her research focuses on explainability of autonomous and cyber-physical systems, and on safe and efficient human-machine collaboration.
- 11:30-12:00 On Risks and Anonymizations of Behavioral Biometrics
Julian Todt, Karlsruhe Institute of Technology (KIT)
Abstract: Social media platforms are continuously increasing the quantity and quality of the data they collect on their users. In addition to existing data, new types of data -- behavioral biometrics -- are being collected, such as hand motions, eye movements, the human voice, heart beat and brain activity. These types of data not only allow the identification of the individual, but they can also be used to infer sensitive attributes of the user such as age, sex, health status and even personality. For users it is hard to determine which inferences are possible with the collected data. To mitigate these privacy risks, users generally only have the choice to completely prohibit certain sensors from being used. This, however, usually renders the application unusable. In this talk, we will provide an overview of these newly collected behavioral biometrics and the inferences they make possible. We then propose finer-grained privacy controls based on technical data protection, where data is anonymized before being shared with service providers.
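For illustration only (not the specific method discussed in the talk): a minimal sketch of one common technical-data-protection step, perturbing a behavioral-biometric feature vector with calibrated Laplace noise before it leaves the user's device. The feature values, sensitivity bound, and epsilon are assumptions made up for the example.

```python
import numpy as np

def anonymize_features(features: np.ndarray, sensitivity: float = 1.0,
                       epsilon: float = 0.5, rng=None) -> np.ndarray:
    """Perturb a behavioral-biometric feature vector with Laplace noise.

    A generic differential-privacy-style mechanism, used here purely to
    illustrate anonymizing data before it is shared with a service provider;
    it is not the authors' proposed anonymization.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=features.shape)
    return features + noise

# Hypothetical hand-motion features extracted on the user's device.
raw = np.array([0.82, 1.47, 0.05, 2.31])
shared = anonymize_features(raw)  # only the noised version leaves the device
print(shared)
```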
Bio: Julian Todt is a PhD researcher at the chair for Privacy and Security at Karlsruhe Institute of Technology (KIT) as part of KASTEL Security Research Labs. He is working on anonymization methods for biometric data in smart city scenarios -- considering both common video cameras and more recent sensors including LiDAR, WiFi Sensing and more.
- 12:00-12:45 Decoding the Trust Puzzle: Rethinking its Significance in Shaping User Behavior with Technology
Linda Onnasch, Technische Universität Berlin
Abstract: It seems to be a widely accepted fact that trust is one of the most important prerequisites for the use of technology and that it shapes people's behavior when interacting with technology. In my talk, I will try to shed more light on this assumed mediating role of trust as an attitude in actual human-technology interaction by presenting the origins of this idea and current research pointing to the fact that trust may be overrated as a predictor of people's subsequent behavior.
Bio: Linda Onnasch is professor of psychology of action and automation at the Technische Universität Berlin. Her research focuses on human interaction with automated systems and collaborative robots, considering system characteristics, psychological mediators and context factors. For example, together with her team, she investigates how context factors like risk affect users' trust attitude and trust behavior in interaction with automated systems, or how anthropomorphic robot design influences the social perception of robots and how this might benefit intuitive interaction.
22 September
- 10:00-10:45 How do we assess system trustworthiness?
Markus Langer, Philipps Universität Marburg, Department of Psychology & Digitalization
Abstract: Designing trustworthy systems and enabling external parties to accurately assess the trustworthiness of these systems are crucial objectives. Only if trustors assess system trustworthiness accurately can they base their trust on adequate expectations about the system and reasonably rely on or reject its outputs. However, the process by which trustors assess a system's actual trustworthiness to arrive at their perceived trustworthiness remains surprisingly unclear. In this talk, I will introduce the two-level Trustworthiness Assessment Model (TrAM)*, which draws on psychological models describing how individuals assess other people's characteristics. The TrAM proposes that at the micro level, trustors assess system trustworthiness based on information cues associated with the system. The accuracy of this assessment depends on cue relevance and availability on the system's side, and on cue detection and utilization on the human's side. At the macro level, the TrAM details how individual micro-level trustworthiness assessments propagate across different trustors: one stakeholder's trustworthiness assessment of a system affects other stakeholders' trustworthiness assessments of the same system. I will describe how the TrAM advances existing models of trust and sheds light on factors influencing the (accuracy of) trustworthiness assessments. At the end of the talk, I will discuss implications for theory, such as for the concept of "calibrated trust", as well as implications for system design, stakeholder training, and regulation related to trustworthiness assessments. *The TrAM was developed primarily by Nadine Schlicker as main author and by myself, with support from Kevin Baum, Alarith Uhde, Sarah Sterz, and Martin Hirsch.
Bio: Markus Langer is an assistant professor at Philipps-Universität Marburg. His research profile integrates industrial and organizational psychology, human factors, and human-computer interaction to investigate the consequences of implementing Artificial Intelligence (AI)-based systems at work. Specifically, his work focuses on human-AI decision-making with an emphasis on trust in AI-based systems, transparency and explainability of AI-based systems, and work design aspects of human-AI collaboration. He works on several interdisciplinary projects on human-AI interaction together with computer scientists, philosophers, and legal scholars and has published his work in some of the most prestigious journals in psychology (e.g., Psychological Science) and computer science (e.g., Artificial Intelligence), as well as in high-ranking interdisciplinary journals (e.g., Computers in Human Behavior).
- 10:45-11:00 How users’ emotions impact trust in a faulty chatbot
Tabea Berberena, Maria Wirzberger; University of Stuttgart, Interchange Forum for Reflecting on Intelligent Systems, Stuttgart, Germany.
Abstract: Assistive dialogue-oriented systems – so-called chatbots – have become increasingly prevalent in our daily lives. While some users trust such technology even with sensitive information, others hesitate to do so and refuse to rely on potential support. Existing research shows that various factors, including the momentary emotional state, influence individual decision-making processes. If technology proves to be unreliable and makes mistakes, it might become a source of negative emotions, potentially altering a user's decision whether to trust a faulty system or not.
However, theories and models that capture the mechanisms and factors underlying human trust in technology have so far failed to systematically consider users' emotions and their effects on users' intentions, attitudes, and behavior. Our research aims at closing this gap by examining the impact of negative emotions on trusting a faulty chatbot after it has caused the user negative consequences. Users will complete a study onboarding procedure via a chatbot-supported dialogue, ultimately resulting in a study appointment. Upon arrival at the lab, they will learn that the chatbot mixed up their appointment and that they have come in vain. Subsequently, they will be asked to complete a feedback survey, which captures their momentary emotional state and assesses to what extent they will trust the faulty system in the future, both in terms of behavioral intentions and actual behavior. Considering all these factors, we will contribute theoretical advances to the existing trust research landscape and further provide methodological guidelines for building trustworthy technology.
- 11:00-11:30 Trust and touch in human-robot interaction
Irene Valori; CeTI, Chair of Acoustics and Haptics, TU Dresden.
Abstract: Looking into the future, we see our lives increasingly intertwined with those of technologies such as robots, which are not only tools but also partners in social exchanges. Yet, we still do not know what social norms apply to these new interactions. Can we trust a robot? Can we communicate affective meanings, perhaps with non-verbal signals like social touch? The present study investigates how human-human or human-robot partners of an observed interaction involving social touch are perceived. Participants (n = 150) were exposed to four comics depicting human-human or human-robot interactions whereby one character was emotionally vulnerable, the other initiated touch to comfort them, and the touchee reciprocated the touch. Participants first rated the trustworthiness of a certain character (human or robot in a vulnerable or comforting role); then they were asked to evaluate the two touch phases (initiation and reciprocity) in terms of interaction realism, touch appropriateness and pleasantness, and the valence and arousal attributed to the characters. Analyses delve into how individual differences (propensity to trust others and technology, attitudes toward social touch) moderate observers' ratings. Our findings show potential limits to the social power of trust and touch in human-robot interactions, suggesting, however, that leveraging individuals' positive attitudes towards technology can reduce the distance between humans and robots.
Bio: Irene is a postdoctoral researcher at TU Dresden, with a background in Developmental and Clinical Psychology. She joins the Chair of Acoustic and Haptic Engineering and contributes to CeTI research. Specifically, she investigates the role of affective touch in promoting interpersonal trust in technology-mediated human-to-human exchanges or human-machine interactions.
- 11:30-12:00 Quasi-Metacognitive Machines: Why We Don't Need Trustworthy AI and Reliability is Enough
John Dorsch, Ludwig-Maximilians-Universität (LMU) München
Abstract: Many current policies and ethical guidelines recommend developing "trustworthy AI". Here we argue that developing trustworthy AI is not only unethical, as it promotes trust in an entity that cannot be trustworthy, but it is also unnecessary for optimal calibration. Instead, we show that reliability, exclusive of trust, entails the appropriate normative constraints that enable optimal calibration and mitigate the problem of vulnerability that arises in high-stakes hybrid decision-making environments, without also demanding, as trust would, the anthropomorphization of artificial assistance and thus epistemically dubious behavior. Here, the normative demands of reliability for interagential action are argued to be met by an analogue to procedural metacognitive competence (i.e., the ability to evaluate the quality of one's own informational states to regulate subsequent cognitive action). Drawing on recent empirical findings suggesting that providing precision scores (such as the F1-score) to human decision-makers improves calibration to the AI system, we argue that precision scores provide a good index of competence and enable humans to determine how much they wish to rely on the system.
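As a concrete illustration of the kind of precision score mentioned above, the sketch below computes precision, recall, and the F1-score from a system's confusion-matrix counts; the counts themselves are made up for the example.

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical counts for an AI system's recommendations.
p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=10)
print(f"precision={p:.2f}, recall={r:.2f}, F1={f1:.2f}")
# precision=0.80, recall=0.89, F1=0.84
```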
Bio: After my BA (Hons) and MA (Research) at the Centrum für Integrativen Neurowissenschaften at Universität Tübingen, I completed my PhD in Philosophy at the University of Edinburgh. I also hold an undergraduate degree in Computer Science and worked for many years in IT as a technician and programmer. Now I am a postdoctoral researcher in Philosophy of Mind and Cognition at Ludwig-Maximilians-Universität (LMU) München in the interdisciplinary research lab Cognition, Values, and Behaviour. My research focuses on issues in the philosophy of cognitive science, as well as issues in the ethics of AI. I have published on metacognition, embodiment, the phenomenology of reasoning, and on the foundations of epistemic agency, as well as some vulnerabilities this agency has in the digital age.
- 12:00-12:30 PANEL DISCUSSION
- 15:00-15:30 The Problematic Problems and Potential Pitfalls of Human Trust in Robots
Lionel Robert, University of Michigan
Abstract: As robotics advances and permeates various aspects of our social and work lives, the question of how humans view and ultimately trust robots has become increasingly pertinent. Do humans view them as mere machines, automated tools designed to serve their needs, or do they embrace a more empathetic approach, viewing and trusting them as actual teammates (i.e., humans)? On the one hand, proponents of robots argue that computers are social actors (CASA) and that humans mindlessly interact with computers in much the same way as they do with other humans. This view is often used to justify the employment of human-to-human theories and their corresponding measures to understand human-robot interactions. On the other hand, advocates of mechanization contend that humans do not view robots as humans but instead as automated tools. This view discourages using human-to-human theories and their corresponding measures to understand human-robot interactions. They advocate instead for human-to-automation theories and measures of constructs like trust. In this thought-provoking presentation, I will explore the arguments supporting both perspectives and consider the potential consequences of each approach. Ultimately, this presentation aims to provide a balanced understanding of the complexities involved and to encourage a nuanced dialogue on the subject.
- 15:30-16:00 Reducing rating surprise through platform design to achieve a higher level of trust in algorithmic evaluations
Allen S. Brown, Carnegie Mellon University
Abstract: A common feature of temporary, gig-based work environments is the use of algorithmically generated worker evaluations integrating a variety of inputs from customers, workers, and the organization. These systems are designed primarily to support the organization's goals and can be quite opaque, even seeming manipulative, to the workers themselves. Interventions to increase the transparency of expectations, supported by technology, are consistent with the growing focus on explainable AI and could substantially improve the work lives of the growing number of gig workers in the economy. We examine how technology can reduce task uncertainty and increase knowledge of results, with implications for the perceived trustworthiness of performance ratings, in an online experiment including 162 participants hired to work as caregivers for a simulated online pet ("Turto"). We manipulated task uncertainty via the clarity and specificity of the task instructions provided, and knowledge of results based on whether formative feedback was provided during work or only as a final rating at the conclusion of the shift. Our results demonstrate that more task uncertainty decreases the perceived trustworthiness of performance ratings, while increased knowledge of results improves perceived trustworthiness. Both of these relationships are fully mediated by the level of surprise participants report upon learning their final performance rating. Our findings suggest interventions to support explainable AI for enriching the gig work environment.
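For readers unfamiliar with how a mediation claim like the one above is typically tested, here is a minimal regression-based sketch in Python; the variable names, data file, and statsmodels formulas are illustrative assumptions, not the authors' analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed data: one row per participant with
#   uncertainty  - manipulated task uncertainty (0 = low, 1 = high)
#   surprise     - self-reported surprise at the final rating (mediator)
#   trustworthy  - perceived trustworthiness of the performance rating
df = pd.read_csv("gig_rating_study.csv")  # hypothetical file name

total   = smf.ols("trustworthy ~ uncertainty", data=df).fit()             # total effect (c)
m_model = smf.ols("surprise ~ uncertainty", data=df).fit()                 # path a
y_model = smf.ols("trustworthy ~ uncertainty + surprise", data=df).fit()  # paths b and c'

# Full mediation would show the direct effect (c') shrinking toward zero
# once surprise is included, while paths a and b remain reliable.
print("c :", total.params["uncertainty"])
print("a :", m_model.params["uncertainty"])
print("b :", y_model.params["surprise"])
print("c':", y_model.params["uncertainty"])
```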
- 16:00-16:15 Scaffolding Trust and Context in Asynchronous Collaboration
Andrew Kuznetsov, Carnegie Mellon University
Abstract: Technology-enabled platforms have led to an explosion of gig work arrangements in a growing number of sectors. Initially, these jobs were individually based, involving a single worker hired to provide a specific product or service to a particular customer during a specified time period. Increasingly, gig work is expanding to include ongoing work relationships and situations where multiple workers coordinate with each other and/or a variety of different "stakeholders" on the client side. One example where this is occurring is in home healthcare, where older adults require a growing variety of supportive care, and as their needs increase, so do the number of family members and hired caregivers who need to coordinate with each other. This setting is challenging, as it involves the coordination of individuals, most of whom have no formal training and share no group or organizational identity in common, and yet they need to engage in a fairly high level of coordination to serve the needs of an older adult. Extant research currently has little to offer in terms of guidance for how technology can effectively support this asynchronous coordination and the development of trusting relationships among caregivers and others that are necessary to enable teamwork.
Often, technological tools to serve various needs are developed via iteration and trial and error: prototypes are deployed and improved through user experience. However, in the home healthcare setting, this involves a high level of risk given the physical danger and privacy concerns related to technology use in this setting. At the same time, unlike in formal healthcare settings, where workers receive extensive training on standard approaches to documentation, a gig work environment provides limited opportunity to rely on elaborate formalized systems. Therefore, we need to identify key functions that simple, targeted tools can scaffold to enable the essential areas of coordination for teams in this environment, which could then be elaborated by the teams themselves as they work together.
To identify these essential structures, we aim to conduct full-cycle research to identify key needs in the home healthcare environment and to test basic concepts for technological tools to create the foundations for asynchronous coordination and teamwork. To facilitate research on these questions, we have built an online platform to host experiments that test focused questions related to asynchronous coordination in temporary teams. Our platform simulates a caregiving environment surrounding a simulated pet, "Turto," who experiences a variety of emergent needs. We use this platform to conduct experiments examining the potential for AI tools to augment communication, information seeking, and note-taking in order to better contextualize the work of teammates in time-constrained and asynchronous settings. In particular, we explore how such tools can be deployed to provide 'high-level' context, such as baselines, in a manner that requires minimal training for use and facilitates connection between team members. We hope our platform and work exploring these problems can highlight the role that technological scaffolds can play in facilitating coordination and trust, even when collaboration may be brief and asynchronous.