My research investigates the relationship between trust and trustworthiness in human–AI interactions, particularly in high-risk applications. Trust is an internal state of the user: the subjective reliance and confidence individuals place in an AI system, especially when facing risk and vulnerability. Trustworthiness, in contrast, comprises the objective qualities and characteristics of an AI system that make it deserving of such trust and confidence. Our primary research question probes the relationship between these two concepts: does a system designed to be trustworthy actually elicit trust from users across diverse backgrounds?
Various internal characteristics may shape human trust, including demographics, background, prior experience with AI, prior dispositions, and risk tolerance. We aim to identify the key human factors that influence human trust in AI systems. Trustworthiness, on the other hand, is an attribute of the AI system itself. However, there is no universally accepted definition of trustworthiness or consensus on the components that contribute to it, as it is often defined relative to a specific context. Given the central role of perceived risk in shaping user trust, my study adopts the risk-based classification of the EU AI Act, the first regulatory framework of its kind, as a guiding framework to select two AI systems in high-risk contexts. These domains represent scenarios where the consequences of AI decisions are significant and trust is harder to establish.
My goal is then to define trustworthiness within those contexts by drawing on the existing literature and conducting interviews with domain experts. Ultimately, we aim to understand whether users from diverse backgrounds perceive and respond to such a "trustworthy" AI system with trust.
After developing two hypothetical trustworthy AI systems within these high-risk domains, we will recruit a diverse participant pool to engage with them. The study will then measure participants' trust levels and analyze how internal human factors correlate with trust in these contextually trustworthy, high-risk AI systems.
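To make the planned analysis concrete, here is a minimal sketch of the factor–trust correlation step. All column names, scales, and data values are hypothetical placeholders, not the study's actual instruments or results:

```python
import pandas as pd
from scipy.stats import spearmanr

# Each row is one participant; human factors are self-reported (hypothetical data).
participants = pd.DataFrame({
    "age": [24, 31, 45, 52, 29, 38],
    "ai_experience": [4, 2, 1, 3, 5, 2],   # 1 (none) to 5 (expert)
    "risk_tolerance": [3, 4, 2, 1, 5, 3],  # 1 (risk-averse) to 5 (risk-seeking)
    "trust_score": [5.1, 4.2, 3.0, 2.8, 6.3, 4.0],  # e.g., mean of a 7-point trust scale
})

# Spearman rank correlation between each human factor and measured trust,
# chosen here because Likert-style responses are ordinal.
for factor in ["age", "ai_experience", "risk_tolerance"]:
    rho, p = spearmanr(participants[factor], participants["trust_score"])
    print(f"{factor}: rho={rho:+.2f}, p={p:.3f}")
```

In practice, the analysis would also need to handle categorical factors (e.g., demographic groups) and correct for multiple comparisons, but the sketch shows the basic shape of the factor-by-factor correlation.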
Over the first twenty months of my PhD program as a part-time student, I conducted an extensive literature review, explored the challenges of responsible AI in industry, and aligned current research gaps with my personal interests to define a clear research direction. Last year, I published this work and presented it at the Second International Symposium on Trustworthy Autonomous Systems (TAS '24) in Austin, Texas. I anticipate completing my PhD by October 2028.
This diagram illustrates the experimental design steps and how they integrate to explore the relationship between user trust and a trustworthy AI system.