Peking University, School of Psychological and Cognitive Sciences, China
Due to the unpredictability and opacity of artificial intelligence (AI) decision-making processes, trust in human-AI interactions has become a significant concern. Existing studies on trust in AI, however, often yield inconsistent findings and lack a comprehensive account of the underlying mechanisms, highlighting the need for a systematic synthesis of this field. We therefore conducted a systematic review of 560 empirical studies to examine the antecedents and consequences of human trust in AI. The review identified key antecedents of trust, including AI capability, anthropomorphism, explainability, and individual user factors, as well as consequences such as behavioral intention, attitude, perceived usefulness, and acceptance. A global cross-cultural analysis further revealed significant variation in how cultural backgrounds shape the perception and prioritization of factors such as AI capability, anthropomorphism, and transparency, underscoring the need for a multifaceted approach to fostering an environment for trustworthy AI. The findings suggest several directions for future research: exploring dynamic trust-formation processes, the evolution of trust over time, reciprocal trust relationships, context-specific trust requirements, the shift from individual to group trust dynamics, and the role of AI agency. Addressing these areas could deepen our understanding of trust in AI and contribute to the development of culturally sensitive AI technologies.