Weatherhead School of Management, Case Western Reserve University
This paper brings together two major research areas, the effects of technology in workplaces and trust in technology, in the context of the proliferation of Artificial Intelligence (AI). In doing so, it answers the call to move beyond the automation-augmentation debate and understand the nuances of the continued augmentation of knowledge workers within AI systems. We introduce a theory of trust in AI systems, proposing a dynamic form of trust that emerges when the usual epistemic relationship between trust and knowing (that technology delivers an expected result) is broken. Doubts about the system, created by the occurrence of different types of errors, call into question the veracity of the AI model's results. The implications of this theory are shown in the context of different AI models used in a Department of Radiology, through extensive ethnographic fieldwork. Unlike previous workplace technologies, where trust develops gradually, the trust-building process for each AI model follows a unique path, yielding a combination of two forms of trust: fleeting and stable. To further understanding of this phenomenon, the data suggest integrating trust building with social actions, system functions, and the daily work context within the department. The study points to the transient nature of trust in AI systems, which depends on the context and on features of the object of trust, such as the data, algorithm, task, and user or user community. The paper finds that trust in metahuman systems (sociotechnical systems in which both humans and machines learn) requires humans to trust the machine components, as machines continuously learn and evolve alongside humans. This dynamic, context-specific form of trust building must be cultivated and maintained for effective interaction and integration.