This paper proposes a conceptual model that positions AI-mediated communication as a key driver of relational dynamics in multilingual virtual teams, ultimately affecting the formation and maintenance of a psychologically safe climate. The recent rise of Generative AI tools such as ChatGPT, Google Translate, and DeepL is transforming communication in multilingual virtual teams. Unlike traditional digital tools that merely transmit human-generated messages, Generative AI tools actively modify, augment, and even generate messages, introducing a phenomenon we conceptualize as AI-mediated communication. Existing theories fail to capture the nuanced effects of this new phenomenon. Using phenomenon-based theorizing, we therefore integrate insights from multiple disciplines into a conceptual model of the countervailing pathways through which AI-mediated communication affects psychological safety in multilingual virtual teams. On the one hand, the technological capabilities of Generative AI tools can reduce language barriers and thereby promote psychologically safe climates. On the other hand, the inherent limitations of these tools may undermine trust and inclusion, both of which are critical to psychologically safe climates. These effects are further moderated by team members' literacy in AI-mediated communication. Our conceptual model advances research on psychologically safe climates by theorizing AI-mediated communication as a transformative force that paradoxically both fosters and weakens psychological safety in multilingual virtual teams. In doing so, we also theorize how the foundations of psychological safety in multilingual workplaces change in the presence of new Generative AI tools. Finally, we recommend targeted organizational interventions that managers can use to foster psychologically safe climates in increasingly multilingual and AI-mediated workplaces.