Artificial intelligence (AI) is widely recognized for its efficiency in performing tasks traditionally managed by humans. However, AI systems are not without flaws, and their occasional errors can erode user trust. To mitigate this, 'apologies' (oral or textual communications from AI to users) are employed to repair trust. These apologies vary in form and effectiveness, but research on their impact is fragmented across disciplines. Here, we conduct a systematic review that integrates existing studies on AI apologies using both qualitative and quantitative methods. First, a narrative review extends the established six-component model of human-to-human apologies (Lewicki and Brinsfield, 2017) to AI-to-human interactions and identifies three additional strategies: denial, humor, and gratitude. Second, a meta-analysis confirms the positive impact of AI apologies on user trust, whereas denial strategies are found to be ineffective. Further examination of several moderators (such as AI tangibility or failure type) yields inconclusive results, suggesting that the underlying mechanisms and moderating factors remain only partially understood. This paper highlights the theoretical and practical importance of AI trust-repair strategies and identifies future research opportunities towards the development of human-centered AI systems.