Algorithm aversion, the tendency of individuals to prefer decisions made by humans over those made by superior algorithms, impedes the adoption of AI technologies. Previous research on interventions to reduce algorithm aversion has yielded mixed findings, leaving their effectiveness and underlying mechanisms unclear. This meta-analysis synthesizes findings from 48 studies (113 effect sizes, N = 36,682), categorizing interventions into human-, algorithm-, and context-focused strategies and evaluating their impact. Results show that all three intervention categories significantly reduce algorithm aversion (overall g = 0.21***), with context-focused strategies being the most impactful, including task framing (e.g., emphasizing an algorithm's strengths on objective tasks; g = 0.76*) and social recognition (e.g., highlighting peer endorsements of algorithmic performance; g = 0.69*). Algorithm-focused approaches, such as transparency and humanization, yielded smaller but significant effects, while human-focused interventions produced mixed outcomes. Cultural factors emerged as key moderators of the relationship between intervention strategy and algorithm aversion, with studies based on U.S. participants showing stronger intervention effects. These findings underscore the need for tailored, context-sensitive interventions to build trust in algorithms, and they inform future research and practical applications in human-algorithm collaboration.