AI systems are disrupting organizing through their transformative effects. Drawing on vast amounts of data, organizations employ AI systems across sectors ranging from entertainment to healthcare, and they form partnerships to gain access to the resources and knowledge needed to develop such systems. However, AI systems involve and affect multiple actors with conflicting goals, creating challenges such as data frictions and data-related tensions. If these challenges are left unmanaged, data flow across the partnership becomes compromised, jeopardizing AI development. Coordination between partners and the underlying system therefore becomes crucial, yet it remains largely unaddressed in previous studies. We studied the Streams project, an AI partnership between Google DeepMind and the Royal Free Hospital in the UK, through multiple secondary sources. Streams received wide media coverage and public scrutiny owing to controversies around the data exchanged and the project’s sensitive nature. We show what triggered data-related tensions and data frictions, how the frictions amplified one pole of each tension, and how they contributed to unintended consequences. We contribute to paradox theory in two ways: first, by arguing that organizations must resolve frictions to keep AI projects running, which runs contrary to the view that paradoxes persist; second, by shifting the focus from relational tensions between partners to tensions in which partners are aligned with each other but misaligned with their underlying system. We also contribute to research on AI in organizing by showing how data dynamics affect AI development, a perspective rarely addressed in prior studies.