The recent open revealing of numerous AI systems challenges the notion that exclusivity over an AI system's data and model constitutes a source of competitive advantage. We explore the mechanisms behind revealing AI and the characteristics of AI systems associated with it. Specifically, we examine two dimensions of the selective revealing of AI systems: its completeness, describing which components are revealed (none, the model, or the model and data), and its degree, determined by the license type (proprietary, restrictive, or permissive). Employing a mixed-methods approach, we draw on 12 interviews with decision-makers at AI-focused organizations and on prior theory to construct hypotheses, which we empirically test on a sample of 716 AI systems. We hypothesize, and find supported in the data, that organizations tend to reveal larger and more novel models less completely and to a lesser degree, and that data modality moderates the association between model size and revealing completeness. In line with our qualitative findings, these results suggest that revealing AI system components serves to promote their adoption and to establish a lock-in across AI system versions, rather than to enable collaborative development. Our study contributes to the academic discourse on open innovation and competitive advantage. For strategists and policymakers, we provide guidance for navigating their pathways toward opening AI.