The societal, technical, and organizational risks posed by artificial intelligence (AI) increasingly motivate organizations to invest in governance efforts to promote responsible AI development and use. Over three years, we studied a European telecommunications company to investigate the micro-processes of AI governance. Beyond creating new roles (e.g., governance officer) and governance bodies (e.g., ethical council), the organization saw an AI development platform as a key part of governing AI. Our findings revealed a frequent misalignment between the platform's intended use and its actual adoption in practice. Instead of using the platform, organizational actors often relied on simpler shadow tools to perform AI governance activities. Establishing, encoding, and enforcing rules around responsible AI unfolded through a series of discursive practices, including frequent model review meetings, community workshops, and self-regulating standards. These practices socialized values and norms around AI and helped bridge the gap between aspiration-driven policy and day-to-day practice. Our findings highlight the role of discourse in governing AI through technological tools as a means of overcoming the limitations of bureaucratic controls.