Alan Turing and Alonzo Church proved the fundamental impossibility of guaranteeing, without running it, that a computer algorithm satisfies certain properties. The rise of generative AI (genAI) further highlights the unpredictability of advanced technologies. This study examines how an organization handles genAI’s impact on its core functions while managing the technology’s unpredictability. I conducted a 20-month qualitative study of a higher education organization, focusing on how members developed strategies to protect its core functions – teaching and learning – from genAI’s negative impacts. I show how specific aspects of genAI’s unpredictability (1) made the locus of organizational control efforts a moving target, (2) prompted the delegation of sensemaking about the emergent technology to lower organizational levels, and (3) often led organizational members to adopt “highest-risk” assumptions. Adopting these highest-risk assumptions meant prioritizing the prevention of misuse at the expense of potential benefits, inflicting additional harm on the organization’s core functions. I draw on the theory of the social construction of risk objects to show how genAI challenges traditional ways of governing technological risk in organizations.