Drawing on an autoethnographic field study within a public employment service, we examine how an organization anticipates and organizes for the performativity of errors produced by an AI system. We outline how, during the development of a machine learning model predicting jobseekers’ risk of long-term unemployment, the interdisciplinary project team began unpacking the model’s prediction errors and anticipating their implications. The team uncovered three layers of performativity that came with erroneous predictions: the actions taken based on the predictions, the signals sent by the predictions, and the interactions between predictions and reality. This deeper understanding of the performative dynamics of predictions led the organization to shift its focus from striving for accuracy to actively organizing for errors, both by designing mitigating practices and by adjusting the algorithm’s design. Our findings reveal that errors, often concealed behind accuracy metrics, are not merely passive byproducts of AI systems but actively shape organizational decisions and practices. This study contributes to theory on the performativity of predictions by demonstrating the complex interdependencies between algorithmic outputs and social systems.