The introduction of AI-based monitoring systems in organisations is reshaping not only the employee-customer relationship but also the way colleagues interact with one another. Specifically, in response to facial, voice and/or text-based emotion recognition systems, employees must perform additional emotional labour to ensure that they are perceived as positive, high-performing individuals. Drawing on workshops in which individuals could safely experiment with such systems, and on follow-up group and individual interviews, we theorise a new form of emotional labour, which we call Algorithmic Emotional Labour (AEL). We define AEL as a set of practices triggered by the need to decipher the algorithm’s evaluative criteria and to ‘speak’ the algorithm’s language. These practices amount to a ‘double’ form of emotional labour, directed both at interlocutors and at the algorithm itself. While AEL builds on the foundational concepts of emotional labour, such as the management of emotions to present a desired emotional state in response to specific display rules, we show that these rules are now shaped by the requirements of the algorithm rather than solely by social or organisational norms. This introduces additional layers of regulation, strategy, and effort: individuals must continually adjust their emotional expression to meet the unpredictable criteria set by algorithms and to compensate for algorithmic limitations, such as the inability to recognise nuanced or context-specific emotional cues.