The rapid adoption of AI chatbots in management education poses significant pedagogical challenges, particularly in large-scale courses where personalized interaction is constrained. This paper examines the potential of Retrieval-Augmented Generation (RAG) systems to address the misalignment between generic AI-generated content and course-specific learning objectives. Three key mechanisms of misalignment are identified: (1) reliance on generalized training data rather than tailored course materials, which disrupts the alignment between learning objectives, teaching activities, and assessments; (2) inconsistencies in student learning experiences caused by varying outputs across AI platforms; and (3) the opacity of AI-generated content, which impedes students’ ability to critically evaluate responses and undermines disciplined academic thinking. A case study of a RAG-based chatbot deployed in a large-scale Philosophy of Science course at a Danish business school demonstrates the system’s potential to mitigate these challenges. Grounded exclusively in teacher-curated materials, the chatbot provided curriculum-aligned responses with precise citations, enhancing epistemic transparency. Student engagement was substantial, with more than 20,000 interactions logged. We argue that, unlike traditional LLMs that draw on broad, generalized training data, RAG systems offer a fundamentally different approach: outputs grounded in curated content and therefore aligned with pedagogical objectives and course-specific needs.
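
To make the grounding mechanism concrete, the sketch below illustrates one way such a pipeline can work: retrieve the most relevant teacher-curated passages for a student question and constrain the model's prompt to those passages, each of which carries a citation shown back to the student. This is an illustrative simplification, not the deployed system; the names (CoursePassage, retrieve, build_grounded_prompt), the word-overlap retriever, and the placeholder LLM call are all assumptions standing in for whatever retrieval component and LLM backend the course chatbot actually uses.

```python
# Minimal RAG sketch (illustrative only): retrieve teacher-curated passages,
# then ground the chatbot's answer in those passages with explicit citations.
# All names below are hypothetical; the paper does not describe its
# implementation at this level of detail.
from dataclasses import dataclass


@dataclass
class CoursePassage:
    source: str   # e.g. "Lecture 3 slides, p. 12" -- the citation shown to students
    text: str     # the curated course material itself


def retrieve(query: str, corpus: list[CoursePassage], k: int = 3) -> list[CoursePassage]:
    """Rank passages by simple word overlap with the query (a stand-in for
    the semantic/embedding retrieval a production RAG system would use)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q_terms & set(p.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_grounded_prompt(query: str, passages: list[CoursePassage]) -> str:
    """Constrain the model to the retrieved material and require citations,
    which is what keeps responses curriculum-aligned and transparent."""
    context = "\n".join(f"[{p.source}] {p.text}" for p in passages)
    return (
        "Answer using ONLY the course material below. Cite the bracketed source "
        "for every claim; if the material is insufficient, say so.\n\n"
        f"{context}\n\nQuestion: {query}"
    )


if __name__ == "__main__":
    corpus = [
        CoursePassage("Chapter 2, Popper", "Falsifiability demarcates science from non-science..."),
        CoursePassage("Lecture 4 slides", "Kuhn's paradigms shift through scientific revolutions..."),
    ]
    question = "What is falsifiability?"
    prompt = build_grounded_prompt(question, retrieve(question, corpus))
    # response = llm.generate(prompt)  # hypothetical call to whichever LLM backend is used
    print(prompt)
```

Because the prompt contains only curated passages and their sources, the chatbot's output stays within the curriculum and each claim can be traced to a citable course document, which is the epistemic-transparency property the abstract describes.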