Generative AI (GAI) systems, powered by large language models (LLMs), are reshaping how knowledge workers collaborate with AI across diverse contexts. This paper introduces a novel role concept for prompt engineering (PE), the practice of refining inputs to GAI models to produce high-quality outputs, that accommodates the rapidly evolving nature of GAI usage. Drawing on a qualitative proof-of-concept approach, we interviewed 17 AI experts to explore emerging PE practices. Our findings identify four archetypical roles (the LLM Developer, the GPT Developer, the Subject Matter Expert, and the Prompt User), reflecting the varying degrees of domain knowledge regarding GAI, domain knowledge regarding the subject matter, AI literacy, linguistic skills, and metacognitive abilities needed for effective engagement with GAI. We propose that each role addresses distinct work tasks, from fine-tuning LLMs and building domain-specific GPTs to developing or applying prompts for individual purposes. The findings highlight the need for user-friendly GAI systems that embed PE techniques into back-end systems, lowering the barriers for non-expert users while also raising awareness of the risk that such users overrely on these systems. As knowledge work increasingly relies on GAI, the framework provides a roadmap for equipping the future workforce with the necessary skills and competencies. The paper thus challenges the monolithic view of “prompt engineering,” urging stakeholders to systematically refine PE approaches and training.