This inductive study, based on two large human-AI co-creation design competitions, examines how experts evaluate creative products generated through human-AI collaboration. We find that experts initially engage in a product-oriented evaluation process, in which novelty and usefulness remain the core criteria. The collaborative nature of human-AI creation raises experts’ expectations for both novelty and usefulness. However, this broadened reference frame often leaves novelty expectations unmet, producing a negative novelty bias against human-AI co-created products. The evaluation does not end there: experts proceed to a creation-process-oriented evaluation, in which they seek evidence of human involvement and reward it accordingly. The extent of this recognition depends on experts’ understanding of AI and their identification with their own professional expertise. We propose a process theory that explains how experts evaluate the outputs of human-AI creative collaboration, highlighting the interplay between the product-oriented and process-oriented evaluation stages. Our findings underscore the critical importance of maintaining human dominance in the creative process to enhance expert evaluations of co-created products.