Advances in digital technology, particularly in generative Large Language Models (LLMs), have initiated a paradigm shift towards further automating creative processes traditionally performed by humans. While prior research has predominantly focused on leveraging LLMs for idea generation, we explore their potential in automating another critical phase of the creative process: idea evaluation. Evaluating ideas is inherently challenging because an idea's value is uncertain before it is implemented, a difficulty compounded in the era of distributed innovation, where ideas are plentiful. This complexity often translates into a costly and labor-intensive process, leading to prolonged feedback cycles that can discourage future idea submissions. We propose that idea evaluation with LLMs offers organizations a more time-efficient and less cumbersome alternative, potentially streamlining the evaluation process. We seek to determine whether the idea evaluation and selection process can be automated with LLMs. Examining two distinct samples of product ideas and concepts, each evaluated by both human experts and ChatGPT 4, we find that the LLM consistently rates ideas more favorably than human evaluators across various dimensions. Moreover, the LLM showed a robust capacity to mirror human evaluations, evidenced by significant correlations across several dimensions. These results indicate a strong congruence between AI-based and human judgment, underscoring the potential of LLMs to enhance the creative process of idea evaluation.