China Europe International Business School (CEIBS), China
This paper examines the comparative performance of professional investors and GPT-4o in evaluating innovation projects of early-stage technology ventures to predict their prospective success. We assess performance using precision, sensitivity, and specificity, revealing distinct strengths: AI demonstrates higher sensitivity, while human evaluators exhibit greater specificity. We explore the factors driving these differences, highlighting the roles of decision-making characteristics and project complexity. We further investigate AI-human collaboration, demonstrating that integrating human input enhances AI performance in tasks that require expert intuition but can hinder it in more complex evaluation scenarios. This study contributes to understanding AI’s role in evaluation tasks that involve high risk, high uncertainty, and long feedback cycles, and offers insights into optimizing AI-human collaboration.
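For reference, the evaluation metrics named above are assumed here to follow their standard confusion-matrix definitions, where a venture judged successful that does succeed counts as a true positive (TP); this is the conventional formulation, not necessarily the paper’s exact operationalization:
\[
\text{Precision} = \frac{TP}{TP + FP}, \qquad
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP}.
\]
Under these definitions, higher sensitivity means fewer eventually successful ventures are missed, while higher specificity means fewer eventually unsuccessful ventures are endorsed.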