Brandenburg University of Technology Cottbus-Senftenberg, Germany
We explore the capabilities of large language models (LLMs), a type of generative AI, in evaluating early-stage ventures and examine how different information cues influence these evaluations. In pre-registered experiments, we compare 1,368 venture evaluations made by ChatGPT and by human investors, benchmarking both against the ventures' actual fundraising campaign outcomes and post-campaign survival rates. Our findings show that ChatGPT outperforms human investors in both investment evaluations and survival predictions, and that information cues increase the accuracy of ChatGPT's evaluations. However, exploratory analyses of ChatGPT's responses reveal human-like biases, including anchoring effects, herding behavior, and the neglect of relevant information.