We analyze the predictive validity and accuracy of evaluators' assessments of research proposals. Using a two-stage estimation that models both the funding decision and subsequent outcomes (measured by publications, citations, field-citation ratios, and public attention via Altmetric), we find that the predictive validity of evaluations depends on the evaluators' expertise. Specifically, assessments by domain experts, defined as evaluators with specialized knowledge in the proposal's area, predict scientific outcomes after funding, whereas assessments by evaluators with average or minimal domain expertise do not. Experts' assessments also carry more weight in funding decisions, indicating that their abilities are factored into the decision-making process. However, experts are not particularly accurate in assessing the actual levels of outcomes: while their unfavorable opinions are generally correct, their favorable opinions are prone to substantial error. Experts are also better at predicting non-scientific impacts, although these can be predicted, to a lesser extent, by non-experts. Our findings suggest that, although domain expertise is critical for funding excellent scientific research, domain experts also favor proposals that underperform in their area of expertise.