Crowdsourcing is commonly portrayed as a tool for tapping into the diverse expertise of large (often unknown) crowds to address various problems, a practice we metaphorically label “fishing”. However, we empirically show that solver appropriateness, i.e., the proximity of solvers’ knowledge and/or background to the problem requirements, is critical for the effectiveness of crowdsourcing. Through a meta-synthesis of 17 qualitative case studies, we identify different levels of targeting in solver identification for problems with heterogeneous attributes (generic, urgent, highly technical, and complex problems). We demonstrate that generic and urgent problems typically benefit from “chance encounters” as well as from semi-targeted recruitment mechanisms. In contrast, adopting more informed approaches to solver identification or preselection increases the likelihood of success for complex or highly technical problems. Put differently, where the alignment between problem attributes and solver appropriateness is blurred or non-existent, the results tend to be meagre and/or organizations incur substantial screening costs and information overload. Finally, we discuss directions for future research to advance the literature on crowdsourcing at the problem level of analysis.