Peer feedback, such as comments, likes, and evaluations, plays a crucial role in shaping competitive dynamics in crowdsourcing contests, acting as a social-interaction driver that influences solvers’ behavior. This study applies the attention-based view to explore which peer feedback mechanisms merely capture the initial attention of one-time solvers and which can sustain the attention of serial solvers, who are considered the source of multiple high-quality entries. Using fuzzy-set Qualitative Comparative Analysis (fsQCA) across 86 contests, the findings reveal that these mechanisms hold distinct potential. Likes and evaluations draw solvers’ initial attention through visible signals of approval but lack the depth to sustain long-term engagement. In contrast, by fostering meaningful interaction, detailed feedback, and actionable guidance, comments are essential for encouraging continuous participation, especially when complemented by likes. These findings contribute to crowdsourcing research by distinguishing between attention-capturing and attention-sustaining peer feedback mechanisms, offering a deeper understanding of peer interactions on digital platforms. They also provide practical insights for designing crowdsourcing platforms that balance these mechanisms to optimize both initial and continuous engagement.