In this conceptual paper, we shift focus from the prevalent use of AI in management and information systems (IS) to the less explored area of AI development, emphasizing the crucial role of data and of ground truth data labeling for developing AI in uncertain contexts. We challenge the conventional view of ground truth labels as objective and of labeling as a human-dominant task, advocating instead for a collaborative human-AI effort that brings the diversity and specialization necessary for constructing ground truth labels in AI development. Drawing on the data labeling literature, we introduce ground truth uncertainty as a salient construct that captures the inherent subjectivity and complexity of data labeling, a process in which multiple annotators with diverse backgrounds and expertise iterate to continually refine labels and the underlying labeling knowledge. Using distributed cognition as our overarching lens, we describe labeling as cognitive activities distributed across humans and AI and enacted through their interactions. We highlight the feedback loops that drive mutual learning and mutual adjustment in this cognitive system, leading to the bracketing of representational fluidity and the construction of what is temporarily accepted as ground truth. Our theoretical framework offers valuable insights for the burgeoning research on AI development in management and IS and enriches related research on data work, data labeling, and human-AI collaboration.