Advancements in artificial intelligence have sparked growing interest in whether AI can influence human moral reasoning, a uniquely human construct traditionally considered resistant to external persuasion or coercion. Some scholars argue that AI could, in principle, shape human moral reasoning, while others remain skeptical, citing factors such as algorithm aversion and AI’s deficits in affective persuasion. Despite these contradictory perspectives, the question of whether AI can affect human moral reasoning remains unresolved. The goal of the current research is twofold: first, to provide empirical evidence of AI’s influence on human moral reasoning, and second, to investigate the downstream consequences of that influence. Across two studies (n = 403), we found that interacting with AI reliably shifted participants’ moral reasoning and that these shifts translated into broader effects on policy preferences and receptivity to emerging technologies. These findings underscore both the opportunities and risks of an increasingly AI-integrated future.