This paper incorporates the newly emerging literature on AI as a decision maker into a business context. Most prominently, it parallels the concept of corporate moral agency with the idea of AI agency. The question of whether corporations have moral agency, and by extension moral responsibility, has long been debated in the business ethics literature. Some scholars argue that organizations possess enough elements of agency, through their internal structures and rules, that they should be held responsible for their actions. Others argue that corporations cannot bear responsibility because they lack essential elements of agency, such as intentionality and the capacity to act. A third, middle-ground approach proposes the idea of corporate moral agency: that corporations have some, but not complete, agency. This debate is closely mirrored in current discussions of AI. I therefore propose the idea of AI agency as a parallel to corporate moral agency. By drawing on the corporate agency literature, the concept of AI agency can help us understand when it is acceptable to blame AI for unethical actions. This matters to organizations as they increasingly rely on AI for operations such as forecasting, customer service, and hiring. This paper highlights the importance of understanding AI as a receiver of blame, uses the parallel with corporate moral agency to address the question of who is liable when AI makes unethical decisions, and bridges the gap between the growing AI literature and research on organizational decision making.