Putting AI on the Org Chart: Evidence on Oversight and Accountability
Motivated by the potential for large productivity gains from AI, firms are increasingly deploying agentic AI systems capable of independent action. Moreover, they increasingly brand these AI agents not as tools but as “AI teammates” or “AI employees”. While existing research extensively explores the effects of using AI as a standalone productivity tool, the behavioral and governance consequences of treating AI as an organizational peer remain largely unexplored. We argue that framing AI as an employee fundamentally alters oversight and workplace dynamics as long as human employees remain in the loop to review, approve, or collaborate with AI. In a survey of 1,261 managers, we find that 23% already work in organizations where AI agents have been formally institutionalized on organizational charts. In a randomized experiment, we provide these managers with identical documents containing built-in errors, varying whether the document is attributed to an AI tool, an AI employee, or a human employee. Among managers whose organizations have already “put AI on the org chart”, attributing identical drafts to an AI employee (versus an AI tool) reduces managers’ error catching by 16%, increases requests for additional review by 44%, and shifts perceived accountability away from the manager and toward the AI system. By contrast, we find little evidence of such framing effects among managers in organizations without this institutionalization. These findings imply that how organizations categorize AI is not neutral: when institutionally credible, treating AI as an organizational member changes oversight behavior and perceived accountability in AI-mediated work.
