AI Competence as Human Capital

Generative AI can raise productivity, but realized gains depend on how workers use the tool. We study AI competence as a form of human capital, defined by the practical ability to organize work with a model, evaluate its output, and retain judgment while using it to extend one's existing skills. In a preregistered lab-in-the-field experiment, 332 full-time management consultants whose jobs did not routinely require coding were assigned a difficult Python data-analysis task with or without access to a generative AI tool. AI access increased scores by 34 percentage points, raised completion by 7 percentage points, reduced time on task by 12%, and improved debugging efficiency. We open the black box of these gains by combining generative AI transcripts, code-execution logs, task outputs, and surveys. Using an ethnography-inspired agentic coding procedure, we identify nine distinct AI-collaboration practices that capture how workers frame requests, decompose problems, rely on generated code, edit independently, verify outputs, and manage scope. These practices explain substantial heterogeneity in overall performance: gains concentrate among workers who engage proactively with AI while retaining substantive judgment. Finally, we show that productivity gains and workers' interpretation of those gains are distinct. AI access does not significantly increase average self-reported confidence, trust, or enjoyment, nor behavioral trust in AI, and belief responses vary substantially by gender and prior coding experience. The results suggest that firms should treat AI training as a human-capital investment, teaching workers not only how to prompt, but how to divide labor with AI, evaluate its output, and build confidence in using it well.