The Economics of AI Are Not What You Think

In 2010 I was in Afghanistan advising the Afghan National Army, helping oversee the early stages of a base perimeter construction project. The plans were drawn to Western standards: steel posts, chain link, the kind of fencing you’d spec out of a GSA catalog. The local commander had other ideas. He wanted a solid masonry wall, several feet taller than what we’d planned.

My first instinct was to push back on cost. I walked him through the material upgrade, the added labor hours, the timeline impact. I was confident I was right. He looked at me like I’d said something genuinely stupid, and then he explained why I had. Labor in Afghanistan was so cheap that the shift to masonry more than offset the material premium. His version was going to save more than half the project budget. I had done the math correctly with entirely the wrong inputs.

That lesson has stayed with me for more than fifteen years, and it applies directly to how most leadership teams are thinking about AI right now.

You’re Running the Wrong Economic Model

Most organizations are evaluating AI through a cost-displacement lens: identify a task, price the human doing it, calculate savings from automation. That’s the Western fence model. It feels rigorous because it uses numbers. It produces confident projections. And it routinely misses what actually matters.

The economics of an AI-augmented workforce don’t work like traditional labor substitution. The unit costs are different, the scaling behavior is different, and the constraint is rarely what you think it is. When you hire a human analyst, you’re paying for availability, not just output. When you deploy AI on that same work, you’re paying for output, and the marginal cost of the next unit of output approaches zero. That is a structurally different model, not a cheaper version of the old one.
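The structural difference can be made concrete with a toy cost model. All the numbers below are hypothetical, chosen only to show the shape of the two curves, not to reflect any real salary or pricing:

```python
# Toy comparison: paying for availability (salaried analysts) versus
# paying for output (AI with near-zero marginal cost per unit).
# Every number here is illustrative, not sourced from the article.

def human_cost(units, salary=120_000, capacity=2_000):
    """Cost of producing `units` of analysis with salaried analysts.

    You hire whole people, so cost steps up in salary-sized increments
    whenever demand exceeds the current team's capacity.
    """
    analysts_needed = -(-units // capacity)  # ceiling division
    return analysts_needed * salary

def ai_cost(units, per_unit=0.50, platform_fee=20_000):
    """Cost of producing `units` with a fixed platform fee plus a
    small per-unit charge: marginal cost stays flat as volume grows."""
    return platform_fee + units * per_unit

if __name__ == "__main__":
    for units in (1_000, 10_000, 100_000):
        print(f"{units:>7} units  human: ${human_cost(units):>9,}"
              f"  ai: ${ai_cost(units):>9,.0f}")
```

The point of the sketch is the scaling behavior, not the crossover: the human curve steps upward with each hire, while the AI curve is nearly flat, so the question of what to do with the next 100,000 units of analysis gets a very different answer under each model.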

The organizations getting real value from AI aren’t the ones who ran a headcount reduction exercise. They’re the ones who asked a different question: given near-zero marginal cost on certain cognitive tasks, what can we now do that we couldn’t afford to do before?

The Constraint Has Shifted

For most of the last century, the binding constraint on knowledge work was human capacity. You could only analyze so many contracts, review so many customer interactions, generate so many scenarios per quarter. That constraint shaped strategy, org design, and what got prioritized.

AI doesn’t just lower the cost of existing tasks. It removes the capacity ceiling on a specific class of work. The question that follows from that is not “how many people can we cut?” It’s “what decisions were we making poorly because we couldn’t afford enough analysis?” and “what opportunities were we leaving on the table because we didn’t have the throughput?”

That’s a different conversation, and it requires executives to think like the Afghan colonel, not like the American advisor with the GSA catalog. The inputs have changed. The model has to change with them.

What Sound Decisions Look Like Here

The organizations that will get this right are the ones that treat AI capacity as a new resource to be deployed, not a substitute for existing ones. That means leadership teams need to do three things differently.

First, they need to get explicit about where their real capacity constraints have been. Not where they thought the constraint was, but where decisions were actually limited by throughput. That’s where the leverage is.

Second, they need to separate workforce strategy from cost reduction. Those can overlap, but they’re not the same problem. Conflating them produces bad decisions in both directions: either you underinvest because the ROI model only credits headcount savings, or you cut capacity you still need because it looks redundant on paper.

Third, they need to hold the organization accountable for what it does with the freed capacity. The risk isn’t that AI fails to deliver efficiency. The risk is that it delivers efficiency and leadership banks it rather than redeploying it toward something that compounds. Efficiency that doesn’t fund growth or sharpen competitive position is just a smaller version of the same organization.

The Lesson From Kabul

The US spent years in Afghanistan applying frameworks that were sophisticated, well-funded, and wrong for the environment. The economics of the place didn’t match the assumptions behind the strategy. The gap between what was planned and what was possible never fully closed, in part because the people making decisions kept reasoning from Western inputs about an Afghan reality.

AI is that kind of inflection point. The underlying economics are genuinely different from what most executive teams have modeled. The organizations that figure that out early, and build their workforce strategy around what’s actually true rather than what feels familiar, will have a durable advantage over the ones still speccing chain link.

Written on April 16, 2026