The Cowan Paradox: Why AI Agents Create More Governance, Not Less

Labour-saving technology has never saved labour. AI agents won't be the exception — and the governance implications are massive.

Roshan Ghadamian · 6 min read

The Vacuum Cleaner That Didn't Save Any Time

In 1983, historian Ruth Schwartz Cowan published *More Work for Mother*, documenting one of the most counterintuitive findings in the history of technology: labour-saving household appliances didn't actually save any labour. Economist Joel Mokyr later called this the "Cowan Paradox."

The vacuum cleaner is the clearest example. Before it existed, cleaning a carpet was a brutal seasonal project — move all the furniture, drag heavy rugs outside, beat them with a paddle. It took multiple people. It happened once or twice a year.

Then the electric vacuum arrived. Cleaning became easy. So what happened?

The frequency spiked. A dusty floor for six months was no longer acceptable. Weekly — then daily — vacuuming became the norm. The helpers got fired. The work that used to require hired help was reclassified as "light housekeeping." And time-use studies from the 1930s through 1950s show housewives still spent roughly 51 hours a week on housework — barely changed from before the appliances arrived.

The tools changed. The hours didn't. The standard of what was expected simply rose to absorb the new capacity.

The Pattern Is 160 Years Old

The Cowan Paradox is a cousin of the Jevons Paradox, identified by economist William Stanley Jevons in 1865. Jevons observed that when coal engines became more efficient, total coal consumption *increased* — because lower cost per unit drove more usage. Build more highway lanes, traffic increases. Make cars more fuel-efficient, people drive more miles. Make a task easier, people do more of it.

The pattern repeats in every domain: efficiency gains don't reduce consumption. They get absorbed by demand expansion.

The difference with AI is the speed. The vacuum cleaner took decades to reset household expectations. AI agents are resetting expectations in months.

This Is Already Happening with AI Agents

Jason Lemkin at SaaStr documented this in real time. SaaStr went from 20+ employees to 3 humans and 20+ AI agents. Revenue flipped from -19% to +47% year-over-year. They now process hundreds of thousands of startup valuations monthly — something that would have required 50+ analysts.

Are they working less? No. They're doing dramatically more than they ever did with 20+ people. The AI agents didn't let them do the same work cheaper. They let them do entirely new categories of work that were previously impossible. And now that they can do it, they have to — because competitors can too.

Aaron Levie, CEO of Box, said it plainly: "I'm just finding more stuff to have the AI do — and then I end up doing more work as a result." He called it the death of the four-day work week.

Researchers at Berkeley Haas confirmed it in an eight-month field study published in HBR: workers who adopted AI felt more productive but not less busy. Task boundaries dissolved. People absorbed others' work. Multitasking exploded. Without intentional guardrails, AI doesn't contract work; it intensifies it.

The Governance Implication Nobody Is Talking About

Here's what the Cowan Paradox means for institutional governance — and why the AI governance market is structurally underestimated.

If AI agents multiply organisational output by 5-10x, they multiply ungoverned decisions by 5-10x. Every new action needs a constraint check, an authority trace, and an audit record. The work doesn't just expand — the governance surface area expands with it.

Task expansion means more authority boundary crossings. When workers absorb others' jobs via AI, they're making decisions outside their traditional authority boundaries. A product manager writing code via an AI agent is now making engineering decisions. Who authorised that? Under what constraints?

Frequency spikes mean continuous governance, not periodic. Quarterly releases become daily deploys. Weekly reports become real-time dashboards. Annual audits become continuous monitoring. The governance infrastructure built for quarterly cadences breaks under daily load.

Volume explosion means more traces needed. If 3 people produce what 15 did, that's 5x more actions flowing through the organisation. Each action that crosses an authority boundary — spending, data access, deployment, external communication — needs a governance trace. The trace volume scales with the action volume.

Cognitive overload means more governance shortcuts. When humans manage multiple AI agents in parallel, they're more likely to skip checks, approve without reviewing, or let agents operate outside their delegated scope. Speed pressure creates exactly the conditions where governance failures occur.

The Market Is Larger Than Anyone Projects

Current AI governance market projections ($492M in 2026, ~$1B+ by 2030 at 45% CAGR) are sized on a linear deployment assumption: companies deploy X agents, need governance for X agents.

The Cowan Paradox says this is wrong. The actual demand curve is exponential:

A company deploys 5 AI agents. Those agents enable 10x more actions. Workers absorb new tasks because AI makes them "easy." The expanded scope requires 10 more agents. Governance demand doesn't grow 5x — it grows 20-50x.
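The gap between the two sizing models can be sketched as back-of-envelope arithmetic. The multipliers below are the article's illustrative assumptions (10x more actions per agent, scope that roughly doubles as workers absorb new tasks), not measured market data:

```python
# Two ways to size governance demand for a company deploying AI agents.
# All multipliers are illustrative assumptions, not measured data.

def linear_demand(agents: int) -> int:
    """Linear assumption: one unit of governance per deployed agent."""
    return agents

def compounding_demand(agents: int, actions_per_agent: int = 10,
                       scope_expansion: float = 2.0) -> int:
    """Cowan-style assumption: each agent enables ~10x more actions,
    and the expanded scope itself pulls in further work to govern."""
    base_actions = agents * actions_per_agent   # 5 agents -> 50 governed actions
    expanded = base_actions * scope_expansion   # new task categories absorbed
    return int(expanded)

agents = 5
print(linear_demand(agents))       # 5 units under the linear model
print(compounding_demand(agents))  # 100 units: a 20x gap, the low end of 20-50x
```

With the article's conservative multipliers, the compounding model already lands at the bottom of the 20-50x range; more aggressive scope-expansion assumptions push it toward the top.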

The market isn't "how many AI agents get deployed." It's "how many actions get taken in an AI-augmented world." And the Cowan Paradox tells us that number is an order of magnitude larger than the pre-AI baseline.

Every vendor selling "do more with less" is simultaneously expanding the governance surface area. Every AI agent deployed at every company is another action that needs authority checking. The paradox that makes AI valuable — it raises the bar on output — is the same paradox that makes governance infrastructure mandatory.

What This Means for Boards and Institutions

If you're a board director, the Cowan Paradox has three immediate implications.

First, your organisation is already doing more than you think. If your team has adopted AI agents (and they have, whether you sanctioned it or not), the volume of institutional actions has expanded — potentially by an order of magnitude. Most of those actions are ungoverned. You can't audit what you don't know happened.

Second, periodic governance is dead. The quarterly board meeting was designed for an era when institutional actions happened at a pace boards could review retrospectively. In an AI-augmented organisation, thousands of actions cross authority boundaries every week. Governance must be continuous and infrastructure-level, not periodic and process-level.

Third, the legal standard has already shifted. In *ASIC v Bekier* [2026] FCA 196, the Federal Court established that directors must maintain "active monitoring" — not just receive reports, but have systems that produce contemporaneous evidence of governance. The Cowan Paradox means the volume of actions requiring that evidence is growing faster than any manual process can track.

The organisations that understand this will build governance infrastructure that scales with their AI adoption. The ones that don't will discover the gap when a court, regulator, or auditor asks: "How many decisions did your AI agents make last quarter, and which ones did the board know about?"

The answer, for most organisations, is: they don't know. And that gap is growing every day the Cowan Paradox does what it has always done — turns efficiency into expanded expectations, and expanded expectations into new categories of institutional risk.

Governance as Infrastructure, Not Process

The solution isn't more humans reviewing more actions. That's the equivalent of hiring back the servants after the vacuum cleaner arrived — it doesn't scale, and the economics don't work.

The solution is governance infrastructure that operates at the speed and scale of AI-augmented organisations. Constraints enforced at the moment of action. Authority checked before execution, not reconstructed after the fact. Audit trails born contemporaneously with every decision.
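A minimal sketch of that pattern — check authority before execution, and write the audit trace at the moment of the decision. The table shape, actor names, and function are hypothetical illustrations, not Constellation's actual API:

```python
import time

# Hypothetical delegated-authority table: which actor may perform which
# action, and under what constraint (here, a spending limit in AUD).
AUTHORITY = {
    "pm-agent-1": {"action": "deploy"},
    "finance-agent": {"action": "spend", "limit_aud": 5_000},
}

TRACES = []  # audit records, written contemporaneously with every decision

def govern(actor: str, action: str, **params) -> bool:
    """Check authority BEFORE execution and record a trace either way."""
    grant = AUTHORITY.get(actor)
    allowed = (
        grant is not None
        and grant["action"] == action
        and params.get("amount", 0) <= grant.get("limit_aud", float("inf"))
    )
    TRACES.append({                # the trace is born with the decision,
        "ts": time.time(),         # not reconstructed after the fact
        "actor": actor, "action": action,
        "params": params, "allowed": allowed,
    })
    return allowed

print(govern("finance-agent", "spend", amount=2_000))  # True: within limit
print(govern("finance-agent", "spend", amount=9_000))  # False: exceeds limit
print(govern("pm-agent-1", "spend", amount=100))       # False: outside scope
print(len(TRACES))                                     # 3: every attempt traced
```

The point of the sketch is ordering: the constraint check gates the action, and the record exists whether the action was allowed or denied — so the audit trail needs no after-the-fact reconstruction.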

This is what we built Constellation to do. Not because we predicted the Cowan Paradox would apply to AI agents — but because we experienced it firsthand. One founder, AI agents, three production applications, 700+ API routes, 4,000+ governance traces in 90 days. The work expanded exactly as Cowan predicted. And every expanded action was governed.

The vacuum cleaner didn't free anyone from cleaning. It made cleaning a daily expectation. AI agents won't free organisations from governance. They'll make continuous governance a mandatory expectation.

The only question is whether that governance is infrastructure that scales — or process that breaks under the weight of the paradox.

