The Agency Gap: Why the Best AI Products Make You More Powerful, Not More Passive
80,508 people told Anthropic they want productivity from AI. When pressed on why, they described wanting their lives back. Most AI products are built for the stated need. The winners are built for the real one.

The surface-level ask is always productivity. Make me faster. Automate the boring stuff. Save me three hours a week. Every AI product pitch deck starts here, and most never leave.
Then Anthropic published findings from what it calls the largest qualitative study ever conducted: 80,508 people across 159 countries, interviewed in 70 languages by a conversational AI designed to push past first answers. The top-line result looked predictable: 19% cited professional excellence. Handle the mundane so I can do strategic work. Standard stuff. But when the interviewer kept pushing, the answers drifted somewhere builders didn't expect. Personal transformation, 14%. Life management, 14%. Time freedom, 11%. Financial independence, 10%.
One respondent: "With AI support I can now leave work on time to pick up my kids from school."
These people weren't describing wanting smarter software. They were describing wanting their lives back. The job they're hiring AI to do isn't automation. It's agency.
And yet almost every AI product ships the opposite.
The graveyard of automation
The numbers are brutal. 1,698 AI tools have already gone defunct. 90% of AI startups fail within their first year. Nearly 30% of all AI tools ever launched have ceased operations. At the enterprise level, McKinsey found that more than 80% of organisations report no tangible impact on EBIT from generative AI, and Gartner estimates only 9% of enterprises have scaled AI beyond the experimental stage.
The pattern in these failures is what analysts call shiny object syndrome: teams building features that optimise for automation instead of solving the user's actual problem. The AI tools graveyard is filled with products whose entire value proposition was "call GPT and display the response." Automation nobody asked for, in a form that stripped users of the control they actually wanted.
This is the agency gap: the distance between what builders think users want (do it for me) and what users actually want (help me do it better).
Light and shade
The Anthropic study surfaced a tension that most product teams ignore entirely. The researchers identified what they call a "light and shade" effect: the same AI capabilities that generate benefits also produce fears. Someone who values emotional support from Claude is three times more likely to fear becoming dependent on it. 22% of all respondents specifically worried about losing autonomy: not understanding what AI does under the hood, feeling that the AI draws the lines instead of them. Unreliability was the single largest fear at 27%, the only category where negatives outweighed positives.
So the user is simultaneously saying "help me" and "don't take over." That tension is the entire design problem. And almost nobody is designing for it.
The companies that figured it out
The companies winning in AI have recognised this tension and built for it explicitly.
Duolingo's AI-first strategy drove a 51% surge in daily active users and put the company on a path to $1 billion in revenue. But the design deliberately keeps the user cognitively engaged. The team articulates it directly: "People learn when they feel agency. The moment a system starts doing the work for learners, engagement drops." AI selects the right examples, nudges timing, and offers corrections. The user still lifts the cognitive weight. What Duolingo built is AI that removes friction from learning without removing the learning from learning. That distinction is everything.
GitHub Copilot is named Copilot, not Autopilot, and the naming captures the design philosophy precisely. It suggests and completes code while remaining inherently dependent on the developer for direction and judgement. Its success came from a phased approach that expanded the AI's scope gradually as the systems became more capable, never moving faster than developers' comfort with ceding control. Developers kept saying yes because they never felt the tool was trying to replace their thinking.
Notion positioned AI as a thinking partner within a "tool for thought." Rather than replacing existing workflows, the rollout respected how users already worked. Users bring their own structure. AI helps them move through it faster.
The common design principle: AI removes friction so the user retains agency. None of these products try to do the job for the user. They make the user better at the job.
Agency is not a feature preference
The way I see it, this points to something more fundamental than product design. Agency isn't a UX preference you can A/B test. It's closer to a psychological need.
People don't want outcomes. They want to be the cause of outcomes. Strip that away and even a "better" result feels hollow. When more than 80,000 people converged on the same ask, "not do more for me but help me do more with what I have," they weren't filing a feature request. They were articulating something about human nature that holds whether you're talking about software, education, or work itself.
Academic research reinforces this. A 2025 study indexed on PubMed Central found that an optimal balance of autonomy improves user acceptance, while both excessive automation and insufficient autonomy damage trust and engagement. Purely automated experiences backfire because they shift decision authority from users to AI systems, undermining the sense of control that drives continued use. The mechanism that actually works is shared responsibility: users refine AI-generated outputs and retain ownership of final decisions. Autonomy preserved, friction removed.
There is a darker angle. A Microsoft and Carnegie Mellon study of knowledge workers found that those who leaned most heavily on AI assistants reported engaging in less critical thinking about the outputs. The "do everything for you" model actively degrades the capabilities that make users valuable. Full automation is corrosive to the user over time.
If agency is a fundamental human need, then the fully automated AI product is designed against human nature. And products designed against human nature fail, no matter how capable the underlying model.
The design test
The practical implication for builders: the job your users are hiring AI to do is probably not the one they first describe. The surface request is "make me more productive." The actual job is "give me more control over my time, my decisions, and my work."
Products that misread this build fully automated features users try once and abandon. Products that get it right build tools that make users feel more capable, handling the tedious parts while preserving meaningful decisions.
Here is a design test for any AI feature: does this make the user more capable or more dependent? If you removed the AI tomorrow, would the user be better at their job than before they started using it, or worse?
Glen Rhodes made the point that the appetite users showed for being heard "is itself a product opportunity most builders are leaving on the table." 80,508 people had a lot to say about this technology when someone actually asked. The answer wasn't "give me more automation." It was "give me more control."
Maybe the real competitive moat in AI products isn't model capability, data, or UX. Maybe it's the discipline not to automate the parts that matter to the user. To know which friction to remove and which friction is actually the point.
The best AI products don't make you more passive. They make you more powerful. The gap between those two outcomes is where most of the industry's value is being destroyed right now.