The easy work was load-bearing

Easy tasks aren't just easy. They train new hires, pace veterans, and keep fundamental skills alive. When AI skims them off, the remaining human work becomes 100% hard cases — and most organisations aren't ready for a workforce that never gets an easy rep.


A customer support team deploys an AI agent. Deflection hits 60% within weeks. The dashboard shows exactly what the business case predicted: ticket volume down, cost per resolution down, first-response time down.

Three months later, average handle time on the remaining tickets is climbing. New hires are taking twice as long to ramp. Agent satisfaction scores are slipping. The team lead notices it on the floor before she sees it in any report. Every single conversation is now a hard conversation. The easy tickets are gone, and what's left needs product knowledge, empathy, policy judgement, and improvisation.

The metric that justified the project went down. The metric nobody was tracking went through the roof.

The same pattern, everywhere

A radiology department rolls out AI to handle routine scans. The cases left for human radiologists are disproportionately ambiguous: exactly where fatigue and concentration matter most. A radiologist who used to read a mix of normal and complex images now sees nothing but edge cases, all day. Fewer reps on easy reads means less calibration. The job didn't get easier. It got relentlessly harder, with less warm-up.

A recruiting team deploys AI screening to filter out the obvious accepts and rejects. What's left for human recruiters is a pile of borderline decisions. More judgement per application, not less. Throughput looks flat. Cognitive load per decision is way up.

Three domains, one mechanism. AI took the easy work, and the human work that remained became harder in ways that nobody modelled. In each case, the organisation automated itself into needing a different, more expensive workforce. Nobody budgeted for it.

What easy work actually does

Easy work in any operational system performs at least five structural functions that have nothing to do with its output.

Training ground. New hires learn by doing dozens of simple tickets first. They build confidence, absorb product knowledge, develop instinct for what normal looks like before encountering what's abnormal. Remove the easy tickets and you remove the entire apprenticeship layer. The AI ate the curriculum.

Cognitive relief. Easy cases are the recovery intervals in interval training. A support agent handling a mix of easy and hard across a shift can sustain attention for eight hours. The same agent doing thirty hard tickets straight burns out by lunch. Easy work was pacing, not padding.

Skill maintenance. Routine repetition keeps fundamental abilities calibrated. A radiologist who only sees ambiguous scans starts second-guessing normal ones. A support agent who only handles escalations forgets baseline product behaviour. You need regular contact with the ordinary to stay sharp on the extraordinary.

Pattern-recognition base. High volume on easy cases builds the baseline that lets experienced workers spot genuine anomalies. You need a deep sense of normal before abnormal jumps out.

Rhythm and pace. The mix of difficulty is what makes a full shift sustainable. You can't sprint for eight hours because someone removed the walking intervals.

When AI strips out the easy cases, it strips all five functions at once. What remains is 100% hard cases, staffed by people who were hired, trained, and managed for a mix.

A 43-year-old warning

Lisanne Bainbridge identified this mechanism in 1983, in a paper called 'Ironies of Automation.' Her observation was precise: by taking away the easy parts of a task, automation makes the difficult parts of the human operator's task more difficult. She also noted that the most automated systems demand the most skilled operators, because when human intervention is needed, it is always in the worst conditions.

Air France Flight 447 in 2009 made the fatal version visible. Pilots who rarely hand-flew the aircraft couldn't recover from a stall when the autopilot disconnected in bad weather. Two hundred and twenty-eight people died because the humans responsible for the hardest moments had the least practice with them.

AI deployments are now producing the commercial version of the same irony in support queues and back offices. Nobody dies. But the organisational damage compounds quietly, quarter by quarter.

The dashboard can't see it

Most AI deployment dashboards track volume reduction, deflection rate, and cost per ticket. None of them track difficulty per remaining case or cognitive load distribution. The instrumentation is built around the automation's success, not around the experience of the humans who remain.

The commercial incentives reinforce the blind spot. The AI vendor sells on deflection rate. The customer buys on cost reduction. Neither party has a reason to measure what happens to the remaining human work. The business relationship is structured around the automated portion. The damage accumulates in the residual portion.

Look at a typical business case spreadsheet. There is a line for "tickets handled by AI" and a line for "cost savings." There is no line for "increased difficulty of remaining tickets" or "training programme rebuild cost" or "new hiring profile premium." The automated work is modelled in detail. The transformed remainder is invisible.
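The missing line is cheap to compute once someone decides to look. A minimal sketch, assuming hypothetical ticket records with a per-ticket difficulty score (the field names and scoring are illustrative, not from any real dashboard):

```python
# Hypothetical sketch of the metric most deployment dashboards omit:
# the difficulty profile of the tickets that REMAIN after AI deflection,
# compared with the pre-deployment mix. Field names are assumptions.

def mean_difficulty(tickets):
    """Average difficulty score across a list of ticket records."""
    if not tickets:
        return 0.0
    return sum(t["difficulty"] for t in tickets) / len(tickets)

def residual_difficulty(tickets, deflected_ids):
    """Mean difficulty of the tickets NOT handled by the AI."""
    remaining = [t for t in tickets if t["id"] not in deflected_ids]
    return mean_difficulty(remaining)

# Example: the AI deflects the three easiest tickets out of five.
tickets = [
    {"id": 1, "difficulty": 1.0},
    {"id": 2, "difficulty": 1.0},
    {"id": 3, "difficulty": 2.0},
    {"id": 4, "difficulty": 5.0},
    {"id": 5, "difficulty": 5.0},
]
baseline = mean_difficulty(tickets)                            # 2.8
after = residual_difficulty(tickets, deflected_ids={1, 2, 3})  # 5.0
```

The deflection rate here is a headline 60%, while the average difficulty of the residual queue nearly doubles. Both numbers come from the same data; only one of them makes it onto the dashboard.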

The redesign that never happens

The counterargument is real. Some organisations report that removing routine work increases satisfaction. Agents say they're doing more meaningful work, engaging with problems that require actual thought. But this only holds when the organisation deliberately redesigns around the new difficulty profile: restructuring teams, changing hiring criteria, rebuilding training from apprenticeship models instead of script-based onboarding, introducing workload pacing that accounts for the absence of easy cognitive breaks.

Most organisations do none of this. They celebrate the deflection rate, hold headcount flat, and wonder why quality metrics soften six months later. Deploy AI, easy cases vanish, remaining cases get harder, same people with same training face different expectations, quality drops, somebody blames the people or the AI, and everyone misses that the job itself changed underneath them.

Then the second-order costs arrive, two to three quarters after the celebration. Hiring costs rise because you need a more expensive profile. Training costs rise because there are no easy cases to practise on. Attrition rises because the job is relentlessly hard without relief. Quality on hard cases drops because there is no cognitive recovery time. By the time these costs surface, the AI project has been declared a success and the team that deployed it has moved on.

Building for the remainder

There is a product feature nobody builds: a human calibration queue, where AI deliberately routes a fraction of easy cases to human workers. Not for efficiency. For skill maintenance, training, and pacing. Nobody builds it because everyone optimises for maximum deflection. That is what the ROI model rewards.
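The routing logic for such a queue is almost trivial, which is part of the point: the obstacle is the incentive, not the engineering. A hedged sketch, with the calibration rate and the routing function both hypothetical:

```python
import random

# Hypothetical sketch of a "human calibration queue": instead of deflecting
# every easy ticket to the AI, deliberately hold back a fixed fraction for
# human agents -- for skill maintenance, training, and pacing, not efficiency.
CALIBRATION_RATE = 0.10  # fraction of easy tickets kept for humans (a tunable assumption)

def route(ticket, is_easy, rng=random):
    """Return 'ai' or 'human' for a ticket.

    Hard tickets always go to humans. Easy tickets mostly go to the AI,
    but a calibration slice is diverted so agents keep seeing normal cases.
    """
    if not is_easy:
        return "human"
    if rng.random() < CALIBRATION_RATE:
        return "human"  # calibration: an easy rep for a human agent
    return "ai"
```

At a 10% calibration rate, a queue that would have been 60% deflected becomes 54% deflected. The ROI model sees six points of lost savings; what it doesn't see is the apprenticeship layer, the cognitive recovery intervals, and the calibration those six points were buying.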

I think most product teams genuinely don't see this coming. They model the automated portion with care and treat the human remainder as a smaller version of the same job. It isn't. It is a fundamentally different job, and it needs to be designed as one.

Deployment planning for any AI that handles easy work should include the downstream workforce redesign as core scope, not something to sort out when the attrition numbers look odd. How much to automate is the easy question. What kind of work you're leaving behind, and whether anyone is prepared to do it, is the one that determines whether the value holds.

Removing easy work without replacing what it provided is pulling a load-bearing wall because the room looks bigger without it. The room does look bigger. Six months later, someone will have to explain the cracks.
