AI workslop
Image credit: Pexels

Two-thirds of employees say they spend up to six hours a week fixing low-quality, AI-generated work, according to a new study.

As organisations adopt AI tools at increasing speed, a growing number of workers say quality standards are quietly slipping, creating a new and often invisible burden at work. New research from resume platform Zety has revealed that low-quality AI-generated output, better known as “workslop,” is now a routine part of many employees’ workloads, costing time, increasing stress and eroding morale.

The result of this race to integrate AI isn’t efficiency; it’s rework. In short, low-quality AI output is creating a new workplace tax, and employees are paying it in time, stress and burnout.

RISE OF WORKSLOP

Zety’s Rise of Workslop Report suggests most workplaces haven’t fully accepted low-quality AI output, but tolerance is growing. While 39% of employees say AI errors are treated as completely unacceptable and corrected, a significant share admit that poor-quality work is often overlooked when deadlines are met.

Speed, it seems, is starting to outrank quality. Key findings from the report reveal that:

  • Workslop is a widespread issue: One in five employees say low-quality AI work is frequently ignored if deadlines are hit. Nearly one-third say it’s noticed but tolerated.
  • Time loss is significant: 66% of employees spend up to six hours each week correcting AI-generated errors.
  • Rising strain of workslop: Workers link “workslop” to higher stress (29%), lower morale (25%), reduced productivity (25%) and burnout (21%).
  • Generational divide: 53% believe younger workers are more tolerant of AI-generated shortcuts than older colleagues.

SHIFTING STANDARDS

Workplace attitudes towards AI-generated work are changing, noted the report. According to surveyed workers, AI-generated errors are:

  • Completely unacceptable and corrected: 39%
  • Somewhat unacceptable but tolerated: 31%
  • Overlooked if deadlines are met: 21%
  • Fully acceptable, with speed prioritised: 9%

Outright approval remains limited. But tolerance is spreading, especially in fast-moving environments, notes the report.

HIDDEN COSTS OF WORKSLOP

When flawed AI work shows up, nearly half of employees (49%) fix it themselves rather than flag it or reject it. That invisible labour rarely appears in performance reviews. But it adds up, noted the study.

Two-thirds say they spend up to six hours a week correcting AI-related mistakes. But the impact runs deeper. The study revealed that:

  • 70% say it harms stress levels.
  • 67% say it damages productivity.
  • 65% say it lowers morale.
  • 53% say it increases burnout risk.

“Workslop creates a layer of invisible labour that rarely shows up in job descriptions,” said Jasmine Escalera, career expert at Zety. “Employees are quietly fixing mistakes just to keep work moving. Over time, that unrecognised effort leads to exhaustion and disengagement.”

GROWING RISKS

The consequences extend beyond individual frustration. Workers say the biggest risks include:

  • Wasted time and lost productivity (36%).
  • The spread of misleading or false information (30%).
  • Reputational damage (24%).

The report highlights a mounting tension in modern workplaces. AI may accelerate output, but humans are absorbing the quality cost. If speed becomes the only metric that matters, “workslop” may become less a glitch and more the norm, with inevitable long-term consequences for business.

HOW TO MANAGE & AVOID WORKSLOP

If low-quality AI output is costing employees hours each week, the issue isn’t just the tool; it’s the leadership and systems around it. “Workslop” emerges when speed is rewarded, oversight is vague and accountability is blurred. Avoiding it requires structure, clarity and cultural discipline.

Here’s how leaders can get ahead of the problem:

1. Set clear quality standards

AI should operate within defined guardrails. Organisations need clear policies outlining when AI can be used, what level of human review is required and who has final sign-off. For high-risk work – such as client materials, financial reporting and regulatory filings – senior human review must be mandatory, not optional. Quality cannot be left to AI or to junior judgment alone.

2. Invest in practical AI training

Most AI errors aren’t malicious; they stem from misunderstanding the tools. Employees need hands-on training that goes beyond “how to use the tool” and focuses on how to evaluate output: spotting AI hallucinations, verifying data, refining prompts and editing for tone and context. AI literacy reduces correction time and increases confidence.

3. Make AI use transparent

Hidden AI use creates hidden accountability gaps. Encourage teams to be transparent and disclose when AI has been used in drafts or research. This normalises usage while making review processes clearer.

4. Recognise and track invisible labour

Nearly half of workers say they fix AI errors themselves. That quiet correction work adds up and contributes to burnout. Managers should actively ask how much time is being spent reviewing or repairing AI output and build that time into project planning. Editing and verification are skilled work, not administrative afterthoughts.

5. Rebalance incentives towards accuracy

If speed is the only KPI, quality will erode. Performance metrics should include accuracy, clarity and reliability. Reward teams for thoughtful reviews and well-checked output, not just rapid turnaround. Slowing down in critical moments to review work prevents larger reputational risks later.

6. Clarify accountability

Someone must own the final version of any AI-assisted work. Without defined responsibility, errors cascade and frustration grows, often landing disproportionately on junior staff. Clear senior ownership is vital.

7. Create space to question AI output

Employees should feel safe challenging low-quality or inappropriate AI use. Leaders can encourage open discussions about limitations, recurring issues and pressure to rely on automation. A culture that allows questions prevents quiet resentment and declining standards.

8. Treat AI as a tool, not a replacement

AI accelerates drafting, but it should not replace judgment, context or ethical reasoning. The most resilient organisations position AI as an assistant that supports human expertise, and not as a substitute for it.

CONCLUSION: THE BIGGER PICTURE

“Workslop” isn’t a tech failure; it’s a leadership one. It shows up when speed is preferred over standards, when fixing mistakes is invisible and when no one truly owns the outcome. In that environment, AI doesn’t save time; it quietly generates more unnecessary work.

The organisations that avoid “workslop” are disciplined about the basics. They protect quality, make ownership explicit, and treat human oversight as non-negotiable. That’s what turns AI into leverage — not a liability.
