Generative AI tools promise speed, polish and efficiency. Yet a new study by BetterUp Labs in collaboration with Stanford Social Media Lab warns that careless use of AI is producing a growing workplace hazard: “workslop.”

Workslop refers to AI-generated output that appears impressive but lacks the depth or context required to advance a project. This might be a sleek presentation that avoids the hard questions, a long report with no real insight, or a block of code that arrives without the explanation needed to use it. While it may save the sender time, it often creates more work for colleagues downstream, who must interpret, correct, or redo it.

The consequences go beyond wasted hours. The research highlights a striking social cost: Colleagues judge senders of AI workslop harshly. Among survey respondents, 37 per cent said they perceived the sender as less intelligent, 42 per cent as less trustworthy, and half saw them as less creative, capable and reliable than before. One third of employees also reported being less likely to want to collaborate with that person again.

This perception gap poses a reputational risk for leaders and managers. Even a single instance of sending poor-quality AI output can erode credibility within a team. A director in retail who received workslop explained: “I had to waste more time following up on the information and checking it with my own research. I then had to waste even more time setting up meetings with other supervisors to address the issue. Then I continued to waste my own time having to redo the work myself.”

These frustrations compound into what the researchers call the “workslop tax.” On average, employees reported spending nearly two hours dealing with each incident of low-quality AI work, an invisible cost the study estimates at $186 per employee per month. For a company with 10,000 employees, the researchers put the total at more than $9 million annually in lost productivity.

Beyond the financial implications, the damage to professional trust is harder to quantify. Once colleagues start viewing someone as unreliable or lazy, reversing that perception can take significant effort.

The message for leaders is clear: Careless use of AI is not a shortcut – it is a reputational liability. To safeguard collaboration and trust, the research urges organisations to set guardrails on AI use, model thoughtful application, and encourage what BetterUp calls a “pilot mindset,” where employees use AI to enhance creativity and clarity rather than to avoid work.

As AI becomes embedded in the workplace, the professional cost of “workslop” will not be measured only in hours lost, but also in how colleagues view your competence and leadership. In today’s AI-powered offices, reputation and trust remain firmly human.
