Beyond the Hype: What AI Slop Reveals About the Next Wave of B2B AI
AI has reshaped our daily lives at a remarkable speed. With a single prompt, tasks that once required hours can now be completed in seconds. Naturally, we assumed this would make work faster and more efficient.
Yet inside organizations, the reality feels different.
A single phrase has emerged to capture this growing gap between expectation and experience: AI slop. This term allows us to examine the dynamics shaping today’s turbulent AI transition.
AI Slop Started Online
— But It Didn't Stay There
Last December, Merriam-Webster named “slop” its Word of the Year for 2025. Traditionally, the term referred to muddy residue or wet food scraps, often fed to animals. Its newly added definition reflects a more contemporary concern.
Slop has come to refer to low-quality, often meaningless output generated by AI. The fact that this term has moved beyond niche discourse and into mainstream recognition is telling.
Much of the AI-generated content people now encounter on a daily basis is mediocre at best, and users are becoming increasingly aware of it.
From Spam to Slop:
How AI Flooded the Attention Economy
Early examples of AI slop flooded familiar digital spaces like social media feeds and search results. Much like spam once overwhelmed email inboxes, context-free AI-generated images began to crowd the web.
These images often have little connection to users’ actual intent. Their primary function is not to inform, but to capture attention and drive traffic.
As The Guardian warned in 2024, spam-like AI content was already beginning to overwhelm the internet. As content optimized for the attention economy continues to accumulate, the web gradually shifts from a library of information into a space dominated by noise.
When Slop Moves Into the Workplace
The more serious issue is that this phenomenon no longer remains confined to the internet. Since 2023, companies have aggressively adopted AI in pursuit of higher productivity.
But AI does not always deliver polished, decision-ready results. Instead, intermediate outputs like rough drafts and half-formed summaries have multiplied rapidly. Harvard Business Review labeled this growing phenomenon AI workslop.
Today, AI slop appears in internal memos, report drafts, email responses, and meeting summaries.
On the surface, these outputs appear useful. In practice, however, they shift responsibility. The burden of interpretation, verification, and final judgment falls back on the reader. This is where inefficiency begins.
The Root Cause:
We're Delegating to AI the Wrong Way
Many people expect AI to function as a near-perfect decision-making proxy. We want it to handle routine tasks so we can focus on higher-level thinking. Some even hope it will highlight insights we might otherwise overlook.
This mindset is reflected in Silicon Valley’s recent trend toward archiving. Operating on the assumption that more data leads to better decisions, teams record meetings, transcribe conversations, and feed every possible document into AI systems.
The Jagged Frontier:
Where AI Competence Breaks Down
Ironically, this attempt to delegate more work has often produced more workslop instead.
A concept introduced in research from Harvard Business School helps explain why: the jagged frontier. Andrej Karpathy has also described this uneven boundary as one of the core challenges large language models must overcome.
AI capability does not improve evenly across tasks. Its competence traces a jagged boundary: in some domains, AI performs at the level of top human experts, while in closely adjacent tasks it can produce surprisingly poor results, failing even at a basic level.
More importantly, this boundary does not align with human intuition. For instance, AI can structure a consulting report draft clearly and organize arguments logically. But when asked to synthesize quantitative data with nuanced interview notes to extract a decisive, context-sensitive insight, it may draw flawed conclusions.
To humans, both tasks appear to involve similar forms of analysis. To AI, the first falls within well-learned patterns, while the second may lie beyond its jagged edge of competence. This mismatch between expectation and capability is where slop emerges.
The Automation Gap:
Why the Tasks We Want to Delegate Are the Hardest to Automate
A research team at Stanford University examined which types of tasks AI has been most successful at automating so far. Their findings, summarized in a widely cited framework, highlight two task zones that are especially important.
🟢 Automation Green Light Zone
Tasks such as idea generation, marketing copy drafts, and straightforward summarization fall into this category. AI performs them quickly and effectively, and minor errors are rarely consequential.
🟡 R&D Opportunity Zone
This zone includes tasks requiring subtle organizational nuance, interpretation under incomplete information, or complex judgment.
These tasks are cognitively demanding, and people are eager to delegate them to AI. Yet this is precisely where AI performance remains fragile and where slop accumulates.
The irony is clear. The tasks people most want to automate tend to sit in the very zone where AI is not yet reliable. The research team labeled this area the R&D opportunity zone, signaling that further development is required before AI can handle these tasks dependably.
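One practical response to this gap is to decide, before any delegation happens, which zone a task falls into. The sketch below is a minimal, hypothetical illustration of that gate; the task categories and routing labels are assumptions for the example, not part of the Stanford framework itself.

```python
# Illustrative sketch: gate AI delegation on a hand-maintained task-zone map.
# The categories and labels below are hypothetical examples.

GREEN_LIGHT = {"idea_generation", "marketing_copy_draft", "simple_summary"}
RND_ZONE = {"synthesize_interviews", "ambiguous_judgment", "org_nuance"}

def route_task(task_type: str) -> str:
    """Decide how a task should be handled before any AI call is made."""
    if task_type in GREEN_LIGHT:
        return "delegate_to_ai"       # fast, low-stakes: AI output usable as-is
    if task_type in RND_ZONE:
        return "human_led_ai_assist"  # fragile zone: AI drafts, human owns judgment
    return "human_review_required"    # unmapped task: default to caution
```

The point of the sketch is the default branch: anything not explicitly known to sit inside the green-light zone gets human oversight, rather than the other way around.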
When AI is pushed beyond its effective boundary, it produces outputs that sound convincing but lack substance. That is slop.
The Missing Layer:
Why AI Alone Isn't Enough
The Real Problem Is Workflow Design, Not Data
The root problem is not a lack of data. It is workflow design. Workslop emerges when workflows are built without accounting for AI’s capability boundaries. When an AI service cannot reliably interpret user intent or contextual nuance, it produces outputs that look finished but leave the hard interpretive work undone.
To produce meaningful results, systems must first break user intent down into granular categories and decision points. Only then can large volumes of information be reorganized into coherent insights.
Many current AI services focus on automating only the front end of work, such as recording, drafting, and summarizing, while neglecting the contextual reasoning layer. As a result, responsibility for completing the meaning shifts back to the user. The output feels unfinished because, in many cases, it is.
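The contextual layer described above can be sketched as a pipeline that resolves intent before anything is generated. Everything in this example is a hypothetical illustration: the function names, the intent fields, and the crude keyword check stand in for a real elicitation or retrieval step.

```python
# Hypothetical sketch of a workflow that resolves user intent and context
# BEFORE generation, instead of summarizing first and leaving interpretation
# to the reader. All names and categories here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Intent:
    goal: str            # what the output is for
    audience: str        # who has to act on it
    decision_point: str  # the judgment the reader would otherwise make alone

def resolve_intent(request: str) -> Intent:
    # In a real system this would be an elicitation step (clarifying questions
    # or retrieval of prior context), not a keyword guess.
    if "board" in request:
        return Intent("recommend_go_no_go", "executives", "approve budget")
    return Intent("inform", "team", "none")

def generate_output(request: str) -> str:
    intent = resolve_intent(request)
    if intent.decision_point == "none":
        return f"summary for {intent.audience}"
    # Only generate once the decision the output must support is explicit.
    return f"recommendation addressing '{intent.decision_point}' for {intent.audience}"
```

The design choice worth noticing is the ordering: generation is the last step, gated on an explicit decision point, so responsibility for interpretation is not silently handed back to the reader.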
Not All Slop Is Equal: How to Evaluate AI Output by Domain
From a provider’s perspective, not every instance of workslop carries the same weight. It would be too simplistic to treat all incomplete AI outputs as equally problematic. Their significance depends on the nature of the work itself.
Consider software development. Code is fundamentally all or nothing. If it does not run, it has little practical value. In this domain, output that appears plausible but fails to function is pure slop.
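A contrived example makes the all-or-nothing property concrete. The two functions below look equally plausible in a code review, but only one survives its test; the other is exactly the kind of convincing-but-broken output the article calls slop.

```python
# Two equally plausible-looking implementations of median(); only one is correct.

def median_broken(xs):
    xs = sorted(xs)
    return xs[len(xs) // 2]  # reads fine, but wrong for even-length lists

def median_correct(xs):
    xs = sorted(xs)
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

assert median_correct([1, 2, 3, 4]) == 2.5
assert median_broken([1, 2, 3, 4]) == 3  # plausible output, wrong answer
```

Nothing about the broken version signals failure until it is executed, which is why, in software, a running test is the boundary between draft and slop.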
Now consider video editing. Trial and error is intrinsic to the process. Intermediate outputs are part of exploration rather than failure. Producing multiple imperfect versions and observing which cuts or compositions resonate with users is itself a learning mechanism. Even rough drafts can contribute to learning and iteration.
The real question, then, is not how much AI generates, but under what conditions imperfection is acceptable. Without thoughtful structural design, faster generation simply compounds technical debt.
History Rhymes:
What the Quantified Self Tells Us About AI's Next Phase
The rise of AI slop echoes an earlier trend from the healthcare world: the Quantified Self.
As wearable devices became widespread, people gained the ability to measure steps, heart rate, sleep patterns, and more. But the mere act of measurement did not automatically make anyone healthier. The essential question “So what?” remained unanswered. Insight and meaningful action require interpretation, not just data.
Today’s situation reflects a similar pattern. AI has dramatically lowered the cost of recording and generating information, but it has not reduced the cost of deciding what truly matters or assigning meaning to it. In many cases, the opposite may be happening. As slop accumulates, more attention is required to identify what is genuinely important.
AI slop, therefore, reflects a shift away from an era defined by how fast and how much we can generate. The next phase will be defined by how thoughtfully we design context and exercise judgment.
If you have perspectives or experiences to add to this discussion, we welcome the conversation.
Kakao Ventures continues to support startups that challenge convention and strive to change the world.