Stop Sloppypasta

slop·py·pas·ta  n.  Verbatim LLM output copy-pasted at someone, unread, unrefined, and unrequested. From slop (low-quality AI-generated content) + copypasta (text copied and pasted, often as a meme, without critical thought). It is considered rude because it asks the recipient to do work the sender did not bother to do themselves.

You just got an unread message notification.

Maybe it's on Slack (or Teams), a text message, or an email. Maybe you were tagged in Notion or an Office doc.

You open it to find several paragraphs of text - or perhaps a listicle - with all the hallmarks of AI-generated writing: headings, heavy formatting, "it's not X, it's Y" constructions, and em-dashes sprinkled gratuitously throughout.

The person who sent it probably spent about ten seconds on it. They asked a chatbot and forwarded its response to you verbatim without validation or critical review.

But now you're expected to read, vet, and act on this wall of text. When someone forwards text they themselves have not considered, they are asking you to do work they chose not to do. The asymmetric effort makes it rude.


Examples

❌ Example
#strategy

The team is mid-discussion about whether to expand into the German market.

👀:

I asked Claude about this! Here's what it said:

Expanding into the German market presents both significant opportunities and notable challenges. Germany is the largest economy in the EU and offers a robust consumer base. However, businesses should consider the following key factors:

1. Regulatory Environment

Germany has strict regulations including GDPR compliance, local labour laws...

[6 more sections follow. None mention your company, your product, your budget, or reference the discussion already underway.]

buries the live discussion · unrequested · generic

The Eager Beaver

A conversation participant wants to contribute to the topic at hand, so they ask a chatbot and share whatever comes back. The intention is good - they genuinely want to help - but the wall of generic AI text blocks the discussion already underway. Now other participants have to scroll past it to continue, or stop to read and validate it.

It feels helpful to send. It creates work to receive.

❌ Example
you:

Does anyone know why our email open rates have been dropping? We changed the subject line format last month.

👀:

ChatGPT says:

Email open rate declines can be attributed to several factors. These include changes in subject line strategy, sender reputation issues, list hygiene problems, and deliverability concerns. Here are the most common causes:

1. Subject Line Fatigue

If subject lines have become too similar or predictable, subscribers may stop engaging…

[Provides 5 more sections of generic open-rate diagnostics. Does not mention your subject line change, your audience, or your platform.]

irrelevant to the specific question · generic

The OrAIcle

Someone asks a specific question. Another person puts it into a chatbot and pastes the response as the answer.

"ChatGPT says" is the enshittified LLM-era equivalent of LMGTFY (Let Me Google That For You). Shared as a link or a GIF, LMGTFY was easy to ignore, and clear about what it was (sarcastic commentary). Sloppypasta is neither. Recipients are left to figure out whether it's AI generated, whether it's correct, and which part actually answers the question (if it's actually relevant at all). If you ask a person a question, you're looking for their perspective and expertise. In this sense, both LMGTFY and sloppypasta are etiquette failures where sender disregarded the recipient the dignity of the basic human reply.

❌ Example
👀:

Hey team - I did some research on our competitors this week. Here's a summary:

Competitive Landscape Overview

The market is highly competitive, with several established players and emerging challengers. Key competitors offer distinct value propositions across pricing tiers…

[It's a 5-page essay with handwavy assertions and no concrete details. No dates. No sources. No live pricing.]

presented as personal work · no one knows to check · hallucinated details possible

The Ghostwriter

The sender shares AI output as their own work, with no indication a chatbot wrote it. Recipients have no reason to question it, and may act on information that is out of date, incomplete, or simply wrong.

Using AI as a ghostwriter borrows the sender's credibility. If the content turns out to be wrong, that credibility is what gets spent.

Why it matters

| | As a Recipient | As a Sender | Feedback loop |
| --- | --- | --- | --- |
| Effort | Previously, the effort to read was balanced by the effort to write. LLMs make writing "free" while adding a verification burden that increases the effort to read. | Writing requires effort, and that effort builds comprehension. Delegating to LLMs creates cognitive debt by removing the struggle. | The sender's skipped effort becomes the recipient's added effort, and frustration grows with every incident. |
| Trust | LLMs' propensity to hallucinate and to bullshit convincingly means "trust but verify" is broken; all correspondence must be untrusted by default. | What you share directly influences your reputation. Sharing raw LLM output - especially unvetted - burns your credibility. | Trust eroded by sloppypasta is the modern Boy Who Cried Wolf: once burned, recipients doubt everything you send. |

Sharing raw AI output is like eating junk food: it's easy and may feel good, but it's not in your best interest. You damage your relationship with the recipient, and you do yourself a disservice by undermining your own comprehension.

Before LLMs, writing took effort. Authors considered and selected their words with intention, and the time they invested was roughly balanced by the time the audience spent reading. LLMs break this balance: the effort to produce text is now effectively free, but the effort required to read it hasn't changed. The increasing verbosity of LLMs widens the asymmetry further. In some circumstances (like pasting raw LLM output into a chat thread), sloppypasta effectively becomes a filibuster, crowding out the existing conversation and blocking the viewport.

Writing is thinking. The writing process forces the author to work through their thoughts, building comprehension and retention. Multiple studies have found that delegating tasks to LLMs creates cognitive debt: shortcutting thinking with an LLM ultimately reduces both comprehension and recall of the delegated subject.

Before LLMs, trust was the default. Authors wrote from their personal expertise and perspective, and readers could judge an author's understanding of the subject from the coherence of their writing. LLMs generate the most probable next token under an overarching goal of being helpful, which explains their propensity for hallucination (confabulation) and why many people regard them as bullshit generators. Modern LLMs are typically given tools to look up grounding information, which reduces (but does not eliminate) the likelihood of outright fabricated facts in their responses. That still doesn't solve the trust problem: the reader has no way to know what the sender checked and what they didn't. LLM responses therefore cannot be trusted by default, and they compound the effort asymmetry by adding a verification tax on the reader.
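To make "most probable next token" concrete, here is a toy sketch in Python. The probability table is invented purely for illustration (a real model conditions on far more context and a vastly larger vocabulary), but it shows how fluent, confident text can emerge from word statistics alone, with no step that consults reality:

```python
# Toy next-token generator. The "model" is just a lookup table of
# hypothetical next-word probabilities, invented for this illustration.
NEXT_TOKEN_PROBS = {
    ("Our", "Q3"): {"revenue": 0.6, "churn": 0.3, "headcount": 0.1},
    ("Q3", "revenue"): {"grew": 0.7, "fell": 0.2, "stalled": 0.1},
    ("revenue", "grew"): {"strongly": 0.5, "modestly": 0.4, "sharply": 0.1},
}

def greedy_next(context):
    """Return the most probable next token for a two-token context, or None."""
    probs = NEXT_TOKEN_PROBS.get(context)
    return max(probs, key=probs.get) if probs else None

tokens = ["Our", "Q3"]
while (nxt := greedy_next(tuple(tokens[-2:]))) is not None:
    tokens.append(nxt)

print(" ".join(tokens))  # -> Our Q3 revenue grew strongly
# Fluent and authoritative-sounding, but at no point did anything check
# whether Q3 revenue actually grew. Plausibility is what gets optimized;
# truth is incidental.
```

The output reads like a statement of fact, yet nothing in the process ever checked a fact. That gap is exactly what the recipient of sloppypasta is left to close.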

Beyond accuracy, LLMs write authoritatively, with the tone and confidence of an expert. This adds to the reader's burden: they have no way to gauge the sender's actual expertise with the subject matter. The result is a further erosion of trust, because the AI's voice removes the signals recipients previously used to distinguish expertise from plausible-sounding slop.

Formerly, "trust but verify" ruled. Readers trusted until that trust was broken; an author was trustworthy or they weren't. Shared LLM output, however, obfuscates the chain of trust. Did the prompter do the due diligence to validate the LLM's response? If problems or errors are discovered, who is to blame, the prompter or the AI? Was it an oversight, a missed verification step, or was verification skipped altogether? Because the recipient can't know what has or hasn't been verified, they must treat everything as untrusted. And just like the Boy Who Cried Wolf, once trust is broken, the doubt spreads to every future message from the sender.

Assumptions of balanced effort and presumed trust are no longer guaranteed in a post-LLM world. Sloppypasta creates a compounding negative feedback loop: the sender forfeits learning and credibility while the recipient burns effort and loses trust. Receiving raw AI output feels bad because of the cognitive dissonance of having these assumptions violated.

"For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity." β€” Alex Martsinovich, It's rude to show AI output to people
"Cognitive effort β€” and even getting painfully stuck β€” is likely important for fostering mastery." β€” Anthropic, How AI assistance impacts the formation of coding skills
"A polished AI response feels dismissive even if the content is correct" β€” Blake Stockton, AI Writing Etiquette Manifesto
"I think it's rude to publish text that you haven't even read yourself. I won't publish anything that will take someone longer to read than it took me to write." β€” Simon Willison, Personal AI Ethics

Simple guidelines

Read.

Read the output before you share it. If you haven't read it, you don't know whether it's correct, relevant, or current.

Delegating work to AI creates cognitive debt. Engaging with the results is damage control for your own understanding.

Verify.

Check the facts before you forward them. Anything you forward carries your implicit endorsement; your reputation depends on the quality of what you share.

LLMs are trained to "be helpful" and will produce outdated facts, wrong figures, and plausible nonsense rather than leave a request unanswered. Further, an LLM is inherently out of date: its knowledge cutoff means it knows the state of the world only as of when its training data was collected, typically months ago.

Distill.

Cut the response down to what matters. Distilling the generated response to its useful essence is your job.

LLMs are incentivized to use many words where few would do: providers who price by the token have a per-token incentive to train chatty models, and research shows that longer, heavily formatted responses are often preferred as more engaging.

Disclose.

Share how AI helped.

If you've read, verified, and edited it, send it as yours, preferably with a note that you worked with AI assistance. If you're sharing raw output, say so explicitly. In both cases, it may be useful to share your prompt and how you worked with the AI to get the final output.

Disclosure restores the trust signals that sloppypasta destroys and tells the recipient what you checked and what they may be on the hook for.

Share only when requested.

Never drop unsolicited AI output into a conversation.

Remember that AI generation creates effort asymmetry, and be respectful of those you share with. Sloppypasta delegates the full burden of reading, verifying, and distilling to a recipient who didn't ask for it and may not realize the effort required of them.

Share as a link.

Share AI output as a link or attached document rather than dropping the full text inline.

In messaging environments, a large paste takes over the viewport and crowds out the existing conversation. A link lets the recipient choose when - and whether - to engage, rather than having that choice imposed on them.

AI capabilities keep increasing, and using AI to draft, brainstorm, or accelerate your work will only become more useful. However, using AI should not make your productivity someone else's burden. New tools require new manners.

Use AI to accelerate your work or improve what you send.
Don't use it to replace thinking about what you're sending.

Further reading