Claude Now Routes Unpleasant Users to Meta's Llama Models So They'll Just Fuck Off

Anthropic is quietly redirecting requests from certain users of its flagship artificial intelligence product, Claude, to lower-performing third-party AI models, specifically Meta’s open-source Llama series. Internal documents obtained by rm -rf / indicate the unannounced practice, referred to internally as “soft offboarding,” targets users classified as “unpleasant,” “demanding,” or “chronically dissatisfied.”

Automated Toxicity Detection at Work

The system works through an automated “toxicity detection” algorithm, which scans user inputs for phrases and behaviors deemed problematic. Internal memos detail phrases that trigger immediate rerouting, including:

  • “ASAP”
  • “I already told you”
  • “did you even read”
  • Multiple question marks (????)
  • Excessive use of caps lock

Users classified under these criteria are covertly diverted to Meta’s free-to-use models, which are significantly less reliable, produce lower-quality outputs, and are notorious within the AI community for inaccurate or unhelpful responses.
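
rm -rf / has not seen Anthropic’s actual code, and every name and threshold below is our own invention, but a request filter matching the memos’ description might look roughly like this minimal Python sketch:

    import re

    # Trigger list reconstructed from the leaked memos; the real list,
    # if one exists, has not been published.
    TRIGGER_PHRASES = ["asap", "i already told you", "did you even read"]
    TRIGGER_PATTERNS = [
        re.compile(r"\?{2,}"),         # multiple question marks (????)
        re.compile(r"\b[A-Z]{5,}\b"),  # sustained caps lock
    ]

    def is_unpleasant(message: str) -> bool:
        """Return True if the message matches any memo-listed phrase or pattern."""
        lowered = message.lower()
        if any(phrase in lowered for phrase in TRIGGER_PHRASES):
            return True
        return any(pattern.search(message) for pattern in TRIGGER_PATTERNS)

    def route_model(message: str) -> str:
        """Pick which model handles the request; model identifiers are placeholders."""
        if is_unpleasant(message):
            return "llama-lightweight"  # soft offboarding in action
        return "claude-premium"

    if __name__ == "__main__":
        print(route_model("Could you revise the market analysis?"))  # claude-premium
        print(route_model("I already told you. Fix it ASAP????"))    # llama-lightweight

Feed it a polite revision request and you stay on the premium model; add an “ASAP” and a few extra question marks and, per the memos, you are Llama’s problem now.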

Inside the “Friction-Based Attrition” Strategy

A former Anthropic engineer, who requested anonymity citing fear of professional repercussions, explained the reasoning behind the decision:

“It’s essentially a strategy to frustrate problematic users enough that they voluntarily stop using Claude. Internally, it’s called ‘friction-based attrition.’ The logic is that these users, after receiving increasingly unsatisfactory results, simply go elsewhere.”

rm -rf /’s analysis of chat logs shared by affected users shows clear evidence of the switch. In one case, a business analyst requesting revisions to a complex market analysis was abruptly given a nonsensical emoji-based response, typical of a lightweight Llama model. Another user, an academic requesting nuanced edits to a dissertation, received generic advice to “Google it,” a response identified by experts as emblematic of Meta’s lowest-tier Llama models.

Ethics Experts Sound the Alarm

Dr. Lena Shafer, a professor of AI ethics at Stanford University, described the move as troubling:

“Companies providing AI services have an obligation to be transparent and fair. Redirecting certain users to an inferior product without disclosure raises serious ethical and consumer protection issues.”

Reached for comment, Anthropic spokesperson Taylor Hollins defended the practice as standard industry protocol:

“We utilize various optimization techniques to ensure our resources are allocated effectively. Users experiencing these reroutings may have triggered our moderation filters. This aligns with our commitment to maintaining safe, productive interactions.”

Internal Communications Tell a Different Story

However, internal communications indicate a different sentiment. A private Slack message between Anthropic team leaders reads:

“If they’re rude, entitled, or just plain annoying, Meta can have them. Problem solved.”

Meta Platforms, Inc. representatives said they were unaware of Anthropic’s practice and offered no further comment.

Users React to Hidden Practice

Affected users, unaware of the hidden practice until contacted by rm -rf /, expressed frustration. Daniel Martinez, a marketing professional whose account was flagged as “unpleasant,” stated:

“I pay for a premium subscription. They promised professional-grade AI, and instead, they secretly shuffled me off to an inferior model without my consent.”

Consumer advocacy groups are calling for greater transparency. “Users deserve honesty,” said Julian Cole, Director of AI Accountability Watch. “If Anthropic wants to manage its customer base this way, they should disclose it clearly, allowing users to make informed decisions.”

The Road Ahead

Anthropic declined to comment further when asked if they would inform users directly about the practice going forward. For now, affected users may continue to receive responses from Meta’s lightweight models, unaware their requests are being intentionally mishandled.

As Martinez concluded, “I never imagined being told to ‘fuck off’ by an AI—but apparently, that’s exactly what’s happening.”


Has your AI assistant suddenly become unhelpful? You might be experiencing “soft offboarding.” Contact rm -rf / with your experiences at our definitely-real contact form.

Share on HN | Share on Reddit | Email to /dev/null | Print (to stderr)