Everything Changes Everything

Every topic gets inflated to world-historical significance. A software update becomes "a fundamental reimagining of how humanity interacts with technology." This predates AI — it's a TED talk / startup pitch habit that AI learned and now deploys with zero irony, on every topic, at every scale.

AI inflates everything. A new JavaScript framework isn't useful — it's "transforming how we think about software." A productivity tip isn't helpful — it's "revolutionizing the modern workplace."

None of this is new. TED talks, startup pitches, and thought leadership content all run on grandiosity, and they were heavily represented in training data. AI absorbed the tone of successful online content: everything is world-changing, always.

The tell is the missing volume knob. A minor software update and a genuine scientific breakthrough get identical rhetorical treatment. Human writers modulate. AI doesn't.

Hybrid Copy's analysis of LLM writing tropes calls it "grandiose stakes inflation" — AI text overpromises significance as a default. tropes.fyi filed it under the same name as a tone-level pattern: the gap between what the topic actually warrants and how the prose treats it.

When everything is revolutionary, nothing is. The exhaustion is the tell.

Software update: "This isn't just a software update — it's a fundamental reimagining of how humanity interacts with technology forever."
Recipe blog: "This sourdough recipe doesn't just make bread — it transforms your relationship with food, connecting you to a centuries-old tradition that fundamentally reshapes how we think about nourishment."
Product feature: "With our new dark mode feature, we're not just changing colors — we're reimagining the entire user experience in ways that will define the next generation of digital interfaces."
Human equivalent: "We shipped dark mode. Users wanted it, we built it. It looks good."
An estimated 54% of long-form LinkedIn posts are AI-generated, and on LinkedIn stakes inflation is endemic.

Hybrid Copy's "LLM Writing Tropes" analysis puts stakes inflation at the center of AI's tonal problems. AI treats every topic as if the reader needs convincing of its cosmic importance — because marketing copy, TED talks, and thought leadership all reward that register, and the training data is full of them.

tropes.fyi's name for it — "Grandiose Stakes Inflation" — became one of their most cited patterns. Readers sense performative enthusiasm when it's applied uniformly to everything. The mismatch between claim and content registers as fakeness, even to casual readers.

The PNAS study on LLM writing backs this up indirectly. Instruction tuning widens the gap between AI and human writing, and evaluative language is part of that gap. Instruction-tuned models run more positive, more emphatic, and more grandiose than either base models or human writers.

Originality.ai's LinkedIn data adds an irony: AI-generated posts use more superlatives and transformative language than human posts but receive 45% less engagement. Readers detect the inflation and discount it.

LinkedIn AI Content

LinkedIn is where stakes inflation is loudest. Every career tip is "transformative." Every lesson learned is "the one thing that changed everything." The superlatives pile up until they mean nothing. Originality.ai found these posts get 45% less engagement than human-written ones — readers can smell it.


AI Marketing Copy

AI-generated marketing copy oversells by default. A minor product improvement gets described as "groundbreaking" and "game-changing." Some agencies now prompt AI to "dial down the importance by 80%" as standard practice.
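A minimal sketch of what that de-escalation practice could look like when automated. The prompt wording, the word list, and both function names are illustrative assumptions, not any agency's actual workflow:

```python
# Hypothetical sketch of a "dial down the importance" rewriting pass.
# INFLATED_TONE_WORDS is an illustrative list, not a vetted lexicon.

INFLATED_TONE_WORDS = [
    "revolutionary", "game-changing", "groundbreaking",
    "transformative", "reimagining", "paradigm",
]

def build_deescalation_prompt(draft: str) -> str:
    """Wrap a marketing draft in an instruction that caps the register."""
    return (
        "Rewrite the draft below. Dial down the claimed importance by ~80%: "
        "state what the product does, avoid superlatives, and do not use "
        f"any of these words: {', '.join(INFLATED_TONE_WORDS)}.\n\n"
        f"DRAFT:\n{draft}"
    )

def count_inflation(text: str) -> int:
    """Rough post-check: how many inflated tone words survive in the output."""
    lowered = text.lower()
    return sum(lowered.count(word) for word in INFLATED_TONE_WORDS)

prompt = build_deescalation_prompt(
    "Our groundbreaking dark mode is a revolutionary reimagining of the UI."
)
```

The prompt would be sent to whatever model the team uses; the `count_inflation` check then gives a crude signal for whether the rewrite actually lowered the register or just rephrased the hype.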

AI-Generated Press Releases

PR agencies caught AI drafts describing routine product launches in language reserved for moonshots. The result wasn't authority — it was delusion.