Reaching for vague grandeur instead of concrete language: "tapestry", "landscape", "realm", "paradigm", "multifaceted", "nuanced". A Forbes editor: "I no longer believe there's a way to innocently use the word tapestry in an essay." AI prefers abstraction because it's statistically safer than specificity.
The Pattern
AI defaults to abstract, elevated vocabulary when concrete words would do. "Tapestry" instead of "mix." "Landscape" instead of "situation." "Paradigm" instead of "approach."
Abstraction is statistically safe. Specific claims can be wrong. Abstract claims almost never are. "The evolving landscape of digital innovation" is unfalsifiable -- and that's the point.
PNAS research found "tapestry" at 155x the human rate in GPT-4o output. "Camaraderie" at 162x. These aren't overused words. They're alien-frequency words.
A Forbes editor: "I no longer believe there's a way to innocently use the word tapestry in an essay." AI contaminated the word so thoroughly that it reads as a tell no matter who wrote it.
And it goes beyond individual words. AI will describe a restaurant review as "a multifaceted exploration of the culinary landscape" when it means "a review of a Thai place."
The Research
"Do LLMs write like humans?" (PNAS, Feb 2025) compared grammatical features between GPT-4o output and human writing. Abstract nouns were wildly overrepresented. "Tapestry" at 155x human rate. "Camaraderie" at 162x. Not elevated -- in a different statistical universe.
VU Amsterdam's ALP Guide found AI text uses roughly a quarter as many unique words as human writing. The same small rotation of grand-sounding words appears over and over, instead of the specific, varied vocabulary humans reach for.
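That lexical-diversity gap is usually measured as a type-token ratio: unique words divided by total words. A minimal sketch with toy sentences (not the ALP Guide's actual corpus or method):

```python
import re

def lexical_diversity(text):
    """Type-token ratio: unique words (types) / total words (tokens)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens)

# Toy samples: repetitive "AI-style" prose vs. varied concrete prose.
ai_like = "the rich tapestry of the evolving landscape of the digital landscape"
human_like = "the review praised two Thai dishes, panned the noodles, skipped dessert"
```

On real corpora the ratio is computed over matched-length samples, since longer texts mechanically repeat more words; the toy strings here are the same length to sidestep that.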
The Algorithmic Bridge coined "abstraction trap" for this -- AI escalating every observation to universal significance through abstract vocabulary. A bug report becomes "a window into the challenges of modern software engineering."
Instruction tuning makes it worse. The PNAS study found that instruction-tuned models diverge more from human writing than base models. The "helpful assistant" persona actively encourages abstract, elevated language -- the training reward signal pushes toward grandiosity.
Caught in the Wild
A Forbes editor declared "tapestry" dead in essays. AI vocabulary contamination has made the word toxic regardless of who uses it or why. Human writers now self-censor their own vocabulary to avoid sounding like a chatbot.
Wikipedia editors track vocabulary by model generation. GPT-4 era (2023-mid 2024): peak "tapestry" and "landscape." GPT-4o: "highlighting" and "showcasing." GPT-5 narrowed further. The AI Cleanup team's generational tracking is itself a research contribution -- a living fossil record of model vocabulary drift.
Kobak et al. found 379 excess style words across 15M PubMed abstracts. "Landscape," "paradigm," "realm" -- all spiking in lockstep with ChatGPT's release date. The contamination got so bad it distorted statistical analyses of language trends in scientific publishing.
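The "excess word" measure boils down to comparing a word's observed post-ChatGPT frequency against a counterfactual extrapolated from earlier years. A hedged sketch of that arithmetic, with illustrative numbers rather than Kobak et al.'s data:

```python
def excess_usage(freq_observed, freq_expected):
    """Excess frequency for one word: the gap (delta) and ratio (r)
    between its observed frequency in a post-ChatGPT year and a
    counterfactual extrapolated from pre-ChatGPT trends.
    A large delta or r flags the word as an excess style word."""
    delta = freq_observed - freq_expected
    r = freq_observed / freq_expected
    return delta, r

# Illustrative: a word appearing in 0.4% of abstracts in 2024
# where the pre-2023 trend predicted 0.1%.
delta, r = excess_usage(0.004, 0.001)
```

Words like "landscape" and "realm" stand out on this measure precisely because their pre-2023 baselines were stable for decades, so any post-release spike is hard to attribute to anything but model output.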
Sources