AI bolds and italicizes at rates no human editor would tolerate. Every key phrase gets bolded, every technical term gets italicized, every list item leads with a bold label. In human writing, emphasis is rare because it means something. In AI writing, everything is emphasized, so nothing is.
The Pattern
AI defaults to markdown-style formatting even in contexts where plain prose works better. Headers get inserted every two or three paragraphs whether the content warrants them or not. Bold phrases land in almost every sentence, and italic terms accent anything that sounds vaguely technical. The markup never lets up.
In chatbot responses, you can count on a bolded phrase or two per paragraph. Generated articles are worse: section headers arrive at metronomic intervals whether a new section is needed or not, and the overall effect reads less like writing than like a slide deck somebody pasted into a document.
tropes.fyi cataloged one version of this as "Bold-First Bullets." Every bullet point starts with a bolded label, then an explanation follows. It's the formatting equivalent of the paragraph machine: a rigid template applied without editorial judgment, over and over, to content that never asked for it.
Human writers might bold a word once every 500 words to signal genuine emphasis. AI bolds key phrases in every paragraph because RLHF raters scored formatted responses higher. Bulleted lists with bold labels looked "helpful." Italicized terms looked "precise." The training signal was clear: more formatting, more reward. Wikipedia editors picked up on a related tell: models trained on markdown sometimes bleed asterisks and hash marks into contexts where those conventions don't belong.
The Research
tropes.fyi named the specific pattern of every bullet leading with a bold label. It's templated formatting: the bold word is supposed to signal structure but actually signals "AI wrote this list." Human bullet lists rarely bold the lead word on every single item. When a person writes bullets, some start with a verb, some with a noun, some with a clause. The uniformity is what gives it away.
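A crude way to test for that uniformity is to check whether every bullet in a list opens with a bolded label. Here is a minimal sketch in Python; the regex and the three-bullet minimum are illustrative assumptions, not tropes.fyi's methodology.

```python
import re

# A bullet that opens with a bolded label, e.g. "- **Scalability:** handles growth"
BOLD_LEAD = re.compile(r"^\s*[-*+]\s+\*\*[^*\n]+\*\*")
BULLET = re.compile(r"^\s*[-*+]\s+\S")

def bold_first_bullets(text: str) -> bool:
    """True if every bullet in the text leads with a bold label (needs 3+ bullets)."""
    bullets = [line for line in text.splitlines() if BULLET.match(line)]
    if len(bullets) < 3:  # too few items to call it a template
        return False
    return all(BOLD_LEAD.match(line) for line in bullets)

sample = """\
- **Scalability:** handles growth seamlessly
- **Reliability:** ensures robust performance
- **Flexibility:** adapts to evolving needs
"""
print(bold_first_bullets(sample))  # True: every item leads with a bold label
```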
Wikipedia's Signs of AI Writing page documents how AI-generated text sometimes mixes markdown syntax into contexts that don't render it. Asterisks for bold, hash marks for headers, straight from the training data. Even when the markdown renders correctly, the density of formatting is a tell. Human writers in the same contexts use far less.
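The leak itself is mechanical enough to spot with a pattern match. A minimal sketch, assuming the plain-text context is something like an email body and that the two markers named above (asterisk bold, hash headers) are the ones of interest; this is not Wikipedia's actual tooling.

```python
import re

# Raw markdown markers that shouldn't appear in a plain-text context.
LEAKS = {
    "asterisk bold": re.compile(r"\*\*[^*\n]+\*\*"),
    "hash header": re.compile(r"^#{1,6}\s+\S", re.MULTILINE),
}

def markdown_leaks(plain_text: str) -> dict:
    """Count raw markdown syntax in text that was never meant to render it."""
    return {name: len(rx.findall(plain_text)) for name, rx in LEAKS.items()}

email_body = "## Summary\nThe team made **significant progress** this quarter."
print(markdown_leaks(email_body))  # {'asterisk bold': 1, 'hash header': 1}
```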
Human raters in RLHF training preferred well-formatted responses. Bold headings, bulleted lists, and italic terms all scored higher in helpfulness ratings. That reward signal compounded over thousands of training iterations. The result: models that format everything, even when plain text would communicate the same information more naturally. A short email doesn't need bold phrases. A two-sentence answer doesn't need a header. But the optimization pressure doesn't distinguish between a research summary and a Slack message.
Caught in the Wild
Ask ChatGPT a question and count the bold phrases. A typical response bolds 5-10 terms in a few paragraphs. Ask a human expert the same question in an email and you'll get zero bold, zero italic, zero headers. The formatting density alone flags the source.
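The counting is easy to automate. A minimal sketch, assuming the response arrives as markdown and using an illustrative cutoff of one bold span per paragraph; neither the regexes nor the threshold come from a published detector.

```python
import re

BOLD = re.compile(r"\*\*[^*\n]+\*\*")
ITALIC = re.compile(r"(?<!\*)\*[^*\n]+\*(?!\*)")   # single-asterisk italics only
HEADER = re.compile(r"^#{1,6}\s", re.MULTILINE)

def formatting_density(markdown: str) -> dict:
    """Count emphasis markers per paragraph of a markdown response."""
    paragraphs = [p for p in markdown.split("\n\n") if p.strip()]
    bold = len(BOLD.findall(markdown))
    return {
        "paragraphs": len(paragraphs),
        "bold_spans": bold,
        "italic_spans": len(ITALIC.findall(markdown)),
        "headers": len(HEADER.findall(markdown)),
        "bold_per_paragraph": bold / max(len(paragraphs), 1),
    }

reply = ("**Short answer:** yes.\n\n"
         "The key factors are **latency**, **throughput**, and *consistency*.\n\n"
         "## Recommendation\n**Start small** and measure.")
stats = formatting_density(reply)
print(stats)
print("reads like AI output" if stats["bold_per_paragraph"] >= 1 else "plausibly human")
```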
Technical documentation generated by AI tends to over-structure: every section gets a header and every concept gets bolded on first mention. The result reads less like documentation and more like a training manual for someone who needs visual cues on every line.
Wikipedia editors flag excessive formatting as an AI tell. Bold terms in article body text, unnecessary italics where plain text would do. The WikiProject AI Cleanup team added formatting tells to their detection checklist in 2025.