Emphasis Epidemic

AI bolds and italicizes at rates no human editor would tolerate. Every key phrase gets bolded, every technical term gets italicized, every list item leads with a bold label. In human writing, emphasis is rare because it means something. In AI writing, everything is emphasized, so nothing is.

AI defaults to markdown-style formatting even in contexts where plain prose works better. Headers get inserted every two or three paragraphs whether the content warrants them or not. Bold phrases land in almost every sentence, and italic terms accent anything that sounds vaguely technical. The markup never lets up.

In chatbot responses, you can count on a bolded phrase or two per paragraph. Generated articles are worse: section headers arrive at metronomic intervals regardless of whether the content calls for a new section, and the overall effect reads less like writing than like a slide deck pasted into a document.

tropes.fyi cataloged one version of this as "Bold-First Bullets." Every bullet point starts with a bolded label, then an explanation follows. It's the formatting equivalent of the paragraph machine: a rigid template applied without editorial judgment, over and over, to content that never asked for it.

Human writers bold a word perhaps once in 500 words, to signal genuine emphasis. AI bolds key phrases in every paragraph because RLHF raters scored formatted responses higher: bulleted lists with bold labels looked "helpful," and italicized terms looked "precise." The training signal was clear: more formatting, more reward. Wikipedia editors picked up on a related tell, too: models trained on markdown sometimes bleed raw asterisks and hash marks into contexts where those conventions don't belong.

AI chatbot response

Understanding the basics of machine learning requires grasping several key concepts. Supervised learning uses labeled data to train models, while unsupervised learning finds patterns in unlabeled datasets. The key difference lies in the training approach.

AI blog post

Why Remote Work Matters

The shift to remote work represents a fundamental change in how we approach productivity. Here are the key benefits:

- Flexibility: Workers can choose their own hours
- Cost savings: Companies reduce overhead
- Talent access: Hire from anywhere

AI email

I wanted to share some key updates on the project. The timeline has been adjusted, and there are two important changes to note. First, the deadline has moved to Friday. Second, we need additional review from the stakeholder team.

Human equivalent

I wanted to share some updates on the project. The timeline shifted — deadline is now Friday, and we need another round of review from stakeholders.
33 patterns on tropes.fyi, including Bold-First Bullets.

4x fewer unique words in AI text vs. human, driving repetitive formatting (VU Amsterdam).

Bold-First Bullets

tropes.fyi named the specific pattern of every bullet leading with a bold label. It's templated formatting: the bold word is supposed to signal structure but actually signals "AI wrote this list." Human bullet lists rarely bold the lead word on every single item. When a person writes bullets, some start with a verb, some with a noun, some with a clause. The uniformity is what gives it away.
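The uniformity itself is easy to check mechanically. As a rough sketch (the regexes are illustrative, not a full markdown parser), a few lines of Python can test whether every item in a bulleted list opens with a bold label:

```python
import re

# Match markdown-style bullet lines and capture their content.
BULLET = re.compile(r"^[-*+] +(.*)$", re.MULTILINE)
# Match a **bold label** at the very start of a bullet's content.
BOLD_LEAD = re.compile(r"^\*\*[^*\n]+?\*\*")

def bold_first_bullets(text):
    """True if the text has bullets and every one leads with bold.

    Uniformity across all items is the tell, not any single
    bolded lead.
    """
    items = BULLET.findall(text)
    return bool(items) and all(BOLD_LEAD.match(item) for item in items)
```

A human list, where some bullets start with a verb and some with a noun, fails the check; the templated AI list passes it on every item.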

Markdown Bleed

Wikipedia's Signs of AI Writing page documents how AI-generated text sometimes mixes markdown syntax into contexts that don't render it. Asterisks for bold, hash marks for headers, straight from the training data. Even when the markdown renders correctly, the density of formatting is a tell. Human writers in the same contexts use far less.
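A crude way to surface this bleed is to count raw markdown markers directly. A minimal sketch, assuming the input is meant to be plain prose where no markdown syntax belongs (the patterns are illustrative, not a complete markdown grammar):

```python
import re

# Raw markdown syntax that shouldn't appear in plain prose.
BLEED_PATTERNS = {
    "bold_asterisks": re.compile(r"\*\*[^*\n]+\*\*"),
    "italic_asterisks": re.compile(r"(?<!\*)\*[^*\n]+\*(?!\*)"),
    "hash_header": re.compile(r"^#{1,6} ", re.MULTILINE),
}

def markdown_bleed(text):
    """Return a count of each raw markdown marker found in the text."""
    return {name: len(pattern.findall(text))
            for name, pattern in BLEED_PATTERNS.items()}
```

Zero counts are what human plain-text writing in these contexts looks like; nonzero counts, especially dense ones, are the tell.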

RLHF Formatting Bias

Human raters in RLHF training preferred well-formatted responses. Bold headings, bulleted lists, and italic terms all scored higher in helpfulness ratings. That reward signal compounded over thousands of training iterations. The result: models that format everything, even when plain text would communicate the same information more naturally. A short email doesn't need bold phrases. A two-sentence answer doesn't need a header. But the optimization pressure doesn't distinguish between a research summary and a Slack message.

ChatGPT Default Output

Ask ChatGPT a question and count the bold phrases. A typical response bolds 5-10 terms in a few paragraphs. Ask a human expert the same question in an email and you'll get zero bold, zero italic, zero headers. The formatting density alone flags the source.

AI-Generated Documentation

Technical documentation generated by AI tends to over-structure: every section gets a header and every concept gets bolded on first mention. The result reads less like documentation and more like a training manual for someone who needs visual cues on every line.

Wikipedia AI Content

Wikipedia editors flag excessive formatting as an AI tell: bold terms in article body text, unnecessary italics where plain text would do. The WikiProject AI Cleanup team added formatting tells to its detection checklist in 2025.
