LLM Writing Tropes: The AI Telltale Signs and How to Spot Them
The Unmistakable Fingerprints of AI-Generated Text
The proliferation of large language models (LLMs) has transformed content generation, but it has also produced a distinct, recognizable style. A newly popularized resource, a markdown file known as tropes.md, catalogs these AI writing signatures in detail. The file, which its creator admits was "AI-assisted" in its creation, serves as a system prompt to help AI assistants avoid their own clichéd patterns.
The catalog is extensive, organized into categories like Word Choice, Sentence Structure, and Tone. It highlights how AI models, trained on vast corpora of human text, often default to specific, overused linguistic constructions. These patterns, while individually innocuous, become glaringly artificial when clustered together, forming what the file describes as "AI slop."
This phenomenon raises critical questions about authenticity and communication in a professional landscape increasingly reliant on AI tools. As Lystra Batchoo notes in a March 2026 article for Bloomberg Law, "Artificial intelligence is emerging as a drafting tool, but it’s at best a starting point." The onus remains on the human writer to ensure communication is effective, context-aware, and targeted.
Decoding the AI Lexicon: From "Delve" to "Tapestry"
The tropes.md file begins with Word Choice, identifying specific vocabulary that has become an infamous hallmark of AI prose. The word "delve" is singled out as having gone from an uncommon term to appearing in a "staggering percentage" of AI-generated text. It's part of a family of overused terms including "utilize," "leverage," "robust," and "harness."
Another category is the use of ornate, grandiose nouns where simpler words would suffice. "Tapestry" is flagged for describing anything interconnected, while "landscape" is used for any field or domain. The file also criticizes the "serves as" dodge, where AI replaces simple verbs like "is" with pompous alternatives like "serves as" or "stands as," a habit potentially driven by repetition penalty algorithms favoring novel constructions.
Perhaps the most damning linguistic habit is the overuse of adverbs like "quietly," "deeply," or "fundamentally" to inject unearned significance into mundane descriptions. Examples include phrases like "quietly orchestrating workflows" or "a quiet intelligence behind it." These words are deployed to simulate subtlety and profundity where none exists.
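These word-level tells lend themselves to simple frequency checks. Below is a minimal sketch of such a check; the watchlist is an illustrative subset drawn from the terms quoted above, not the full tropes.md catalog, and any real detector would need per-genre baselines rather than a flat list.

```python
import re
from collections import Counter

# Illustrative subset of the overused terms described above;
# the actual tropes.md catalog is considerably longer.
FLAGGED_WORDS = {
    "delve", "utilize", "leverage", "robust", "harness",
    "tapestry", "landscape", "quietly", "deeply", "fundamentally",
}

def flag_word_choices(text: str) -> Counter:
    """Count occurrences of commonly overused 'AI lexicon' words."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w in FLAGGED_WORDS)

sample = "We delve deeply into the rich tapestry of the AI landscape."
print(flag_word_choices(sample))
# Counter({'delve': 1, 'deeply': 1, 'tapestry': 1, 'landscape': 1})
```

A single hit means little; as the file notes, it is the clustering of these terms that reads as artificial, so a practical check would compare the total count against the document's length.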
Structural Tics and Manufactured Drama
Beyond vocabulary, AI exhibits predictable sentence and paragraph structures. The single most commonly identified tell, according to the file, is "Negative Parallelism": the "It's not X -- it's Y" pattern, often with an em dash. The author expresses strong disdain for this pattern, noting it creates "false profundity" and that before LLMs, "people simply did not write like this at scale."
Other structural tropes include the dramatic countdown ("Not X. Not Y. Just Z."), self-posed rhetorical questions ("The result? Devastating."), and the abuse of anaphora (repeating the same sentence opening) or tricolon (the rule of three). The file notes that while a single tricolon can be elegant, "three back-to-back tricolons are a pattern recognition failure."
AI also relies on filler transitions like "It's worth noting," "Importantly," or "Interestingly" to introduce points without substantive connection. It often tacks on superficial analyses using present participle phrases ("-ing") that inject shallow commentary, such as "highlighting its importance" or "reflecting broader trends."
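Several of these structural tics are regular enough to approximate with pattern matching. The sketch below uses hypothetical regexes inspired by the tropes described above; real detection would need far more nuance, and the patterns here are assumptions, not rules from tropes.md.

```python
import re

# Hypothetical patterns approximating the structural tells above.
PATTERNS = {
    # "not X -- it's Y" negative parallelism, with em dash or double hyphen
    "negative_parallelism": re.compile(
        r"\bnot\b[^.?!]*?(?:--|\u2014)\s*it'?s", re.IGNORECASE
    ),
    # filler transitions that introduce points without substance
    "filler_transition": re.compile(
        r"\b(?:it'?s worth noting|importantly|interestingly)\b", re.IGNORECASE
    ),
}

def count_structural_tics(text: str) -> dict:
    """Count matches for each structural-trope pattern."""
    return {name: len(pat.findall(text)) for name, pat in PATTERNS.items()}

sample = "This is not a rant -- it's a diagnosis. Importantly, nobody wrote like this."
print(count_structural_tics(sample))
# {'negative_parallelism': 1, 'filler_transition': 1}
```

Tropes like anaphora or stacked tricolons resist simple regexes, since they depend on repetition across sentences rather than a fixed surface form.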
The Pedagogical Voice and False Authenticity
The Tone section of the catalog reveals AI's tendency to adopt a patronizing, pedagogical voice. This includes phrases like "Think of it as..." or "Let's break this down," which assume the reader needs simplistic analogies or hand-holding. The "Imagine a world where..." trope is labeled the "classic AI invitation to futurism."
Perhaps more insidious is the creation of "False Vulnerability"—simulated self-awareness or honesty that reads as performative. Examples include lines like "And yes, I'm openly in love with the platform model" or "This is not a rant; it's a diagnosis." The file argues that "Real vulnerability is specific and uncomfortable; AI vulnerability is polished and risk-free."
AI writing is also characterized by "Grandiose Stakes Inflation," where every argument is inflated to world-historical significance. A blog post about API pricing becomes "a meditation on the fate of civilization." Conversely, it often asserts simplicity rather than demonstrating it, leaning on phrases like "The truth is simple" or "History is unambiguous on this point."
Formatting Flaws and Compositional Loops
Even formatting choices betray a text's AI origins. The file highlights "Em-Dash Addiction," where AI compulsively uses em dashes for dramatic pauses; a human might use a few naturally, while AI will use twenty or more. Another tell is "Bold-First Bullets," where every list item begins with a bolded phrase, a format extremely common in ChatGPT and Claude output but rare in human-written documentation.
AI also overuses Unicode characters like arrows (→) or smart quotes that aren't easily typed on a standard keyboard. Claude, in particular, is noted for its love of the → arrow. In composition, AI suffers from "Fractal Summaries," redundantly summarizing at every level of a document, and "One-Point Dilution," where a simple thesis is padded into thousands of words through circular rephrasing.
Other compositional flaws include "The Dead Metaphor," where a single metaphor is beaten into the ground, and "Historical Analogy Stacking," which is "ESPECIALLY COMMON IN TECHNICAL WRITING." This involves rapid-fire listing of historical companies or tech revolutions to build false authority, such as "Apple didn't build Uber. Facebook didn't build Spotify..."
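Unlike the subtler compositional flaws, the formatting-level tells are easy to count mechanically. A minimal sketch, with illustrative patterns and no claim that these thresholds match the file's:

```python
import re

def formatting_tells(text: str) -> dict:
    """Count formatting-level tells: em dashes, Unicode arrows,
    and markdown bullets that open with a bolded phrase."""
    return {
        "em_dashes": text.count("\u2014"),       # the em dash character
        "unicode_arrows": text.count("\u2192"),  # the -> arrow glyph
        "bold_first_bullets": len(
            re.findall(r"^[-*] \*\*[^*]+\*\*", text, re.MULTILINE)
        ),
    }

sample = "- **Speed** \u2014 fast\n- **Scale** \u2014 big \u2192 huge\n"
print(formatting_tells(sample))
# {'em_dashes': 2, 'unicode_arrows': 1, 'bold_first_bullets': 2}
```

No single count is conclusive; twenty em dashes in a short post is suspicious, two in a long essay is just punctuation.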
The Industry Response: Don't Write for Bots
This rise in identifiable AI text coincides with a parallel discussion in the SEO and web development community about how to serve content to LLMs. A February 2026 article on Practical Ecommerce argues against creating separate, simplified markdown pages for AI bots, a tactic some sites have considered.
The article warns that such pages can "lose essential elements, such as a footer, header, internal links… and user-generated reviews," removing critical context that serves as a trust signal for LLMs. It also raises the specter of abuse, where sites might "inject unique product data, instructions, or other elements for AI bots only," diluting essential signals like link authority.
This viewpoint is echoed by search engine representatives. Google's senior search analyst, John Mueller, is quoted asking, "LLMs have trained on – read & parsed – normal web pages since the beginning, it seems a given that they have no problems dealing with HTML. Why would they want to see a page that no user sees?" Bing's Fabrice Canel similarly cautions that creating non-user versions often leads to neglected, broken content.
The Human Imperative in an AI-Driven World
The core recommendation from industry experts is to create sites "that are equally friendly to humans and bots." The goal of LLM agents, as stated in the Practical Ecommerce piece, is "to interact with the web as humans do. Serving different versions serves no purpose." This aligns with the ultimate advice in the tropes.md file: "Write like a human: varied, imperfect, specific."
This human-centric approach is critical for professional fields. Writing, as emphasized in the Bloomberg Law article, is a strategic skill, not a "soft skill." In a world of automation and breakneck speed, "effective writing is essential to building trust, avoiding misunderstandings, and enabling decision-making." The article advocates for building writing skills through "realistic, low-stakes exercises rooted in actual work" and transparent mentorship.
The pitfalls of over-reliance on AI are humorously illustrated in a ZDNET experiment from March 2026, where attempts to use Gemini AI for creating sketchnotes resulted in bizarre, garbled output like "ADIUK SALIRE BAT DIANCIORE." The experiment highlights that AI remains an unpredictable tool whose output requires careful human oversight and editing.
A New Era of Discernment
The cataloging of AI writing tropes marks a new phase in our relationship with generative text. It provides a toolkit for discernment, allowing readers and editors to identify machine-generated prose. More importantly, it serves as a guide for prompt engineers and writers using AI, helping them steer models away from cliché and toward more authentic, human-sounding output.
As AI becomes further embedded in content creation workflows, the ability to recognize and correct its stylistic fingerprints will become a valuable skill. The challenge moving forward is not to eliminate AI from the writing process, but to integrate it as a collaborative tool that enhances human nuance, clarity, and authenticity rather than replacing them. The future of professional writing may well depend on this balanced, discerning partnership.