My Life With AI—Part VI: How To Spot AI Content – Or – Apparently, I Am A Robot


I’ve learned how to spot AI content because I’m never sure if a potential source is real or a computer. Well, not exactly. It’s usually easy to confirm the person is real.

Financial professionals often have public footprints. I find them by perusing firm bios, scrolling LinkedIn’s polished profiles, and searching for prior quotes. A business email helps, too. I rarely consider replies from generic addresses like Gmail or Yahoo.

The problem isn’t the people. It’s their answers. Are they genuine—or pasted from a GenAI platform?

Last Monday, I opened an email from a “retirement planning expert” responding to my interview request. The answer was polished—maybe too polished. Three-part structure. Colon in the lead-in. A callback to my original question wrapped in a tidy bow. I stared at my screen. Human or ChatGPT?

Granted, detectors exist. But tools to spot AI content falter. One quote platform embeds an AI-use checker. It isn’t reliable; it triggers plenty of false positives.

I use another tool, and it also throws false positives—even on my own writing. (More on that soon.) A 2023 academic study by Weber-Wulff et al. concluded that “the available detection tools are neither accurate nor reliable.”

What I really need is the moral equivalent of the Voight-Kampff test. No, it’s not what you’re thinking. That’s the Kobayashi Maru test. That one’s from Star Trek II: The Wrath of Khan (with a reprise in the Kelvin Timeline’s 2009 Star Trek).

You saw the Voight-Kampff test in the opening scene of the original Blade Runner. It probes whether someone is human or a replicant (i.e., a robot). You show pictures and watch for authentic versus fabricated memories.

If the memories are real, no problem. It’s a human. If the pictures trigger false memories, then the replicant pulls the trigger, and you vaporize in a shower of sparks. Or something like that. I don’t know. It’s been a while since I’ve seen the movie.

Well, there’s no Voight-Kampff test for source responses. I’m stuck using my own wits, plus whatever the so-called experts have to say on the subject. My intuition tells me those too-polished answers check the relevant boxes. That’s what bothers me.

Some tells are obvious in video. You’ve seen the clickbait YouTube titles. Inside: images that don’t match the narration, the same few photos looping, and a too-slick voice reading a prefab script.

To spot AI content, start with the voice. It’s not mechanical. It sounds real. It’s like the perfect news anchor.

Yet it’s a little off. You can’t quite pin it down. The inflections land a half-beat late, like dubbed dialogue that doesn’t quite sync with the actor’s lips. They’re subtle, but they grate on you like fingernails on a chalkboard.

The script reveals more. The same patterns expose AI in text: words, phrases, and points repeated past ad nauseam into outright annoyance. Let’s explore how this looks on the page.

Visually, AI generates colons and em-dashes in abundance. (An em-dash looks like this: —) GenAI notoriously places colons in titles—technically correct, but rare in human writing. Some editors prohibit em-dashes and veto colons in titles. Fair point: you can’t save a file name with a colon, so there’s another reason to skip them.

Here’s my problem: long before AI, I used em-dashes (sorry, editors), and I’ll use a colon in a title (when I absolutely have to). Oh well. I guess that’s one strike against me.

Sometimes AI likes to get a little fancy. This leads to repeated sentence structures. For example, “If x, then y.” or “It’s not x, but it’s y.” OK, I’m guilty of this, too. Oops! Strike two.

Then there’s the “rule of three” (a.k.a. “lists of three”; any odd number qualifies, but mainly three). This makes sense. Headline editors have a bias for titles with odd-numbered lists (or a list of “ten”). But the rule of three rules them all. It’s ancient—not Tolkien Middle-earth ancient, but Aristotelian Greece ancient. (There’s that sentence structure thing again.)

That’s right. Aristotle literally wrote the book on rhetoric (in a fit of creative energy, he cleverly titled it Rhetoric). In it, he outlined the three appeals of persuasion: ethos (credibility), pathos (emotion), and logos (logic). We see this “rule of three” everywhere, from religion (the Holy Trinity) to comedy (the three-joke sequence) to Vince Lombardi (God, family, and the Green Bay Packers).

The rule of three is so sticky that people “remember” Churchill saying, “blood, sweat, and tears.” He actually said “blood, toil, tears, and sweat.” Churchill must have skipped school the day they taught Aristotle.

Does it surprise you that AI mimics this popular rule? Does it surprise you that I try to adhere to it? Strike three—but I’m not quite out yet.

Here’s how GenAI assembles text: it ingests millions of documents—news articles, books, Reddit threads—and learns statistical patterns. Feed it a prompt, and it predicts the next most probable word, then the next, then the next. The output mimics human writing because it’s trained on human writing. But it over-indexes on patterns that worked in its training data: colons, em-dashes, the rule of three. Not because it understands rhetoric, but because those patterns showed up frequently in “high-quality” sources.
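Curious what that looks like in practice? Here’s a toy sketch in Python (my own illustration, invented for this piece; no AI lab published it): a bigram model that counts which word follows which, then always grabs the most probable next word. A real LLM does this with billions of parameters instead of a lookup table, but the core move is the same.

```python
# A toy next-word predictor: a bigram model. The tiny corpus below is
# invented for illustration; real models train on billions of words.
from collections import Counter, defaultdict

corpus = (
    "the rule of three works because the rule of three sticks "
    "the rule of rhetoric is ancient"
).split()

# Learn the "statistical patterns": which word follows which, and how often.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    seen = follows.get(word)
    return seen.most_common(1)[0][0] if seen else None

# Generate: predict the next most probable word, then the next, then the next.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # -> "the rule of three works because"
```

Notice the model never understands the rule of three. It just regurgitates whatever pattern dominated its training data. Scale that up a few billion times and you get ChatGPT’s fondness for colons, em-dashes, and tidy triads.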

Now you see why analyzers spot AI content in my writing (including the very piece you’re reading). I keep to a structure. I employ classic tropes, including call-backs (that would be repeated words, phrases, and concepts). And I absolutely adore the rule of three. (Did you see I used it three times in just this paragraph?)
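For the curious, here’s roughly what a naive tell-counting checker might look like (again in Python). To be clear: the tells, the regexes, and the scoring are my own invention, a sketch of the idea rather than any real detector’s method. Feed it a thoroughly human sentence and watch it flag me anyway.

```python
# A naive "AI tell" counter, invented for illustration. It tallies
# em-dashes, lead-in colons, and rule-of-three lists per 100 words.
# Real detectors are more elaborate and, per Weber-Wulff et al.,
# still unreliable.
import re

TELLS = {
    "em_dash": re.compile("\u2014"),                      # the character itself
    "leadin_colon": re.compile(r"^[^:\n]{1,60}:", re.MULTILINE),
    "rule_of_three": re.compile(r"\b\w+, \w+, and \w+\b"),
}

def tell_score(text):
    """Stylistic tells per 100 words; higher reads as more 'AI-like'."""
    words = max(len(text.split()), 1)
    hits = sum(len(pattern.findall(text)) for pattern in TELLS.values())
    return 100 * hits / words

human_sample = (
    "Here\u2019s my problem: I adore em-dashes\u2014always have\u2014and "
    "I lean on ethos, pathos, and logos."
)
print(f"{tell_score(human_sample):.1f} tells per 100 words")  # scores high
# A perfectly human paragraph trips every tell: a false positive.
```

Tweak the thresholds all you like; the fundamental problem stays. These tells are real patterns in AI text, but they’re also real patterns in plenty of human text, mine included.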

If that makes me a robot, so be it. Just don’t tell me I failed the Voight-Kampff test.

Otherwise, someone’s ending in a shower of sparks.
