My Life With AI—Part V: Why GenAI (And All Search Engines) Fail
Ah, 2024. Those were the good old days…
Code that tapped LLMs achieved, to a degree, what I wanted, but the inconsistency drove me crazy. I kept trying. Same frustration. (Hmm, isn’t that the classic definition of insanity?) Incidentally, do you want to know what LLMs are? See “My Life With AI—Part III: What Comes Around Goes Around,” Mendon-Honeoye Falls-Lima Sentinel, February 13, 2025.
Short on time to improve the programs, I asked “experts” to refine my code. The “experts” were GenAI platforms—Grok, ChatGPT, and Claude. I used them all to see which worked best (at the time, Grok did).
That was nearly a year ago, during a flurry of weekly upgrades. First Grok grabbed headlines, then ChatGPT. Claude lagged, so I swapped in Perplexity, then moved to Google’s Gemini. (I’ve since added Copilot and brought Claude back.)
Somewhere along the way, my attitude changed. I found many other useful applications—but this column isn’t about GenAI’s upsides. It’s about how GenAI—and search engines and even Wikipedia—fail.
The answer is simple; it’s not the tools’ fault.
How GenAI Fails: The Upstream Problem
It reminds me of that scene in Who Framed Roger Rabbit. You remember. Jessica Rabbit glides into Eddie Valiant’s office and, after some back-and-forth, she says, “You don’t know how hard it is being a woman looking the way I do.” Then she delivers the now-classic kicker: “I’m not bad. I’m just drawn that way.”
GenAI isn’t bad. It’s just programmed that way. Like search engines, like Wikipedia, like lazy newsrooms—the problem lies upstream. Whether from big-city papers or best-selling books, bad reporting poisons the well of information. And when the source well is polluted, every bucket comes up muddy.
It’s not the bucket’s fault.
Likewise, it’s not GenAI’s fault. Or the search engine’s. Or Wikipedia’s.
They share the same vulnerability: they don’t create original content. They’ve “always depended on the kindness of strangers”—outsourcing credibility to brand-name gatekeepers like Pulitzer and Nobel winners.
You see where this is going.
The Hamburger Test
GenAI isn’t the villain. It’s simply the bucket you dip into the tainted well. Let me give you an example.
A few weeks ago, I attended a Google presentation at the New York Press Association’s Publishers Conference. The speaker, in a thick Irish accent, confidently demonstrated the AI tools Google offers. He asked the audience for a prompt. A man in front offered, “Who sold the first hamburger?”
Yes, I was that man. And I had an ulterior motive.
I chose an obscure topic I know cold. This let me invoke the Gell-Mann amnesia effect to my advantage. Coined by Michael Crichton, the term refers to how experts spot media errors in their own field, even as they trust coverage in fields they don’t know. Most readers aren’t experts in the topic at hand. As the Gell-Mann amnesia effect suggests, they naively assume the story is accurate.
That’s how misinformation spreads. Once a major news outlet reports a story, it’s already too late. Smaller outlets repeat it, assuming the facts have been vetted. Soon, a chorus of “independent” sources echoes the same claim. Even if the original outlet corrects the error—usually buried where no one sees it—the copies overwhelm the correction in search algorithms.
Now you know why I picked the hamburger topic. I knew the right answer. No one else in the room was a historian—let alone a hamburger historian—so they couldn’t spot the error. The Google search yielded an answer encompassing a range of myths and legends. I was shocked that the correct answer didn’t appear.
Wikipedia’s Bias
A quick trace showed Google had picked up the lead paragraph from Wikipedia.
As it happens, I’m enrolled in a Wikipedia training class for historians. I brought this example to the trainer. Here’s what I found out.
Wikipedia prefers secondary-source citations over primary sources. My professors trained me, as a researcher, to rely only on primary sources and to eschew secondary sources (because they don’t contain the original data). Worse, Wikipedia ranks media sources by “reliability.”
As I cited in my book Hamburger Dreams, The New York Times ran a widely referenced article that misstated when the hamburger was first sold. According to Wikipedia editors, “There is consensus that The New York Times is generally reliable.” Sure, secondary sources can confirm the notability of a particular person, place, or event (a key criterion for qualifying as a Wikipedia entry). They can also indicate a consensus on the topic—a poor man’s peer review. But without a review of the original data (i.e., an actual peer review), this consensus could be wrong. In the case of the hamburger, it was.
How could I fix this? Alas, Wikipedia bars original research in its articles, so as the author of the original research, I couldn’t simply correct the entry. The trainer suggested I post on the Talk page and ask other editors to add it. If a topic doesn’t attract editors, no one is there to make the correction.
Why GenAI Fails: The GIGO Lesson
GenAI fails because it relies on those same sources. In other words, everything old is new again.
Or, as we old programmers used to say, “Garbage in, garbage out.”
Next Week: How To Spot AI Content