My Life With AI—Part III: What Comes Around Goes Around


If you've read past columns in this space, you might remember that one of the first things I did during my second run at publishing this esteemed periodical was to replace myself.

Allow me to explain.

Among the software we found buried deep within the bowels of one of the two extremely outdated computers we inherited was an ancient dBASE III program. For those not old enough to recognize the name or even the purpose of that old code, it allowed you to collect and update a customized database you could then manipulate in any way you desired. I installed this program in 1989 when we started the newspaper. I then created a database in which we could keep our subscriptions. We used it not only to know when people needed to renew but also to print out the mailing labels to attach to the newspapers every week.

Nearly thirty years later, the previous owners of the Sentinel were still using this archaic software. The very first thing I did was extract the data (into a comma-delimited file for all you nerds out there), which I promptly imported into an Excel spreadsheet. From that spreadsheet, I wrote a mail-list merge routine in Word to print out the mailing labels.
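For the nerds mentioned above: the label end of that pipeline, which we once split between Excel and a Word mail merge, can be sketched in a few lines of modern Python. The file layout and field names here are hypothetical stand-ins for whatever columns the 1989 database actually used.

```python
import csv
import io

# Hypothetical comma-delimited export from the old dBASE III database.
# The real export's columns are unknown; these are illustrative.
data = """name,street,city,state,zip,expires
Jane Reader,12 Main St,Honeoye Falls,NY,14472,2024-06-01
John Subscriber,34 Elm Ave,Lima,NY,14485,2023-01-15
"""

def mailing_labels(csv_text):
    """Yield one formatted mailing label per subscriber row."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        yield "{name}\n{street}\n{city}, {state} {zip}".format(**row)

for label in mailing_labels(data):
    print(label)
    print()  # blank line between labels, as on a sheet of label stock
```

The renewal-reminder side would be a similar pass over the same rows, filtering on the expiration column.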

And with that, I had replaced myself. (For those keeping score at home—and who notice how their mailing address now appears on their paper—we’ve since replaced that process with a newer one that eliminates the label altogether.)

I’m guessing you’re all curious to know what this has to do with artificial intelligence. The quick answer: It doesn’t. The more complex answer (which you will soon discover): It does in the sense that you can’t teach an old dog new tricks.

OK, so that makes me an old dog who likes to rely on old tricks. That’s how we get to this next episode in “My Life With AI.” For those who continue to be curious (and want to know why I keep using the word “curious”), the two previous installments can be found here: “My Life With AI—Part I: Early Geekdom,” Mendon-Honeoye Falls-Lima Sentinel, May 25, 2023; “My Life With AI—Part II: The Search For The Holy Grail,” Mendon-Honeoye Falls-Lima Sentinel, June 1, 2023.

When we last left off, I declined an offer to lead a new software company that searched the Internet for articles relevant to new patents. This effort was merely a glorified search engine, but unlike search engines at the time, it could produce plain English summaries of the articles it found. I felt it had broader applications (like replacing encyclopedias). The developers, however, thought the idea too banal (and less profitable) for their purposes.

Had they gone ahead with my idea, that 1990s software would have been Wikipedia before there was a Wikipedia. Moreover, the software could have been sold to firms and industries wishing to create their own specialized Wikis, making it potentially more profitable than the developers’ original business model. But, hey, what did I know? To these forty- and fifty-year-olds, I was just some kid barely out of his twenties who didn’t know a lick about business.

From my standpoint, beyond the ability to summarize in plain English (arguably the most important aspect of this venture), the application represented a simple brute-force program. I had already done some real work in artificial intelligence, and I had other plans for my life that didn’t involve programming.

At the time of the first two parts of this series, so-called “generative AI assistants” or “AI chatbots” began capturing the fancy of just about everyone. The most popular of these today are Grok (available through X), ChatGPT, Claude, and Gemini (available through Google).

Here’s the interesting thing: they all fall under the category of “Large Language Model (LLM)-Powered Chatbots.” They all possess the same plain-English response capabilities that the 1990s patent analysis program had. Who knew that what I was exposed to thirty years ago would now be called an LLM? Though decades older, from my own perspective, that 1990s program was much better in the sense that it didn’t hallucinate.

“Hallucination” is a term now used to describe one of the bugs of these newer AI chatbots. I described an example in “My Life With AI—Part II.” Essentially, these applications fabricate facts in some of their responses. In other words, they’re not reliable.

It’s not like they’re trying to lie to you. ChatGPT is very forthcoming when asked about this phenomenon. It says, “Hallucinations occur because generative AI models do not ‘know’ facts in the way a human does. Instead, they predict text based on probabilities from their training data, which can sometimes result in confidently stated misinformation. Researchers are actively working to reduce hallucinations by improving AI training methods, incorporating retrieval-based systems, and refining post-processing techniques.”

In the case of Claude, which doesn’t provide up-to-date information, it will typically add this disclaimer to its responses: “While I strive to provide accurate and comprehensive information, I should note that my knowledge cutoff date is April 2024. For the most up-to-date and accurate information, especially regarding current events, regulations, or market conditions, please verify with authoritative sources and qualified professionals in the relevant field.”

I don’t recall a problem with hallucination with the 1990s program. Then again, that program cost users a pretty penny, and it couldn’t afford the luxury of inaccuracy found in these mostly free AI chatbots. That limits the practical applications of the current generation of generative AI assistants. They might be good at providing a first draft, but you’d better check their work.

How much checking is required? The more detailed the response, the more checking you’ll need to do.

We’re almost done with this session, and we still haven’t tied the “replacing myself” story to artificial intelligence. For that, we need to revisit the New York State Press Association Publishers Conference this past fall. Initially shunned by the journalistic community, “artificial intelligence” has gained broader acceptance. During the sessions, several speakers showed us different aspects of using AI tools to publish newspapers. Most of them were narrowly defined online applications, mostly search tools that produced output in attractive formats. We could use them on our own content or to identify relevant sources for creating new content.

One of the speakers, however, alluded to something more intriguing. He told us how some publishers were programming their own AI applications to go beyond the generic offerings described in the sessions. He even threw out a few names, not that he expected anyone to follow up. After all, he was talking to a roomful of old-time publishers, not a batch of eager young geeks.

He never expected that one of those old dogs had just discovered the possibility of returning to some old tricks with a new tool.

I made a beeline for him after the session and arranged to have lunch with him later. From there, we scheduled a longer conference call. He convinced me I could, once again, replace myself.

Next Week: Curses! Foiled Again!
