My Life With AI—Part II: The Search For The Holy Grail


If you stumbled upon this without reading Part I first, you can read it here.

I always had a certain curiosity (there’s that word again) about the idea of artificial intelligence. You can’t blame science fiction for this. It was simply the challenge. Artificial intelligence represents the Holy Grail of mathematics. It’s not simply ramming a bunch of formulas through faster and faster processors. It’s going a step beyond. It’s giving the computer a basic set of instructions, then allowing it to begin programming itself by building on top of that foundation.

Naturally, I monitored the subject. This is one reason I knew about the Boston-based investment adviser BatteryMarch, although, technically, its approach wasn’t artificial intelligence; it was just another example of brute force processing. For the inside dope on AI, I didn’t rely on the Wall Street Journal or even the data processing trade press. No, I paid attention to science journals and magazines.

There was one common element in those articles: the Lisp programming language. This was the language of choice when creating an “expert system.” What a database was for a standard information system, a “knowledge base” was for an expert system. That’s what made it AI.

While the term is still in use today, back then “expert system” meant artificial intelligence. It was what all the cool kids (i.e., the Fortune 500 companies) were doing. They were buying Lisp compilers with their discretionary R&D bucks.

My company didn’t have that kind of budget. It was easy to buy an inexpensive C++ compiler. Lisp was another animal altogether. Fortunately, we had an affordable alternative. Called “Prolog,” it was just beginning to be used as the “poor man’s” Lisp.
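To give a feel for the knowledge-base idea behind languages like Prolog, here is a minimal sketch of a forward-chaining rule engine in Python. The facts and rules are hypothetical stock-screening examples for illustration only, not anything from the actual system:

```python
# Minimal forward-chaining rule engine, in the spirit of a
# Prolog-style "knowledge base": facts plus if-then rules that
# derive new facts from existing ones. The rules below are
# hypothetical examples, not the firm's actual system.

facts = {"pe_ratio_low", "earnings_growing"}

# Each rule: (set of premises, conclusion it derives)
rules = [
    ({"pe_ratio_low", "earnings_growing"}, "undervalued"),
    ({"undervalued"}, "candidate_buy"),
]

def infer(facts, rules):
    """Repeatedly fire rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer(facts, rules))  # includes "undervalued" and "candidate_buy"
```

Note how the second rule fires only because the first one derived a new fact: the system builds on its own conclusions, which is what separated a knowledge base from a plain database lookup.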

I gobbled it up.

In no time, I had created a computer version of our New York City securities analyst. When I showed him the result, he wasn’t intimidated; he was motivated (or curious, it’s getting harder and harder to tell the difference). He insisted we could make it better than it was.

So, off to work we went.

Then something out of the blue happened.

At this point in my career, I had made contacts with a handful of journalists, primarily in the computer and data processing industry. As a result, reporters splattered a few innocuous quotes of mine in their pages, although those are probably lost to time. No one seemed to mind this.

For reasons gone down the memory hole, a reporter called to talk about computer applications in the finance industry. He may have gotten my number from the New York City analyst. I casually mentioned what we were working on. The next thing you know, it’s the cover story in the June 1987 edition of the Wall Street Computer Review, under the banner title “Teaching Computers to Emulate Great Thinkers.” Reporters today often quote the article to show why computers cannot pick stocks.

When the article came out, boy, did I get in trouble, but not for the reason you think.

Was it a problem that I released to the press information on proprietary technology? No. I disclosed nothing of substance.

Instead, I got in trouble for not asking permission to be quoted in my capacity at the firm.

They were right. To be honest, they never bothered me before about getting quoted in the trade press. I should have asked permission. I probably didn’t because I knew they’d say “no.” But I knew the publicity would benefit the firm.

Apparently, so did they. No sooner had they scolded me than I noticed a large package arrive at the front desk. Then I saw the marketing director go up to get it. He brought it to the supply room to open it up. When the coast was clear, I snuck into the empty room to take a peek.

The box contained hundreds of reprints of the article. I snagged one for my trophy case.

In the months that immediately followed, every time a client came to the office, they received a copy of that reprint with flowery praise for our leading-edge computer systems.

Then the market crashed. The firm let the New York City analyst go, and the project died before I could complete it. I never got the chance to figure out if I could have made the giant leap from brute force database processing to creating a machine learning knowledge base system.

A few years later, the opportunity presented itself again. By this time, I had started the firm’s trust company and was angling towards eventually leaving to start my own firm. (To give you a sense of things, this was a two-year plan, so it wasn’t something done as a spur-of-the-moment retaliation for some perceived slight.)

Separate from my work, a related “Information Services” division was busy developing a natural language search engine. This was in the days when AltaVista ruled the search engine roost and Google was a gleam in Larry Page’s eye.

The firm’s owner often asked me, in an unofficial capacity, to check in and talk to the programmers of this other division. As development progressed, I noticed something no one else did. The program was intended to help intellectual property lawyers search the digitized patent database. But I realized there was a better application.

First, here’s how the program worked. You would ask it a question and it would search the text of all articles relevant to the keywords in your question. That’s exactly what a search engine does. Here was the differentiating value offered by this new system: it wouldn’t return a simple listing of articles; it would rewrite a summary of all those articles in natural language.

This wasn’t a theory. It worked. I saw it with my own eyes. It wasn’t traditional artificial intelligence. It didn’t rely on a knowledge base and it didn’t create new rules based on original rules. Instead, it relied on brute force to scan a database, then, using the probability of words being next to other words, would output what amounts to an essay or article to answer the question you asked.
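That word-adjacency approach can be sketched as a toy bigram model: learn which word tends to follow which, then generate text by sampling those adjacencies. This is an illustration of the general technique, not the division’s actual code, and the tiny corpus is made up:

```python
import random
from collections import defaultdict

# Toy bigram model: record which word follows which in a corpus,
# then emit text by repeatedly sampling a likely next word.
corpus = ("the patent covers a method the patent claims a method "
          "for searching a database").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)  # duplicates encode frequency

def generate(start, n, seed=0):
    """Emit up to n words, each chosen from words seen after the last."""
    random.seed(seed)
    words = [start]
    for _ in range(n - 1):
        nxt = follows.get(words[-1])
        if not nxt:  # dead end: no word ever followed this one
            break
        words.append(random.choice(nxt))
    return " ".join(words)

print(generate("the", 8))
```

With a database-sized corpus instead of one sentence, the same brute-force adjacency counting starts producing passable prose, which is essentially what I watched that program do.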

Here’s the twist I suggested. I told the owner of the firm and the developers, “You didn’t eliminate all the work lawyers do when researching patents, you just eliminated all encyclopedias. You also just completed every student’s homework assignment.”

They laughed at me. “Who would want that?”

You could probably understand why I turned down the offer to head the division, much to the surprise of the owner.

With that, I shunned the last possibility of finding the artificial intelligence Holy Grail. But that doesn’t mean I stopped paying attention.

When these newfangled artificial intelligence applications appeared, I couldn’t wait to try them. My first question was “Who is Chris Carosa?” I figured I’d know the answer to that question and it would be easy for me to spot an error.

The response was pretty good. Except for the part where it said I graduated from St. Bonaventure (not that there’s anything wrong with that). That was an acceptable “first try” error.

Things got dicey, though, when I started asking it about hamburger history. It repeated common errors. I conceded this might just be a relic of the half century of newspaper articles that repeated these same fallacies. I asked about a couple of these articles. It responded it was not a search engine. (Really?)

So, I asked about my book Hamburger Dreams. At first, it couldn’t find it. When it did, it said a columnist from Alaska wrote the book. She never wrote a book by that name, but she did once write a column about working at a hamburger stand.

I directed it to the book’s Amazon page. It took several iterations before it admitted I was the author.

Here’s the deal. This isn’t real artificial intelligence. It’s artificial AI. It’s brute force database processing. Just not as good as that information services division program I was advising on in the 1990s.

Mind you, this is one way artificial intelligence is defined now. AI has been dumbed down. It depends on the same sort of probabilistic analysis used by Modern Portfolio Theory (“MPT”) a half century ago. Those stock-picking AI programs in the 1980s failed because they incorporated MPT formulas. Financial firms back then learned the hard way that coincidence does not equal causation.

Just because a correlation exists doesn’t mean you can draw any reliable conclusions from it.
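The trap is easy to demonstrate. In this sketch (a toy illustration with made-up random data, not any firm’s model), we screen hundreds of pure-noise “signals” against pure-noise “returns”; with enough candidates, some correlate strongly by sheer chance:

```python
import random

# With enough candidate signals, some will correlate strongly with
# returns by pure chance -- the trap behind naive probabilistic
# stock-picking. All data here is random noise.
random.seed(42)

def corr(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

returns = [random.gauss(0, 1) for _ in range(20)]
signals = [[random.gauss(0, 1) for _ in range(20)] for _ in range(500)]
best = max(abs(corr(s, returns)) for s in signals)
print(round(best, 2))  # strong-looking correlation, yet meaningless
```

The best-looking of 500 meaningless signals will usually show a correlation well above 0.6 over 20 periods, which is exactly the kind of coincidence those 1980s stock-picking programs mistook for insight.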

That’s why portfolio managers didn’t have true AI forty years ago. And it’s why we don’t have true AI apps today.

