Dragons in the Algorithm
Adventures in Programming
by Michael Chermside

It's Not Exactly Artificial Intelligence

Edsger W. Dijkstra possessed an extraordinary ability to communicate elegantly and precisely. I admire, but could never emulate, his ability to put an entire essay into a single statement. Yesterday I came across the following gem:

The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.

—E. W. Dijkstra [cite]

This is insightful and bears careful consideration. I'll add my own statement into the mix:

Artificial Intelligence (AI) is defined as that which a human can do, and a computer cannot (yet).

Dijkstra's point is that most of the arguments about whether computers can "think" are really just debates about how we want to define the word "think". If we want to ask whether computers are capable of performing any particular task, then there are very clear and specific ways to answer this question. My point is that the goal posts are moving. AI, like fusion power, is always 20 years away.

The first attempt to set a concrete goal for AI success was the Turing Test, proposed by A. M. Turing in 1950. In the Turing Test, a computer and a human each converse with an examiner over a teletype. The computer is deemed to have passed the test if the examiner can't tell which is the human and which is the computer. There are apocryphal stories of the 1966 program Eliza fooling unwarned humans who met it at a chat site—but today there's a $100,000 prize which remains unclaimed because the best chat programs available still can't fool a forewarned person.

But there are other areas where computers have succeeded. It's not exactly AI, but a pocket calculator can outperform any mathematician at computation. It's not exactly AI, but today many of the workers on the auto assembly line are there to tend the robots. It's not exactly AI, but the 1950s image of every businessman having his own personal secretary has been replaced with every businessperson having their own desktop PC. It's not exactly AI, but billions of dollars are invested by "quants" on the basis of formulas in spreadsheets. It's not exactly AI, but what could once be done in hours by a skilled reference librarian can now be done in seconds by anyone using Google and Wikipedia. It's not exactly AI, but my IDE pops up with a list of suggested method names if my typing slows for a second. It's not exactly AI, but most artists use Photoshop, digital mixers, Windows Movie Maker/iMovie, or whatever digital tool is appropriate to their medium.

[Image: Clerical Occupations]

I think that in the future AI will be a logical extension of this trend. Aside from the notoriety (and prize money), there is little reason to invent an AI that can replace a human: we have no shortage of humans. Nor are we likely to invent an AI that will provide the motivation (deciding what to accomplish), because it's a very hard problem and humans are so very willing to be the deciders. But I think we are likely to invent more tools that allow humans to augment their abilities: to help us research, plan, decide, and communicate. A human with these aids will be far smarter and more capable than one without. In fact, that's already true today: without a pen and paper (and no internet research) I could never have written this essay—but of course, that's not exactly artificial intelligence.

Posted Sun 09 December 2007 by mcherm in Programming