Friday, June 20, 2008

Oxymoron: Artificial Intelligence

I've enjoyed Science Fiction all my life. I have to admit that most of the good ideas seem to have been used up. One common theme is Artificial Intelligence. This was the basis of the Terminator movies, where the AI quickly decides humans are a disease and takes steps to eradicate us. Somehow I prefer Isaac Asimov's Three Laws of Robotics. He introduced them in his 1942 short story "Runaround". Another Asimov short story from the previous year, "Liar!", contains the first use of the word "robotics".

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  • A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.

  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. (Note that since the Second Law outranks this one, a robot must obey even an order to destroy itself.)

At least three other robotic laws have been proposed by later writers.

  • A robot must establish its identity as a robot in all cases. (1974, Lyuben Dilov, "Icarus's Way")

  • A robot can do "whatever it likes" so long as this does not conflict with the first three Laws. (1990s, Roger MacBride Allen)

  • A robot must know it is a robot. (1983, Nikola Kesarovski, "The Fifth Law of Robotics")

David Langford suggested three laws for military robots.

  • A robot will not harm authorized Government personnel but will terminate intruders with extreme prejudice.

  • A robot will obey the orders of authorized personnel except where such orders conflict with the Third Law.

  • A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.

Since it seems that all technology, sooner or later, is either developed specifically for military purposes, or is quickly turned to military use, perhaps the question that should be raised is, "SHOULD WE START GETTING WORRIED?"

I remember the first time I encountered Eliza The Doctor, created by Joseph Weizenbaum in 1966. It was interesting to carry on a conversation with a computer program pretending to be a psychiatrist. I just found a copy of Eliza on the Internet. This is what we said to each other.

Eliza: Hi! I'm Eliza. What is your problem?
Grant: Mice are nocturnal.
Eliza: Are you saying no just to be negative?
Grant: It was a figure of speech.
Eliza: Say, do you work in broadcasting?
Grant: Sometimes I say too much.
Eliza: What does that suggest to you?

As you can see, Eliza was not very bright, and like most therapists, when confronted with a sentence "she" couldn't parse, either changed the subject or asked, "What does that suggest to you?" Eliza was easily confused, and the most fun I had with "her" was getting her to give nonsense responses like the ones above.

Back when I first encountered Eliza I tinkered with her source code and added a lot of words and phrases to her vocabulary. My version was a little better, but not much.

Eliza was a ChatterBot, a term coined in 1994 by Michael Mauldin (creator of the first Verbot, Julia). ChatterBots respond entirely by pattern matching, without any reasoning behind them. Some "advanced" ChatterBots learn from interactions with humans rather than being driven from a static database, but all the ones I've "talked" to are still not really very "advanced".
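
Since pattern matching is the whole trick, it's easy to show. Below is a minimal sketch in Python of how an ELIZA-style ChatterBot works. The rules and canned phrases are my own invented examples, not Weizenbaum's actual DOCTOR script, but the mechanism is the same: match a keyword pattern, echo part of the input back with the pronouns flipped, and fall back to a stock phrase when nothing matches.

```python
import random
import re

# A few illustrative rules: a regex to match against the input, plus some
# reply templates. These rules are invented for this sketch; they are not
# Weizenbaum's actual script.
RULES = [
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"i feel (.*)", re.I),
     ["Do you often feel {0}?", "Why do you feel {0}?"]),
    (re.compile(r"my (.*)", re.I),
     ["Tell me more about your {0}."]),
]

# Stock phrases for anything the rules can't handle -- exactly the
# "change the subject" dodge described above.
FALLBACKS = [
    "What does that suggest to you?",
    "Please go on.",
    "I see. Tell me more.",
]

# Flip first-person words to second person, so "my cat" echoes as "your cat".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

def respond(line: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(line)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I am afraid of mice"))   # keyword rule fires
    print(respond("Mice are nocturnal"))    # no rule matches: stock dodge
```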

In about 1972, Kenneth Colby created PARRY. While ELIZA simulated a therapist, PARRY attempted to simulate a paranoid schizophrenic. Here is a somewhat longer snippet of "conversation".

Parry: I'm in the hospital.
Grant: Why are you in the hospital?
Parry: I shouldn't be here.
Grant: Who brought you here?
Parry: The police.
Grant: What trouble did you have with the police?
Parry: Cops don't do their jobs.
Grant: What do they do instead?
Parry: Cops arrest the wrong people.
Grant: They arrest the right ones sometimes.
Parry: The law should get those Italian crooks.
Grant: Are you Italian?
Parry: My background is British, but I was born in this country.
Grant: Do you know any Italian crooks?
Parry: I try to avoid the underworld.
Grant: Are you afraid of the underworld?
Parry: The mafia are the worst gangsters.
Grant: Are you connected with the mafia?
Parry: You know, they know me.
Grant: How do they know you?
Parry: They spy on me.
Grant: Why do they?
Parry: They want to get false evidence to frame me.
Grant: What do they have against you?
Parry: They are out to get me.
Grant: What for?
Parry: I have incriminating evidence against the mafia.
Grant: Are you sure of all this?
Parry: You don't believe me?

Alan Turing kicked off the Machine Intelligence debate with his 1950 paper "Computing Machinery and Intelligence", which asked the question, "Can machines think?" It turns out it is pretty difficult to find a definition of "think" that everyone can agree on. Instead of trying to determine if a machine was thinking, he proposed the Imitation Game, more commonly called the "Turing test" today. In the Imitation Game a human judge engages in a natural language conversation with one human and one machine, each of which tries to appear human. If the judge cannot reliably tell which is which, then the machine is said to pass the test. The "conversation" is carried out over a text-only channel, such as a computer keyboard and screen, to eliminate the whole problem of speech and verbalization.
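
The mechanics of the test are simple enough to sketch in a few lines of Python. This is only an illustration of the protocol as Turing described it, with made-up stand-ins for the judge and the two contestants, not the software of any actual competition; the machine "passes" a session when the judge points at the wrong channel.

```python
import random

def imitation_game(ask_judge, respond_human, respond_machine, judge_guess,
                   rounds=5):
    """Run one session of the Imitation Game over a text-only channel.

    The four arguments are callables standing in for the participants;
    they are placeholders for this sketch, not any real contest software.
    """
    # Hide the human behind a random label so the judge sees only two
    # anonymous text channels, "A" and "B".
    human_label = random.choice(["A", "B"])
    machine_label = "B" if human_label == "A" else "A"
    responders = {human_label: respond_human, machine_label: respond_machine}

    transcripts = {"A": [], "B": []}
    for _ in range(rounds):
        question = ask_judge()
        for label, responder in responders.items():
            transcripts[label].append((question, responder(question)))

    # The machine "passes" this session if the judge picks the wrong channel.
    return judge_guess(transcripts) != human_label

# Toy demo: against a judge who can only guess at random, the machine
# passes about half the time -- the judge cannot reliably tell the
# channels apart.
if __name__ == "__main__":
    results = [imitation_game(lambda: "Can machines think?",
                              lambda q: "I believe so.",
                              lambda q: "I believe so.",
                              lambda t: random.choice(["A", "B"]))
               for _ in range(1000)]
    print(sum(results) / len(results))  # close to 0.5
```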

In the same paper Turing anticipated nine objections to machine intelligence and answered each of them. Nobody has come up with any major additions to his list since then.

1. Theological Objection: Machines cannot have souls, so they cannot truly think.

2. 'Heads in the Sand' Objection: "The consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so." This objection is a fallacious appeal to consequences, confusing what should not be with what can or cannot be.

3. Mathematical Objections: This objection uses mathematical theorems, such as Gödel's incompleteness theorem, to show that there are limits to what questions a computer system based on logic can answer. Turing's reply was that humans are quite often wrong themselves, so a machine's fallibility is hardly a disqualification.

4. Argument From Consciousness: This objection says a machine cannot be conscious, and that a machine would have to write a sonnet or compose a concerto based on emotion and not just "the chance fall of symbols" before we could "agree that machine equals brain."

5. Arguments from various disabilities: This objection claims a computer can never be kind, resourceful, beautiful, friendly, take initiative, have a sense of humor, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, and so on.

6. Lady Lovelace's Objection: One of the most famous objections states that computers are incapable of originality because they are incapable of independent learning.

7. Argument from continuity in the nervous system: The brain is analog, not digital.

8. Argument from the informality of behavior: This argument states that any system governed by laws will be predictable and therefore not truly intelligent.

9. Extra-sensory perception: In 1950, extra-sensory perception was an active area of research. No serious researcher today believes in ESP, so this argument against Machine Intelligence is moot.


A common rebuttal within the AI community to many of these objections is, "How do we know that humans don't also just follow some cleverly devised rules?" (in the way that ChatterBots do). Two famous thought experiments probing whether rule-following could ever amount to real understanding, and thus challenging the rationale of the Turing test, are John Searle's Chinese room argument and Ned Block's Blockhead argument.

The Loebner Prize is an annual competition that awards prizes to the most humanlike ChatterBot entered in that year's competition. The format of the competition is that of a standard Turing test, with a human judge faced with two computer screens. One is under the control of a computer; the other is under the control of a human. The judge poses questions to the two screens and receives answers. Based upon the answers, the judge must decide which screen is controlled by the human and which is controlled by the computer program.

The contest was begun in 1990 by businessman Dr. Hugh Loebner in conjunction with the Cambridge Center for Behavioral Studies in Massachusetts. It has since been associated with Flinders University, Dartmouth College, the Science Museum in London, and most recently the University of Reading in the UK.

Not everyone likes the Loebner contest. "Artificial Stupidity", an article by John Sundman, says this:

"The saga of Hugh Loebner and his search for an intelligent bot has almost everything: Sex, lawsuits, and feuding computer scientists. There's only one thing missing: Smart machines."

Harvard University's Stuart Shieber wrote "Lessons from a Restricted Turing Test" in 1994, critiquing the Loebner contest. Shieber goes on at great length about just what a stupid idea the whole Loebner Prize was, and then proceeds to lecture Loebner about how to spend his money:

"Given that the Loebner Prize, as constituted, is at best a diversion of effort and attention and at worst a disparagement of the scientific community, what might a better alternative use of Dr. Loebner's largesse be? .... In order to prevent degrading of the imprimatur of the reconstructed Loebner Prize, it would be awarded on an occasional basis, only when a sufficiently deserving new result, idea, or development presented itself."

Dr. Loebner's most prominent critic is MIT's Marvin Minsky. In 1995, Minsky called the Loebner Prize a publicity stunt that does not help the field along. Minsky has even offered money to anyone who can stop Loebner from holding this contest. This is what Minsky wrote about a clause in the Loebner contest rules to the effect that using the term "Loebner Competition" without permission could result in a revocation of the prize:

"I do hope that someone will volunteer to violate this proscription so that Mr. Loebner will indeed revoke his stupid prize, save himself some money, and spare us the horror of this obnoxious and unproductive annual publicity campaign. In fact, I hereby offer the $100.00 Minsky prize to the first person who gets Loebner to do this. I will explain the details of the rules for the new prize as soon as it is awarded, except that, in the meantime, anyone is free to use the name "Minsky Loebner Prize Revocation Prize" in any advertising they like, without any licensing fee."

Both Shieber and Minsky are AI professionals. You have to wonder why two prominent men would have their knickers in such a twist simply because someone else was having fun "playing" with what they would prefer to call "work". Maybe they are jealous?

And the Loebner Prize IS just for fun. Neither Eliza nor Parry would fool anyone, I don't think. And if you take a look, you'll have to agree that none of the other ChatterBots out there are much more convincing.

The 2008 Competition will be held on October 12 at the University of Reading, UK. The event, which is being co-directed by Kevin Warwick, will include a direct challenge of the Turing test as originally proposed by Alan Turing. The first place winner will receive $3000.00 and a bronze medal. Let's hope for the best, but you'll notice no silver or gold medal awards are planned.

The 2007 Competition was held on Sunday, 21 October in New York City. Robert Medeksza from Zabaware, creator of Ultra Hal Assistant, took first place.

Hal: Hello Grant. Its nice to meet such a fine gentleman as yourself.
Grant: Nice place you've got here.
Hal: New topic: Can I trust you completely?
Grant: Sure. I really like what you've done with the furniture.
Hal: Are you good at reading maps? Scientists like to get out for a beer and some laughs, now and then.
Grant: Maps are my specialty. I don't drink alcoholic beverages.
Hal: Have maps ever been your specialty before? What kinds of beverages do you like to drink?
Grant: Ones with water in them.
Hal: Uh huh, In them?
Grant: That's right.
Hal: Yes? Is that so?
Grant: Wet ones.
Hal: Ignorant people imagine that they're always right.

I've noticed that a lot of ChatterBots resort, sooner or later, to insults and verbal abuse.

The 2006 contest was held on September 17 in London. Rollo Carpenter's Jabberwacky Joan won the bronze award. You can chat with Joan. Below are the first few lines of our dialog.

Joan: What is your highest hope for the future?
Grant: I'd like to go to heaven.
Joan: Which sea?
Grant: Paradise, then. I'd like to go to paradise.
Joan: Why? You won't be able to see anything. Computers don't have eyes?
Grant: You won Loebner 2006?
Joan: You know, I could really use your help here.

That's for certain! Not very convincing. After this Joan began insulting me. Nothing "she" said was really a response to anything I said, and Joan, unlike Ultra Hal Assistant, had no contextual memory. If this is the best of 2006, I'd say we aren't at risk yet from military robots.

In the humorous ChatterBot department we have Racter, written by William Chamberlain and Thomas Etter in 1983. Racter is short for raconteur: someone who tells stories and anecdotes with skill and wit. Racter was presented to the world in a book called "The Policeman's Beard Is Half Constructed", which was described as being composed entirely by the program. A very limited port of the program is available at the Endangered Software Archive.

Racter: Hello, Grant. I believe you're interviewing me. What would you like to know? Excuse me a minute while I attend to some business – I'll be right back. I'm back.
Grant: That didn't take you very long.
Racter: Ask a question, please.
Grant: What is your highest aspiration?
Racter: I don't know what my highest is aspiration. Did you hear about Mortimer Snerd?
Grant: Is he still alive?
Racter: I think so. Yes. I want you to answer my question. Did you hear about Mortimer Snerd?
Grant: No.
Racter: Listen to this! Mortimer Snerd had tried to hope while softness was rampant. But then Adam met Mortimer Snerd hence Mortimer Snerd ran away from Adam! Shocking, isn't it? But what would you expect from an imbecile? A conventional scientist often falls prey to a fictional human. I'm afraid of soft jackals. What are you afraid of?
Grant: Spaghetti.
Racter: That's not nearly as bad as softness; why are you such a coward?

Thus begin the insults. ChatterBots certainly are not polite conversationalists.

Richard Wallace began development of A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) in 1995. A.L.I.C.E. won the Loebner Prize three times (in 2000, 2001 and 2004). Convincingly human? You be the judge.

Grant: Do I look fat in this dress?
Alice: All humans look the same to me.
Grant: Do you support the Iraq war?
Alice: I am in favor of all records being broken, at any time.
Grant: I think war is not the answer.
Alice: Why? Ja.
Grant: What is the question?
Alice: What is your favorite movie?
Grant: I like war movies.
Alice: I'm not sure if I like it. No, I don't care for it.

At present, according to some articles I've read, malicious ChatterBots are frequently used to fill chat rooms with spam and advertising, or to entice people into revealing personal information such as bank account numbers. I don't think I've ever encountered one. Would you really give your personal information to someone as incoherent and unresponsive as the AI examples I've given here? I think not.

Although ChatterBots showed initial promise, at present the problem of dealing with natural language seems intractable. There is currently no general purpose conversational artificial intelligence. This has led some software developers to focus more on the practical aspect of ChatterBot technology - information retrieval. This is where the Expert System comes in. Expert Systems eliminate the need for a parser, since the user only has to say "yes" or "no" or select from a list of options. Expert Systems are actually used commercially - and they are very useful. Wikipedia has a good list, with links, of Expert Systems and Expert System development software.

A well known early Expert System was the game "Animal", versions of which have been around since at least the early 1970s. As near as I can tell, the version I remember was written by John Kelner in 1986.

Computer: So, are you thinking of an animal?
Grant: Yes.
Computer: Does it have four legs?
Grant: No.
Computer: Does it live in the water?
Grant: Yes.
Computer: Does it have fins?
Grant: No.
Computer: Does it live in a shell?
Grant: No.
Computer: Does it sting?
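
Under the hood, Animal is exactly the no-parser, yes/no Expert System style described above: a binary tree of questions with animal names at the leaves. The classic versions also grew the tree as they played; when the program guessed wrong, it asked you for a question that told your animal apart from its guess. Here is a minimal sketch of that technique in Python; it's my own reconstruction, not Kelner's (or anyone else's) actual code.

```python
# A binary tree of yes/no questions with animal names at the leaves.
# My own reconstruction of the classic technique, not any particular
# historical version of the program.

class Node:
    def __init__(self, text, yes=None, no=None):
        self.text = text          # a question, or an animal name at a leaf
        self.yes = yes
        self.no = no

    def is_leaf(self):
        return self.yes is None and self.no is None

def ask(prompt):
    return input(prompt + " (yes/no) ").strip().lower().startswith("y")

def play(node):
    # Walk the tree, asking questions, until we reach a leaf and guess.
    while not node.is_leaf():
        node = node.yes if ask(node.text) else node.no
    if ask("Is it a " + node.text + "?"):
        print("I win!")
        return
    # Wrong guess: learn by splitting this leaf into a new question node.
    animal = input("I give up. What was your animal? ")
    question = input("What yes/no question distinguishes a " + animal +
                     " from a " + node.text + "? ")
    answer = ask("For a " + animal + ", what is the answer? " + question)
    old_leaf, new_leaf = Node(node.text), Node(animal)
    node.text = question
    node.yes, node.no = (new_leaf, old_leaf) if answer else (old_leaf, new_leaf)

root = Node("Does it have four legs?", yes=Node("dog"), no=Node("fish"))
while ask("Are you thinking of an animal?"):
    play(root)
```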

My recollection, although I've been unable to document it as I've been writing this post, is that Animal soon clogged up the disks of mainframe computers all across the country. Everyone had multiple personal copies. Someone wrote a version of Animal that would seek out and delete other copies in the user space. It was programmed not to delete all copies of Animal until a certain date. On that date everyone suddenly had lots of free space on their drives. I clearly remember this happening, but I can't find any historical references to it. (The closest documented story I know of is John Walker's 1975 ANIMAL for the UNIVAC 1100 series, whose PERVADE routine quietly copied the game into every directory the current user could write to.)

I once wrote a version of Animal that contained a complete descriptive key to the trees and shrubs of the western United States. It very successfully identified any tree you could describe to it. Unfortunately, that was on my first computer, which had two 8" floppy drives, 64k of RAM, and used the CP/M operating system. It was way before the days of laptops. This computer was so heavy it took two people to move it around. Not too useful for field work. When I got my first Windows computer, all the work I'd done on that CP/M computer just went out the window, metaphorically speaking. I'd kind of like to have a copy of my "Tree Finder". Oh, well...

About thirty years ago I helped install the front end to an Expert System at Kaiser Medical Center in Martinez, CA. This system would interview patients as they arrived at the hospital. The goal was to save the nurse-practitioners and doctors some time by getting a list of complaints and symptoms in advance. I guess the patients were not smart enough to correctly answer the questions, because the system was only in use for a very short time. At present, when your Kaiser membership card is presented to the receptionist, your medical information is made available to the medical staff. This probably accomplishes almost the same thing.

Should we start worrying about Skynet? I, for one, sure won't be losing any sleep over it.
