Sunday, March 09, 2003

South by Southwest 2003 X

Doug Lenat: Understanding Common Sense

Lenat is founder of Cycorp. Here is a rough transcript of Lenat's remarks:


I'm going to tell you about the last 30 years of my life in 45 minutes. I'm going to tell you about building an artificial intelligence, how we're doing it, and why. In a way, this is a talk about the relationship between computers and common sense. It's an adversarial relationship.

I got bitten by the bug to do this while watching Stanley Kubrick's adaptation of Arthur C. Clarke's 2001: A Space Odyssey. By far the most interesting character was HAL. Human beings would be more effective if only computers worked like that. If we could amplify the ways our brains work, we'd be smarter.

There'd be a qualitative change in how smart humanity is. You look at the last time there was a change like that, and it was the introduction of language. Let's go even further back to Alan Turing. In the mid-1960s, Joseph Weizenbaum took another step with a program called ELIZA, whose most famous script was DOCTOR.

In the latest runs of these Turing-style competitions, there's no trouble telling the human and the computer apart. If you ask "What color is a blue car?" you get a garbled, ELIZA-like response that just parrots back what you said. The programs are good at manipulating bits, but they're not really understanding what they're manipulating.
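[To make the "parroting" concrete, here is a toy Python sketch of the keyword-reflection trick ELIZA relied on. It is an illustration only, not code from ELIZA or from any contest entry; there is no model of meaning anywhere, just string shuffling:]

    import re

    def respond(utterance):
        # Match a surface pattern and echo the user's own words back.
        m = re.match(r"what color is an? (\w+) (\w+)\??", utterance.lower())
        if m:
            return f"Why do you ask what color a {m.group(1)} {m.group(2)} is?"
        return "Tell me more."

    print(respond("What color is a blue car?"))
    # -> Why do you ask what color a blue car is?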

Intelligence requires immense knowledge about the world. Why does natural-language understanding require huge amounts of common sense? There are differences in the order of quantifiers hidden in the English language. Why is the Turing test so hard? Intelligence -- even just keeping up your end of a conversation well -- requires having lots of knowledge and applying it fast.
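[A standard illustration of hidden quantifier order, not Lenat's own example: "Every man loves a woman" has two readings in predicate calculus, and only knowledge of the world tells a reader which one the speaker meant:]

    ∀x ∃y (Man(x) → (Woman(y) ∧ Loves(x, y)))     each man loves some woman or other
    ∃y ∀x (Woman(y) ∧ (Man(x) → Loves(x, y)))     one particular woman is loved by every man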

We forget things. We do arithmetic slowly. And we make mistakes that are random. There are dozens of these translogical phenomena that make it harder to simulate human thinking. Early hominids were pre-rational decision makers. Only the later hominids became rational. We are the early hominids.

The question is: is artificial intelligence a dodo or a phoenix? Twenty years ago, anyone who could spell A.I. was working on it. Nowadays it's almost rude to talk about it, yet I'm optimistic. Why? Because I knew we had to codify common sense. We need to bridge the gap between the people designing expert systems and the fundamental philosophical questions about existence, time, and space.

In terms of finding information by inference, I'm talking about asking a question like, "Find me a picture of someone smiling," and getting a picture of a man watching his daughter take her first step. That requires knowing that parents love their children and that taking your first step is an accomplishment.
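[Here is a minimal sketch of that inference chain as a toy forward-chaining rule system in Python. The predicates and rules are invented for illustration; Cyc's actual representation is far richer:]

    # Hypothetical facts attached to a photo's metadata.
    facts = {
        ("child_of", "Daughter", "Bob"),
        ("watching", "Bob", "FirstStep"),
        ("accomplishment_of", "FirstStep", "Daughter"),
    }

    def infer(facts):
        facts = set(facts)
        while True:
            new = set()
            # Rule 1: parents love their children.
            for (rel, a, b) in facts:
                if rel == "child_of":
                    new.add(("loves", b, a))
            # Rule 2: watching an accomplishment of someone you love -> smiling.
            for (rel, event, who) in facts:
                if rel == "accomplishment_of":
                    for (rel2, watcher, ev) in facts:
                        if rel2 == "watching" and ev == event \
                                and ("loves", watcher, who) in facts:
                            new.add(("smiling", watcher, event))
            if new <= facts:
                return facts
            facts |= new

    print(("smiling", "Bob", "FirstStep") in infer(facts))  # True

[The query "find someone smiling" succeeds on this photo only after the two common-sense rules fire, in two passes.]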

It's not complicated reasoning; it's two- or three-step reasoning. Something called predicate calculus converts queries into their operative parts. And some things may not violate the data types of a relational database, yet they violate common sense.
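[For instance, my illustration rather than one from the talk: a record saying someone was hired before she was born satisfies the DATE column type perfectly, and only a common-sense rule catches it:]

    from datetime import date

    # Both values are type-valid dates; only common sense objects.
    row = {"name": "Alice", "born": date(1990, 5, 1), "hired": date(1985, 3, 2)}

    # Common-sense rule: nobody is hired before being born.
    if row["hired"] < row["born"]:
        print("schema-valid but nonsensical:", row["name"])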

We also combine information from multiple sources. You can do fairly shallow reasoning and answer a question experts couldn't answer without drawing on those multiple sources.
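[A toy illustration of that kind of shallow, multi-source reasoning; the sources and question are mine. Neither source alone can say whether two people could have met, but one step of reasoning over both can:]

    # Two independent "sources" of facts.
    birth_years = {"Ada Lovelace": 1815, "Alan Turing": 1912}   # source A
    death_years = {"Ada Lovelace": 1852, "Alan Turing": 1954}   # source B

    def could_have_met(x, y):
        # Shallow rule: meeting requires overlapping lifetimes.
        return birth_years[x] <= death_years[y] and birth_years[y] <= death_years[x]

    print(could_have_met("Ada Lovelace", "Alan Turing"))  # False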

How do we educate Cyc? The more you know, the more you can learn. To get to that crossover point, where the system knows enough to keep learning on its own, we'd have to follow the ugly-duckling approach and cram knowledge into the program by hand.

[At this point, Lenat's colleague Robert Kahlert demonstrated Cyc, running a scenario looking for ideas of how Lenat could use his new Segway while attending SXSW.]


I'm going to skip the first seven lessons we learned and go straight to the eighth and last lesson: we had to give up global consistency. Inconsistency seems like a bad idea, but even a hundred rules are hard to keep mutually consistent, and maintaining consistency across thousands of rules is humanly impossible.
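[A minimal sketch of the resulting design: assertions live in local contexts (Cyc calls these microtheories), each internally consistent, so the knowledge base as a whole can hold contradictions without breaking. The Python API below is invented for illustration and is not Cyc's:]

    # Each context (microtheory) is internally consistent; their union need not be.
    contexts = {
        "NaivePhysicsMt": {("flows_downhill", "Water"): True},
        "FrozenWorldMt":  {("flows_downhill", "Water"): False},  # contradicts the above
    }

    def holds(assertion, context):
        # Queries are always asked relative to a context, never globally.
        return contexts[context].get(assertion)

    print(holds(("flows_downhill", "Water"), "NaivePhysicsMt"))  # True
    print(holds(("flows_downhill", "Water"), "FrozenWorldMt"))   # False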

We're not trying to make the knowledge base as large as possible; we're trying to make it as small as possible. Even so, we've had to add millions of pieces of information to the system by hand.

Our original motivating applications are still our motivating applications. We've been automating the white space instead of the black space: not the words the author wrote, but what the author already knew about the world and left unsaid.

We've gotten half of our money from government and half of it from commercial sources. Nowadays we get more from government because corporations are less interested in looking far ahead into the future.

The system is not done, but it's done enough that it can attend to its own growth. And you can help.
