AI is about solving problems that you can't solve yourself. You can program a computer to beat you at chess. You understand the search algorithm, but can't execute it in your head. If you could, then you could beat the computer, and your program will have failed.

I disagree. AI is about creating a reasoning system (that may well be faster than I am). Even if it is slower than I am, I will still have succeeded (if only because Moore's Law will ensure that it eventually becomes faster). The computer can beat me at chess because it can brute-force search faster (and thus examine more positions in a given period) than I can. However, with hindsight, I can certainly understand how it beat me.
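
For what it's worth, the core of the search procedure fits in a dozen lines. Here is a bare-bones negamax sketch (my own illustration, not any real engine's code) with placeholder hooks for the game rules -- the procedure is trivial to read, yet at tournament depth it visits far more positions than I could ever hold in my head:

# Bare-bones fixed-depth negamax. `evaluate`, `legal_moves`, and
# `apply_move` are placeholder hooks for whatever game representation
# you plug in; `evaluate` scores a position from the point of view of
# the player to move.
def negamax(state, depth, evaluate, legal_moves, apply_move):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    best = float("-inf")
    for move in moves:
        # The opponent's best reply, negated, is our score for this move.
        score = -negamax(apply_move(state, move), depth - 1,
                         evaluate, legal_moves, apply_move)
        best = max(best, score)
    return best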

Likewise, you should be able to program a computer to solve problems that are beyond your capacity to understand. You understand the learning algorithm, but not what it has learned. If you could understand how it arrived at a particular solution, then you have failed to create an AI smarter than yourself.

I disagree. I don't believe that there is anything that is beyond my capacity to understand (given sufficient time). I may not be able to calculate something, but if some reasoning system can explain its reasoning, I can certainly verify it. I keep challenging you to show me something that is beyond my understanding. Phil Goetz has argued that vector systems are not understandable, but it is my contention that vector systems are merely curve-fitting approximation systems that don't have anything to understand (since in virtually all cases they either conflate real-world variables -- if n is too small -- or split and overfit real-world variables -- if n is too large).
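
As a toy illustration of that conflate/overfit point (the data and degrees below are invented purely for the example), fit noisy samples of a simple curve with polynomials of increasing degree n and compare each fit against the noise-free curve:

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)
y = np.sin(3 * x) + rng.normal(scale=0.1, size=x.size)   # noisy samples of a known curve

x_test = np.linspace(-1, 1, 200)
y_true = np.sin(3 * x_test)                               # the underlying "real-world" signal

for n in (1, 5, 15):            # too few parameters, roughly right, too many
    coeffs = np.polyfit(x, y, deg=n)
    mse = np.mean((np.polyval(coeffs, x_test) - y_true) ** 2)
    print(f"degree {n:2d}: test MSE = {mse:.4f}")

There is nothing to "understand" in the fitted coefficients themselves; they are just the best compromise the chosen n allows.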

----- Original Message ----- From: "Matt Mahoney" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Wednesday, November 29, 2006 2:13 PM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis


AI is about solving problems that you can't solve yourself. You can program a computer to beat you at chess. You understand the search algorithm, but can't execute it in your head. If you could, then you could beat the computer, and your program will have failed.

Likewise, you should be able to program a computer to solve problems that are beyond your capacity to understand. You understand the learning algorithm, but not what it has learned. If you could understand how it arrived at a particular solution, then you have failed to create an AI smarter than yourself.

-- Matt Mahoney, [EMAIL PROTECTED]

----- Original Message ----
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Wednesday, November 29, 2006 1:25:33 PM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis

A human doesn't have enough time to look through millions of pieces of
data, and doesn't have enough memory to retain them all in memory, and
certainly doesn't have the time or the memory to examine all of the
10^(insert large number here) different relationships between these
pieces of data.

True; however, I would argue that the same is true of an AI.  If you assume
that an AI can do this, then *you* are not being pragmatic.

Understanding is compiling data into knowledge.  If you're just brute-forcing
millions of pieces of data, then you don't understand the problem -- though
you may be able to solve it -- and it is not possible to validate your
answers or to place intelligent/rational boundaries/caveats on them.

----- Original Message ----- From: "Philip Goetz" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Wednesday, November 29, 2006 1:14 PM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis


On 11/14/06, Mark Waser <[EMAIL PROTECTED]> wrote:
> Even now, with a relatively primitive system like the current
> Novamente, it is not pragmatically possible to understand why the
> system does each thing it does.

    "Pragmatically possible" obscures the point I was trying to make with
Matt.  If you were to freeze-frame Novamente right after it took an action,
it would be trivially easy to understand why it took that action.

> because
> sometimes judgments are made via the combination of a large number of
> weak pieces of evidence, and evaluating all of them would take too
> much time....

    Looks like a time problem to me . . . . NOT an incomprehensibility
problem.

This argument started because Matt said that the wrong way to design
an AI is to try to make it human-readable, and constantly look inside
and figure out what it is doing; and the right way is to use math and
statistics and learning.

A human doesn't have enough time to look through millions of pieces of
data, and doesn't have enough memory to retain them all in memory, and
certainly doesn't have the time or the memory to examine all of the
10^(insert large number here) different relationships between these
pieces of data.  Hence, a human shouldn't design AI systems in a way
that would require a human to have these abilities.
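
To put a rough number on that (a back-of-the-envelope Python sketch; the million-item figure is just the "millions of pieces of data" above):

from math import comb, log10

n = 10**6                          # a million pieces of data
print(f"{comb(n, 2):.2e}")         # pairwise relationships alone: ~5.0e11
print(int(n * log10(2)) + 1)       # digits in 2**n, the count of possible subsets: 301,030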

The question is all about pragmatics.  If you dismiss pragmatics, you
are not part of this conversation.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
