Very good, I agree,
and this is one of the requirements for the Project Halo contest (the system had to take and pass the AP Chemistry exam):
http://www.projecthalo.com/halotempl.asp?cid=30

Also, it is a critical task for expert systems to explain why they are doing what they are doing. For business applications, I for one am not going to blindly trust what the AI says without a little background.

James Ratcliff

Mark Waser <[EMAIL PROTECTED]> wrote:
> Even now, with a relatively primitive system like the current
> Novamente, it is not pragmatically possible to understand why the
> system does each thing it does.

"Pragmatically possible" obscures the point I was trying to make with
Matt. If you were to freeze-frame Novamente right after it took an action,
it would be trivially easy to understand why it took that action.

> because
> sometimes judgments are made via the combination of a large number of
> weak pieces of evidence, and evaluating all of them would take too
> much time....

Looks like a time problem to me . . . . NOT an incomprehensibility
problem.

I would also argue that a large number of weak pieces of evidence also
means that Novamente does not *understand* the domain in which it is making
a judgment. It is merely totting up the weight of evidence.
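As an aside, merely totting up weight of evidence doesn't have to be opaque either. Here's a toy sketch (my own illustration, NOT Novamente's actual mechanism -- the names and numbers are invented) of combining many weak cues as summed log-odds, with an "explanation" produced by listing the strongest contributors:

```python
import math

# Toy sketch (NOT Novamente's actual mechanism): combine many weak
# pieces of evidence as log-likelihood ratios, then explain the
# judgment by listing the largest contributors, pro or con.

def combine(evidence):
    """evidence: list of (label, log_likelihood_ratio) pairs."""
    total = sum(llr for _, llr in evidence)
    prob = 1.0 / (1.0 + math.exp(-total))  # logistic of summed log-odds
    return total, prob

def explain(evidence, top_n=3):
    """Return the top_n contributors with the largest absolute effect."""
    return sorted(evidence, key=lambda e: abs(e[1]), reverse=True)[:top_n]

# 1000 weak cues, each nudging the odds only slightly -- individually
# negligible, collectively decisive.
evidence = [("cue_%d" % i, 0.01 if i % 3 else -0.005) for i in range(1000)]
total, prob = combine(evidence)
print("summed log-odds = %.2f, P = %.3f" % (total, prob))
print("strongest cues:", explain(evidence))
```

The point being: even when no single cue matters, the system can still surface the handful that mattered most, which is a perfectly serviceable (if partial) explanation.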

> Sooo... understanding the reasons underlying a single decision made by
> even this AI system **with a heavy reliance on transparent knowledge
> representations** is hard. And it's hard not because aspects of the
> KR are not transparent, it's hard because even simple decisions may
> incorporate masses of internally-generated judgments...

But those masses of internally-generated judgments -- translated into
ordinary language -- would provide a very nice, comprehensible explanation
of why Novamente made the decision that it did. (And yes, I know that
Novamente can't currently translate those internal representations into
ordinary language -- but a human-level AI that resides in our world is going
to have to.)
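And even a crude translation layer goes a long way. A hypothetical sketch (the triple representation and templates here are my own invention for illustration, not anything Novamente actually uses) of rendering internal judgments as an ordinary-language explanation:

```python
# Hypothetical sketch: render internal judgments (here, simple
# relation/subject/object triples with strengths) as plain English.
# The triple format and wording templates are invented for illustration.

TEMPLATES = {
    "causes": "{subj} causes {obj}",
    "resembles": "{subj} resembles {obj}",
    "implies": "{subj} suggests {obj}",
}

def render(judgment):
    """Turn one (relation, subject, object, strength) triple into English."""
    relation, subj, obj, strength = judgment
    clause = TEMPLATES[relation].format(subj=subj, obj=obj)
    return "%s (confidence %.0f%%)" % (clause, 100 * strength)

def explain_decision(action, judgments):
    """Join the judgments, strongest first, into one explanatory sentence."""
    reasons = "; ".join(render(j)
                        for j in sorted(judgments, key=lambda j: -j[3]))
    return "I chose to %s because %s." % (action, reasons)

judgments = [
    ("resembles", "this plant", "poison ivy", 0.8),
    ("causes", "poison ivy", "rashes", 0.95),
]
print(explain_decision("avoid touching the plant", judgments))
```

Crude, yes -- but exactly the kind of human-readable output the leaf-picking example below calls for.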

Your argument is akin to noting that my son can't understand what I'm
doing when he watches me program in real time. Everything is easily
explainable given sufficient time . . . .


----- Original Message -----
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To:
Sent: Tuesday, November 14, 2006 11:03 AM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis


> Even now, with a relatively primitive system like the current
> Novamente, it is not pragmatically possible to understand why the
> system does each thing it does.
>
> It is possible in principle, but even given the probabilistic logic
> semantics of the system's knowledge it's not pragmatic, because
> sometimes judgments are made via the combination of a large number of
> weak pieces of evidence, and evaluating all of them would take too
> much time....
>
> Sooo... understanding the reasons underlying a single decision made by
> even this AI system **with a heavy reliance on transparent knowledge
> representations** is hard. And it's hard not because aspects of the
> KR are not transparent, it's hard because even simple decisions may
> incorporate masses of internally-generated judgments...
>
> -- Ben G
>
> On 11/14/06, BillK <[EMAIL PROTECTED]> wrote:
>> On 11/14/06, James Ratcliff wrote:
>> > If the "contents of a knowledge base for AGI will be beyond our ability
>> > to comprehend" then it is probably not human-level AGI; it is something
>> > entirely new, and it will be alien and completely foreign and unable to
>> > interact with us at all, correct?
>> > If you mean it will have more knowledge than we do, and do things
>> > somewhat differently, I agree on the point.
>> > "You can't look inside the box because it's 10^9 bits."
>> > Size is not an acceptable barrier to looking inside. Wikipedia is huge
>> > and will get infinitely huger, yet I can look inside it and see that
>> > "poison ivy causes rashes" or whatnot.
>> > The AGI will have enormous complexity, I agree, but you should ALWAYS
>> > be able to look inside it. Maybe not in the traditional sense of pages
>> > of code or a simple set of rules, but the AGI itself HAS to be able to
>> > generalize and tell us what it is doing.
>> > So something like "I see these leaves that look like this (supply
>> > picture); can I pick them up safely?" will generate a human-readable
>> > output that can itself be debugged. Or asking about the process of
>> > doing something will generate a possible plan that the AI would follow,
>> > and a human could say "no, that's not right" and cause the AI to go
>> > back and reconsider with new possible information.
>> > We can always look inside the 'logic' of what the AGI is doing, even
>> > if we may not be able to directly change it ourselves easily.
>> >
>>
>>
>> Doesn't that statement cease to apply as soon as the AGI starts
>> optimizing its own code?
>> If the AGI is redesigning itself it will be changing before our eyes,
>> faster than we can inspect it.
>>
>> You must be assuming a strictly controlled development system where
>> the AGI proposes a change, humans inspect it for a week then tell the
>> AGI to proceed with that change.
>> I suspect you will only be able to do that in the very early development
>> stages.
>>
>>
>> BillK
>>
>> -----
>> This list is sponsored by AGIRI: http://www.agiri.org/email
>> To unsubscribe or change your options, please go to:
>> http://v2.listbox.com/member/?list_id=303
>>
>





_______________________________________
James Ratcliff - http://falazar.com


