My point is that humans make decisions based on millions of facts, and we do 
this every second.  Each of those facts depends on other facts, so the chain of 
reasoning ends up covering the entire knowledge base.

I said "millions", but we really don't know.  This is an important number.  
Historically we have tended to underestimate it.  If the number is small, then 
we *can* follow the reasoning, make changes to the knowledge base and predict 
the outcome (provided the representation is transparent and accessible through 
a formal language).  But this leads us down a false path.
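
To make "follow the reasoning" concrete, here is a minimal sketch (toy Python, 
not a proposal for how knowledge should actually be represented).  It assumes a 
made-up knowledge base where each fact depends on a few earlier facts; the 
fan-out of 3 and the size of 100,000 are invented purely for illustration.

    import random

    def supporting_facts(kb, fact):
        # Collect every fact reachable by following "depends on" links from fact.
        seen, stack = set(), [fact]
        while stack:
            f = stack.pop()
            if f not in seen:
                seen.add(f)
                stack.extend(kb.get(f, ()))
        return seen

    # Synthetic knowledge base: fact i depends on up to 3 randomly chosen earlier facts.
    random.seed(0)
    kb = {i: random.sample(range(i), min(i, 3)) for i in range(1, 100_000)}

    # Tracing the support for one fact already touches a large fraction
    # (typically tens of thousands) of this toy base.
    print(len(supporting_facts(kb, 99_999)))

Even in this toy, the support set behind a single fact is far more than a person 
could inspect line by line.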

We are not so smart that we can build a machine smarter than us, and still be 
smarter than it.  Either the AGI has more algorithmic complexity than you do, 
or it has less.  If it has less, then you have failed.  If it has more, and you 
try to explore the chain of reasoning, you will exhaust the memory in your 
brain before you finish.

 
-- Matt Mahoney, [EMAIL PROTECTED]

----- Original Message ----
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 3:16:54 PM
Subject: Re: [agi] A question on the symbol-system hypothesis

I consider the last question in each of your examples to be unreasonable 
(though for very different reasons).

In the first case, "What do you see?" is a nonsensical and unnecessary 
extension of a rational chain of logic.  The visual subsystem, which is not 
part of the AGI, has reported something, and unless there is a good reason 
not to, the AGI should accept that report as a valid fact and the root of a 
knowledge chain.  Extending past this point with a spurious, open-ended 
question is silly and entirely unnecessary.  This knowledge chain is 
isolated.

In the second case, I don't know why you're doing any sort of search 
(particularly since there wasn't any question preceding it).  The AI 
needed gas, it found a gas station, and it headed for it.  You asked why it 
waited until a given time, and it told you.  How is this not isolated?

----- Original Message ----- 
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Thursday, November 16, 2006 3:01 PM
Subject: Re: [agi] A question on the symbol-system hypothesis


Mark Waser <[EMAIL PROTECTED]> wrote:
>Give me a counter-example of knowledge that can't be isolated.

Q. Why did you turn left here?
A. Because I need gas.
Q. Why do you need gas?
A. Because the tank is almost empty.
Q. How do you know?
A. Because the needle is on "E".
Q. How do you know?
A. Because I can see it.
Q. What do you see?
(depth first search)

Q. Why did you turn left here?
A. Because I need gas.
Q. Why did you turn left *here*?
A. Because there is a gas station.
Q. Why did you turn left now?
A. Because there is an opening in the traffic.
(breadth first search)
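
If you want to see the difference mechanically, here is a minimal sketch in 
Python.  The justification structure below is just the dialogue above written 
as a dictionary; none of it is meant as an actual AGI design, only as a way to 
show the two traversal orders.

    from collections import deque

    # Each answer maps to the answers that support it (taken from the dialogue above).
    why = {
        "turned left here":   ["need gas", "gas station here", "opening in traffic"],
        "need gas":           ["tank almost empty"],
        "tank almost empty":  ["needle is on E"],
        "needle is on E":     ["I can see it"],
        "gas station here":   [],
        "opening in traffic": [],
        "I can see it":       [],
    }

    def depth_first(root):
        # First dialogue: keep questioning the most recent answer before moving on.
        order, stack = [], [root]
        while stack:
            answer = stack.pop()
            order.append(answer)
            stack.extend(reversed(why[answer]))  # first supporting answer is asked next
        return order

    def breadth_first(root):
        # Second dialogue: question every answer at one level before going deeper.
        order, queue = [], deque([root])
        while queue:
            answer = queue.popleft()
            order.append(answer)
            queue.extend(why[answer])
        return order

    print(depth_first("turned left here"))
    print(breadth_first("turned left here"))

Either way the order is mechanical; the problem is how far the chain of support 
extends.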

It's not that we can't do it in theory.  It's that we can't do it in 
practice.  The human brain is not a Turing machine.  It has finite time and 
memory limits.

-- Matt Mahoney, [EMAIL PROTECTED]



-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303

