I don't hold to that belief, though.

For my information, do you have any references for that millions figure, or any 
other number there?  I would like that info.

I believe that there could well be millions of these backward-chaining reasons, 
but they have to, by necessity, either be compressible, so we can estimate them 
all using a smaller set of rules, or likewise be weighted, such that as you 
go further and further back from something, it becomes less important.

Take the simple example of why I am attracted to a person standing in front of 
me.
If it is a hunchbacked girl, the reasoning is fairly simple:
  I don't like: 90%       because I see she is hunchbacked.
  I don't like:  5%       because she has short hair.
  I don't like:  0.00001% because her shoelace is almost touching the floor.

So does the shoelace affect my decision? Yes, at a base level, but it is too 
lowly weighted to be realistically considered.  So I could base the explanation 
on the top N reasons instead.
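
Here is a rough Python sketch of what that weighted reason list and top-N 
cutoff could look like (the reason strings, weights, and cutoff are just 
illustrative assumptions, not from any actual system):

# Rough sketch: a weighted reason list with a top-N cutoff.
# All reason strings and weights are illustrative assumptions.

def top_n_reasons(reasons, n=2):
    """Return the n highest-weighted reasons, dropping the long tail."""
    return sorted(reasons, key=lambda r: r[1], reverse=True)[:n]

reasons = [
    ("she is hunchbacked", 0.90),
    ("she has short hair", 0.05),
    ("her shoelace is almost touching the floor", 0.0000001),
]

for reason, weight in top_n_reasons(reasons):
    print(f"{weight:.2%}: {reason}")

The shoelace reason still exists at the base level of the list, but it never 
makes it into the top-N explanation.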

In the example of the car, the logic generally stops at the level of seeing, as 
the sight of the 'E' level was the basis for getting gas, and a rule learned 
internally, that when it hits 'E' I need to get gas, would carry the most 
weight.

At each level of backward reasoning, the breadth of the spread will only cover 
so many nodes with a high weight, and as we go back along the chain, the weight 
will decrease.
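
One way to picture that, again only as a hypothetical sketch (the explanation 
graph, decay weights, and threshold are all made up for illustration):

# Sketch: walking an explanation chain backward, decaying the weight at
# each step and pruning anything that falls below a threshold.

explanation_graph = {
    "turned left": [("need gas", 0.95)],
    "need gas": [("tank almost empty", 0.9)],
    "tank almost empty": [("needle on E", 0.9)],
    "needle on E": [("saw the gauge", 0.8)],
    "saw the gauge": [],  # perception: the chain bottoms out here
}

def explain(node, weight=1.0, threshold=0.5, depth=0):
    """Recursively print causes, stopping once the combined weight is too low."""
    if weight < threshold:
        return
    print("  " * depth + f"{node} (weight {weight:.2f})")
    for cause, w in explanation_graph.get(node, []):
        explain(cause, weight * w, threshold, depth + 1)

explain("turned left")

Nodes whose combined weight decays below the threshold simply never get 
printed, which is the "too lowly weighted to be realistically considered" cut.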

"Smarter" is another red flag word we cant just throw around, but another way 
to contradict you last statement is simple.

If we have two people, one 'smarter' than the other, and the smarter one does 
something, they can generally explain why they did it.  The second person can 
understand this explanation and learn that ability.

We have to model this kind of explanatory ability in the AI, at varying 
degrees of detail, not just for us, but for other AIs.

If there is a robot driving the car, and another robot watching, it has to be 
able to explain why it did what it did to the other AI.  Likewise to a human.

Likewise, even in the opposite direction, the easiest way to get information to 
an AI about how to drive a car is to ask the person, or to watch them.  One of 
the AI projects was hooked up to a car, its steering wheel, and a camera, and 
watched a person drive.  The person could easily add in things like, 
  "I looked down and saw the gauge here was close to empty" and then decided to 
create a plan to go to the nearest gas station.
  The AI would need to understand a generalization of this rule, and combine it 
with the thousands of other things we know about driving, such as not pulling 
over the curb or running over someone to get gas.
  For this example there is a single major overpowering causative factor.
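
As a hypothetical sketch of that generalization step (the rule format and 
constraint names are assumptions, not any real project's representation):

# Sketch: a rule generalized from watching a driver, combined with other
# known driving constraints before acting.  All names are illustrative.

from dataclasses import dataclass

@dataclass
class Rule:
    condition: str
    action: str
    weight: float

# Generalized from "I saw the gauge was close to empty, so I planned a
# trip to the nearest gas station."
learned = Rule("fuel_gauge_near_empty", "plan_route_to_nearest_gas_station", 0.95)

# A few of the thousands of other constraints any such plan must respect.
constraints = ["do_not_drive_over_curb", "do_not_run_over_pedestrians"]

def act(observations):
    if learned.condition in observations:
        print(f"Trigger: {learned.condition} -> {learned.action}")
        print("Subject to constraints:", ", ".join(constraints))

act({"fuel_gauge_near_empty", "traffic_light_green"})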

If you have a couple more examples that are slightly more nebulous but still 
able to be discussed decently, please post them here so I can think about them.

The only other thing I can think of is a "Why did you pick out that shirt this 
morning?" kind of question.
  And on some level there, if it cannot be explained whatsoever, then maybe I 
would be happy with just a random choice, or the reason list is all so lowly 
weighted as to not really matter, and any of the shirts could have been 
picked.

I had my black jeans and my blue jeans out this morning, and I picked the blue 
ones; the reason is not really known... The difference between picking one or 
the other has no real effect.

Now, how do we add something like that to an architecture?
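
One guess at an answer, sketched under the assumption that every option 
carries a total reason weight (the threshold and option names are made up):

# Sketch: if no option's reasons rise above a significance threshold,
# pick at random and record "no significant reason" as the explanation.

import random

def choose(options, threshold=0.1):
    """options: dict mapping option name -> total reason weight."""
    best = max(options, key=options.get)
    if options[best] >= threshold:
        return best, f"top reason weight: {options[best]:.2f}"
    pick = random.choice(list(options))
    return pick, "no significant reason; chosen at random"

jeans = {"black jeans": 0.02, "blue jeans": 0.03}
pick, why = choose(jeans)
print(pick, "--", why)

The explanation then honestly reports that the weights were all too low to 
matter, instead of inventing a reason.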

James Ratcliff



Matt Mahoney <[EMAIL PROTECTED]> wrote:

My point is that humans make decisions based on millions of facts, and we do 
this every second.  Every fact depends on other facts.  The chain of reasoning 
covers the entire knowledge base.

I said "millions", but we really don't know.  This is an important number.  
Historically we have tended to underestimate it.  If the number is small, then 
we *can* follow the reasoning, make changes to the knowledge base and predict 
the outcome (provided the representation is transparent and accessible through 
a formal language).  But this leads us down a false path.

We are not so smart that we can build a machine smarter than us, and still be 
smarter than it.  Either the AGI has more algorithmic complexity than you do, 
or it has less.  If it has less, then you have failed.  If it has more, and you 
try to explore the chain of reasoning, you will exhaust the memory in your 
brain before you finish.

 
-- Matt Mahoney, [EMAIL PROTECTED]

----- Original Message ----
From: Mark Waser 
To: [email protected]
Sent: Thursday, November 16, 2006 3:16:54 PM
Subject: Re: [agi] A question on the symbol-system hypothesis

I consider the last question in each of your examples to be unreasonable 
(though for very different reasons).

In the first case, "What do you see?" is a nonsensical and unnecessary 
extension on a rational chain of logic.  The visual subsystem, which is not 
part of the AGI, has reported something and, unless there is a good reason 
not to, the AGI should believe it as a valid fact and the root of a 
knowledge chain.  Extending past this point to ask a spurious, open question 
is silly.  Doing so is entirely unnecessary.  This knowledge chain is 
isolated.

In the second case, I don't know why you're doing any sort of search 
(particularly since there wasn't any sort of question preceding it).  The AI 
needed gas, it found a gas station, and it headed for it.  You asked why it 
waited til a given time and it told you.  How is this not isolated?

----- Original Message ----- 
From: "Matt Mahoney" 
To: 
Sent: Thursday, November 16, 2006 3:01 PM
Subject: Re: [agi] A question on the symbol-system hypothesis


Mark Waser  wrote:
>Give me a counter-example of knowledge that can't be isolated.

Q. Why did you turn left here?
A. Because I need gas.
Q. Why do you need gas?
A. Because the tank is almost empty.
Q. How do you know?
A. Because the needle is on "E".
Q. How do you know?
A. Because I can see it.
Q. What do you see?
(depth first search)

Q. Why did you turn left here?
A. Because I need gas.
Q. Why did you turn left *here*?
A. Because there is a gas station.
Q. Why did you turn left now?
A. Because there is an opening in the traffic.
(breadth first search)

It's not that we can't do it in theory.  It's that we can't do it in 
practice.  The human brain is not a Turing machine.  It has finite time and 
memory limits.

-- Matt Mahoney, [EMAIL PROTECTED]











_______________________________________
James Ratcliff - http://falazar.com

