Jim, 

 

Your letter proved to be very thought-provoking for me. I have read it more
than once and will study it further. For now, the following statement you made
worries me very much, because it seems to contradict several things I know:

 

> Much of our knowledge is based on non-causative relations. 

 

But algorithms are causal. Computers are causal, and our brains are causal: a
neuron fires only if some "preceding" neurons have fired in turn. Researchers
use neural networks to simulate brain function, and those networks are causal
as well. If our knowledge is non-causative, what are we doing representing it
with causative means?
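
To make that concrete, here is a minimal sketch of my own (a toy example in
Python, assuming numpy; the weights and threshold are arbitrary illustrative
values, not taken from anything in your letter) of a feedforward network in
which each "neuron" can fire only after the neurons feeding it have fired:

import numpy as np

def layer_fire(inputs, weights, threshold=0.5):
    """Each neuron fires (outputs 1) only if its weighted input,
    computed from already-fired preceding neurons, crosses the threshold."""
    return (weights @ inputs > threshold).astype(float)

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0, 1.0])     # firing pattern of the first layer
w1 = rng.normal(size=(4, 3))      # connections into the second layer
w2 = rng.normal(size=(2, 4))      # connections into the third layer

h = layer_fire(x, w1)   # the second layer cannot fire before x exists
y = layer_fire(h, w2)   # the third layer cannot fire before h exists
print(h, y)

Every step strictly precedes the next; the dependency order is the causal
order.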

 

It would help me if you gave an example or two. I have an example: a system
of simultaneous equations. They must all be satisfied at once, so there is
no "first" or "last" or any sort of causal relationship among them. However,
the causation is not in the fact that they are simultaneous. The cause-effect
relationship is "simultaneous equations ==> solution." I know that, if I have
simultaneous equations, then I have a solution (or, in some cases, no
solution, or many solutions), and that is what I use in my thinking.
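
To illustrate the three cases (a minimal sketch in Python, assuming numpy;
the particular systems are arbitrary examples of mine), the classification
into unique, none, or many solutions follows from the rank test:

import numpy as np

def classify_system(A, b):
    """Classify the linear system A x = b as having a unique solution,
    no solution, or infinitely many, via the rank test
    (Rouche-Capelli theorem)."""
    n = A.shape[1]                                   # number of unknowns
    rank_A = np.linalg.matrix_rank(A)
    rank_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rank_A < rank_aug:
        return "no solution"
    if rank_A == n:
        return "unique solution"
    return "infinitely many solutions"

A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([3.0, 1.0])
print(classify_system(A, b))                       # unique: x = (2, 1)
print(classify_system(np.array([[1.0, 1.0], [2.0, 2.0]]),
                      np.array([3.0, 7.0])))       # parallel lines: none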

 

How do I know "simultaneous equations ==> solution"? Because I have studied
all the methods for finding that solution. And all the methods, without
exception, are causative. You select an arbitrary equation to be the
"first" and process it in some way. Then you select the "next." But "next"
implies that another equation precedes it, so you are forcing a
cause-effect relationship. And so on.
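
Gaussian elimination shows this forced ordering plainly. Here is a naive
sketch of mine (no pivoting, for clarity; the matrix is an arbitrary
example), with the "first" and "next" steps made explicit:

import numpy as np

def gaussian_elimination(A, b):
    """Naive Gaussian elimination (no pivoting), written to expose the
    forced sequential order: a "first" equation is processed, then the
    "next", and each step depends on the ones before it."""
    A, b = A.astype(float), b.astype(float)
    n = len(b)
    for i in range(n):                 # pick the current "first" equation
        for j in range(i + 1, n):      # every "next" equation depends on it
            factor = A[j, i] / A[i, i]
            A[j, i:] -= factor * A[i, i:]
            b[j] -= factor * b[i]
    # Back-substitution is causal in the reverse order: each unknown
    # can only be computed after the ones "after" it are known.
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

print(gaussian_elimination(np.array([[1.0, 1.0], [1.0, -1.0]]),
                           np.array([3.0, 1.0])))   # [2. 1.]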

 

What is happening is that I think of "simultaneous equations" as an object,
and then I use causation at a higher level: if I have the equations, then I
have a solution, omitting the intermediate steps.
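
In code, the higher level is a single call that hides all the causally
ordered steps (numpy's solver here is merely a stand-in for any of the
methods one has studied):

import numpy as np

# The higher-level view: "simultaneous equations" as one object, with one
# cause-effect arrow (equations ==> solution) that hides the intermediate,
# causally ordered elimination steps.
A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([3.0, 1.0])
x = np.linalg.solve(A, b)    # equations ==> solution, steps omitted
print(x)                     # [2. 1.]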

 

Sergio

 

 

 

From: Jim Bromer [mailto:[email protected]] 
Sent: Thursday, June 21, 2012 2:15 PM
To: AGI
Subject: Re: [agi] Prediction Did Not Work (except in narrow ai.)

 

On Thu, Jun 21, 2012 at 11:04 AM, Sergio Pissanetzky
<[email protected]> wrote: 

Jim,

 

thanks. I was thinking about how we use prediction for survival. Without
prediction I would put my hand in the fire and leave it there, because I
would not be able to predict that fire causes pain, or that food relieves
hunger. Just like a tree. Locomotion goes with prediction: without it I
would not be able to avoid pain or seek food. Just like a tree. That's why
we have a brain: to predict and to move.

 

Sergio

 

Yes, prediction is an important method of human thought. Perhaps I should
have focused on saying that "prediction," as it has been used so far, has not
been reliable in producing higher intelligence. That seems like a strange
idea, since it is so useful in native intelligence.

 

Much of our knowledge is based on non-causative relations. It is useful
because we do not usually see the full scope of the causal relations. (The
use of a term like "full scope" becomes philosophically defeasible when we
are talking about knowing, because it is only by limiting the scope of what
we are thinking about that we can then say we understand the full scope of
that idea.) Similarly, much of our knowing is not based on hard-edged
prediction. But for the most part, if you can't get the airplane off the
ground, you cannot reliably discover advanced methods to improve the flight
characteristics of the aircraft.

 

What has happened is that we have discovered that our thinking is both more
complicated than we imagined and more mysterious than we thought at the
beginning of the information age.

 

On the other hand, we can create extreme situations where the human mind
fails just as our AGI programs fail, or would fail, in far less extreme
situations. For example, even if you could reliably pick out a number of
objects in a scene, by reducing the light on the scene sufficiently your
analysis would fail just as miserably as most AGI programs would. This is an
important thought experiment because it reveals that the human mind is
capable of effectively using a wider variety of methods in analyzing scenes
than a computer program is. (This is a conclusion, but it is a reasonable
one.) It then shows that the theory behind AGI is not totally wrong. We can
buttress this conclusion by pointing out that if the lighting of a scene
(imagine an industrial setting) could be guaranteed to be ideal, many visual
AI methods would succeed. If a researcher could establish what kinds of AI
methods work in the ideal situations, he could then systematically move on
to deal with the individual variations that tend to produce worse results.
And so on.
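
As a toy illustration of that thought experiment (a sketch of mine in
Python, assuming numpy; the "detector" is deliberately naive and does not
stand for any particular AI method), a fixed brightness threshold finds the
objects under full light and fails completely as the light is reduced:

import numpy as np

def count_bright_objects(scene, threshold=0.5):
    """A deliberately naive 'detector': count pixels above a fixed
    brightness threshold as belonging to objects. (Toy stand-in for a
    vision method tuned to ideal lighting.)"""
    return int((scene > threshold).sum())

rng = np.random.default_rng(1)
scene = rng.uniform(0.0, 0.3, size=(10, 10))   # dim background
scene[2:4, 2:4] = 0.9                          # a bright "object"
scene[6:8, 6:8] = 0.8                          # another one

for light in (1.0, 0.5, 0.2):                  # progressively dim the scene
    detected = count_bright_objects(scene * light)
    print(f"light={light}: {detected} object pixels detected")

# Under full light both objects are found; as the light drops, the
# fixed-threshold method fails completely, while a human would adapt.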

 

Jim Bromer

