Jim,

 

Thanks. I was thinking about how we use prediction for survival. Without
prediction I would put my hand in the fire and leave it there, because I
would not be able to predict that fire causes pain, or that food is good for
hunger. Just like a tree. Locomotion goes with prediction; without it I
would not be able to avoid pain or seek food. Just like a tree. That's why
we have a brain: to predict and to move.

 

Sergio

 

 

From: Jim Bromer [mailto:[email protected]] 
Sent: Wednesday, June 20, 2012 2:53 PM
To: AGI
Subject: Re: [agi] Prediction Did Not Work (except in narrow ai.)

 

 

 

On Wed, Jun 20, 2012 at 9:01 AM, Sergio Pissanetzky <[email protected]>
wrote:

> Jim,
>
> I see prediction as essential for survival. In order to survive we need to
> predict the consequences of what we do to the world. We predict by
> establishing chains of causality, hence the importance of causality in
> survival.
>
> You are discussing a different angle. You are discussing the use of
> prediction for verification of theories. But isn't that also a form of
> prediction? You predict something, then compare your prediction with what
> actually happened in the world, and adjust your process of prediction in
> order to account for what really happened.
>
> Would you explain your take on this?
>
> Sergio

 

You seem to be saying that the verification of theories is a form of
prediction.

Of course people are able to make and utilize predictions.  However, there
is no strong evidence that this method can be used reliably to produce
Artificial General Intelligence without a lot of complications.  Even
Francis Bacon, in discussing his modern view of the scientific method,
recognized that people might disagree about the nature of some of the
objects observed in a scientific experiment.  He had a rather simple
response to this problem: if there was any dispute about an object (or
method) used in the observation, that object could itself be subjected to
the scientific method.  The problem is that contemporary AGI programs tend
to fail in a variety of simple situations.

 

It is easy to talk about adjusting a theory based on comparisons among
human beings, but when you are talking about AGI, the whole process often
fails simply because the ability to recognize what occurred is a major part
of the problem.  If we were able to create a good general AI program, then
all these things would be feasible, because a general intelligence is able
to form an idea about the things that happen around it.

 

If the ground were as simple as a cartoon, where the ground is painted a
solid color and every object is painted a different color, then our AI
programs could pick out the objects against the ground.  But in an image of
a landscape taken by a camera, slightly different kinds of things may not
differ in color, and under different conditions a particular color does not
always look the same.  The real world, or a representation of the variety
of events that can occur in it, is just too complicated and varied to make
the problem of strong AI easy.

 

Jim Bromer

 

 






-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
