That's why we have an *unconscious* brain. You're talking about automatic 
routines and reflexes.

The reason we have a conscious brain is to deal with the things we CAN'T 
predict - the new things the world is continually throwing at us and also 
offering us. We're goalSEEKERS, waySEEKERS - not goal predictors - who 
continually have to find, and want to find, NEW ways, new paths past new 
obstacles to our goals (as well as finding altogether new goals).

Like everyone else, you're talking about the narrow AI equivalents of the lower 
parts of the human mind, and totally missing what AGI is about - which is 
producing new courses of action, not old ones.

We've got machines that can do the same old predictable things. We want AGIs 
that can - like you can - do new, unpredictable, surprising things, all the time.

P.S. Even if you're just focusing on vision, real AGIs continually have to 
look at NEW scenes, NEW objects that can't possibly be predicted from what has 
been seen before.


From: Sergio Pissanetzky 
Sent: Thursday, June 21, 2012 4:04 PM
To: AGI 
Subject: RE: [agi] Prediction Did Not Work (except in narrow ai.)


Jim,

 

thanks. I was thinking about how we use prediction for survival. Without 
prediction I would put my hand in the fire and leave it there, because I would 
not be able to predict that fire causes pain, or that food is good for hunger. 
Just like a tree. Locomotion also goes with prediction; without it I would not 
be able to avoid pain or seek food. Just like a tree. That's why we have a 
brain: to predict and to move. 

 

Sergio

 

 

From: Jim Bromer [mailto:[email protected]] 
Sent: Wednesday, June 20, 2012 2:53 PM
To: AGI
Subject: Re: [agi] Prediction Did Not Work (except in narrow ai.)

 

 

 

On Wed, Jun 20, 2012 at 9:01 AM, Sergio Pissanetzky <[email protected]> 
wrote:

> Jim,

> 

> I see prediction as essential for survival. In order to survive we need to 
> predict the consequences of what we do to the world. We predict by 
> establishing chains of causality, hence the importance of causality in 
> survival.

> 

> You are discussing a different angle. You are discussing the use of 
> prediction for verification of theories. But isn't that also a form of 
> prediction? You predict something, then compare your prediction with what 
> actually happened in the world, and adjust your process of prediction in 
> order to account for what really happened.

> 

> Would you explain your take on this?

> 

> Sergio

 

You seem to be saying that the verification of theories is a form of prediction.

Of course people are able to make and utilize predictions.  However, there is 
no strong evidence that this method can be used reliably to produce Artificial 
General Intelligence without a lot of complications.  Even Francis Bacon, in 
discussing his modern view of scientific method, recognized that during a 
discussion of the observations of a scientific experiment, people might disagree 
about the nature of some of the objects of the observation.  He had a rather 
simple response to this problem: if there was any dispute about an object (or 
method) used in the observation, that object could also be subjected to scientific 
method.  The problem with this is that contemporary AGI programs tend to fail 
in a variety of simple situations.

 

It is easy to talk about adjusting a theory based on comparison amongst human 
beings, but when you are talking about AGI the whole process often fails just 
because the ability to recognize what occurred is a major part of the problem.  
If we were able to create a good general AI program, then all these things would 
be feasible, because a general intelligence is able to form an idea about the 
things that happen around it.

 

If the ground were as simple as a cartoon, where the ground is painted with one 
solid color and any object is painted with a different color, then our AI 
programs could pick out the objects against the ground.  But when you are 
talking about an image of a landscape provided by a camera, slightly 
different kinds of things may not be of different colors, and under different 
conditions a particular color does not always look the same.  The real world, 
or a representation of the variety of events that can occur in the real world, 
is just too complicated and varied to make the problem of strong AI easy.
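To make the cartoon case concrete, here is a minimal toy sketch (my own illustration, not anything from the thread): when the ground really is one uniform color, "segmentation" collapses to a single equality test per pixel. The function name and the 3x3 toy image are invented for the example.

```python
# Toy "cartoon" segmentation: the ground is a single solid color, so an
# object is simply every pixel whose color differs from the ground color.
# This exact-equality test is what breaks down for real camera images,
# where lighting and texture make colors vary.

def segment_cartoon(image, ground_color):
    """Return the set of (row, col) coordinates of non-ground pixels."""
    return {
        (r, c)
        for r, row in enumerate(image)
        for c, color in enumerate(row)
        if color != ground_color
    }

# A 3x3 cartoon: ground color 0, with a one-pixel object of color 1 at (1, 1).
cartoon = [
    [0, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
]
print(segment_cartoon(cartoon, ground_color=0))  # {(1, 1)}
```

The point of the sketch is how little it takes in the cartoon world: one comparison per pixel recovers the figure perfectly, whereas no fixed color test survives shadows, haze, or varied surfaces in a photograph.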

 

Jim Bromer

 

 

      AGI | Archives | Modify Your Subscription
     
     

 




-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
