On Thu, Jun 21, 2012 at 11:04 AM, Sergio Pissanetzky <[email protected]>
wrote:
Jim,

thanks. I was thinking about how we use prediction for survival. Without
prediction I would put my hand in the fire and leave it there, because I
would not be able to predict that fire causes pain, or that food relieves
hunger. Just like a tree. Locomotion goes with prediction; without it I
would not be able to avoid pain or seek food. Just like a tree. That's why
we have a brain: to predict and to move.

Sergio

Yes, prediction is an important method of human thought.  Perhaps I should
have focused on saying that "prediction," as practiced so far, has not
reliably produced higher intelligence.  That seems like a strange idea,
since prediction is so useful in native intelligence.

Much of our knowledge is based on non-causative relations.  That knowledge
is useful because we do not usually see the full scope of the causal
relations.  (Terms like "full scope" become philosophically defeasible when
we talk about knowing, because it is only by limiting the scope of what we
are thinking about that we could say we understand the full scope of that
idea.)  Similarly, much of our knowing is not based on hard-edged
prediction.  But for the most part, if you can't get the airplane off the
ground, you cannot reliably discover advanced methods to improve the flight
characteristics of the aircraft.

What has happened is that we have discovered that our thinking is both more
complicated than we imagined and more mysterious than we expected it to be
at the beginning of the information age.

On the other hand, we can create extreme situations where the human mind
fails just as our AGI programs fail (or would fail, in less extreme
situations).  For example, even if you could reliably pick out a number of
objects in a scene, by reducing the light on the scene sufficiently your
analysis would fail just as miserably as most AGI programs would.
This is an important thought experiment because it reveals that the
human mind can effectively use a wider variety of methods in analyzing
scenes than a computer program can.  (This is a conclusion, but it is a
reasonable one.)  It also shows that the theory behind AGI is not
totally wrong.  We can buttress this conclusion by pointing out that if the
lighting of a scene (imagine an industrial setting) could be guaranteed to
be ideal, many visual AI methods would succeed.  If a researcher could
establish what kinds of AI methods work in those ideal situations, he could
then systematically deal with the individual variations that tend to
produce worse results.  And so on.
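The lighting thought experiment can be sketched in a few lines of code. This is only an illustration, not anyone's actual vision system: a fixed-threshold detector on a synthetic one-dimensional "scene" (all names and numbers here are invented for the example) finds every object under ideal lighting, and finds nothing once the scene is dimmed, even though the objects are still there.

```python
import numpy as np

def detect_objects(scene, threshold=0.5):
    """Count bright 'objects' in a synthetic 1-D scene by simple
    thresholding -- a crude stand-in for a vision method whose
    assumptions are tuned to ideal lighting."""
    mask = scene > threshold
    # An "object" is a contiguous run of above-threshold pixels;
    # count the dark-to-bright transitions.
    starts = int(np.sum(mask[1:] & ~mask[:-1]))
    if mask[0]:
        starts += 1
    return starts

# Synthetic scene: three bright objects on a dark background.
scene = np.zeros(100)
scene[10:20] = 0.9
scene[40:50] = 0.8
scene[70:80] = 0.85

print(detect_objects(scene))        # ideal lighting: all 3 objects found
print(detect_objects(scene * 0.4))  # dim lighting: same method finds 0
```

A human observer degrades gracefully as the light fades, switching strategies; the fixed-threshold method fails abruptly, which is the asymmetry the thought experiment points at.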

Jim Bromer



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now