Eric -

Thanks for the pointer to your paper.  Upon reading it, I quickly saw
what I think provoked your reaction to my observation about
understanding.  We
were actually saying much the same thing there.  My point was that no
human understands the world, because our understanding, as with all
examples of intelligence that we know of, is domain-specific.  I used
the word "context" as synonymous with "domain".  My point was not that
humans don't *understand* the world, but that humans don't understand
the *world*.  I tried to make that clear in my follow-up, but it appears
I lost your interest very early on.  In reading your paper, I see that
you seem to use the terms "world" and "domain" quite synonymously, but
I'm sure you can appreciate that "domain" connotes a limitation of scope
while "world" connotes expanded or ultimate scope. Our domain specific
knowledge is of the world, but one cannot derive the world from our
domain-specific knowledge since a great deal of information is lost in
the compression process, and that really speaks to the core of what it
means to "understand".

When I read in your paper "The claim is that the world has structure
that can be exploited to rapidly solve problems which arise, and that
underlying our thought processes are modules that accomplish this.",
that rang a familiar bell for me.  I can remember the intellectual
excitement I felt when I first came across this idea back in the 1990s,
probably from Gigerenzer, Kahneman & Tversky, Tooby & Cosmides, or some
combination of their thinking on fast and frugal heuristics and bounded
rationality.  You might have deduced my bias toward the domain-specific
theory of (evolved) intelligence from my statement that the internal model
must represent what seems to work, rather than what seems to be, in the
environment.

As I see it, the key present challenge of artificial intelligence is to
develop a fast and frugal method of finding fast and frugal methods; in
other words, to develop an efficient, time-bounded algorithm for
recognizing and compressing those regularities in "the world" faster
than the original blind methods of natural evolution.
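
A toy sketch of what I mean (purely illustrative; I am assuming a crude
MDL-style residual count as the measure of compression): search a small
space of candidate rules for the one that best compresses the data, and
stop when the time budget runs out.

    import time

    def residual_cost(data, predict):
        # Crude stand-in for compressed size: count the symbols the
        # rule fails to predict; a perfect rule leaves nothing to
        # encode beyond the rule itself.
        return sum(1 for i in range(1, len(data))
                   if predict(data[:i]) != data[i])

    def find_fast_frugal(data, candidates, budget_s=1.0):
        # Time-bounded meta-search: return the best rule found so
        # far when the budget expires, not the best rule there is.
        deadline = time.monotonic() + budget_s
        best_name, best_cost = None, float("inf")
        for name, rule in candidates:
            if time.monotonic() > deadline:
                break
            cost = residual_cost(data, rule)
            if cost < best_cost:
                best_name, best_cost = name, cost
        return best_name, best_cost

    data = [0, 1, 0, 1, 0, 1, 0, 1]
    candidates = [
        ("repeat-last", lambda h: h[-1]),
        ("alternate",   lambda h: 1 - h[-1]),
    ]
    print(find_fast_frugal(data, candidates))
    # -> ('alternate', 0): the alternation regularity is found and
    #    compresses the sequence completely.

The point is only the shape of the thing: a meta-level search whose own
cost is bounded, hunting for object-level rules that compress.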

- Jef

> -----Original Message-----
> From: Eric Baum [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, November 07, 2006 1:44 PM
> To: [email protected]
> Subject: RE: [agi] Natural versus formal AI interface languages
> 
> 
> James and Jef, my apologies for misattributing the question.
> 
> There is a phenomenon colloquially called "understanding" 
> that is displayed by people and at best rarely displayed 
> within limited domains by extant computer programs. If you 
> want to have any hope of constructing an AGI, you are going 
> to have to come to grips with what it is and how it is 
> achieved. As to what I believe the answer is, I refer you to 
> the top (new) paper at http://whatisthought.com/eric.html
> entitled "A Working Hypothesis for General Intelligence"
> (and to my book What is Thought? if you want more background.)
> 
> Eric Baum
> http://whatisthought.com
> 

