Re: [agi] A thought.

2003-02-07 Thread Brad Wyble
Philip,
 
 I can understand that the brain structure we see in intelligent animals would 
 emerge from a process of biological evolution where no conscious 
 design is involved (i.e. specialised non-conscious functions emerge first, 
 generalised processes emerge later), but why should AGI design 
 emulate this given that we can now apply conscious design processes, 
 in addition to the traditional evolutionary incremental trial and error 
 methods? 
 
 Cheers, Philip

An excellent question.  I don't think there's any long-term need for AGI to follow 
evolution's path, and there are certainly some benefits to eschewing that approach.  
However, I don't think we're yet at a point where we can afford to ignore the 
structure of the brain as a rubric.  If we are going to develop an AGI that we can 
communicate with and understand, it makes the most sense not to start from scratch. 

-Brad






RE: [agi] A thought.

2003-02-06 Thread Ben Goertzel

The issue of general versus specialized intelligence has been visited on
this list before!!

My thoughts on this, as I said when the topic last came up, are roughly
that:

1)
No finite system can have truly general intelligence.  There will always be
possible environments too complex for it to adapt to, possible problems too
hard for it to solve.  (Under reasonable assumptions, one can show this
using algorithmic information theory; a rough formal sketch follows below.)

2)
Any successful AGI system is going to have some subcomponent C that has
truly general intelligence capability, in the sense that: For any problem
P, there is some level of resources R, so that if the system were given R
resources, C could solve P.  This could be referred to as the system having
the potential for truly general intelligence, if its hardware were beefed
up enough.

3)
Any successful AGI system is also going to have components in two other
categories:

a) specialized-intelligence components that solve particular problems in
ways having little or nothing to do with truly general intelligence
capability

b) specialized-intelligence components that are explicitly built on top of
components having truly general intelligence capability


To me, the weaving-together of components with truly general intelligence
capability, and specialized-intelligence components built on top of these,
is the essence of AGI design.
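
To make points 1 and 2 a bit more concrete, here is a rough formal sketch, assuming
a Kolmogorov-complexity reading of the algorithmic-information-theory remark; the
notation (K, Solves) is illustrative rather than anything from the post itself:

    % Point 1, roughly: a system S of finite algorithmic complexity K(S) cannot
    % model every environment; some environments are simply too complex for it.
    \forall S \;\, \exists E :\; K(E) > K(S) + c
        \;\text{ and }\; S \text{ cannot fully predict or compress } E

    % Point 2, roughly: a component C is "potentially generally intelligent" if,
    % for every solvable problem P, some finite resource level R suffices for C.
    \forall P \;\, \exists R :\; \mathrm{Solves}(C, P, R)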

-- Ben Goertzel



 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
 Behalf Of Eliezer S. Yudkowsky
 Sent: Wednesday, February 05, 2003 11:42 PM
 To: [EMAIL PROTECTED]
 Subject: Re: [agi] A thought.


 James Rogers wrote:
 Just as there is
 no general environment, there is no general intelligence.  A mind
 must be matched to its environment.
 
  Huh?  The point of a generally intelligent mind is that it CAN match
  itself to its environment.  You don't want to design an intelligence
  that is matched to a particular environment, you want a general
  intelligence that will match itself to ANY environment.

 You do have to specify that the environment is a low-entropy one.

 --
 Eliezer S. Yudkowsky  http://singinst.org/
 Research Fellow, Singularity Institute for Artificial Intelligence






Re: [agi] A thought.

2003-02-06 Thread Brad Wyble

 3)
 Any successful AGI system is also going to have components in two other
 categories:
 
 a) specialized-intelligence components that solve particular problems in
 ways having little or nothing to do with truly general intelligence
 capability
 
 b) specialized-intelligence components that are explicitly built on top of
 components having truly general intelligence capability
 

Are you willing to explain why you put them in this order, or is this available 
elsewhere, perhaps on agiri.org?  I ask because it's my perspective that the brain is 
built the other way around, with specialized intelligence modules on the bottom and 
AGI built on top of them.

I know you're not trying to build a brain per se, but I'm curious why you chose to 
stack ASI and AGI this way.  It's my belief that in the case of our brains, what we 
call AGI is the seamless combination of many ASI's.  Our problem solving looks 
general, but it really isn't.  There's AGI wiring on top to glue it all together, but 
most of the work is being done subconsciously in specialized regions.  


-Brad




RE: [agi] A thought.

2003-02-06 Thread Ben Goertzel
  3)
  Any successful AGI system is also going to have components in two other
  categories:
 
  a) specialized-intelligence components that solve particular problems in
  ways having little or nothing to do with truly general intelligence
  capability
 
  b) specialized-intelligence components that are explicitly
 built on top of
  components having truly general intelligence capability
 

 Are you willing to explain why you put them in this order, or is
 this available elsewhere, perhaps on agiri.org?   I ask because
 it's my perspective that the brain is built the other way around,
 with specialized intelligence modules on the bottom and AGI built
 on top of them.

 I know you're not trying to build a brain per se, but I'm curious
 why you chose to stack ASI and AGI this way.  It's my belief
 that in the case of our brains, what we call AGI is the seamless
 combination of many ASI's.  Our problem solving looks general,
 but it really isn't.  There's AGI wiring on top to glue it all
 together, but most of the work is being done subconsciously in
 specialized regions.

Visual metaphors like "on top of" fail us here, I'm afraid.

I think we do have different perspectives, but I'm afraid I wasn't 100%
clear on what my perspective is.

I think there is

* an underlayer of basic representational and dynamic mechanisms.  In the
brain it's neurons, synapses and neurotransmitters, etc.  In Novamente it's
Nodes, Links, MindAgents, etc.

* a collection of intelligence processes that use these mechanisms.  Some
of these processes have potentially general intelligence capability
(though on finite hardware/wetware they can never manifest truly general
intelligence).  Others are intrinsically specialized in nature.  These
intelligence processes interact richly, and some of the specialized ones can
do what they do only via interaction with the potentially-general one (my
category 3b before).

Next, I think that an intelligent system (human or AGI) consists of a
collection of functionally specialized units, each of which involves some of
the above-mentioned intelligence processes, in different combinations and
with different tweaks.
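
As a loose illustration of this three-level picture, here is a toy sketch in Python.
Only the names Node, Link and MindAgent come from the description above; the class
structure and everything else is invented for the example and is not Novamente code:

    # Toy sketch only -- not the actual Novamente (or brain) architecture.

    # Level 1: the underlayer of basic representational and dynamic mechanisms.
    class Node: pass
    class Link: pass
    class MindAgent:
        def act(self, store):
            """Operate on a shared store of nodes and links."""
            pass

    # Level 2: intelligence processes built out of those mechanisms; some are
    # potentially general, others are intrinsically specialized.
    class IntelligenceProcess:
        def __init__(self, agents, potentially_general=False):
            self.agents = agents
            self.potentially_general = potentially_general

    # Level 3: functionally specialized units, each combining several processes
    # in different combinations and with different tweaks.
    class FunctionalUnit:
        def __init__(self, name, processes):
            self.name = name
            self.processes = processes

    cognition = FunctionalUnit("cognition",
        [IntelligenceProcess([MindAgent()], potentially_general=True)])
    language = FunctionalUnit("language",
        [IntelligenceProcess([MindAgent()]),                             # 3a-style
         IntelligenceProcess([MindAgent()], potentially_general=True)])  # 3b-style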

Perception and action and socialization and language processing are examples
of things carried out in the human brain by units that are dominated by
specialized intelligence processes (but as much 3b-based as 3a-based processes,
especially in the socialization and language processing cases).  Cognition
is an example of something carried out in the brain by units dominated by
potentially-generally-intelligent processes.

So, in the sense that cognition is "on top of" perception, action, language,
etc., I agree that there's a pattern of general-intelligence methods
operating "on top of" specialized methods.

But I do think there are very important general intelligence processes in
the brain (and in any successful AGI) that act independently of -- and more
broadly than -- any specialized process.

Yes, this is a somewhat complicated picture I'm painting, of the
interweaving of generality and specialization in the mind.  But I think this
kind of complicatedness is the lot of a finite mind in a (comparatively)
essentially unboundedly complex universe...

-- Ben Goertzel









Re: [agi] A thought.

2003-02-06 Thread Philip Sutton
Brad,

 But I think that the further down you go towards the primitive level,
 the more specialized everything is.  While they all use
 neurons, the anatomy and neurophysiology of low-level brain areas are
 so drastically different from one another as to be conceptually
 distinct. 

I can understand that the brain structure we see in intelligent animals would 
emerge from a process of biological evolution where no conscious 
design is involved (i.e. specialised non-conscious functions emerge first, 
generalised processes emerge later), but why should AGI design 
emulate this given that we can now apply conscious design processes, 
in addition to the traditional evolutionary incremental trial and error 
methods? 

Cheers, Philip




Re: [agi] A thought.

2003-02-05 Thread Brad Wyble
I've just joined this list; this is my first post.  Greetings, all.

1.5-line summary of me: AI enthusiast since age 10, CS undergrad degree, 3 months 
from finishing a psych/neuroscience PhD. 


Mike, you are correct that an AI must be matched to its environment.  It's likely that 
a sentience optimized to function in an alien environment would behave in a way that 
initially appeared to be random noise to a human observer.  

However when you say this:

 ...and understandable.  There is ONE general organizational structure that
 optimizes this AGI for our environment.  All deviations from the one design
 only serve to make the AGI function less effectively.  Any significant

I could not disagree more.  There are an infinite number of ways an AI could be 
designed within a given social/cultural context.  Different designs would provide 
different solutions, some of which are more or less effective in any particular 
situation.

Evolution figured this out (forgive my anthropomorphizing), and this is why our minds 
contain many different forms of intelligence.  They all attack any problem in a 
parallel fashion, share their results, and come to a sort of consensus.  
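
A toy sketch of that picture in Python (the three "solvers" below are invented for
illustration and do not correspond to real brain systems): several specialized
intelligences attack the same problem in parallel, and a thin layer on top merges
their results into a consensus.

    from collections import Counter

    # Specialized intelligences, each looking at a different aspect of the problem.
    def visual_solver(problem):  return problem.get("shape", "unknown")
    def verbal_solver(problem):  return problem.get("label", "unknown")
    def spatial_solver(problem): return problem.get("context", "unknown")

    SPECIALISTS = [visual_solver, verbal_solver, spatial_solver]

    def consensus(problem):
        """Run every specialist (conceptually in parallel), share results, vote."""
        answers = [solve(problem) for solve in SPECIALISTS]
        return Counter(answers).most_common(1)[0][0]

    # Looks like one general answer, but the work was done by the specialists.
    print(consensus({"shape": "cat", "label": "cat", "context": "sofa"}))  # -> cat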

 departures cease to function in any way we would consider intelligent.  The SAI
 of the future will be vastly more intelligent, powerful, and amazing.  It will
 not be incomprehensible.  It will be a lot like us.

It might be incomprehensible if it's too much like us.  One of the dangers of creating 
an AI to study brain function is that the result might be even more inscrutable than 
our brain.  

-Brad Wyble




Re: [agi] A thought.

2003-02-05 Thread SMcClenahan
I tried to discuss this on SL4. See the thread at 
http://www.sl4.org/archive/0212/5995.html

What you call a complex environment is what I called reality.  Reality for a 
digital computer running an AGI program is different from reality for (intelligent) 
humans living in meatspace, and I would even go as far as dividing reality up at the 
animal level, where dog-reality is different from human-reality, and maybe even 
at the entity level, where person A's reality is different from person B's.

For an AGI that lives in its digital domain to communicate with us in our 
physical domain, there must be a translation that takes place. To implement 
these translations (e.g. code, hardware components, devices, peripherals, etc.) 
it would be useful to define a meta-reality that can be used to describe all 
realities.
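
One possible shape for such a translation layer, sketched in Python; this is only a
minimal illustration of the meta-reality idea, and all names here are invented:

    from abc import ABC, abstractmethod

    class Reality(ABC):
        """One reality: the AGI's digital domain, human meatspace, dog-reality, ..."""

        @abstractmethod
        def describe(self, situation) -> dict:
            """Express a native situation in a shared meta-reality vocabulary."""

        @abstractmethod
        def render(self, meta_description: dict):
            """Turn a meta-reality description back into this reality's own terms."""

    def translate(situation, source: Reality, target: Reality):
        # The translation the post calls for: native -> meta-reality -> native.
        return target.render(source.describe(situation))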

I partially agree with your final statement about the SAI being a lot like us.  It 
will seem very much like us humans (depending on what qualities you are comparing), 
and conversely, humans or computer programmers probably look very much like an SAI 
from its point of view, in its reality or operating environment.

cheers,
Simon
