As with many problems in the field, the answer to this one depends on what you mean by 
"intelligence".

As you said, the behavior of a self-improving system will inevitably become 
specialized, guided by its initial goals and restricted by the available knowledge --- 
this is exactly how "adaptation" is defined.  However, the ability and mechanism of 
adaptation remain general.

Therefore, if you define "intelligence" by "what a system can do", then an AGI will 
gradually (though not completely) become an ASI.  If you define "intelligence" by 
"what a system can learn", as I do, then an AGI remains general, even after the 
system's behaviors become specialized.  

According to the latter definition, an ASI is not general, not because its behaviors 
are domain-specific, but because it lacks the potential to work in a different 
domain.  An AGI, like a human baby, has such potential, though it cannot be realized 
in every domain.

Pei

-------

If an AGI can self-improve, what is the likelihood that it will remain general, rather than rapidly evolving into a super-intelligent specialist pursuing the goal(s) that grab its attention early in life?  I think that most humans tend to move in the specialist direction as they develop.

Would biological life, and especially humans, face more of a threat from super-intelligent AGIs or from super-intelligent artificial specialist intelligences (ASIs)?

In the dystopian scenarios that people have played out on this list, most of the intelligence-upgrade paths seem to run implicitly from AGI to super-intelligent ASI.

If AGIs are to be ethical (to have compassionate concern for otherness), then I wonder whether they need to remain AGIs, i.e. to be able to think and empathise in a very rounded, multi-faceted way.

If so, what goals and structural features need to be built in to drive the AGI stably 'forever' in the direction of building 'general' intelligence, no matter what specialist intelligence might be developed along the way?

By the way, is anyone on the list into cybernetics or control theory?  It seems to me that one of the useful leads from this area is the use of clusters of goals that produce the desired behavioural trajectory (in effect a super-goal) as an emergent property.  In other words, the cluster of apparently lower-order goals provides the necessary variety of feedbacks to keep the emergent super-system on track despite having to deal with a complex and unpredictable environment.

It might require something like a complex goal-set, built in at the start, to keep an AGI wanting to stay a general intelligence as it gets more intellectually powerful.  Possibly sub-goals like curiosity and prudence, when paired (and almost certainly when also combined with a number of other sub-goals), could deliver the persistent 'general intelligence'-seeking behaviour that might be desirable?
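
To make that concrete, here is a minimal toy sketch in Python (entirely my own illustration; the domains, goal functions, and constants are all invented for the example, not taken from any real architecture).  No single rule says "stay general", yet the cluster of feedback goals keeps the capability profile balanced against an environment that rewards specialisation:

import random

DOMAINS = 5
profile = [1.0] * DOMAINS  # capability level in each domain

def goal_curiosity(p):
    # Lower-order goal: invest effort in the weakest domain (novelty-seeking).
    weakest = min(range(DOMAINS), key=lambda i: p[i])
    return [0.10 if i == weakest else 0.0 for i in range(DOMAINS)]

def goal_prudence(p):
    # Lower-order goal: damp runaway investment in the strongest domain.
    strongest = max(range(DOMAINS), key=lambda i: p[i])
    return [-0.06 if i == strongest else 0.0 for i in range(DOMAINS)]

def environment_pressure(p):
    # The environment rewards whatever is already strong: a specialising force.
    strongest = max(range(DOMAINS), key=lambda i: p[i])
    return [0.08 if i == strongest else 0.0 for i in range(DOMAINS)]

for step in range(1000):
    feedbacks = [goal_curiosity(profile),
                 goal_prudence(profile),
                 environment_pressure(profile)]
    for i in range(DOMAINS):
        profile[i] += sum(f[i] for f in feedbacks) + random.gauss(0, 0.01)

print("final profile:", [round(x, 2) for x in profile])
print("spread between strongest and weakest: %.2f"
      % (max(profile) - min(profile)))

With the curiosity and prudence feedbacks removed, the same loop lets the strongest domain run away, which is exactly the specialist trajectory.  With them in place the strongest domain nets only +0.02 per step while curiosity rotates +0.10 among the laggards, so the spread stays small.  Whether anything like this scales to a real goal-system is of course the open question.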

Cheers, Philip
