If an AGI can self-improve, what is the likelihood that it will remain general, rather than rapidly evolving itself into a super-intelligent specialist pursuing whatever goal(s) grab its attention early in life? I think that most humans tend to move in the specialist direction as they develop.
Would biological life, and especially humans, face more of a threat from super-intelligent AGIs or from super-intelligent artificial specialist intelligences (ASIs)?
In the dystopian scenarios that people have played out on this list, most of the intelligence-upgrade paths seem, implicitly, to run from AGI to super-intelligent ASI.
If AGIs are to be ethical (to have compassionate concern for otherness), then I wonder whether they need to remain AGIs, i.e. to be able to think and empathise in a very rounded, multi-faceted way.
If so, what goals and structural features need to be built in to drive the AGI stably 'forever' in the direction of building 'general' intelligence (no matter what specialist intelligences might be developed along the way)?
By the way, is anyone on the list into cybernetics or control theory? It seems to me that one of the useful leads from this area is the use of clusters of goals that result in the desired behavioural trajectory (in effect a super-goal) as an emergent property. In other words, the cluster of apparently lower-order goals provides the variety of feedbacks needed to keep the emergent super-system on track despite having to deal with a complex and unpredictable environment.
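To make that concrete, here is a toy Python sketch (really just a crude discrete PID-style control loop; the gains and the noise and drift numbers are my own illustrative choices, nothing canonical). Three lower-order feedback goals, one closing the gap, one damping the approach, one cancelling steady bias, jointly hold the system near its target in an unpredictable environment, even though no single term encodes the super-goal:

import random

target, x = 10.0, 0.0
prev_error = target - x
error_sum = 0.0

for t in range(100):
    error = target - x
    seek    = 0.4  * error                 # sub-goal 1: close the gap to the target
    damp    = 0.3  * (error - prev_error)  # sub-goal 2: slow down as the gap closes
    persist = 0.05 * error_sum             # sub-goal 3: cancel any persistent bias
    x += seek + damp + persist             # the 'super goal' is only their sum
    x += random.uniform(-0.3, 0.3) - 0.2   # noisy environment with a constant drift
    error_sum += error
    prev_error = error

print(round(x, 2))  # hovers near 10.0 despite the noise and the drift

Take away any one of the three terms and the behaviour degrades (drop 'persist' and the constant drift leaves it permanently short of the target), which is the requisite-variety point: the cluster, not any single goal, supplies enough feedback to cope with the environment.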
It might require something like a complex goal-set, built in at the start, to keep an AGI wanting to stay a general intelligence as it gets more intellectually powerful. Possibly sub-goals like curiosity and prudence, when paired (and almost certainly when also combined with a number of other sub-goals), could deliver the persistent 'general intelligence'-seeking behaviour that might be desirable?
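In the same toy spirit, here is a sketch of that curiosity/prudence pairing (the skill names and both scoring functions are purely hypothetical illustrations). Curiosity rewards practising whatever has been explored least; prudence penalises letting any one capability race ahead of the weakest; neither sub-goal ever mentions 'generality':

SKILLS = ["maths", "language", "planning", "perception"]

def curiosity(counts, skill):
    # novelty bonus: favour the skills practised least so far
    return 1.0 / (1 + counts[skill])

def prudence(counts, skill):
    # balance penalty: discourage any skill racing ahead of the weakest one
    return -(counts[skill] - min(counts.values()))

def choose_skill(counts):
    # no explicit 'stay general' goal is coded anywhere; breadth emerges
    # from summing the two sub-goal scores
    return max(SKILLS, key=lambda s: curiosity(counts, s) + prudence(counts, s))

counts = {s: 0 for s in SKILLS}
for _ in range(40):
    counts[choose_skill(counts)] += 1

print(counts)  # effort stays spread across all four skills, not one specialism

The printout comes out as an even spread (10 rounds of practice per skill), so the 'stay general' behaviour is emergent from the pair, which is roughly what I would hope a richer goal-cluster could do for a self-improving AGI.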
Cheers, Philip
