Hi Mike
The idea that intelligences are created to operate in classes of
environments seems reasonable. But the environments that exist in a
place as localised and moderately uniform as the Earth are very
diverse - which is why we have somewhere between 1 and 20 million
species of life on this planet. Needless to say, the minds of these
species are quite diverse.
> An artificial mind that has been programmed with all of our knowledge
> about our environment and all of our skills at problem solving our
> problems will be like us except for the advantages supplied by the
> machine hardware. These advantages are limited in number and very
> specifically describable and predictable and understandable. There is
> ONE general organizational structure that optimizes this AGI for our
> environment. All deviations from the one design only serve to make
> the AGI function less effectively.
This argument seems to me to be very similar to the one used by many
mainstream economists: that there is only one optimally functioning
economy (free-market based, etc.) and that any deviation from that
ideal will reduce the productivity of the economy.
This is based on a very simplistic interpretation of optimisation
theory - one that in effect says that if you knew everything and could
do anything you wanted to do, you would find only one best optimum for
any set of goals. But this is about as useful as saying that we should
start by assuming we are all God.
We live in a reality in which we cannot know everything and in which
we cannot do everything that we want to - and this will continue to
apply to super-intelligent AGIs that exceed human capacity (because
the introduction of super AGIs immediately makes the environment more
complex than it was before). This notion that we live in a world where
perfect optimisation is never possible is called in economics the
"theory of second best". But this name is actually a bit misleading.
The inevitable limitations on what we can know and what we can do are
so great that we are nowhere near finding second-best solutions - we
should think notionally in terms of nth-best.
What this means, then, is that it is quite possible that clever
thinking can come up with all sorts of quite different but, in a
sense, equally useful solutions to any optimisation issue - especially
if we are dealing with very complex issues, i.e. where the
environments are complex and the goals are complex.
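A toy sketch of this point (my own made-up example, not anything from
the discussion): an objective with several peaks of equal height, where
hill-climbing from different starting points settles on quite
different but equally good solutions - there is no single "ONE best"
answer for local search to find.

```python
# Toy multimodal "fitness landscape": peaks of equal height 1.0 at
# x = 1, 3, 5, 7, 9. Which peak a hill-climber reaches depends only
# on where it starts, and all peaks are equally good.

def fitness(x):
    # Distance to the nearest peak determines fitness.
    return max(1.0 - abs(x - peak) for peak in (1, 3, 5, 7, 9))

def hill_climb(x, step=0.05, iters=200):
    # Greedy local search: repeatedly move one small step in
    # whichever direction improves fitness, until no move helps.
    for _ in range(iters):
        for candidate in (x - step, x + step):
            if fitness(candidate) > fitness(x):
                x = candidate
    return x

# Three different starting points converge to three different,
# equally fit optima.
for start in (0.5, 4.2, 8.8):
    x = hill_climb(start)
    print(f"start={start} -> optimum near x={x:.2f}, fitness={fitness(x):.2f}")
```

The starting points, step size, and peak positions here are arbitrary
choices for illustration; the only point is that several distinct
solutions tie for best.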
So my guess is that there is a high probability that the intelligent
minds that emerge from AGI work will be very diverse in their
characteristics and behaviour, and that many will not be much like us.
Apart from anything else, an environment into which super-intelligent
AGIs are introduced is no longer the environment for which human
intelligence was optimised by the slow and constrained process of
evolution.
Cheers, Philip
