> I don't think anyone would argue that the amount of knowledge possessed by
> our civilization is not increasing. If the physical laws of this universe
> are deterministic, then there is some algorithm describing the process of
> this ever-increasing growth in knowledge. Some of that knowledge may be
> applied toward creating improved memory or processing hardware, creating a
> feedback loop: increased knowledge leads to better processing, and better
> processing leads to accelerated application and generation of knowledge.
There are already formulations of optimal predictive algorithms and
even optimal intelligent agents, but they are completely impractical
even with nanotech and computers the size of the Sun. From this
perspective humans are intelligent not because of some general
component (I'm now thinking of singinst with their AGI program) but
because of lots of specialized components that allow us to take
shortcuts, similar to how humans play chess vs. how machines play
chess. As you say, it's a critical question how much beneficial
feedback there is.
> Let's take a different example: a genetic algorithm that optimizes computer
> chip design, forever searching for more efficient and faster hardware
> designs. After running for some number of generations, the most fit design
> is taken and assembled, and the software is copied to run on that new
> hardware. Would the rate of evolution on this new, faster chip not exceed
> the previous rate?
Yes. Then it would get stuck, and the next 1% speedup would take 10^10 years.
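To make the quoted loop concrete, here's a toy sketch of it in Python. The fitness function, population size, and mutation rate are all made up for illustration (this is nothing like real chip design); the point is only the structure: each generation is assumed to run on the fastest chip found so far, so generations get cheaper as designs improve, while improvements themselves get scarcer near the optimum.

```python
import random

def clock_speed(design):
    # Toy "fitness": fraction of bits set. Improvements get rarer as the
    # design approaches the all-ones optimum -- diminishing returns.
    return sum(design) / len(design)

def evolve(pop, mutation_rate=0.05):
    # Elitist hill-climb: keep the fastest design, mutate copies of it.
    best = max(pop, key=clock_speed)
    children = [[bit ^ (random.random() < mutation_rate) for bit in best]
                for _ in range(len(pop) - 1)]
    return children + [best]

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(64)] for _ in range(20)]
wall_clock = 0.0
for gen in range(60):
    pop = evolve(pop)
    speed = clock_speed(max(pop, key=clock_speed))
    # The feedback loop from the quoted text: this generation is assumed
    # to have run on the best chip found so far, so its wall-clock cost
    # shrinks as that chip gets faster.
    wall_clock += 1.0 / max(speed, 1e-9)
```

Running it shows both halves of the argument: early generations improve quickly and speed the loop up, but the fitness curve flattens long before the run ends.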
> To active participants in the process, it would never seem that intelligence
> ran away; however, to outsiders who shun technology, or refuse to augment
> themselves, I think it would appear to run away. Consider: at some point,
> the technology becomes available to upload one's mind into a computer; half
> the population accepts this and does so, while the other half rejects it. On
> this new substrate, human minds could run at one million times the rate of
> biological brains, and in one year's time the uploaded humans would have
> experienced a million years' worth of experience, invention, progress, etc.
> It would be hard to imagine what the uploaded humans would even have in
> common with us, or be able to talk about, after even a single day's time
> (2,700 years to those who uploaded). In this sense, intelligence has run
> away, from the perspective of the biological humans.
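The quoted arithmetic holds up, by the way; a quick back-of-envelope check of the speedup (using 365.25 days per year):

```python
SPEEDUP = 1_000_000  # uploaded minds vs. biological, as in the quoted text

# One wall-clock year for the uploads:
subjective_years_per_real_year = SPEEDUP          # a million subjective years

# One wall-clock day for the uploads, in subjective years:
subjective_years_per_real_day = SPEEDUP / 365.25  # about 2,738

print(subjective_years_per_real_year, round(subjective_years_per_real_day))
```

So "2,700 years per day" is slightly rounded down but essentially right.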
To me this seems to be the only practical scenario in which an actual TS
would take place (but it's frighteningly plausible). Once computers
exceed human computational capacity they'll still be as stupid as
ever, whereas digitized humans would be intelligent. The virtual and
real worlds would evolve in lockstep, and over time more and more of
the economy would be converted to employ digital humans. I guess at
some point meatspace humans would become economically unviable, as
they wouldn't be able to compete on wages.
But the preceding doesn't really take into account all the complex
issues of control and politics that will determine how the
technologies develop. If TS becomes probable in the near future, it
will become a matter of supreme strategic importance, and there will
probably be attempts to restrict the spread of the technologies
enabling TS, for example by keeping them military secrets. It would be
even worse if the powers that be believed in an intelligence
explosion: then, for example, the US couldn't accurately deduce, from
the amount of resources spent on North Korea's TS program, how far it
had advanced in "intelligence", and if they couldn't obtain that
information by spying they would have good strategic reasons to invade
now rather than later, to prevent a North Korean super AI from taking
control of the world.
You received this message because you are subscribed to the Google Groups
"Everything List" group.