I believe Stephen Gould indicated evolution was a random walk with a lower
bound.  It seems reasonable that the longest such walk would roughly double
in length at roughly regular intervals, i.e. exponential growth.


Hal Ruhl 



From: everything-list@googlegroups.com
[mailto:everything-l...@googlegroups.com] On Behalf Of Jason Resch
Sent: Sunday, April 04, 2010 10:46
To: everything-list@googlegroups.com
Subject: Re: everything-list and the Singularity


Hello Skeletori,

Welcome to the list.  I enjoyed your comments and reasoning regarding
personal identity and why we should consider "I" to be the universe /
multiverse / the everything.  I have some comments regarding the
technological singularity below.

On Sat, Apr 3, 2010 at 5:23 PM, Skeletori <sami.per...@gmail.com> wrote:


I have some tentative arguments on TS and wanted to put them somewhere
where knowledgeable people could comment. This seemed like a good
place. I also believe in an ultimate ensemble, but that's a different
topic.

Let's start with intelligence explosion. This part is essentially the
same as Hawkins' argument against it (it can be found on the Wikipedia
page on TS).

When we're talking about self-improving intelligence, making improved
copies of oneself, we're talking about a very, very complex
optimization problem. So complex that our only tool is heuristic
search, making guesses and trying to create better rules for taking
stabs in the dark. 

The recursive optimization process improves by making better
heuristics. However, an intuitive misassumption behind IE is that
intelligence is somehow a simple concept and could be recursively
leveraged not only descriptively but also algorithmically. If the
things we want a machine to do have no simple description then it's
unlikely they can be captured by simple heuristics. And if heuristics
can't be simple then the metasearch space is vast. I think some people
don't fully appreciate the huge complexity of self-improving search.
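To make the two-level structure concrete, here is a minimal sketch of heuristic search plus metasearch over the heuristic's own parameter; the objective, step sizes, and iteration counts are purely illustrative. Note that every meta-candidate costs an entire inner search, which is one way the metasearch space blows up.

```python
import random

def hill_climb(f, x0, step, iters, rng):
    """Basic heuristic search: keep a random move only if it improves f."""
    x, best = x0, f(x0)
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        if f(cand) > best:
            x, best = cand, f(cand)
    return x, best

def meta_search(f, steps, x0, iters, rng):
    """Metasearch: search over the heuristic's own parameter (step size).
    Each meta-candidate requires a full inner search, so the work at the
    meta level multiplies rather than adds."""
    return max((hill_climb(f, x0, s, iters, rng)[1], s) for s in steps)

rng = random.Random(0)
f = lambda x: -(x - 3.0) ** 2          # toy objective with its peak at x = 3
best_val, best_step = meta_search(f, [0.01, 0.1, 1.0], 0.0, 200, rng)
```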

The notion that an intelligent machine could accelerate its
optimization exponentially is just as implausible as the notion that a
genetic algorithm equipped with open-ended metaevolution rules would
be able to do so. It just doesn't happen in practice, and we haven't
even attempted to solve any problems that are anywhere near the
magnitude of this one.

So I think the flaw in IE reasoning is the assumption that, at some
higher level of intelligence, a magic process will emerge that can
achieve miraculous things.

If you accept that, it precludes the possibility of TS happening
(solely) through an IE. What then about Kurzweil's law of accelerating
returns? Well, technological innovation is similarly a complex
optimization problem, just in a different setting. We can regard the
scientific community as the optimizing algorithm here and come to the
same conclusions as with IE. That is, unless humans possess some kind
of higher intelligence that can defeat heuristic search. I don't think
there's any reason to believe that.

Complex optimization problems exhibit the law of diminishing returns
and the law of fits and starts, where the optimization process gets
stuck in a plateau for a long time, then breaks out of it and makes
quick progress for a while. But I've never seen anything exhibiting a
law of accelerating returns. This would imply that, e.g., Moore's law
is just "an accident", a random product of exceedingly complex
interactions. It would take more than some plots of a few data points
to convince me to believe in a law of accelerating returns. 
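To illustrate the diminishing-returns pattern (a toy sketch, not evidence either way): even blind random search logs most of its record improvements early and then plateaus. All parameters here are arbitrary.

```python
import random

# Count best-so-far improvements in successive blocks of blind random
# search.  Records cluster in the early blocks and become rare later:
# diminishing returns, punctuated by occasional breakthroughs.
rng = random.Random(42)
best = float("-inf")
improvements_per_block = []
for _ in range(5):
    count = 0
    for _ in range(1000):
        v = rng.random()
        if v > best:
            best, count = v, count + 1
    improvements_per_block.append(count)
```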

If not the plots, what would it take to convince you?  I think one should
accept the law of accelerating returns until someone can describe what
accident caused the plot.  Kurzweil's page describes a model and
assumptions which re-create the real-world data plot:


It is a rather long page; Ctrl+F for "The Model considers the following
variables:" to find where he describes the reasoning behind the law of
accelerating returns.


It also depends on how one defines exponential growth, since one can
always relabel X as exp(X).  I suppose we want exponential growth of
some variable that is needed for TS and whose linear growth corresponds
to a linear increase in "technological ability" (that's very vague; can
anybody help here?).
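The relabeling point can be made concrete: if X itself only grows linearly, the quantity exp(X) grows exponentially by construction, so a claim of exponential growth is empty until we fix which variable is the natural measure of technological ability. A trivial sketch:

```python
import math

X = list(range(5))                     # X grows linearly in t
Y = [math.exp(x) for x in X]           # Y = exp(X) grows exponentially
ratios = [Y[i + 1] / Y[i] for i in range(len(Y) - 1)]
# Y has a constant step-to-step growth ratio (e), the signature of
# exponential growth, even though X only increases by 1 per step.
```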

In conclusion, I haven't yet found a credible lawlike explanation of
anything that could cause a "runaway" TS where things become very
different very quickly.

All comments are welcome.

I think intelligence optimization is composed of several different, but
interrelated components, and that it makes sense to clearly define these
components of intelligence rather than talk about intelligence as a single
entity.  I think intelligence embodies:

1. knowledge - information that is useful for something
2. memory - the capacity to store, index and organize information
3. processing rate - the rate at which information can be processed

The faster the processing rate, the faster knowledge can be applied and the
faster new knowledge may be acquired.  There are several methods by which
new knowledge can be generated: searching for patterns and relations
within the existing store of knowledge (data mining); proposing and
investigating currently unknown areas (research); and applying creativity
to find more useful forms of knowledge (genetic programming / genetic
algorithms).

All three of these methods accelerate given a faster processing rate.
Consider for example, our knowledge of protein folding. Our knowledge of it
is almost entirely dependent on our ability to process information.

I don't think anyone would argue that the amount of knowledge possessed by our
civilization is not increasing.  If the physical laws of this universe are
deterministic then there is some algorithm describing the process for an
ever increasing growth in knowledge.  Some of this knowledge may be applied
toward creating improved versions of memory or processing hardware.  Thus
creating a feed-back loop where increased knowledge leads to better
processing, and better processing leads to an accelerated application and
generation of knowledge.
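The feedback loop described above can be written down as a toy recurrence (constants purely illustrative): if processing power is proportional to accumulated knowledge, and knowledge grows at a rate set by processing power, the result is exponential growth.

```python
# K' = K + c*K, with processing P = c*K: knowledge compounds.
knowledge = 1.0
history = [knowledge]
for _ in range(10):
    processing = 0.5 * knowledge    # more knowledge -> better hardware
    knowledge += processing         # better hardware -> faster learning
    history.append(knowledge)
# Each step multiplies knowledge by the same factor (1.5), i.e. growth
# proportional to current knowledge -- the textbook form of exponential
# growth.
```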

I think it is easy for one's intuition to get stuck, when considering the
possibility of something like a (programming language) compiler which is so
good at optimization that when run against its own code, it will create an
even more optimized form of itself, which could in turn make an even better
version of itself, ad infinitum.  I think the difficulty in imagining this
is that it considers only one piece of the puzzle: in this case, knowledge
of building a better compiler.  Knowledge of building a better compiler
alone can't generate any new information about building an even better
one.
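One way to see why a fixed bag of compiler tricks stalls (a toy model, with the ceiling value invented for illustration): if each self-application can only close part of the remaining gap to what its existing knowledge supports, repeated application converges to a fixed point rather than running away.

```python
def optimize(speed, ceiling=10.0):
    # A compiler with fixed knowledge closes half the remaining gap to the
    # ceiling its tricks allow; it adds no new tricks of its own.
    return speed + 0.5 * (ceiling - speed)

speed = 1.0
for _ in range(50):
    speed = optimize(speed)
# speed converges toward 10.0 and stops improving: a fixed point, not a
# runaway.
```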

Let's take a different example: a genetic algorithm which optimizes computer
chip design, forever searching for more efficient and faster hardware
designs.  After running for some number of generations, the most fit design
is taken, assembled, and the software is copied to run on that new
hardware.  Would the rate of evolution on this new, faster chip not exceed
the previous rate?
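A toy version of that loop (all rates invented for illustration): whenever the search finds a faster design and is re-run on it, generations completed per unit of wall-clock time go up, so the rate of evolution does increase.

```python
import random

rng = random.Random(1)
chip_speed = 1.0
generations_per_epoch = []
for _ in range(5):
    gens = int(10 * chip_speed)        # throughput scales with chip speed
    for _ in range(gens):
        if rng.random() < 0.1:         # occasional design improvement
            chip_speed *= 1.05
    generations_per_epoch.append(gens)
# generations_per_epoch never decreases: each faster chip speeds up the
# search for the next one.
```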

A more human example: suppose scientists discovered some genes that
affect intelligence and developed a drug to modulate those genes in a
way that gave the entire population an intelligence on par with Newton
or Leonardo.  Would the next breakthrough discovery regarding human
intelligence take more or less time, now that the entire populace
consists of super geniuses?

To active participants in the process, it would never seem that intelligence
ran away, however to outsiders who shun technology, or refuse to augment
themselves, I think it would appear to run away.  Consider that at some
point the technology becomes available to upload one's mind into a
computer: half
the population accepts this and does so, while the other half reject it.  On
this new substrate, human minds could run at one million times the rate of
biological brains, and in one year's time, the uploaded humans would have
experienced a million years worth of experience, invention, progress, etc.
It would be hard to imagine what the uploaded humans would even have in
common or be able to talk about after even a single day's time (2,700 years
to those who uploaded).  In this sense, intelligence has run away, from the
perspective of the biological humans.
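The figure in parentheses is just the speedup arithmetic:

```python
# One outside day at a million-fold speedup is a million subjective days.
speedup = 1_000_000
subjective_days = 1 * speedup
subjective_years = subjective_days / 365.25   # about 2,738 years per day
```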

A deeper question is what is the upper limit to intelligence?  I haven't yet
mentioned the role of memory in this process.  I think intelligence is bound
by the complexity of the environment.  From within the computer, new, more
complex environments can be created. (Just think how much more complex our
present day environment is than 200 years ago), however the ultimate limit
of the complexity of the environment that can be rendered depends on the
amount of memory available to represent that environment.  Evolution to this
point has leveraged the complexity of the physical universe and the presence
of other evolved organisms to create complex fitness tests, but evolution
would hit a wall if it reached a point where DNA molecules couldn't get any
longer.


You received this message because you are subscribed to the Google Groups
"Everything List" group.
To post to this group, send email to everything-l...@googlegroups.com.
To unsubscribe from this group, send email to
For more options, visit this group at

