Hello!

I have some tentative arguments about the technological singularity
(TS) and wanted to put them somewhere where knowledgeable people
could comment. This seemed like a good place. I also believe in an
ultimate ensemble, but that's a different story.

Let's start with the intelligence explosion (IE). This part is
essentially the same as Jeff Hawkins' argument against it (it can be
found on the Wikipedia page on TS).

When we're talking about self-improving intelligence, about making
improved copies of oneself, we're talking about a very, very complex
optimization problem. So complex that our only tool is heuristic
search: making guesses and trying to create better rules for taking
stabs in the dark.
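
To make concrete what I mean by heuristic search, here is a minimal
sketch in Python. Everything in it (the hill_climb routine, the toy
objective, the neighbour rule) is my own illustrative stand-in, not
anyone's actual proposal:

    import random

    def hill_climb(score, start, neighbour, steps=10000):
        # Greedy heuristic search: keep the current guess, try a
        # random perturbation, accept it only if it scores better.
        best = start
        for _ in range(steps):
            trial = neighbour(best)
            if score(trial) > score(best):
                best = trial
        return best

    # Toy stand-in problem: maximise f(x) = -(x - 3)^2.
    f = lambda x: -(x - 3.0) ** 2
    step = lambda x: x + random.uniform(-0.1, 0.1)
    print(hill_climb(f, 0.0, step))  # ends up near 3.0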

The recursive optimization process improves by constructing better
heuristics. However, an implicit assumption behind IE is that
intelligence is somehow a simple concept, one that can be leveraged
recursively not only as a description but as an algorithm. If the
things we want a machine to do have no simple description, then it's
unlikely they can be captured by simple heuristics. And if the
heuristics can't be simple, then the metasearch space (the space of
candidate heuristics) is vast. I think some people don't fully
appreciate the huge complexity of self-improving search.
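
A crude way to see how fast the metasearch space blows up (the bit
lengths below are arbitrary illustrations, not estimates of
anything):

    # If an adequate heuristic needs at least L bits to describe,
    # a blind metasearch faces on the order of 2**L candidate
    # descriptions. The space explodes long before L gets large.
    for L in (10, 100, 1000):
        print(L, "bits ->", 2 ** L, "candidate heuristics")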

The notion that an intelligent machine could accelerate its own
optimization exponentially is just as implausible as the notion that
a genetic algorithm equipped with open-ended metaevolution rules
could do so. It just doesn't happen in practice, and we haven't even
attempted to solve any problems anywhere near the magnitude of this
one.
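
For concreteness, one standard form of metaevolution is
self-adaptation in evolution strategies, where each individual
carries and mutates its own mutation parameters. Here is a minimal
(1+1)-ES sketch of that idea (the objective and constants are
arbitrary choices of mine): the search rule adapts itself, yet
progress still slows as it converges rather than accelerating:

    import math, random

    def one_plus_one_es(f, x, sigma=1.0, steps=2000):
        # (1+1) evolution strategy with log-normal self-adaptation:
        # the step size sigma is evolved along with the solution.
        for _ in range(steps):
            s = sigma * math.exp(0.2 * random.gauss(0.0, 1.0))
            y = x + s * random.gauss(0.0, 1.0)
            if f(y) < f(x):  # keep the child and its step size
                x, sigma = y, s
        return x, sigma

    # Toy objective: minimise (x - 5)^2, starting from 0.
    print(one_plus_one_es(lambda x: (x - 5.0) ** 2, 0.0))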

So I think the flaw in IE reasoning is the assumption that, at some
higher level of intelligence, a magic process will emerge that can
achieve miraculous things.

If you accept that, it precludes the possibility of TS happening
(solely) through an IE. What, then, about Kurzweil's law of
accelerating returns? Well, technological innovation is a similarly
complex optimization problem, just in a different setting. We can
regard the scientific community as the optimizing algorithm here and
come to the same conclusions as with IE. That is, unless humans
possess some kind of higher intelligence that can defeat heuristic
search, and I don't think there's any reason to believe that.

Complex optimization problems exhibit a law of diminishing returns
and a law of fits and starts, where the optimization process gets
stuck on a plateau for a long time, then breaks out of it and makes
quick progress for a while. But I've never seen anything exhibit a
law of accelerating returns. This would imply that, e.g., Moore's
law is just "an accident", a random product of exceedingly complex
interactions. It would take more than some plots of a few data
points to convince me to believe in a law of accelerating returns.

It also depends on how one defines exponential growth, since one can
always reparameterize a variable X as exp(X). I suppose what we want
is exponential growth of some variable that is needed for TS and
whose linear growth corresponds to a linear increase in
"technological ability" (that's very vague; can anybody help here?).
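
Coming back to the fits-and-starts pattern: it is easy to reproduce
even in the dumbest possible setting. In this toy run (blind random
search; all details are arbitrary), improvements arrive quickly at
first and then the plateaus between them grow longer and longer,
which is diminishing returns, not accelerating returns:

    import random

    # Blind random search: log each time the best-so-far improves.
    # In n trials the best improves only about ln(n) times, so the
    # gaps between improvements keep growing (longer plateaus).
    random.seed(1)
    best = float("-inf")
    for trial in range(1, 100001):
        x = random.random()
        if x > best:
            best = x
            print("improvement at trial", trial, "best =",
                  round(best, 6))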

In conclusion, I haven't yet found a credible lawlike explanation of
anything that could cause a "runaway" TS where things become very
unpredictable.

All comments are welcome.
