Eric Baum wrote:

Eliezer> It should be emphasized that I wrote LOGI in 2002;
Didn't know that. Are the rest of the papers in that 2005 book as old?

Eliezer> Nonetheless, calling something "complex" doesn't explain it.

Methinks you protest too much, although I take the point.

You may be right.  Still, better to protest too much than too little.

But I did like the presentation-- you didn't just say it was complex, you
pointed out it was layered, which some in the AI community had failed
to adequately credit (cf your critique of semantic nets).

Oh, I'm willing enough to say that *human* intelligence is complex, because I have a specific image of human intelligence as including a hugely subdivided cerebral cortex, layers of organization, multiple centers of gravity, many individually evolved instincts and intuitions in conflict, et cetera.

Saying that *intelligence* is complex is a whole different story.

Eliezer> A giant lookup table is a simple process that may know an
Eliezer> arbitrarily large amount, depending on the incompressibility
Eliezer> of the lookup table.  A human programmer turned loose on the
Eliezer> purely abstract form of a simple problem (e.g. stacking
Eliezer> towers of blocks), who invents a purely abstract algorithm
Eliezer> (e.g. mergesort) without knowing anything about which
Eliezer> specific blocks need to be moved, is an example of a complex
Eliezer> process that used very little specific knowledge about that
Eliezer> specific problem to come up with a good general solution.

I respectfully suggest that the human programmer couldn't do that
unless he knew a lot, in fact unless he had most of the program
(in chunks, not exactly assembled, capable of being assembled in
different ways to solve different problems) already in his head before
attacking the problem.

But that is not knowledge *specifically* about the blocks problem. It is not like having a giant lookup table in your head that says how to solve all possible blocks problems up to 10 blocks. The existence of "knowledge" that is very generally applicable, far beyond the domains over which it was generalized, is what makes general intelligence possible. It is why the problem is not NP-hard.

Even an untrained human couldn't do it, and an untrained human
is 10^44 creatures worth of evolution away from a tabula rasa.
Eliezer> Is the term "top level" really all that useful for describing
Eliezer> evolutionary designs?  The human brain has more than one
Eliezer> center of gravity.  The limbic system, the ancient goal
Eliezer> system at the center, is a center of gravity; everything grew
Eliezer> up around it.  The prefrontal cortex, home of reflection and
Eliezer> the self-model, is a center of gravity.  The cerebellum,
Eliezer> which learns the realtime skill of "thinking" and projects
Eliezer> massively to the cortex, is a center of gravity.

What do you mean by center of gravity?

About the same thing you mean by "top module"? An axis around which cognition turns? A major command-and-control outpost? I'm not sure that I have a better definition than the intuitive sound of the words plus the examples given.

The center of a self-modifying AI would be none of these things; it would be the criterion against which self-written code is checked.

I talked about levels in part because it was the subject of
your paper :^), and because the comment I was discussing seemed to have a two-level nature (the human and the culture);
but I do tend to think that big hierarchic programs tend to
have top modules, to a (potentially somewhat fuzzy) extent, and I tend to think of information as
being filtered and processed up to a point where decisions are made,
and the brain certainly has a somewhat layered structure.

I have my skepticism about the proper design for an AI being a big hierarchic program. Or a lot of little agents. Or a lot of little agents controlled by an incorruptible central broker. My current thinking tends to turn around stages of processing - *not* necessarily layers of organization as in the human idiom. Does information have to be filtered *up* to the decision-making level? Or is making the decision just one more *stage* of processing? Ultimately, a mind is the cognition that happens between sense input and motor output. If we write down the stages of an AI, and find a natural mountain - beginning with complex sense information, being processed toward a peak of simplicity and a direct decision, then increasingly complex translation toward motor stages - then we might call the peak of simplicity the "top".
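As a cartoon of what I mean by "stages rather than levels" - a toy sketch of my own, nothing like a real design, with all the function names invented for illustration - the "decision" is just one stage in a pipeline from sense to motor, with complexity peaking on either side of it:

    def perceive(raw_input):
        # complex sense data compressed into a simpler percept
        return {"features": sorted(set(raw_input))}

    def decide(percept):
        # a comparatively simple stage: choose a target feature
        return max(percept["features"])

    def plan(target):
        # the simple decision re-expanded into a more complex motor sequence
        return [f"move_{i}_toward_{target}" for i in range(3)]

    def act(motor_sequence):
        # motor output
        return " -> ".join(motor_sequence)

    print(act(plan(decide(perceive([3, 1, 4, 1, 5])))))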

But it doesn't follow that the optimal stages between sense and motor *must* obey any such neat progression. Maybe, when you design the system properly - as opposed to blindly accreting it by natural selection - some stages are more complicated and some are less complicated, and there's no natural top.

*Within humans*, the evolutionary idiom of levels of organization, and the actual design of the architecture, are such that we can speak comprehensibly of humans having a "top" level. In fact, I can think of at least three of them: the limbic system, the prefrontal cortex, and the cerebellum.

Eliezer>  From my perspective, this argument over "top levels" doesn't
Eliezer> have much to do with the question of recursive
Eliezer> self-improvement!  It's the agent's entire intelligence that
Eliezer> may be turned to improving itself.  Whether the greatest
Eliezer> amount of heavy lifting happens at a "top level", or lower
Eliezer> levels, or systems that don't modularize into levels of
Eliezer> organization; and whether the work done improves upon the
Eliezer> AI's top layers or lower layers; doesn't seem to me to
Eliezer> impinge much upon the general thrust of I.  J. Good's
Eliezer> "intelligence explosion" concept.  "The AI improves itself."
Eliezer> Why does this stop being an interesting idea if you further
Eliezer> specify that the AI is structured into levels of organization
Eliezer> with a simple level describable as "top"?

As I said:

even if there would be some way to keep modifying the top level to make it better, one could presumably achieve just as powerful an ultimate intelligence by keeping it fixed and adding more powerful lower levels (or maybe better yet, middle levels) or more or better chunks and modules within a middle or lower level.

You had posed a two-level system: humans and culture,
and said this was different from a seed AI, because the humans modify
the culture, and that's not as powerful as the whole AI modifying
itself.

Okay. I would still defend that. Not sure how the internal structure of the AI directly relates to the above issue.

But what I'm arguing is that there is no such distinction:
the humans modifying the culture really does modify the humans in
a potentially arbitrarily powerful way.
Within most AIs I can conceive, there will in any case be some
fixed top level, even within an AIXI or
Schmidhuber's OOPS or whatever, to the extent I understand them,

AIXI has an unalterably fixed top level; it cannot conceive of the possibility of modifying itself.

Schmidhuber's OOPS, if I recall correctly, supposedly has no invariants at all. If you can prove the new code has "greater expected utility", according to the current utility function (even if the new code includes changes to the utility function), and taking into account all changes that will be adopted by the new code, the new code gets adopted. But Schmidhuber is very vague about exactly how this proof takes place.
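A minimal schematic of that acceptance rule as I've just described it - my own paraphrase in code, not Schmidhuber's formalism; the `prover` argument is a stand-in for exactly the machinery he leaves vague:

    def consider_rewrite(current_system, proposed_rewrite, prover):
        """Adopt the rewrite only if `prover` returns a verified proof that,
        judged by the *current* utility function, expected utility with the
        rewrite (including any changes the rewrite makes to the utility
        function and to future rewriting) exceeds expected utility without
        it. How such proofs are found is the part left unspecified."""
        proof = prover(current_system, proposed_rewrite)
        if proof is not None:
            return proposed_rewrite   # proof verified: the new code is adopted
        return current_system         # otherwise keep running the old code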

My own thinking tends to the idea of a preserved optimization target, preserved preferences over outcomes, rather than protected bits in memory.

yet this doesn't preclude these things from powerful self-modification;
having a two-level system where the top level can't modify its very
top level (e.g., the humans can't modify their genome-- positing for the
sake of argument that they don't, and we only talk about progress that's
occurred to date) does not make it weakly self-improving in some sense
that bars it from gaining as much power as a "strongly self-improving"
alternative.

It is written in the _Twelve Virtues of Rationality_ that the sixth virtue is empiricism: "Do not ask which beliefs to profess, but which experiences to anticipate. Always know which difference of experience you argue about."

So let's see if we can figure out where we anticipate differently, and organize the conversation around that.

The main experience I anticipate may be described intuitively as "AI go FOOM". Past some threshold point - definitely not much above human intelligence, and probably substantially below it - a self-modifying AI undergoes an enormously rapid accession of optimization power (unless the AI has been specifically constructed so as to prefer an ascent which is slower than the maximum potential speed). This is a testable prediction, though its consequences render it significant beyond the usual clash of scientific theories.

The basic concept is not original with me and is usually attributed to a paper by I. J. Good in 1965, "Speculations Concerning the First Ultraintelligent Machine". (Pp. 31-88 in Advances in Computers, vol 6, eds. F. L. Alt and M. Rubinoff. New York: Academic Press.) Good labeled this an "intelligence explosion". I have recently been trying to consistently use the term "intelligence explosion" rather than "Singularity" because the latter term has just been abused too much.

Now there are many different imaginable ways that an intelligence explosion could occur. As a physicist, you are probably familiar with the history of the first nuclear pile, which achieved criticality on December 2nd, 1942. Szilard, Fermi, and friends built the first nuclear pile, in the open air of a squash court beneath Stagg Field at the University of Chicago, by stacking up alternating layers of uranium bricks and graphite bricks. The nuclear pile didn't exhibit its qualitative behavior change as a result of any qualitative change in the behavior of the underlying atoms and neutrons, nor as a result of the builders suddenly piling on a huge number of bricks. As the pile increased in size, there was a corresponding quantitative change in the effective neutron multiplication factor (k), which rose slowly toward 1. The actual first fission chain reaction had k of 1.0006 and ran in a delayed critical regime.

If Fermi et al. had not possessed the ability to quantitatively calculate the behavior of this phenomenon in advance, but instead had just piled on the bricks hoping for something interesting to happen, it would not have been a good year to attend the University of Chicago.

We can imagine an analogous cause of an intelligence explosion in which the key parameter is not the qualitative ability to self-modify, but a critical value for a smoothly changing quantitative parameter which measures how many additional self-improvements are triggered by an average self-improvement.
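To make the analogy concrete, here is a toy numerical sketch (purely illustrative; the parameter names and numbers are my own choices): treat k as the average number of further self-improvements triggered by one self-improvement, and sum the resulting cascade. Nothing qualitative changes in the underlying rule as k creeps upward, but the total goes from bounded to explosive as k crosses 1 - just as with the pile.

    def total_improvements(k, seed=1.0, rounds=1000):
        """Sum the cascade seed + seed*k + seed*k**2 + ...; for a fixed seed
        this stays bounded as rounds grow when k < 1 and diverges when k > 1."""
        total, current = 0.0, seed
        for _ in range(rounds):
            total += current
            current *= k
        return total

    for k in (0.5, 0.9, 0.99, 1.0006, 1.1):
        print(f"k = {k:<7} total improvements after 1000 rounds: "
              f"{total_improvements(k):.3g}")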

But this isn't the only potential cause of behavior that empirically looks like "AI go FOOM". The species Homo sapiens showed a sharp jump in the effectiveness of intelligence, as the result of natural selection exerting a more-or-less steady optimization pressure on hominids for millions of years, gradually expanding the brain and prefrontal cortex, tweaking the software architecture. A few tens of thousands of years ago, hominid intelligence crossed some key threshold and made a huge leap in real-world effectiveness; we went from caves to skyscrapers in the blink of an evolutionary eye. This happened with a continuous underlying selection pressure - there wasn't a huge jump in the optimization power of evolution when humans came along. The underlying brain architecture was also continuous - our cranial capacity didn't suddenly increase by two orders of magnitude. So it might be that, even if the AI is being elaborated from outside by human programmers, the curve for effective intelligence will jump sharply. It's certainly plausible that *the* key threshold was culture, but because we wiped out all our nearest relatives, it's hard to disentangle exactly which improvements to human cognition were responsible for what.

Or perhaps someone builds an AI prototype that shows some promising results, and the demo attracts another $100 million in venture capital, and this money purchases a thousand times as much supercomputing power. I doubt a thousandfold increase in hardware would purchase anything like a thousandfold increase in effective intelligence - but mere doubt is not reliable in the absence of any ability to perform an analytical calculation. Compared to chimps, humans have a threefold advantage in brain and a sixfold advantage in prefrontal cortex, which suggests (a) firmware is more important than hardware and (b) small increases in hardware can support large improvements in firmware.

Humans, thinking, certainly cause changes to their neurons; and it may even be possible that with a theoretically perfect series of instructions to our introspective levers, we could reprogram the firmware into whatever we liked. Just as it's theoretically possible that the genome could contain a series of DNA instructions which built something that built something that built diamondoid nanotechnology and placed it under the control of our high-level decision process, thus obviating all discussion of protected levels. But the genome *doesn't* contain those instructions, and naive humans don't even know the visual cortex exists, let alone have the power to reprogram it, and this is not coincidence. In theory, a sub-critical nuclear pile could have every single emitted neutron just happen to strike another nucleus, and so explode; but it's not very *probable*.

There is a level at which an AI is doing exactly the same thing as a human, who in turn is doing exactly the same thing as a chimp, who is doing exactly the same thing as a bacterium, who is doing exactly the same thing as a rock. This level is called physics. There'll be some level on which the behavior of the system is smoothly continuous with all its past history, changing neither qualitatively nor quantitatively.

I do not insist that an AI reaching down to its hardware and firmware levels must change *everything*. It doesn't have to violate the laws of physics. The important point of debate is not that the AI is "different" in some sense of how we describe it; the question is observed behavior. If the pragmatic result of an AI being able to modify and improve its own hardware and firmware is that the AI increases its effective self-improvement multiplication factor past 1 - metaphorically speaking - and goes "critical", then that's the important thing from my perspective. Or, if humans have already achieved cultural criticality, but the AI goes prompt critical (metaphorically speaking) and ascends at rates far faster than human culture, then again I regard that as the important empirical consequence.

I don't think there should be a question that being able to improve your hardware (possibly by millionfold or greater factors) and rewrite your firmware should provide *some* benefit. *How much* benefit is the issue here. Whether the change I'm describing is "qualitatively different" is a proxy question, which may turn on matters of mere definition; the key issue is what we observe in real life.

Now, if you said that humans are already self-modifying to such a degree that we should expect *no substantial additional benefit* from an AI having direct access to its own source code, *then* I'd know what difference of empirical anticipation we were arguing about.

I think the hard problem about achieving intelligence is crafting
the software, which problem is "hard" in a technical sense of being
NP-hard and requiring major computational effort,

Eliezer> As I objected at the AGI conference, if intelligence were
Eliezer> hard in the sense of being NP-hard, a mere 10^44 nodes
Eliezer> searched would be nowhere near enough to solve an environment
Eliezer> as complex as the world, nor find a solution anywhere near as
Eliezer> large as the human brain.

Eliezer> *Optimal* intelligence is NP-hard and probably
Eliezer> Turing-incomputable.  This we all know.

Eliezer> But if intelligence had been a problem in which *any*
Eliezer> solution whatsoever were NP-hard, it would imply a world in
Eliezer> which all organisms up to the first humans would have had
Eliezer> zero intelligence, and then, by sheer luck, evolution would
Eliezer> have hit on the optimal solution of human intelligence.  What
Eliezer> makes NP-hard problems difficult is that you can't gather
Eliezer> information about a rare solution by examining the many
Eliezer> common attempts that failed.

Eliezer> Finding successively better approximations to intelligence is
Eliezer> clearly not an NP-hard problem, or we would look over our
Eliezer> evolutionary history and find exponentially more evolutionary
Eliezer> generations separating linear increments of intelligence.
Eliezer> Hominid history may or may not have been "accelerating", but
Eliezer> it certainly wasn't logarithmic!

Eliezer> If you are really using NP-hard in the technical sense, and
Eliezer> not just a colloquial way of saying "bloody hard", then I
Eliezer> would have to say I flatly disagree: Over the domain where
Eliezer> hominid evolution searched, it was not an NP-hard problem to
Eliezer> find improved approximations to intelligence by local search
Eliezer> from previous solutions.

I am using the term NP-hard to an extent metaphorically, but
drawing on real complexity notions that problems can really be hard.
I'm not claiming that constructing an intelligence is a decision problem with a yes-no answer; in fact I'm not claiming it's an infinite class
of problems, which is necessary to talk about asymptotic behavior
at all. It's a particular instance-- we are trying to construct
one particular program that works in this particular world,
meaning one that solves a large collection of problems of certain types.

Okay, problems *can* be hard; what reason do you have to believe that this particular problem *is* hard?

(I don't buy into the notion of "general intelligence" that solves
any possible world or any possible problem.)

I agree. An AI is supposed to work in the unusual special case of our own low-entropy universe, not all possible worlds. No-Free-Lunch theorems, etc.

I think the problem of constructing the right code
for intelligence is a problem like finding a very short tour in
a particular huge TSP instance. A human can't solve it by hand (for
reasons that are best understood by thinking about complexity-theory
results about infinite problem classes and in-the-limit behavior,
which is why I appeal to that understanding). To solve it, you are going to have to construct a good algorithm,
*and run it for a long time*. If you do that, you can get a better
and better solution, just like if you run Lin-Kernighan on a huge
TSP instance, you will find a pretty short tour.

Finding a *short*, but not *optimal*, tour in a particular huge TSP instance, is not an NP-hard problem - there are algorithms that do it, as you mention. And much more importantly from the perspective of AI design, it was not an NP-hard problem for a programmer to find those algorithms.
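For concreteness, here is a tiny sketch of such an algorithm - plain 2-opt local search rather than Lin-Kernighan proper, on a random Euclidean instance; purely illustrative, and the instance size is my own arbitrary choice:

    import math, random

    def tour_length(points, tour):
        return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def two_opt(points, tour):
        """Keep reversing segments of the tour while doing so shortens it."""
        improved = True
        while improved:
            improved = False
            for i in range(1, len(tour) - 1):
                for j in range(i + 1, len(tour)):
                    candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                    if tour_length(points, candidate) < tour_length(points, tour):
                        tour, improved = candidate, True
        return tour

    random.seed(0)
    points = [(random.random(), random.random()) for _ in range(40)]
    tour = list(range(len(points)))
    print("random tour length:", round(tour_length(points, tour), 2))
    print("after 2-opt       :", round(tour_length(points, two_opt(points, tour)), 2))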

I furthermore note that the problem of constructing intelligent code doesn't seem to me at all like the problem of finding a short tour in a huge TSP instance. The world has a vast number of exploitable regularities, which have similarities and differences between themselves; there are meta-regularities in the regularities which can in turn be exploited. You can eat them one at a time, or swallow metaproblems in whole gulps.

Magic takes many forms. When you don't know how to do something, you can appeal to complexity, to emergence, to huge heaps of hardware, to vague similarities to the human brain... Are you sure that you aren't saying "We'll need to run the code for a long time" in order to generate, within yourself, a feeling of having thrown something really powerful at the problem? Like de Garis talking about ten thousand neural-net-module-engineers constructing an intelligent being? Do you know specifically what is the algorithm that you think *must* be run to generate an intelligence, and can you calculate quantitatively how long it takes to run?

We know that natural selection took a long time to run, but natural selection is a bloody inefficient algorithm. Natural selection is so ridiculously simple that we can even calculate quantitatively how inefficient it is, and come up with estimates like 2 ln(N) / s generations to fix a single mutation with advantage s in population N.
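Plugging some illustrative numbers into that estimate (my own arbitrary choices of N and s, just to show the scale):

    import math

    def fixation_generations(N, s):
        """The estimate quoted above: roughly 2*ln(N)/s generations to fix a
        single mutation of selective advantage s in a population of size N."""
        return 2 * math.log(N) / s

    for N, s in [(10_000, 0.01), (10_000, 0.001), (1_000_000, 0.01)]:
        print(f"N = {N:>9,}, s = {s}: ~{fixation_generations(N, s):,.0f} generations")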

I wouldn't be surprised if, in the course of building an AI, there were points where I found it convenient to run simple algorithms for a long time. But too much of this would signify that I was trying to brute-force the problem and failing to exploit important regularities in it.

Evolution ran for a heck of a lot of computation on the problem.
It is possible that humans will be able to jump-start a lot of
that, but it's also true we are not going to be able to run
for as much computation. It's an open question whether we can get
there, but I suggest it may take a composite algorithm-- both jump-starting the code design and then running a lot to improve it.

Actually, I very much like the idea of running simple programs for a long time to boot up an intelligence. Not because it's the only way to get intelligence, or even because it's convenient, but because it means that the humans have less complexity to potentially get wrong. I wouldn't use an evolutionary program because then I'd lose control of the resulting complexity, thus obviating the whole point of starting out simple.

Eliezer> Now as Justin Corwin pointed out to me, this does not mean
Eliezer> that intelligence is not *ultimately* NP-hard.  Evolution
Eliezer> could have been searching at the bottom of the design space,
Eliezer> coming up with initial solutions so inefficient that there
Eliezer> were plenty of big wins.  From a pragmatic standpoint, this
Eliezer> still implies I. J. Good's intelligence explosion in
Eliezer> practice; the first AI to search effectively enough to run up
Eliezer> against NP-hard problems in making further improvements, will
Eliezer> make an enormous leap relative to evolved intelligence before
Eliezer> running out of steam.

I don't know what you mean here at all.

Did previous paragraphs clear it up? In other words, Corwin's notion is that a *properly designed* intelligence is good enough that making further improvements is NP-hard, but human intelligences are operating far short of the level where this happens. Like starting out with a random traversal of the TSP graph; there'll be plenty of low-hanging fruit, and if you only take them one at a time, they'll last quite a while - you might start thinking it was an easy problem. Corwin's notion is that human intelligence is so poorly designed as to still occupy this regime; single mutations can still lift us up.

so the ability to make sequential small improvements, and bring to
bear the computation of millions or billions of (sophisticated,
powerful) brains, led to major improvements.

Eliezer> This is precisely the behavior that does *not* characterize
Eliezer> NP-hard problems.  Improvements on NP-hard problems don't add
Eliezer> up; when you tweak a local subproblem it breaks something
Eliezer> else.

I suggest these improvements are not merely "external", but
fundamentally affect thought itself. For example, one of the
distinctions between human and ape cognition is said to be that we
have "theory of mind" whereas they don't (or do much more
weakly). But I suggest that "theory of mind" must already be a
fairly complex program, built out of many sub-units, and that we
have built additional components and capabilities on what came
evolutionarily before by virtue of thinking about the problem and
passing on partial progress, for example in the mode of bed-time
stories and fiction. Both for language itself and things like
theory of mind, one can imagine some evolutionary improvements in
ability to use it through the Baldwin effect, but the main point
here seems to be the use of external storage in "culture" in
developing the algorithms and passing them on. Other examples of
modules that directly affect thinking prowess would be the
axiomatic method, and recursion, which are specific human
discoveries of modes of thinking, that are passed on using language
and improve "intelligence" in a core way.

Eliezer> Considering the infinitesimal amount of information that
Eliezer> evolution can store in the genome per generation, on the
Eliezer> order of one bit,

Actually, with sex it's theoretically possible to gain something like
sqrt(P) bits per generation (where P is population size); cf. the Baum
and Boneh paper, which can be found on whatisthought.com, and also a
Mackay paper. (This is a digression, since I'm not claiming huge
evolution since chimps.)

That's for human-built genetic algorithms, not natural selection. For natural selection see e.g. http://dspace.dial.pipex.com/jcollie/sle/index.htm. (I don't buy some of the author's claims here, but the central principle of which he gives a heuristic explanation is something I've heard of before in evolutionary biology; I think it goes back to Kimura.) Natural selection does run on O(1) bits per generation.

I furthermore note that gaining one standard deviation per generation, which is what your paper describes, is not obviously like gaining sqrt(P) bits of Shannon information per generation. Yes, the standard deviation is proportional to sqrt(N), but it's not clear how you're going from that to gaining sqrt(N) bits of Shannon information in the gene pool per generation. It would seem heuristically obvious that if your algorithm eliminates roughly half the population on each round, it can produce at most one bit of negentropy per round in allele frequencies. I only skimmed the referenced paper, though; so if there's a particular paragraph I ought to read, feel free to direct me to it.
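For what the heuristic is worth, here is the arithmetic I have in mind - my own back-of-the-envelope, not drawn from either paper: a selection event that keeps a fraction f of the population supplies at most -log2(f) bits of selection information per round, so culling half the population yields at most about one bit.

    import math

    for f in (0.5, 0.25, 0.1, 0.01):
        print(f"keep {f:>5.0%} of the population -> "
              f"at most {-math.log2(f):.2f} bits/round")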

Eliezer> it's certainly plausible that a lot of our
Eliezer> software is cultural.  This proposition, if true to a
Eliezer> sufficiently extreme degree, strongly impacts my AI ethics
Eliezer> because it means we can't read ethics off of generic human
Eliezer> brainware.  But it has very little to do with my AGI theory
Eliezer> as such.  Programs are programs.

It has to do with the subject of my post, which was that by modifying
the culture, humans have modified their core intelligence, so
there is no distinction from strongly self-improving.

You might as well say that, since evolution built humans, evolution is intelligent, therefore humans are nothing new... but pragmatically speaking, there seems to be a large qualitative difference in there somewhere. Ultimately it's all just the same ol' physics. You could equally well argue that, if we build powerful AIs, that shows the power of human intelligence; but again, it seems like the system went through an important transition somewhere.

Humans may have modified their core intelligence a *little*, but what about all the results showing the perseverance of cognitive biases against self-willed remediation attempts?

Eliezer> But try to teach the human operating system to a chimp, and
Eliezer> you realize that firmware counts for *a lot*.  Kanzi seems to
Eliezer> have picked up some interesting parts of the human operating
Eliezer> system - but Kanzi won't be entering college anytime soon.

I'm not claiming there was 0 evolution between chimp and man--
our brains are 4 times bigger.

(Terrence Deacon, in _The Symbolic Species_, says our brains are three times too large for an ape our size, and that our prefrontal cortex is relatively six times too large.)

I'm claiming that the hard part--
discovering the algorithms-- was mostly done by humans using storage
and culture. Then there was some simple tuning up in brain size,
and some slightly more complex Baldwin-effect etc. tuning that
programmed grammar into the genome in large measure, so we became
much more facile at learning the stuff quickly, and maybe other
similar stuff. I don't deny that if you turn all that other stuff
off you get an idiot; I'm just claiming it was computationally
easy.

Arguably, in a certain sense it *must* have been computationally easy because natural selection is incapable of doing anything computationally *hard*; evolution can't sit back and design complex interdependent machinery with hundreds of interlocking parts in a single afternoon, like a human programmer.

However, chimps can recognize themselves in mirrors and implement complex political strategies in which A anticipates B's reaction to C, so there's clearly some level of hardware support among chimps for empathy and theory of mind, despite the (presumable) lack of sufficiently complex culture to give rise to a proper Baldwin effect.

The real-world impressive power of human culture dates back largely to the last hundred thousand years which is an eyeblink of evolutionary time. Space shuttles are pure products of accumulated culture without much in the way of space-shuttle-specific adaptive support. Science is so much larger than the genome that even if we didn't know the answer in advance, we could guess that most scientific information *had* to be on paper somewhere, not in the genes.

The question is, when all that lovely knowledge gets written down on paper, what is the force that does the writing? What is the generator that produces all this lovely knowledge we're accumulating? Could a more powerful generator produce knowledge orders of magnitude faster? Obviously yes, because human neurons run at speeds that are at least six orders of magnitude short of what we know to be physically possible. (Drexler's _Nanosystems_ describes sensory inputs and motor outputs that operate at a similar speedup.) What about better firmware? Would that buy us many additional orders of magnitude?

If most of the generator complexity lay in a culturally transmitted human operating system that was open to introspection, then further improvements to firmware might be trivial. But then scientists would have a much better understanding of how science works; in fact, most scientists proceed mostly by instinct, and they don't have to learn rituals of anything remotely approaching the complexity of a human brain. Most people would find learning the workings of the human brain a hugely intimidating endeavor - rather than an easier and simpler version of something they did unwittingly as children, in the course of absorbing the larger and more important "human operating system" you postulate. This human operating system, this modular theory of mind that gets transmitted - where is it written down? There's a sharp limit on how much information you can accumulate without digital fidelity of transmission between generations. The vast majority of human evolution took place long before the invention of writing.

I don't believe in a culturally transmitted operating system, that existed over evolutionary periods, which contains greater total useful complexity than that specified in the brain-constructing portions of the human genome itself. And even if such a thing existed, the fact that we haven't written it down implies that it is largely inaccessible to introspection and hence to deliberative, intelligent self-modification.

I don't understand any real distinction between "weakly self-improving
processes" and "strongly self-improving processes", and
hence, if there is such a distinction, I would be happy for
clarification.

Eliezer> The "cheap shot" reply is: Try thinking your neurons into
Eliezer> running at 200MHz instead of 200Hz.  Try thinking your
Eliezer> neurons into performing noiseless arithmetic operations.  Try
Eliezer> thinking your mind onto a hundred times as much brain, the
Eliezer> way you get a hard drive a hundred times as large every 10
Eliezer> years or so.

Eliezer> Now that's just hardware, of course.  But evolution, the same
Eliezer> designer, wrote the hardware and the firmware.  Why shouldn't
Eliezer> there be equally huge improvements waiting in firmware?  We
Eliezer> understand human hardware better than human firmware, so we
Eliezer> can clearly see how restricted we are by not being able to
Eliezer> modify the hardware level.  Being unable to reach down to
Eliezer> firmware may be less visibly annoying, but it's a good bet
Eliezer> that the design idiom is just as powerful.

Eliezer> "The further down you reach, the more power."  This is the
Eliezer> idiom of strong self-improvement and I think the hardware
Eliezer> reply is a valid illustration of this.  It seems so simple
Eliezer> that it sounds like a cheap shot, but I think it's a valid
Eliezer> cheap shot.  We were born onto badly designed processors and
Eliezer> we can't fix that by pulling on the few levers exposed by our
Eliezer> introspective API.  The firmware is probably even more
Eliezer> important; it's just harder to explain.

Eliezer> And merely the potential hardware improvements still imply
Eliezer> I. J. Good's intelligence explosion.  So is there a practical
Eliezer> difference?

The cheapshot reply to your cheapshot reply is that if we construct
an AI, that AI is just another part of the lower level in the weakly
self-improving process; it's part of our "culture", so we can indeed
realize the hardware improvement. This may sound cheap, but it shows there is no real difference between the two-layered system
and the entirely self-recursive one.

The cheap-cheap-cheap-reply is that if a self-improving AI goes off and builds a Dyson Sphere, and that is "no real difference", I'm not sure I want to see what a "real difference" looks like. Again, the cheap^3 reply seems to me valid because it asks what difference of experience we anticipate.

--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
