Eliezer> It should be emphasized that I wrote LOGI in 2002; 

Didn't know that. Are the rest of the papers in that 2005 book as old?

Eliezer> Nonetheless, calling something "complex" doesn't explain it.

Methinks you protest too much, although I take the point. But I did
like the presentation: you didn't just say it was complex, you
pointed out that it was layered, which some in the AI community had
failed to adequately credit (cf. your critique of semantic nets).


Eliezer> A giant lookup table is a simple process that may know an
Eliezer> arbitrarily large amount, depending on the incompressibility
Eliezer> of the lookup table.  A human programmer turned loose on the
Eliezer> purely abstract form of a simple problem (e.g. stacking
Eliezer> towers of blocks), who invents a purely abstract algorithm
Eliezer> (e.g. mergesort) without knowing anything about which
Eliezer> specific blocks need to be moved, is an example of a complex
Eliezer> process that used very little specific knowledge about that
Eliezer> specific problem to come up with a good general solution.

I respectfully suggest that the human programmer couldn't do that
unless he knew a lot; in fact, unless he had most of the program
(in chunks, not exactly assembled, capable of being assembled in
different ways to solve different problems) already in his head
before attacking the problem.

Even an untrained human couldn't do it, and an untrained human
is 10^44 creatures' worth of evolution away from a tabula rasa.


Eliezer> Is the term "top level" really all that useful for describing
Eliezer> evolutionary designs?  The human brain has more than one
Eliezer> center of gravity.  The limbic system, the ancient goal
Eliezer> system at the center, is a center of gravity; everything grew
Eliezer> up around it.  The prefrontal cortex, home of reflection and
Eliezer> the self-model, is a center of gravity.  The cerebellum,
Eliezer> which learns the realtime skill of "thinking" and projects
Eliezer> massively to the cortex, is a center of gravity.

What do you mean by "center of gravity"?

I talked about levels in part because it was the subject of
your paper :^) , and because the comment I was discussing seemed to
have a two-level nature (the humans and the culture).
But I do think that big hierarchic programs tend to have top
modules, to a (potentially somewhat fuzzy) extent; I tend to think
of information as being filtered and processed upward to a point
where decisions are made; and the brain certainly has a somewhat
layered structure.
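
To make that concrete, here is a toy sketch in Python of what I mean
by information being filtered and summarized upward until a small top
module makes the decision. The layer names, thresholds, and numbers
are invented purely for illustration; I am not claiming the brain
works this way.

    # Toy sketch of a layered decision hierarchy: lower layers filter and
    # summarize raw input, and a small fixed "top module" makes the decision.
    # All names and thresholds are illustrative, not drawn from any real system.

    def sensory_layer(raw):
        """Lowest layer: crude filtering of the raw numbers (drop obvious noise)."""
        return [x for x in raw if abs(x) < 100]

    def feature_layer(filtered):
        """Middle layer: compress the filtered stream into a few summary features."""
        if not filtered:
            return {"mean": 0.0, "spread": 0.0}
        mean = sum(filtered) / len(filtered)
        spread = max(filtered) - min(filtered)
        return {"mean": mean, "spread": spread}

    def top_module(features):
        """Top layer: a simple decision rule over the summaries passed up to it."""
        return "act" if features["mean"] > 0 and features["spread"] < 50 else "wait"

    if __name__ == "__main__":
        raw_stream = [3, -2, 7, 250, 1, -1, 4]   # 250 gets filtered out as noise
        decision = top_module(feature_layer(sensory_layer(raw_stream)))
        print(decision)   # prints "act" for this input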

Eliezer>  From my perspective, this argument over "top levels" doesn't
Eliezer> have much to do with the question of recursive
Eliezer> self-improvement!  It's the agent's entire intelligence that
Eliezer> may be turned to improving itself.  Whether the greatest
Eliezer> amount of heavy lifting happens at a "top level", or lower
Eliezer> levels, or systems that don't modularize into levels of
Eliezer> organization; and whether the work done improves upon the
Eliezer> AI's top layers or lower layers; doesn't seem to me to
Eliezer> impinge much upon the general thrust of I.  J. Good's
Eliezer> "intelligence explosion" concept.  "The AI improves itself."
Eliezer> Why does this stop being an interesting idea if you further
Eliezer> specify that the AI is structured into levels of organization
Eliezer> with a simple level describable as "top"?

As I said:
> even if there would be some way to keep modifying the top level 
> to make it better, one could presumably achieve just as powerful an 
> ultimate intelligence by keeping it fixed and adding more powerful 
> lower levels (or maybe better yet, middle levels) or more or better 
> chunks and modules within a middle or lower level.

You had posed a two-level system, humans and culture, and said this
was different from a seed AI, because the humans modify the culture,
and that's not as powerful as the whole AI modifying itself.

But what I'm arguing is that there is no such distinction: the
humans modifying the culture really does modify the humans, in a
potentially arbitrarily powerful way. Within most AIs I can
conceive there will in any case be some fixed top level, even
within AIXI or Schmidhuber's OOPS (to the extent I understand
them), yet this doesn't preclude those systems from powerful
self-modification. Having a two-level system in which the very top
level can't be modified (e.g. the humans can't modify their genome;
positing for the sake of argument that they don't, and talking only
about progress that has occurred to date) does not make it weakly
self-improving in some sense that bars it from gaining as much
power as a "strongly self-improving" alternative.
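
To illustrate the point in code (a minimal sketch only; the task,
scoring function, and numbers are all invented for illustration, and
this is an analogy rather than a claim about AIXI or OOPS), here is a
system whose top level is a fixed loop that never rewrites itself, yet
the system keeps improving because that fixed loop keeps swapping in
better versions of a lower-level module:

    import random

    # Sketch of a "fixed top level" that still yields open-ended improvement:
    # the top-level loop below is never modified, but it repeatedly replaces
    # a lower-level module with better-scoring variants of itself.

    TARGET = [4.0, -1.0, 7.0]      # the toy "world" the lower module must fit

    def score(module):
        """Higher is better: negative squared error against the target."""
        return -sum((m - t) ** 2 for m, t in zip(module, TARGET))

    def propose_variant(module):
        """Lower-level self-modification: a small random tweak to one entry."""
        i = random.randrange(len(module))
        tweaked = list(module)
        tweaked[i] += random.uniform(-1.0, 1.0)
        return tweaked

    def fixed_top_level(module, steps=5000):
        """The immutable top level: propose, compare, keep whichever is better."""
        for _ in range(steps):
            candidate = propose_variant(module)
            if score(candidate) > score(module):
                module = candidate
        return module

    if __name__ == "__main__":
        random.seed(0)
        start = [0.0, 0.0, 0.0]
        improved = fixed_top_level(start)
        print(round(score(start), 4), "->", round(score(improved), 4))

The loop at the top never changes, yet the system as a whole gets very
close to the best achievable lower module on this toy task; nothing
about the fixed top bars the improvement.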

>> I think the hard problem about achieving intelligence is crafting
>> the software, which problem is "hard" in a technical sense of being
>> NP-hard and requiring major computational effort,

Eliezer> As I objected at the AGI conference, if intelligence were
Eliezer> hard in the sense of being NP-hard, a mere 10^44 nodes
Eliezer> searched would be nowhere near enough to solve an environment
Eliezer> as complex as the world, nor find a solution anywhere near as
Eliezer> large as the human brain.

Eliezer> *Optimal* intelligence is NP-hard and probably
Eliezer> Turing-incomputable.  This we all know.


Eliezer> But if intelligence had been a problem in which *any*
Eliezer> solution whatsoever were NP-hard, it would imply a world in
Eliezer> which all organisms up to the first humans would have had
Eliezer> zero intelligence, and then, by sheer luck, evolution would
Eliezer> have hit on the optimal solution of human intelligence.  What
Eliezer> makes NP-hard problems difficult is that you can't gather
Eliezer> information about a rare solution by examining the many
Eliezer> common attempts that failed.

Eliezer> Finding successively better approximations to intelligence is
Eliezer> clearly not an NP-hard problem, or we would look over our
Eliezer> evolutionary history and find exponentially more evolutionary
Eliezer> generations separating linear increments of intelligence.
Eliezer> Hominid history may or may not have been "accelerating", but
Eliezer> it certainly wasn't logarithmic!

Eliezer> If you are really using NP-hard in the technical sense, and
Eliezer> not just a colloquial way of saying "bloody hard", then I
Eliezer> would have to say I flatly disagree: Over the domain where
Eliezer> hominid evolution searched, it was not an NP-hard problem to
Eliezer> find improved approximations to intelligence by local search
Eliezer> from previous solutions.

I am using the term NP-hard to an extent metaphorically, but
drawing on real complexity notions that problems can really be
hard. I'm not claiming that constructing an intelligence is a
decision problem with a yes-no answer; in fact I'm not claiming
it's an infinite class of problems, which would be necessary to
talk about asymptotic behavior at all.
It's a particular instance: we are trying to construct one
particular program that works in this particular world, meaning one
that solves a large collection of problems of certain types.
(I don't buy into the notion of "general intelligence" that solves
any possible world or any possible problem.)
I think the problem of constructing the right code for intelligence
is like finding a very short tour in a particular huge TSP instance.
A human can't solve it by hand, for reasons that are best understood
by thinking about complexity-theory results about infinite problem
classes and behavior in the limit (which is why I appeal to that
understanding).
To solve it, you are going to have to construct a good algorithm
*and run it for a long time*. If you do that, you can get a better
and better solution, just as running Lin-Kernighan on a huge TSP
instance will find you a pretty short tour.
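
As a concrete (if toy) illustration: here is a minimal sketch in
Python using 2-opt, a simpler cousin of Lin-Kernighan, on a random TSP
instance. The instance size and iteration count are arbitrary; the
point is only that a decent local-search algorithm, run for a while,
keeps finding shorter and shorter tours on an instance nobody could
solve by hand.

    import math
    import random

    # Minimal 2-opt local search on a random TSP instance: repeatedly reverse
    # a random segment of the tour and keep the reversal whenever it shortens
    # the tour.  Run longer and the tour keeps getting shorter.

    def tour_length(cities, tour):
        return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def two_opt_step(cities, tour):
        """Try one random segment reversal; keep it only if the tour shrinks."""
        i, j = sorted(random.sample(range(len(tour)), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        if tour_length(cities, candidate) < tour_length(cities, tour):
            return candidate
        return tour

    if __name__ == "__main__":
        random.seed(0)
        cities = [(random.random(), random.random()) for _ in range(200)]
        tour = list(range(len(cities)))
        print("initial tour length:", round(tour_length(cities, tour), 2))
        for _ in range(20000):
            tour = two_opt_step(cities, tour)
        print("after local search: ", round(tour_length(cities, tour), 2))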

Evolution ran a heck of a lot of computation on the problem.
It is possible that humans will be able to jump-start a lot of
that, but it's also true that we are not going to be able to run
for as much computation. It's an open question whether we can get
there, but I suggest it may take a composite algorithm: both
jump-starting the code design and then running a lot to improve it.

Eliezer> Now as Justin Corwin pointed out to me, this does not mean
Eliezer> that intelligence is not *ultimately* NP-hard.  Evolution
Eliezer> could have been searching at the bottom of the design space,
Eliezer> coming up with initial solutions so inefficient that there
Eliezer> were plenty of big wins.  From a pragmatic standpoint, this
Eliezer> still implies I. J. Good's intelligence explosion in
Eliezer> practice; the first AI to search effectively enough to run up
Eliezer> against NP-hard problems in making further improvements, will
Eliezer> make an enormous leap relative to evolved intelligence before
Eliezer> running out of steam.

I don't know what you mean here at all.

>> so the ability to make sequential small improvements, and bring to
>> bear the computation of millions or billions of (sophisticated,
>> powerful) brains, led to major improvements.

Eliezer> This is precisely the behavior that does *not* characterize
Eliezer> NP-hard problems.  Improvements on NP-hard problems don't add
Eliezer> up; when you tweak a local subproblem it breaks something
Eliezer> else.


>> I suggest these improvements are not merely "external", but
>> fundamentally affect thought itself. For example, one of the
>> distinctions between human and ape cognition is said to be that we
>> have "theory of mind" whereas they don't (or do much more
>> weakly). But I suggest that "theory of mind" must already be a
>> fairly complex program, built out of many sub-units, and that we
>> have built additional components and capabilities on what came
>> evolutionarily before by virtue of thinking about the problem and
>> passing on partial progress, for example in the mode of bed-time
>> stories and fiction. Both for language itself and things like
>> theory of mind, one can imagine some evolutionary improvements in
>> ability to use it through the Baldwin effect, but the main point
>> here seems to be the use of external storage in "culture" in
>> developing the algorithms and passing them on. Other examples of
>> modules that directly effect thinking prowess would be the
>> axiomatic method, and recursion, which are specific human
>> discoveries of modes of thinking, that are passed on using language
>> and improve "intelligence" in a core way.

Eliezer> Considering the infinitesimal amount of information that
Eliezer> evolution can store in the genome per generation, on the
Eliezer> order of one bit, 

Actually, with sex it's theoretically possible to gain something like
sqrt(P) bits per generation (where P is population size); cf. the
Baum-Boneh paper, which can be found on whatisthought.com, and also a
MacKay paper. (This is a digression, since I'm not claiming there was
huge evolution since the chimps.)

Eliezer> it's certainly plausible that a lot of our
Eliezer> software is cultural.  This proposition, if true to a
Eliezer> sufficiently extreme degree, strongly impacts my AI ethics
Eliezer> because it means we can't read ethics off of generic human
Eliezer> brainware.  But it has very little to do with my AGI theory
Eliezer> as such.  Programs are programs.

It has to do with the subject of my post, which was that by modifying
the culture, humans have modified their core intelligence, so there
is no distinction from a "strongly self-improving" process.

Eliezer> But try to teach the human operating system to a chimp, and
Eliezer> you realize that firmware counts for *a lot*.  Kanzi seems to
Eliezer> have picked up some interesting parts of the human operating
Eliezer> system - but Kanzi won't be entering college anytime soon.

I'm not claiming there was zero evolution between chimp and man;
our brains are four times bigger. I'm claiming that the hard part,
discovering the algorithms, was mostly done by humans using storage
and culture. Then there was some simple tuning up of brain size,
and some slightly more complex Baldwin-effect tuning that in large
measure programmed grammar into the genome, so we became much more
facile at learning the stuff quickly, and maybe other similar
adjustments. I don't deny that if you turn all that other stuff off
you get an idiot; I'm just claiming it was computationally easy.

>> I don't understand any real distinction between "weakly self
>> improving processes" and "strongly self improving processes", and
>> hence, if there is such a distinction, I would be happy for
>> clarification.

Eliezer> The "cheap shot" reply is: Try thinking your neurons into
Eliezer> running at 200MHz instead of 200Hz.  Try thinking your
Eliezer> neurons into performing noiseless arithmetic operations.  Try
Eliezer> thinking your mind onto a hundred times as much brain, the
Eliezer> way you get a hard drive a hundred times as large every 10
Eliezer> years or so.

Eliezer> Now that's just hardware, of course.  But evolution, the same
Eliezer> designer, wrote the hardware and the firmware.  Why shouldn't
Eliezer> there be equally huge improvements waiting in firmware?  We
Eliezer> understand human hardware better than human firmware, so we
Eliezer> can clearly see how restricted we are by not being able to
Eliezer> modify the hardware level.  Being unable to reach down to
Eliezer> firmware may be less visibly annoying, but it's a good bet
Eliezer> that the design idiom is just as powerful.

Eliezer> "The further down you reach, the more power."  This is the
Eliezer> idiom of strong self-improvement and I think the hardware
Eliezer> reply is a valid illustration of this.  It seems so simple
Eliezer> that it sounds like a cheap shot, but I think it's a valid
Eliezer> cheap shot.  We were born onto badly designed processors and
Eliezer> we can't fix that by pulling on the few levers exposed by our
Eliezer> introspective API.  The firmware is probably even more
Eliezer> important; it's just harder to explain.

Eliezer> And merely the potential hardware improvements still imply
Eliezer> I. J. Good's intelligence explosion.  So is there a practical
Eliezer> difference?

The cheap-shot reply to your cheap-shot reply is that if we construct
an AI, that AI is just another part of the lower level in the weakly
self-improving process; it's part of our "culture", so we can indeed
realize the hardware improvement. This may sound cheap, but it shows
there is no real difference between the two-layered system and the
entirely self-recursive one.

 

>> Eric Baum http://whatisthought.com

Eliezer> -- Eliezer S. Yudkowsky http://singinst.org/ Research Fellow,
Eliezer> Singularity Institute for Artificial Intelligence

