Eric Baum wrote:
>>> even if there would be some way to keep modifying the top level to
>>> make it better, one could presumably achieve just as powerful an
>>> ultimate intelligence by keeping it fixed and adding more powerful
>>> lower levels (or maybe better yet, middle levels) or more or
>>> better chunks and modules within a middle or lower level.
>> You had posed a 2 level system: humans and culture, and said this
>> was different from a seed AI, because the humans modify the
>> culture, and that's not as powerful as the whole AI modifying
>> itself.

Eliezer> Okay.  I would still defend that.  Not sure how the internal
Eliezer> structure of the AI directly relates to the above issue.

>> But what I'm arguing is that there is no such distinction, the
>> humans modifying the culture really does modify the humans in a
>> potentially arbitrarily powerful way; within most AIs I can
>> conceive, there will in any case be some fixed top level, even
>> within an AIXI or in Schmidhuber's OOPS or whatever to the extent I
>> understand them,

Eliezer> AIXI has an unalterably fixed top level; it cannot conceive
Eliezer> of the possibility of modifying itself.

Right, that's the point. AIXI is asserted to be optimally intelligent
(given arbitrary resources; I think this is uninteresting, but maybe that's
beside the point). Thus it's an existence proof that fixing the top
level and just modifying stuff below it can achieve optimal
intelligence. Given this example, what can the self-modifying
AI do that humans modifying "culture" can't?
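
To make "fixed top level" concrete: as I understand Hutter's definition
(I'm paraphrasing from memory, so don't hold me to the exact notation),
the whole of AIXI's top level is one hard-wired expectimax expression
over a universal mixture of environment programs q:

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           (r_k + \cdots + r_m) \sum_{q : q(a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The argmax/sum structure and the 2^{-\ell(q)} weighting never change; the
only thing that varies from cycle to cycle is the interaction history the
mixture is conditioned on. That fixed top level is, by hypothesis, already
optimal.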

Eliezer> Schmidhuber's OOPS, if I recall correctly, supposedly has no
Eliezer> invariants at all.  If you can prove the new code has
Eliezer> "greater expected utility", according to the current utility
Eliezer> function (even if the new code includes changes to the
Eliezer> utility function), and taking into account all changes that
Eliezer> will be adopted by the new code, the new code gets adopted.
Eliezer> But Schmidhuber is very vague about exactly how this proof
Eliezer> takes place.

I think you are confused about OOPS. Maybe you are talking about
Schmidhuber's Goedel machine (which I haven't read closely). But
even there, I think you are confused. The apparatus for proving
and adopting new code is fixed, no?
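
To be concrete about what I mean by a fixed apparatus, here is the kind of
top-level loop I have in mind, as a toy Python sketch of my own (the names
and details are invented for illustration; this is not Schmidhuber's actual
construction):

    import random

    # A toy "propose, prove better, adopt" loop.  The thing being improved
    # is just a number and the "proof" is a directly checkable inequality,
    # but the shape of the loop is the point: the apparatus that proposes,
    # checks, and adopts rewrites is itself hand-coded and sits above
    # everything it rewrites.

    def utility(program):
        # Stand-in utility function; it peaks at program == 3.0.
        return -(program - 3.0) ** 2

    def propose_rewrite(program):
        # Stand-in rewrite generator: a small random perturbation.
        return program + random.uniform(-0.5, 0.5)

    def provably_better(candidate, current):
        # Stand-in for a machine-checkable proof of higher utility.
        return utility(candidate) > utility(current)

    program = 0.0
    for _ in range(10000):
        candidate = propose_rewrite(program)
        if provably_better(candidate, program):
            program = candidate   # adopt only provably-better rewrites

    print(program)   # ends up near 3.0, while the loop itself never changed

Everything below the loop can change arbitrarily; the loop itself is what
I'm calling the fixed top level.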

But anyway, this is a digression. If you accept that AIXI is "optimally"
intelligent and has a fixed top level,
then it proves there is no distinction between what is achievable with a
fixed top level ("weakly self-improving") and with self-modification
("strongly self-improving"), which I believe is what you claimed.

Maybe you could argue that self-modification can be faster.
But we'd need to understand a lot more about both processes
than we do now to establish that as an affirmative fact.

Eliezer> My own thinking tends to the idea of a preserved optimization
Eliezer> target, preserved preferences over outcomes, rather than
Eliezer> protected bits in memory.

>> yet this doesn't preclude these things from powerful self
>> modification, having a 2 level system where the top level can't
>> modify its very top level (eg the humans can't modify their
>> genome-- positing for the sake of argument they don't and we only
>> talk about progress that's occurred to date) does not make it
>> weakly self-improving in some sense that bars it from gaining as
>> much power as a "strongly self-improving" alternative.

Eliezer> It is written in the _Twelve Virtues of Rationality_ that the
Eliezer> sixth virtue is empiricism: "Do not ask which beliefs to
Eliezer> profess, but which experiences to anticipate.  Always know
Eliezer> which difference of experience you argue about."

Eliezer> So let's see if we can figure out where we anticipate
Eliezer> differently, and organize the conversation around that.

Eliezer> The main experience I anticipate may be described intuitively
Eliezer> as "AI go FOOM".  Past some threshold point - definitely not
Eliezer> much above human intelligence, and probably substantially
Eliezer> below it - a self-modifying AI undergoes an enormously rapid
Eliezer> accession of optimization power (unless the AI has been
Eliezer> specifically constructed so as to prefer an ascent which is
Eliezer> slower than the maximum potential speed).  This is a testable
Eliezer> prediction, though its consequences render it significant
Eliezer> beyond the usual clash of scientific theories.

Well, that sounds a lot like what already happened to human
intelligence, when we gained the ability to speak and later to 
print. 

Eliezer> The basic concept is not original with me and is usually
Eliezer> attributed to a paper by I. J. Good in 1965, "Speculations
Eliezer> Concerning the First Ultraintelligent Machine".  (Pp. 31-88
Eliezer> in Advances in Computers, vol 6, eds. F. L. Alt and
Eliezer> M. Rubinoff.  New York: Academic Press.)  Good labeled this
Eliezer> an "intelligence explosion".  I have recently been trying to
Eliezer> consistently use the term "intelligence explosion" rather
Eliezer> than "Singularity" because the latter term has just been
Eliezer> abused too much.

The term "singularity" seems to imply that the rapid accession of
optimization power becomes infinite. Whereas, the human example
is that it merely becomes more rapid.

Eliezer> Now there are many different imaginable ways that an
Eliezer> intelligence explosion could occur.  As a physicist, you are
Eliezer> probably familiar with the history of the first nuclear pile,
Eliezer> which achieved criticality on December 2nd, 1942.  Szilard,
Eliezer> Fermi, and friends built the first nuclear pile, in the open
Eliezer> air of a squash court beneath Stagg Field at the University
Eliezer> of Chicago, by stacking up alternating layers of uranium
Eliezer> bricks and graphite bricks.  The nuclear pile didn't exhibit
Eliezer> its qualitative behavior change as a result of any
Eliezer> qualitative change in the behavior of the underlying atoms
Eliezer> and neutrons, nor as a result of the builders suddenly piling
Eliezer> on a huge number of bricks.  As the pile increased in size,
Eliezer> there was a corresponding quantitative change in the
Eliezer> effective neutron multiplication factor (k), which rose
Eliezer> slowly toward 1. The actual first fission chain reaction had
Eliezer> k of 1.0006 and ran in a delayed critical regime.

Eliezer> If Fermi et al. had not possessed the ability to
Eliezer> quantitatively calculate the behavior of this phenomenon in
Eliezer> advance, but instead had just piled on the bricks hoping for
Eliezer> something interesting to happen, it would not have been a
Eliezer> good year to attend the University of Chicago.

Eliezer> We can imagine an analogous cause of an intelligence
Eliezer> explosion in which the key parameter is not the qualitative
Eliezer> ability to self-modify, but a critical value for a smoothly
Eliezer> changing quantitative parameter which measures how many
Eliezer> additional self-improvements are triggered by an average
Eliezer> self-improvement.
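
If I follow the analogy, the key parameter is the mean number k of further
self-improvements triggered by each self-improvement. Treating that as a
simple branching process (a toy of my own, not anything from the nuclear
calculation), a single seed improvement yields about 1/(1-k) improvements
in total when k < 1, and an unbounded cascade once k reaches 1:

    # Toy model of the criticality analogy: if each self-improvement
    # triggers on average k further self-improvements, the expected size
    # of generation n is k**n, and the expected total is the geometric
    # series 1 + k + k^2 + ...  (The values of k below are illustrative.)

    def expected_total(k, generations=200):
        return sum(k ** n for n in range(generations))

    for k in (0.5, 0.9, 0.99, 1.0006):
        print(k, expected_total(k))

For k < 1 the total approaches 1/(1-k); for k >= 1 it just keeps growing
with the number of generations. Which regime obtains seems to me exactly
the kind of thing we would need to understand the process much better to
decide.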

Eliezer> But this isn't the only potential cause of behavior that
Eliezer> empirically looks like "AI go FOOM".  The species Homo
Eliezer> sapiens showed a sharp jump in the effectiveness of
Eliezer> intelligence, as the result of natural selection exerting a
Eliezer> more-or-less steady optimization pressure on hominids for
Eliezer> millions of years, gradually expanding the brain and
Eliezer> prefrontal cortex, tweaking the software architecture.  A few
Eliezer> tens of thousands of years ago, hominid intelligence crossed
Eliezer> some key threshold and made a huge leap in real-world
Eliezer> effectiveness; we went from caves to skyscrapers in the blink
Eliezer> of an evolutionary eye.  This happened with a continuous
Eliezer> underlying selection pressure - there wasn't a huge jump in
Eliezer> the optimization power of evolution when humans came along.
Eliezer> The underlying brain architecture was also continuous - our
Eliezer> cranial capacity didn't suddenly increase by two orders of
Eliezer> magnitude.  

I say again: everything I know about this is straightforwardly and
naturally explained by the discovery of language, which allowed
cumulative improvement in the "cultural" component of the program.

And this discovery itself may have been simply crossing a valley in 
a fitness landscape, such as going to digital rather than analog
encoding in verbal communication, or realizing that you should be
naming objects and trying to figure out what the other guy meant
when he spoke a name.


Eliezer> So it might be that, even if the AI is being
Eliezer> elaborated from outside by human programmers, the curve for
Eliezer> effective intelligence will jump sharply.  It's certainly
Eliezer> plausible that *the* key threshold was culture, but because
Eliezer> we wiped out all our nearest relatives, it's hard to
Eliezer> disentangle exactly which improvements to human cognition
Eliezer> were responsible for what.

I don't think the case can be proved, but I think it is much the
simplest explanation and is not contradicted by anything I know.

...

Eliezer> I don't think there should be a question that being able to
Eliezer> improve your hardware (possibly by millionfold or greater
Eliezer> factors) and rewrite your firmware should provide *some*
Eliezer> benefit.  *How much* benefit is the issue here.  Whether the
Eliezer> change I'm describing is "qualitatively different" is a proxy
Eliezer> question, which may turn on matters of mere definition; the
Eliezer> key issue is what we observe in real life.

Eliezer> Now, if you said that humans are already self-modifying to
Eliezer> such a degree that we should expect *no substantial
Eliezer> additional benefit* from an AI having direct access to its
Eliezer> own source code, *then* I'd know what difference of empirical
Eliezer> anticipation we were arguing about.

I objected to the claim of a qualitative difference.
I also think humans have been very strongly self-modifying, and that we
have in fact observed a "singularity" of sorts (which was very fast,
but not unbounded) when language and the printing press were discovered.
I think that is good evidence of the nature of "singularities".
I think it will be hard enough to get an AI to human level;
without very good evidence, which I haven't seen,
I don't think we should expect some qualitatively different kind of
singularity.


