Eric,

I hear what you say, but I do have principled reasons for characterizing it the way I did: as far as I can make out, there is a difference between the 'thinking' component of an intelligence and the 'motivation' component, and that difference allows us to talk about manipulating and controlling the things that give us fun. Thinking is fun, but not because of something intrinsic to the thinking itself; the fun comes from the external motivational system.

What that means is that if you build a system with the ability to do vastly more complex forms of thinking, that fact by itself does not have any implications for whether it is more 'enjoyable' or 'satisfying' to the system to engage in the more complex form of thinking. What determines the satisfaction would be the way the more complex form of thinking affected the motivational system (if at all).

An analogy. Suppose I could insert a module in my brain right now to make me the most effective crossword puzzle solver in the world. Would that ability make me want to do crosswords all the time? Would it be so satisfying that I would *never* consider extracting that crossword ability from my brain in the future? Would I ever cast around for other new skills that might give me pleasure? And if I did go looking for new skills to acquire, wouldn't that mean that in some sense I was not quite as satisfied with the pleasure I got from the crosswords?

What I am trying to do in this thought experiment is to drive a wedge between the idea of [the pleasure I get from doing intellectual task X] and [intellectual task X].

If you rephrased your suggestion to say that once I had experienced the possibility of changing the forms of pleasure that I am capable of experiencing, I would never want to go back to a state in which I had no options for changing my forms of pleasure, then I would completely agree with that. [I wonder why?]

But if you were to claim that once I experienced superintelligent intellectual capacity, I would never want to spend any time existing inside a lesser form of mind, that seems somehow unbelievable ... if only because in my present state I *am* curious about the possibility of spending time with less knowledge than I have now (e.g. being a tiger for a while, or being Isaac Newton before 1660), and it would seem that your hypothesis would say that I would not want that.


Richard Loosemore





Eric B. Ramsay wrote:
Actually Richard, these are the things you imagine you would like to do given your current level of intelligence. I suspect very much that the moment you went super intelligent there would be a paradigm change in what you consider "fun".
Eric

Richard Loosemore <[EMAIL PROTECTED]> wrote:

    Eugen Leitl wrote:
     > On Wed, Apr 18, 2007 at 03:54:50AM -0400, Randall wrote:
     >
     >> I can't for the life of me imagine why anyone who had seen the
     >> elephant would choose to go back to being Mundane.
     >
     > The question is also whether they could, if they wanted to.
     > A Neanderthal wouldn't function well in today's society,
     > and anything lesser would run a good chance of becoming roadkill.
     >
     >> If I could flip a switch and increase my _g_ by two orders of
     >> magnitude, I'd never flip that switch back. Why would anybody?
     >
     > I wouldn't. But I wouldn't max out the knob immediately, either.
     > I would just go for a slow, sustainable growth, at least as long
     > as nobody else is rushing ahead.
     >

    [META COMMENT. Is it my imagination, or have some funny things
    been happening to the AGI and/or Singularity lists recently... e.g.
    delivery of messages as if they were offlist?]

    I think you are looking at the possibilities through far too narrow a
    prism.

    Consider. Would it be interesting to find out what it is like to be,
    say, a tiger? A whale? A dolphin? I can think of ways to temporarily
    get transferred into the form of any reasonably high-level animal,
    then come back to being human later, with at least some memories of
    what it was like to have been in that state.

    In a future in which all these things are possible, why would people
    not be interested in having this kind of fun?

    Now imagine the possibility of becoming superintelligent. That could
    get kind of heavy after a while. I do not necessarily think that I want
    to know about all of the science in human history, for example, to such
    a deep extent that it would be as if I had been teaching it for
    centuries, and was bored with every last bit of it. Would you?

    I would want to have fun. And the big part of having fun would be
    finding out new stuff.

    So, yes, I would want to become superintelligent occasionally, but it
    seems to me that the more intelligent I become, the more I know about
    complex problems I cannot fix, and the more that frustrates me. That's
    not fun after a while. Sometimes it would be nice to go back to just
    being a kid for a while.

    Then there is the possibility of recreating historical situations. I
    would like to be able to be one of the people who was around when none
    of modern science existed, just so I could try to discover that stuff
    when it was new. To do that I would have to reduce my current knowledge
    by putting it on ice for a while.

    And on and on.... I can think of vast numbers of reasons not to do the
    boring thing of just trying to get into a high-intelligence brain.

    It's not the destination, folks, it's the journey.




    Richard Loosemore

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936
