Re: [agi] A theorem of change and persistence????

2005-01-05 Thread Eugen Leitl
On Wed, Jan 05, 2005 at 12:29:10PM +1100, Philip Sutton wrote:

> > Co-evolution of AGI populations guarantees unpredictability, and an
> > arms race in capabilities.
>
> Provided there is indeed a population of AGIs and not just one.

You can't keep clones synchronous in a relativistic universe. Growth of any
kind is fraught with fragmentation -- and of course any plausible AGI
scenario would involve co-evolution in the global network to start with.
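
A back-of-the-envelope sketch (my numbers, not Eugen's) of why synchrony
fails: the one-way light delay grows linearly with separation, and any
state a replica changes inside that window is invisible to its peers.

# Back-of-the-envelope: one-way light delay between separated replicas.
# Any state changed during the delay window is invisible to the peer,
# so spatially separated copies necessarily diverge between sync rounds.
C = 299_792_458.0            # speed of light, m/s
LIGHT_YEAR = 9.4607e15       # metres

separations_m = {
    "opposite sides of Earth":   1.2742e7,   # Earth's diameter
    "Earth to Moon":             3.844e8,
    "Earth to Mars (closest)":   5.46e10,
    "Earth to Alpha Centauri":   4.37 * LIGHT_YEAR,
}

for name, metres in separations_m.items():
    print(f"{name:26s} one-way delay = {metres / C:14.3f} s")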
 
> But even if there was just one AGI - diversity would most likely
> develop within the brain of the AGI - unless the AGI itself decided not
> to think about diverse dynamics - effectively to go into a coma.  My
> guess is that thinking about diverse dynamics (eg. modelling
> hypothetical behaviours of virtual autonomous agents) would recreate at
> least some degree of uncertainty.  Sort of along the lines that if the
> diversity isn't 'out there' in the real universe, then it will be 'in
> here' in the mind of the super AGI - so the mind of the super AGI
> becomes the 'ground' for a new domain of diversity, evolution and
> uncertainty.

I think postbiology will have dramatically more diversity and evolutionary
dynamics, not less.

-- 
Eugen* Leitl  http://leitl.org
______________________________________________________________
ICBM: 48.07078, 11.61144    http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
http://moleculardevices.org  http://nanomachines.net



RE: [agi] A theorem of change and persistence????

2005-01-04 Thread Philip Sutton
Hi Ben,

> If you model a system in approximate detail then potentially you can
> avoid big surprises and have only small surprises.

In chaotic systems, my guess is that compact models would capture many
possibilities that would otherwise be surprises - especially in the near
term.

But I think it's unlikely that these models would capture all the big
potential surprises, leaving only small surprises to happen.  I would
imagine that compact models would fail to capture at least some
lower-probability very big surprises.
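
A toy sketch of that limit (my construction, nothing from the thread):
give a compact model of a chaotic map a tiny parameter error, and the
near-term surprises stay small while the long-run surprise is total.

# Toy "compact approximate model" of a chaotic system.
# True system: logistic map x' = r*x*(1-x) with r = 3.9 (chaotic regime).
# The model knows r only approximately - a stand-in for any compact model.
r_true, r_model = 3.9, 3.901

x_true = x_model = 0.3
for step in range(1, 51):
    x_true = r_true * x_true * (1.0 - x_true)
    x_model = r_model * x_model * (1.0 - x_model)
    if step % 10 == 0:
        print(f"step {step:2d}: true={x_true:.4f}  model={x_model:.4f}  "
              f"error={abs(x_true - x_model):.4f}")
# Early steps: the model tracks the system (small surprises).  Within a
# few dozen steps, sensitive dependence amplifies the 0.001 parameter
# error to order one - a big surprise that no refinement of the compact
# model can postpone forever.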

> If a super-AI were reshaping the universe, it could reshape the
> universe in such a way that from that point on, the dynamics of the
> universe would be reasonably well predictable via compact approximative
> models.  In fact this would probably be a clever thing to do, assuming
> it could be done without sacrificing too much of the creative potential
> of the universe...

My guess is that, to make the universe a moderately predictable place,
creativity would have to be kept at a very low level - with creative
space left for only the one super-AGI.  Trying to knock the
unpredictability out of the universe could be engaging for a super-AGI
(that was so inclined) for a while (given the resistance it would
encounter).  But I reckon the super-AGI might find a moderately
predictable universe fairly unstimulating in the long run.

Cheers, Philip



RE: [agi] A theorem of change and persistence????

2005-01-04 Thread Ben Goertzel

> > If a super-AI were reshaping the universe, it could reshape the
> > universe in such a way that from that point on, the dynamics of the
> > universe would be reasonably well predictable via compact
> > approximative models.  In fact this would probably be a clever thing
> > to do, assuming it could be done without sacrificing too much of the
> > creative potential of the universe...
>
> My guess is that, to make the universe a moderately predictable place,
> creativity would have to be kept at a very low level - with creative
> space left for only the one super-AGI.  Trying to knock the
> unpredictability out of the universe could be engaging for a super-AGI
> (that was so inclined) for a while (given the resistance it would
> encounter).  But I reckon the super-AGI might find a moderately
> predictable universe fairly unstimulating in the long run.
>
> Cheers, Philip

Well, you might be right.  But this comes down to you and me speculating
about the aesthetic preferences of a massively superhuman being, and I
really doubt the accuracy of either of our speculations in this regard ;-)

-- Ben




Re: [agi] A theorem of change and persistence????

2005-01-04 Thread Eugen Leitl
On Wed, Jan 05, 2005 at 02:52:50AM +1100, Philip Sutton wrote:

> My guess is that, to make the universe a moderately predictable place,
> creativity would have to be kept at a very low level - with creative
> space left for only the one super-AGI.  Trying to knock the
> unpredictability out of the universe could be engaging for a super-AGI
> (that was so inclined) for a while (given the resistance it would
> encounter).  But I reckon the super-AGI might find a moderately
> predictable universe fairly unstimulating in the long run.

Co-evolution of AGI populations guarantees unpredictability, and an arms
race in capabilities.

-- 
Eugen* Leitl  http://leitl.org
______________________________________________________________
ICBM: 48.07078, 11.61144    http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
http://moleculardevices.org  http://nanomachines.net



RE: [agi] A theorem of change and persistence????

2005-01-03 Thread Ben Goertzel

> It seems to me that the only way to 'model' the universe is to use the
> real whole-universe - so a whole-universe intelligence would not have
> enough computing power to model itself in complete detail, therefore
> the future would still hold surprises that the whole-universe
> intelligence would need to expend energy on to manage - while its
> internal low entropy lasted.

But you don't need to model a system in complete detail to avoid big
surprises.  If you model a system in approximate detail then potentially you
can avoid big surprises and have only small surprises.  This depends on how
good your model is, and what the system's dynamics are like.  If a super-AI
were reshaping the universe, it could reshape the universe in such a way
that from that point on, the dynamics of the universe would be reasonably
well predictable via compact approximative models.  In fact this would
probably be a clever thing to do, assuming it could be done without
sacrificing too much of the creative potential of the universe...
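
A minimal sketch of how that could look (my illustration, standard
statistical-mechanics fare rather than anything Ben proposed): if the
variables that matter are coarse aggregates, a compact model predicts
them well even though every micro-trajectory stays chaotic and
'creative'.

import random

# 10,000 independent chaotic logistic maps.  Each trajectory alone is
# unpredictable, but their ensemble mean settles near a constant that a
# one-number "compact approximative model" captures almost perfectly.
random.seed(0)
r = 3.9
ensemble = [random.random() for _ in range(10_000)]

for step in range(1, 101):
    ensemble = [r * x * (1.0 - x) for x in ensemble]
    if step % 25 == 0:
        mean = sum(ensemble) / len(ensemble)
        print(f"step {step:3d}: ensemble mean = {mean:.4f}")
# The fine-grained dynamics stay wildly surprising; the coarse observable
# does not.  Predictability is a property of the level of description.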

-- Ben G





RE: [agi] A theorem of change and persistence????

2004-12-30 Thread Philip Sutton
Hi Ben,

On 23 Dec you said:
> I would say that if the universe remains configured roughly as it is
> now, then your statement (that long-term persistence requires
> goal-directed effort) is true.
>
> However, the universe could in the future find itself in a
> configuration in which your statement was FALSE, either
>
> -- via self-organization, or
>
> -- via the goal-directed activity of an intelligent system, which then
> stopped being goal-directed after it had set the universe in a
> configuration where its persistence could continue without
> goal-directed effort

Taking the last first: wouldn't option 2 require the intelligent system
to end the evolution of the universe to achieve this result - i.e. bring
on the heat death of the universe?

I can't see why 'self-organisation' would lead to a universe where the
persistence through deep time of aspects of the universe that an
intelligence favours did not require goal-directed effort and expenditure
of energy.  How could you see this happening?

Even if the intelligence actually absorbed the whole of the universe into
itself, I think my theorem would still hold - because a whole-universe
intelligence would find its internal sub-systems still evolving in
surprising ways.

It seems to me that the only way to 'model' the universe in complete
detail is to use the real whole-universe itself - a complete self-model
needs at least as many states as the system it models, leaving nothing
over to run the model with.  So a whole-universe intelligence would not
have enough computing power to model itself in complete detail, and
therefore the future would still hold surprises that the whole-universe
intelligence would need to expend energy on to manage - while its
internal low entropy lasted.

Cheers, Philip




Re: [agi] A theorem of change and persistence????

2004-12-20 Thread Brad Wyble
On Sun, 19 Dec 2004, Ben Goertzel wrote:
> Hmmm...
> Philip, I like your line of thinking, but I'm pretty reluctant to
> extend human logic into the wildly transhuman future...

Ben, this isn't so much about logic as it is about thermodynamics, and
it's going to be a very long time indeed before we can get around that
one.

Phil's idea comes down to stating that the entity will need to exert 
energy to counteract local entropy in order to remain a coherent being.
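
A number can be put under that claim (my gloss, not Brad's): Landauer's
principle sets the minimum energy dissipated to erase one bit of entropy
at kT ln 2, which is the elementary cost of keeping internal order
against noise.

import math

# Landauer bound: erasing one bit of information at temperature T
# dissipates at least k_B * T * ln(2) joules.
K_B = 1.380649e-23       # Boltzmann constant, J/K (exact in SI since 2019)

def landauer_joules(temp_kelvin, bits=1.0):
    """Minimum dissipation to erase `bits` bits at `temp_kelvin`."""
    return bits * K_B * temp_kelvin * math.log(2)

print(f"{landauer_joules(300):.3e} J to erase 1 bit at 300 K")
# Continuously scrubbing 1 GB/s of noise-corrupted state sets a power
# floor of about 2.3e-11 W at room temperature - a tiny floor, but a
# floor: zero-energy persistence is ruled out.
print(f"{landauer_joules(300, 8e9):.3e} W for 1 GB/s of erasure")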

I'd agree, and state a trivial extension: the larger the sphere (in
physical or some other space) of entropy that the entity is counteracting
by expending energy, the greater its chance of survival.

Consider humanity: let's assume we'll survive as a species as long as the
earth remains intact.  We're still vulnerable to asteroids (admittedly a
minuscule chance).  If we extend our sphere of control of entropy into
space (by building gizmos and whatsits to protect the earth), we further
increase our chance of deep-time survival.  We've made a bubble of
entropy control around the earth.

I'd also put forth this one: it's more energy-efficient to ensure
deep-time longevity by reproduction than by protection.

There's a book called The Robot's Rebellion which espouses the view that
humans are a deep-time survival mechanism for our DNA.  I haven't read it
yet, but it sounds right on target for this topic.

-Brad


Re: [agi] A theorem of change and persistence????

2004-12-20 Thread Brad Wyble
> The Robot's Rebellion: Finding Meaning in the Age of Darwin
> by Keith E. Stanovich
> University of Chicago Press (May 15, 2004)
> ISBN: 0226770893
>
> Cheers, Philip

I'm glad you looked this up and posted it, as there are two books titled 
The Robot's Rebellion, the other being a very controversial attack on 
organized religion.



Re: [agi] A theorem of change and persistence????

2004-12-19 Thread Ben Goertzel
Hmmm...
Philip, I like your line of thinking, but I'm pretty reluctant to extend 
human logic into the wildly transhuman future...

The very idea of separating persistence from change is an instance of 
human-culture thinking that may not apply to the reasoning of a transhuman 
being.

Consider for instance that quantum logic handles disjunctions (A or B) 
quite differently than ordinary Boolean logic.  What kind of logic might a 
massively transhuman mind apply?
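
For a concrete instance of the quantum-logic point (my example, though it
is the textbook one): take propositions to be subspaces of a qubit's
state space, with 'and' as intersection and 'or' as span; the
distributive law that Boolean disjunction obeys then fails outright.

import numpy as np

# Quantum logic on a qubit: propositions are subspaces of C^2.
# Meet ("and") is subspace intersection; join ("or") is the span.
# Boolean distributivity  A and (B or C) == (A and B) or (A and C)  fails.

def dim_join(*subspaces):
    """Dimension of the span (join) of subspaces given as column bases."""
    return np.linalg.matrix_rank(np.hstack(subspaces))

def dim_meet(s1, s2):
    """dim(S1 meet S2) = dim S1 + dim S2 - dim(S1 join S2) for subspaces."""
    return (np.linalg.matrix_rank(s1) + np.linalg.matrix_rank(s2)
            - dim_join(s1, s2))

A = np.array([[1.0], [0.0]])                # "spin up along z"
B = np.array([[1.0], [1.0]]) / np.sqrt(2)   # "spin up along x"
C = np.array([[1.0], [-1.0]]) / np.sqrt(2)  # "spin down along x"

# A and (B or C): B or C spans all of C^2, so the meet is A itself.
lhs = dim_meet(A, np.hstack([B, C]))
# (A and B) or (A and C): both meets are the zero subspace, so is the join.
rhs = dim_meet(A, B) + dim_meet(A, C)
print(f"dim of A and (B or C) = {lhs}; dim of (A and B) or (A and C) = {rhs}")
# Prints 1 vs 0: disjunction over quantum propositions does not distribute.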

-- Ben
- Original Message -
From: Philip Sutton [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Sunday, December 19, 2004 12:01 PM
Subject: [agi] A theorem of change and persistence


I think I might have just worked out a basic theorem of relevance to
artificial general intelligences.  I'd be interested to know what you
think.

Let's postulate that an AGI is created that is committed to generating
change in the universe (possibly fast or even accelerating change).
Let's also postulate that this AGI wishes to persist through deep time
(and/or that the AGI wishes some other entity or attribute to persist
through deep time - note: this bracketed addendum is not necessary for
the argument if the AGI wishes itself to persist).

In the face of a changing world, where there is at least one thing that
the AGI wishes to survive with (effectively) 100% certainty through deep
time, the AGI will need to *systematically* generate a stream of changes
that 'locally' offset the general change in the universe sufficiently to
enable the chosen thing to persist.

Conclusion: an AGI that wants to persist through deep time (or that
wants anything else to persist through deep time) will need to devote
sufficient thinking and action time and resources to successfully
managing its persistence agenda.  In a reality of resource constraints,
the AGI will need to become highly efficient at pursuing its persistence
agenda (given the tendency for changes in the universe to
radiate/multiply) and it will (most likely) need to manage its broader
change-promotion agenda so as not to make its persistence agenda too
hard to fulfill.

What do you think?

Cheers, Philip


