RE: [agi] A theorem of change and persistence????

2005-01-04 Thread Philip Sutton
Hi Ben,

 If you model a system in approximate detail then potentially you can
 avoid big surprises and have only small surprises.  

In chaotic systems, my guess is that compact models would capture many 
possibilities that would otherwise be surprises, especially in the near 
term.

But I think it's unlikely that these models would capture all the big 
potential surprises, leaving only small surprises to happen.  I would 
imagine that compact models would fail to capture at least some 
lower-probability but very big surprises.
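The standard illustration of why compact approximate models of chaotic 
systems must eventually miss things is sensitive dependence on initial 
conditions. A toy sketch (my example, not from the thread) using the 
logistic map: a "model" whose state differs from the true state by one 
part in a billion tracks the system well for a while, then diverges 
completely.

```python
def divergence(steps=50, eps=1e-9, r=4.0):
    """Return |true - model| at each step of the logistic map x -> r*x*(1-x).

    The model starts eps away from the true state, standing in for a
    compact approximate model with a tiny error.
    """
    f = lambda x: r * x * (1.0 - x)
    true_x, model_x = 0.4, 0.4 + eps
    errors = []
    for _ in range(steps):
        true_x, model_x = f(true_x), f(model_x)
        errors.append(abs(true_x - model_x))
    return errors

errs = divergence()
# Near term: the model is still an excellent predictor.
print(f"error after 5 steps: {errs[4]:.1e}")
# Longer term: the error has grown to the size of the state itself.
print(f"largest error over 50 steps: {max(errs):.1e}")
```

For r=4 the error roughly doubles each step, so even a near-perfect 
compact model buys only a finite prediction horizon - consistent with 
catching near-term surprises while missing distant ones.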

 If a super-AI were reshaping the universe, it could reshape the
 universe in such a way that from that point on, the dynamics of the
 universe would be reasonably well predictable via compact approximative
 models.  In fact this would probably be a clever thing to do, assuming
 it could be done without sacrificing too much of the creative potential
 of the universe... 

My guess is that, to make the universe a moderately predictable place, 
creativity would have to be kept at a very low level, with creative 
space for only one super-AGI.  Trying to knock the unpredictability out 
of the universe could be engaging for a super-AGI (that was so 
inclined) for a while, given the resistance it would encounter.  But I 
reckon the super-AGI might find a moderately predictable universe 
fairly unstimulating in the long run.

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


RE: [agi] A theorem of change and persistence????

2005-01-04 Thread Ben Goertzel

  If a super-AI were reshaping the universe, it could reshape the
  universe in such a way that from that point on, the dynamics of the
  universe would be reasonably well predictable via compact approximative
  models.  In fact this would probably be a clever thing to do, assuming
  it could be done without sacrificing too much of the creative potential
  of the universe...

 My guess is that, to make the universe a moderately predictable place,
 creativity would have to be kept at a very low level, with creative
 space for only one super-AGI.  Trying to knock the unpredictability
 out of the universe could be engaging for a super-AGI (that was so
 inclined) for a while, given the resistance it would encounter.  But I
 reckon the super-AGI might find a moderately predictable universe
 fairly unstimulating in the long run.

 Cheers, Philip

 Cheers, Philip

Well, you might be right.  But this comes down to you and me speculating
about the aesthetic preferences of a massively superhuman being, and I
really doubt the accuracy of either of our speculations in this regard ;-)

-- Ben




Re: [agi] A theorem of change and persistence????

2005-01-04 Thread Eugen Leitl
On Wed, Jan 05, 2005 at 02:52:50AM +1100, Philip Sutton wrote:

 My guess is that, to make the universe a moderately predictable place,
 creativity would have to be kept at a very low level, with creative
 space for only one super-AGI.  Trying to knock the unpredictability
 out of the universe could be engaging for a super-AGI (that was so
 inclined) for a while, given the resistance it would encounter.  But I
 reckon the super-AGI might find a moderately predictable universe
 fairly unstimulating in the long run.

Co-evolution of AGI populations guarantees unpredictability, and an arms
race in capabilities.
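
The arms-race dynamic can be sketched with a toy simulation (my 
illustration, not Leitl's model): two populations that each invest more 
when they fall behind. Neither side's capability ever settles, and the 
escalation is open-ended.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible
a, b = 1.0, 1.0  # starting capabilities of the two populations
history = []
for _ in range(20):
    # Each side makes baseline progress, invests extra when behind
    # (co-evolutionary pressure), plus some noisy innovation.
    a += 0.1 + 0.2 * max(0.0, b - a) + random.uniform(0.0, 0.05)
    b += 0.1 + 0.2 * max(0.0, a - b) + random.uniform(0.0, 0.05)
    history.append((a, b))

print(f"final capabilities: a={a:.2f}, b={b:.2f}")
```

The catch-up term keeps either side from locking in a stable lead, so 
capabilities escalate without bound - a crude cartoon of the arms race 
the co-evolution argument predicts.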

-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07078, 11.61144  http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
http://moleculardevices.org http://nanomachines.net


