Re: Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-04 Thread Abram Demski
 OK, then the observable universe has a finite description length. We don't
 need to describe anything else to model it, so by "universe" I mean only
 the observable part.


But what good is it to have only a finite description of the observable
part, since new portions of the universe enter the observable portion
continually? Physics cannot then be modeled as a computer program,
because computer programs do not increase in Kolmogorov complexity as
they run (except by a logarithmic term counting how long they have been
running).
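
To see where the logarithmic term comes from, a sketch (illustrative
only): the state of a deterministic program after t steps is pinned down
by the program plus the integer t, so its description length grows at
most logarithmically with running time:

    import math

    def state_at(step, initial_state, t):
        """Reconstruct the state after t steps by re-running the program."""
        state = initial_state
        for _ in range(t):
            state = step(state)
        return state

    def description_bound(program_bits, t):
        """Upper bound, in bits, on describing the state at step t: the
        program itself plus an encoding of the integer t."""
        return program_bits + math.ceil(math.log2(t + 1))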

 I am saying that the universe *is* deterministic. It has a definite quantum 
 state, but we would need about 10^122 bits of memory to describe it. Since we 
 can't do that, we have to resort to approximate models like quantum mechanics.


Yes, I understood that you were suggesting a deterministic universe.
What I'm saying is that it seems plausible for us to have accurate
knowledge of that deterministic physics, lacking only exact knowledge of
particle locations et cetera. We would be forced to use probabilistic
methods, as you argue, but they would not necessarily be built into our
physical theories; instead, our physical theories would act as a
deterministic function that is given probabilistic input and therefore
yields probabilistic output.
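
A toy version of what I mean (illustrative only; the "physics" here is an
arbitrary chaotic map): deterministic dynamics fed a distribution over
unknown initial states yields a distribution over outcomes:

    import random

    def step(x):
        # deterministic toy dynamics: the logistic map with r = 4 (chaotic)
        return 4.0 * x * (1.0 - x)

    def predict(samples=10000, t=20):
        outcomes = []
        for _ in range(samples):
            x = random.random()   # probabilistic input: unknown initial state
            for _ in range(t):
                x = step(x)       # deterministic evolution
            outcomes.append(x)
        return sum(outcomes) / len(outcomes)   # probabilistic output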

 I believe there is a simpler description. First, the description length is 
 increasing with the square of the age of the universe, since it is 
 proportional to area. So it must have been very small at one time. Second, 
 the most efficient way to enumerate all possible universes would be to run 
 each B-bit machine for 2^B steps, starting with B = 0, 1, 2... until 
 intelligent life is found. For our universe, B ~ 407. You could reasonably 
 argue that the algorithmic complexity of the free parameters of string theory 
 and general relativity is of this magnitude. I believe that Wolfram also 
 argued that the (observable) universe is a few lines of code.


I really do not understand your willingness to restrict "universe" to
"observable universe". The description length of the observable universe
was very small at one time because at that time none of the basic
constituents of the universe had yet interacted, so by definition the
description length of the observable universe for each basic entity is
just the description length of that entity. As time moves forward, the
entities interact and the description lengths of their observable
universes increase. Similarly, today, one might say that the observable
universe for each person is slightly different, and indeed the universe
observable from my right hand would be slightly different than the one
observable from my left. They could have differing description lengths.

In short, I think you really want to apply your argument to the actual
universe, not merely to observable subsets... or if you don't, you
should, because otherwise it seems like a very strange argument.

 But even if we discover this program it does not mean we could model the 
 universe deterministically. We would need a computer larger than the universe 
 to do so.

Agreed... partly thanks to your argument below.

 There is a simple argument using information theory. Every system S has a 
 Kolmogorov complexity K(S), which is the smallest size that you can compress 
 a description of S to. A model of S must also have complexity K(S). However, 
 this leaves no space for S to model itself. In particular, if all of S's 
 memory is used to describe its model, there is no memory left over to store 
 any results of the simulation.

Point conceded.


--Abram




Re: Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-04 Thread Matt Mahoney
To clarify what I mean by "observable universe": I am including any part
that could be observed in the future, and therefore must be modeled to
make accurate predictions. For example, if our universe is computed by one
of an enumeration of Turing machines, then the other machines in the
enumeration are outside our observable universe.

-- Matt Mahoney, [EMAIL PROTECTED]






Re: Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-04 Thread Abram Demski
On Thu, Sep 4, 2008 at 10:53 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 To clarify what I mean by "observable universe": I am including any part
 that could be observed in the future, and therefore must be modeled to
 make accurate predictions. For example, if our universe is computed by one
 of an enumeration of Turing machines, then the other machines in the
 enumeration are outside our observable universe.

 -- Matt Mahoney, [EMAIL PROTECTED]


OK, that works. But you cannot invoke current physics to argue that
this sort of observable universe is finite (so far as I know).

Of course, that is not central to your point anyway. The universe
might be spatially infinite while still having a finite description
length.

So, my only remaining objection is that while the universe *could* be
computable, it seems unwise to me to totally rule out the alternative.
As you said, the idea is something that makes testable predictions.
So, it is something to be decided experimentally, not philosophically.

-Abram




Re: Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-04 Thread Matt Mahoney
--- On Thu, 9/4/08, Abram Demski [EMAIL PROTECTED] wrote:

 So, my only remaining objection is that while the universe *could* be
 computable, it seems unwise to me to totally rule out the alternative.

You're right. We cannot prove that the universe is computable. We have
evidence like Occam's Razor (if the universe is computable, then
algorithmically simple models are to be preferred), but that is not proof.

At one time our models of physics were not computable. Then we discovered
atoms, the quantization of electric charge, general relativity (which
bounds density and velocity), the big bang (history is finite), and
quantum mechanics. Our models would still not be computable (they would
require an infinite description length) had any one of these discoveries
not occurred.

-- Matt Mahoney, [EMAIL PROTECTED]





Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-03 Thread Matt Mahoney
I think that computation is not so much a metaphor for understanding the 
universe as it is an explanation. If you enumerate all possible Turing 
machines, thus enumerating all possible laws of physics, then some of those 
universes will have the right conditions for the evolution of intelligent life. 
If neutrons were slightly heavier than they actually are (relative to protons), 
then stars could not sustain fusion. If they were slightly lighter, then they 
would be stable and we would have no elements.

Because of gravity, the speed of light, Planck's constant, the
quantization of electric charge, and the finite age of the universe, the
universe has a finite-length description, and is therefore computable.
The Bekenstein bound of the Hubble radius is 2.91 x 10^122 bits. Any
computer within a finite universe must have less memory than the universe
itself, and therefore cannot simulate it except by using an approximate
(probabilistic) model. One such model is quantum mechanics.
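
A back-of-envelope check of that figure, as a sketch (assuming a Hubble
radius of roughly 1.3 x 10^26 m; the exact number depends on the radius
used), applies the holographic bound A / (4 l_p^2 ln 2) to the area of
the Hubble sphere:

    import math

    l_p = 1.616e-35               # Planck length in meters
    R = 1.3e26                    # rough Hubble radius in meters (assumed)

    area = 4 * math.pi * R ** 2   # area of the Hubble sphere
    bits = area / (4 * l_p ** 2 * math.log(2))
    print("%.2e bits" % bits)     # ~2.9e122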

For the same reason, an intelligent agent (which must be Turing computable if 
the universe is) cannot model itself, except probabilistically as an 
approximation. Thus, we cannot predict what we will think without actually 
thinking it. This property makes our own intelligence seem mysterious.

An explanation is only useful if it makes predictions, and this one does.
If the universe were not Turing computable, then Solomonoff induction and
AIXI, as ideal models of prediction and intelligence, would not be
applicable to the real world. Yet we have Occam's Razor, and we find in
practice that all successful machine learning algorithms use
algorithmically simple hypothesis sets.
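
To sketch what this looks like in practice (the bit counts below are
invented for illustration): two-part minimum description length selection
totals the bits needed to describe a hypothesis plus the bits needed to
describe the data given that hypothesis, and prefers the minimum:

    # Two-part code: total bits = bits for the hypothesis
    #                           + bits for the data given the hypothesis.
    candidates = [
        # (name, K(hypothesis), K(data | hypothesis)) -- invented numbers
        ("constant", 2, 500),
        ("linear", 10, 120),
        ("degree-9 polynomial", 80, 110),
    ]

    best = min(candidates, key=lambda h: h[1] + h[2])
    print(best[0])   # "linear": simplest hypothesis that still fits well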


-- Matt Mahoney, [EMAIL PROTECTED]

--- On Wed, 9/3/08, Terren Suydam [EMAIL PROTECTED] wrote:
From: Terren Suydam [EMAIL PROTECTED]
Subject: Re: [agi] Recursive self-change: some definitions
To: agi@v2.listbox.com
Date: Wednesday, September 3, 2008, 4:17 PM


Hi Ben, 

My own feeling is that computation is just the latest in a series of technical 
metaphors that we apply in service of understanding how the universe works. 
Like the others before it, it captures some valuable aspects and leaves out 
others. It leaves me wondering: what future metaphors will we apply to the 
universe, ourselves, etc., that will make computation-as-metaphor seem as 
quaint as the old clockworks analogies?

I believe that computation is important in that it can help us simulate 
intelligence, but intelligence itself is not simply computation (or if it is, 
it's in a way that requires us to transcend our current notions of 
computation). Note that I'm not suggesting anything mystical or dualistic at 
all, just offering the possibility that we can find still greater metaphors for 
how intelligence works. 

Either way though, I'm very interested in the results of your work - at
worst, it will shed some needed light on the subject. At best... well,
you know that part. :-]

Terren

--- On Tue, 9/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:
From: Ben Goertzel [EMAIL PROTECTED]
Subject: Re: [agi] Recursive self-change: some definitions
To: agi@v2.listbox.com
Date: Tuesday, September 2, 2008, 4:50 PM



On Tue, Sep 2, 2008 at 4:43 PM, Eric Burton [EMAIL PROTECTED] wrote:

 I really see a number of algorithmic breakthroughs as necessary for
 the development of strong general AI

I hear that a lot, yet I never hear any convincing arguments in that
regard...

So, hypothetically (and I hope not insultingly), I tend to view this as a
kind of unconscious overestimation of the awesomeness of our own species
... we feel intuitively like we're doing SOMETHING so cool in our brains,
it couldn't possibly be emulated or superseded by mere algorithms like
the ones computer scientists have developed so far ;-)


ben






Re: Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-03 Thread Abram Demski
Matt, I have several objections.

First, as I understand it, your statement about the universe having a
finite description length only applies to the *observable* universe,
not the universe as a whole. The Hubble radius expands at the speed of
light as more light reaches us, meaning that the observable universe
has a longer description length every day. So it does not seem very
relevant to say that the description length is finite.

The universe as a whole (observable and not-observable) *could* be
finite, but we don't know one way or the other so far as I am aware.

Second, I do not agree with your reason for saying that physics is
necessarily probabilistic. It seems possible to have a completely
deterministic physics, which merely suffers from a lack of information
and computation ability. Imagine if the universe happened to follow
Newtonian physics, with atoms being little billiard balls. The
situation is deterministic, if only we knew the starting state of the
universe and had large enough computers to approximate the
differential equations to arbitrary accuracy.
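
To make the billiard-ball picture concrete, here is a minimal sketch (toy
dynamics, nothing physical intended): a numerical integrator whose
accuracy is limited only by the step size and by knowledge of the initial
state, never by randomness in the laws:

    def euler_step(pos, vel, accel, dt):
        """One deterministic update of positions and velocities."""
        new_pos = [p + v * dt for p, v in zip(pos, vel)]
        new_vel = [v + a * dt for v, a in zip(vel, accel(pos))]
        return new_pos, new_vel

    def simulate(pos, vel, accel, dt=1e-3, steps=10000):
        # Accuracy improves as dt shrinks, given exact initial conditions.
        for _ in range(steps):
            pos, vel = euler_step(pos, vel, accel, dt)
        return pos, vel

    # Example: one particle falling under constant gravity.
    print(simulate([0.0], [0.0], lambda pos: [-9.8]))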

Third, this is nitpicking, but I also am not sure about the argument
that we cannot predict our thoughts. It seems formally possible that a
system could predict itself. The system would need to be compressible,
so that a model of itself could fit inside the whole. I could be wrong
here, feel free to show me that I am. Anyway, the same objection also
applies back to the necessity of probabilistic physics: is it really
impossible for beings within a universe to have an accurate compressed
model of the entire universe? (Similarly, if we have such a model,
could we use it to run a simulation of the entire universe? This seems
much less possible.)
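
For what it's worth, a quine is a tiny existence proof of the
compressibility half of this point: a program whose complete description
fits inside itself, because the description is compressed (the template
is stored once but used twice). Self-description is weaker than
self-prediction, but it shows a model of the whole fitting inside the
whole:

    # A classic Python quine: the program prints its own complete source.
    s = 's = %r\nprint(s %% s)'
    print(s % s)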

--Abram



Re: Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-03 Thread Matt Mahoney
--- On Wed, 9/3/08, Abram Demski [EMAIL PROTECTED] wrote:

 From: Abram Demski [EMAIL PROTECTED]
 Subject: Re: Computation as an explanation of the universe (was Re: [agi]
 Recursive self-change: some definitions)
 To: agi@v2.listbox.com
 Date: Wednesday, September 3, 2008, 7:35 PM

 Matt, I have several objections.

 First, as I understand it, your statement about the universe having a
 finite description length only applies to the *observable* universe, not
 the universe as a whole. The Hubble radius expands at the speed of light
 as more light reaches us, meaning that the observable universe has a
 longer description length every day. So it does not seem very relevant
 to say that the description length is finite.

 The universe as a whole (observable and not-observable) *could* be
 finite, but we don't know one way or the other so far as I am aware.

OK, then the observable universe has a finite description length. We don't
need to describe anything else to model it, so by "universe" I mean only
the observable part.

 
 Second, I do not agree with your reason for saying that physics is
 necessarily probabilistic. It seems possible to have a completely
 deterministic physics, which merely suffers from a lack of information
 and computation ability. Imagine if the universe happened to follow
 Newtonian physics, with atoms being little billiard balls. The situation
 is deterministic, if only we knew the starting state of the universe and
 had large enough computers to approximate the differential equations to
 arbitrary accuracy.

I am saying that the universe *is* deterministic. It has a definite quantum 
state, but we would need about 10^122 bits of memory to describe it. Since we 
can't do that, we have to resort to approximate models like quantum mechanics.

I believe there is a simpler description. First, the description length is 
increasing with the square of the age of the universe, since it is proportional 
to area. So it must have been very small at one time. Second, the most 
efficient way to enumerate all possible universes would be to run each B-bit 
machine for 2^B steps, starting with B = 0, 1, 2... until intelligent life is 
found. For our universe, B ~ 407. You could reasonably argue that the 
algorithmic complexity of the free parameters of string theory and general 
relativity is of this magnitude. I believe that Wolfram also argued that the 
(observable) universe is a few lines of code.
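
A sketch of that enumeration (run_universe and contains_life are
hypothetical stand-ins for a universal machine and a test for intelligent
life):

    from itertools import product

    def find_universe(run_universe, contains_life, max_bits=16):
        """Run every B-bit program for 2^B steps, B = 0, 1, 2, ..."""
        for B in range(max_bits + 1):
            for bits in product('01', repeat=B):
                program = ''.join(bits)
                state = run_universe(program, steps=2 ** B)
                if contains_life(state):
                    return B, program
        return None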

But even if we discover this program it does not mean we could model the 
universe deterministically. We would need a computer larger than the universe 
to do so.

 Third, this is nitpicking, but I also am not sure about the argument
 that we cannot predict our thoughts. It seems formally possible that a
 system could predict itself. The system would need to be compressible,
 so that a model of itself could fit inside the whole. I could be wrong
 here, feel free to show me that I am. Anyway, the same objection also
 applies back to the necessity of probabilistic physics: is it really
 impossible for beings within a universe to have an accurate compressed
 model of the entire universe? (Similarly, if we have such a model, could
 we use it to run a simulation of the entire universe? This seems much
 less possible.)

There is a simple argument using information theory. Every system S has a 
Kolmogorov complexity K(S), which is the smallest size that you can compress a 
description of S to. A model of S must also have complexity K(S). However, this 
leaves no space for S to model itself. In particular, if all of S's memory is 
used to describe its model, there is no memory left over to store any results 
of the simulation.
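
As a toy version of the counting argument (illustrative only): when K(S)
equals the system's memory size, storing an exact self-model leaves
nothing over:

    def bits_left_for_results(memory_bits, self_model_bits):
        """Memory remaining after storing an exact self-model of K(S) bits."""
        return memory_bits - self_model_bits

    # For an incompressible system, K(S) equals its memory size:
    print(bits_left_for_results(10**6, 10**6))   # 0 -- no room for results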

 