Re: Tegmark is too physics-centric

2004-03-09 Thread Bruno Marchal
Hi Russell,

At 11:50 09/03/04 +1100, Russell Standish wrote:


Yes, in your thesis you often talk about survival under replacement of
a digital brain (cerveau digital). Digital simply means operates with
1s and 0s. Since any analogue value can be represented arbitrarily
accurately by a digital signal, this doesn't seem much of a stretch.


Except that the processing is also digitized, which makes things
a little less trivial.

In your chapter 1, you refer to a machine universelle digitale, c'est
a dire un ordinateur. The English word computer, which is the literal
translation of ordinateur, can refer to an analogue computer, which is
merely a device for performing computations - it needn't even be
Turing complete.


Not really. In French, "ordinateur" really means *universal* computer.
If it is not universal we say "calculateur" (and that one can indeed be analog).

The fact that you used the word universelle
previously does imply Turing completeness, but not that it is equivalent
to a Turing machine. After all, I might be concerned if there was some
noncomputable part of the brain that was not captured by a Turing
machine, but could be built into a digital machine of some kind (eg by
accurate copying of the physical layout of the brain).


In that case you would be simulable by a computer at *some* level.
That is essentially all COMP asks.


Now I noticed you used the word indexical. What does this mean? (I
tend to skip over terms I don't understand, in the hope that I
understand the gist of the argument).


Indexical is an adjective which applies to words like "me", "now", "here", ...
I did use "indexical" in Conscience et Mecanisme, but I don't use it
any more. It is implicit in the "yes doctor" part of comp, where it is supposed
that it is *you* who says yes to the doctor.


Anyway, the upshot of this was that I assumed that COMP was in fact
more general than computationalism. In fact I believe the first half
of your thesis (chapters 1-4) indeed still holds for this more general
interpretation of COMP (namely the necessity for subjective
indeterminism etc).


Yes. The comp hyp can be weakened. I have not tried to prove the most
general theorem.

True computationalism is perhaps only required for
the later sections where you invoke Theaetetus' (is that the correct
translation of Theetete?) theory of knowledge (connaissance). For
this, you need Goedel's theorem, which is applicable in the case of
Turing machines.


I don't think so. What *is* true is that when I interview the universal
machine (through the logics G and G*) I do choose a computationalist
universal machine. Such a machine believes in the use of classical logic
in arithmetic, etc.
BTW, Goedel's theorems apply to a lot more than simple digital machines;
in particular they apply to machines with oracle(s).

I agree one can simulate the Schroedinger equation of QM (albeit with
irrelevant exponential slowdown). However, mapping this back to your
YD postulate, this involves the doctor swapping the entire universe,
not just your brain. Perhaps you mean that one of the options the
doctor has is to upload you into a well crafted digital simulation (by
a Turing machine even) of you and your complete environment (a la
Matrix).


This is why, in the new versions of the argument, including those I sent
to the list some time ago, I explicitly add the NEURO hypothesis
(the hypothesis that the brain is in the skull), but then I explicitly
show how the NEURO hyp. is eliminated once the Universal Dovetailer
is introduced. After all, the UD will generate the many states of my brain,
whatever it is, if one just accepts the fact that we are Turing-emulable.



Reminds me of the option Arthur Dent was presented with by the
pandimensional beings (aka mice) when they wanted to mince his brain
to extract the question for which the answer was '42'.


(And this reminds me that string theorists seem to be succeeding in getting a
reversible theory of black holes. See the nice Economist summary;
there is a link to an abstract of Mathur's paper with an outline of the paper.)
http://www.economist.com/printedition/displayStory.cfm?Story_ID=2478180
Best Regards,

Bruno

http://iridia.ulb.ac.be/~marchal/



Re: Tegmark is too physics-centric

2004-03-09 Thread Bruno Marchal
Hi Stephen,


It seems to me that COMP is more general than computationalism, since it
seems to include certain unfalsifiable postulations that are independent of
computationalism per se - AR, to be specific.


A can be unfalsifiable, and B can be unfalsifiable, but this does not entail
that A & B is unfalsifiable. Take A = "God exists" and B = "God does not
exist" as an example. I don't know if AR per se is unfalsifiable, but I do show
that comp is falsifiable. But you have not yet begun to criticize the proof ...

My own difficulties with
Bruno's thesis hinge on this postulation. I see it as an avoidance of a
fundamental difficulty in Foundation research, how to account for the 1st
person experience of time if one assumes that Existence in itself is
Time-less.


I would like to stay modest, but this is well explained once you realise
that the simplest Theaetetus definition of knowledge, where
(I know p) = (p and I prove p),
leads directly to an antisymmetric form of branching-time modal theory
(S4Grz), very akin to Brouwer's theory of time/consciousness.
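
(For readers following along, a minimal formal sketch of this definition, reading [] as the provability box of a sound classical machine; the notation here is added only for illustration:)

\[
K p \;:=\; p \land \Box p
\]

Although $\Box p$ and $p \land \Box p$ are equivalent for a sound machine, the machine itself cannot prove that equivalence, and the modal logic of $K$ so defined is S4Grz, i.e. S4 plus the Grzegorczyk axiom

\[
\Box\bigl(\Box(p \to \Box p) \to p\bigr) \to p .
\]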


This is somewhere else that I trip over and fall in my thinking about your
work, Bruno. Is this "no mechanism can compute the output of any
self-duplication" a classical version of the no-cloning theorem?


Not so directly, but yes, I do think those are related. (But it's a little
outside the scope of what we are presently discussing.)


Does my comment above say anything about how to bridge this gap between
emulating a brain and emulating the entire universe? If it does, it would seem
to dramatically increase the computational power requirements of the emulating
computation, on top of the exponential slowdown.
One technical question I have about this is: if we assume that the
emulated universe is finite, what would be the equation giving the required
computational power of the emulator, given an estimate of the total
algorithmic and/or information content of the universe?


The main idea on which most people on this list agree is that the information
content of the everything (or the multiverse, or UD*, ...) is zero.

Additionally, what are we to make of results such as the Kochen-Specker
theorem, which shows that any quantum mechanical system that has more
than two independent degrees of freedom cannot be completely represented in
terms of a Boolean algebra?


You should say "cannot be completely represented by a logical morphism in
a Boolean algebra". But that does not entail that some other representation will
not work. This is obvious: the theory "quantum mechanics" *is* a Boolean theory;
the Hilbert spaces are classical mathematical objects, etc. Or better, take the
Goldblatt theorem (which plays such a prominent role in my thesis). It says that
(where B is some modal logic):

Quantum Logic proves a formula A iff
the classical modal theory B proves []<>A.
It's like the theorem of Grzegorczyk, which says that Intuitionistic Logic
proves a formula A
iff the classical modal theory S4Grz proves its translation (with atomic p going to []p).
The transformation A => []<>A is just not a morphism in the
Kochen & Specker sense.
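
(For reference, a sketch of the Goldblatt translation as it is usually stated, with B the Brouwerian modal system KTB; treat this as a reminder of the standard clauses rather than the thesis' exact formulation:)

\[
\sigma(p) = \Box\Diamond p, \qquad
\sigma(\lnot A) = \Box\lnot\,\sigma(A), \qquad
\sigma(A \land B) = \sigma(A) \land \sigma(B),
\]

and quantum (ortho)logic proves $A \vdash B$ exactly when the system B proves $\sigma(A) \to \sigma(B)$. The map $\sigma$ respects conjunction but not negation or disjunction, which is one way to see that it is not a Boolean morphism in the Kochen-Specker sense.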
The physical reality I extract from comp cannot itself be embedded in a
Boolean algebra, apparently (I do not yet have a totally clean proof of that
statement, but let us say that only a high logical conspiracy would make
the arithmetical quantum logic Boolean).
Bruno

http://iridia.ulb.ac.be/~marchal/



Re: Tegmark is too physics-centric

2004-03-08 Thread Russell Standish
On Fri, Mar 05, 2004 at 02:20:54PM +0100, Bruno Marchal wrote:
 
 How does COMP entail that I am a machine? I don't follow that step at all.
 
 
 But comp *is* the assumption that I am a machine, even a digital machine.
 My last formulation of it, easy to remember is that comp = YD + CT + RA
 YD = "Yes doctor", it means you accept an artificial digital brain.
 (and CT is Church thesis, and RA is some amount of arithmetical realism).
 In conscience et mecanisme comp is called MEC-DIG-IND, DIG is for
 digital, and IND is for indexical. It really is the doctrine that I am a digital
 machine, or that I can be emulated by a digital machine.
 

Yes, in your thesis you often talk about survival under replacement of
a digital brain (cerveau digital). Digital simply means operates with
1s and 0s. Since any analogue value can be represented arbitrarily
accurately by a digital signal, this doesn't seem much of a stretch.
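
(A throwaway Python illustration of the "arbitrarily accurately" point; the value and the bit widths below are made up for the example:)

# Quantize an analogue value in [0, 1) to n bits; the error is bounded by 2**-n,
# so any desired accuracy can be reached just by adding bits.
def quantize(x, n_bits):
    levels = 2 ** n_bits
    code = int(x * levels)            # the digital representation: an n-bit integer
    return code, code / levels        # the code and the analogue value it stands for

x = 0.7310585786300049                # an arbitrary "analogue" value
for n in (4, 8, 16, 32):
    code, approx = quantize(x, n)
    print(n, "bits -> error", abs(x - approx))   # error shrinks like 2**-n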

In your chapter 1, you refer to a machine universelle digitale, c'est
a dire un ordinateur. The English word computer, which is the literal
translation of ordinateur, can refer to an analogue computer, which is
merely a device for performing computations - it needn't even be
Turing complete. The fact that you used the word universelle
previously does imply Turing completeness, but not that it is equivalent
to a Turing machine. After all, I might be concerned if there was some
noncomputable part of the brain that was not captured by a Turing
machine, but could be built into a digital machine of some kind (eg by
accurate copying of the physical layout of the brain).

Now I noticed you used the word indexical. What does this mean? (I
tend to skip over terms I don't understand, in the hope that I
understand the gist of the argument).

Anyway, the upshot of this was that I assumed that COMP was in fact
more general than computationalism. In fact I believe the first half
of your thesis (chapters 1-4) indeed still holds for this more general
interpretation of COMP (namely the necessity for subjective
indeterminism etc). True computationalism is perhaps only required for
the later sections where you invoke Theaetetus' (is that the correct
translation of Theetete?) theory of knowledge (connaissance). For
this, you need Goedel's theorem, which is applicable in the case of
Turing machines.



 
 
  Computationalism is really the modern digital version of Mechanism,
  a philosophy guessed by the early Hindu thinkers, Plato, ..., accepted for animals by
  Descartes, for humans by La Mettrie, Hobbes, etc. With Church's
  thesis, mechanism can lead to pretty mind/matter theories.
 
 
 If one accepts mechanisms that go beyond the Turing machine, then
 computationalism is a stricter assumption than mere mechanism (which I
 basically interpret as anti-vitalism).
 
 I would counter that a Geiger counter hooked up to a radioactive
 source is a mechanism, yet the output cannot be computed by a Turing
 machine. (Of course some people, such as Schmidhuber would disagree
 with that too, but that's another story).
 
 But no mechanism can compute the output of any self-duplication.
 With the Everett formulation of QM, a Geiger counter is emulable by a Turing
 machine, and the QM indeterminacy is just a first person comp indeterminacy.
 You cannot emulate with a Turing machine the *first person* knowledge
 he/she gets from looking at the Geiger counts, but no machine can
 predict the first person knowledge of a Washington/Moscow self-duplication
 either.
 
 Bruno

I agree one can simulate the Schroedinger equation of QM (albeit with
irrelevant exponential slowdown). However, mapping this back to your
YD postulate, this involves the doctor swapping the entire universe,
not just your brain. Perhaps you mean that one of the options the
doctor has is to upload you into a well crafted digital simulation (by
a Turing machine even) of you and your complete environment (a la
Matrix).
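
(To make "simulable, but with exponential slowdown" concrete, a toy Python sketch; the random Hermitian Hamiltonian is purely illustrative, and the point is only that n qubits already need a state vector of 2**n amplitudes on a classical machine:)

import numpy as np
from scipy.linalg import expm

n = 3                                   # qubits; memory and time grow like 2**n
dim = 2 ** n
rng = np.random.default_rng(0)
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2                # an arbitrary Hermitian "Hamiltonian"
psi = np.zeros(dim, dtype=complex)
psi[0] = 1.0                            # start in a basis state
U = expm(-1j * H * 0.01)                # one discrete step of exp(-iHt)
for _ in range(100):                    # classical, step-by-step emulation
    psi = U @ psi
print(abs(np.vdot(psi, psi)))           # squared norm stays ~1: the evolution is unitary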

Reminds me of the option Arthur Dent was presented with by the
pandimensional beings (aka mice) when they wanted to mince his brain
to extract the question for which the answer was '42'.

Cheers


A/Prof Russell Standish  Director
High Performance Computing Support Unit, Phone 9385 6967, 8308 3119 (mobile)
UNSW SYDNEY 2052 Fax   9385 6965, 0425 253119 ()
Australia                    [EMAIL PROTECTED]
Room 2075, Red Centre        http://parallel.hpc.unsw.edu.au/rks
International prefix  +612, Interstate prefix 02





Re: Tegmark is too physics-centric

2004-03-08 Thread Stephen Paul King
Dear Russell and Bruno,

Interleaving.

- Original Message - 
From: Russell Standish [EMAIL PROTECTED]
To: Bruno Marchal [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, March 08, 2004 7:50 PM
Subject: Re: Tegmark is too physics-centric

On Fri, Mar 05, 2004 at 02:20:54PM +0100, Bruno Marchal wrote:

 How does COMP entail that I am a machine? I don't follow that step at
all.


 But comp *is* the assumption that I am a machine, even a digital machine.
 My last formulation of it, easy to remember is that comp = YD + CT + RA
 YD = "Yes doctor", it means you accept an artificial digital brain.
 (and CT is Church thesis, and RA is some amount of arithmetical realism).
 In conscience et mecanisme comp is called MEC-DIG-IND, DIG is for
 digital, and IND is for indexical. It really is the doctrine that I am a
 digital machine, or that I can be emulated by a digital machine.
[RS]
Yes, in your thesis you often talk about survival under replacement of
a digital brain (cerveau digital). Digital simply means operates with
1s and 0s. Since any analogue value can be represented arbitrarily
accurately by a digital signal, this doesn't seem much of a stretch.

In your chapter 1, you refer to a machine universelle digitale, c'est
a dire un ordinateur. The English word computer, which is the literal
translation of ordinateur, can refer to an analogue computer, which is
merely a device for performing computations - it needn't even be
Turing complete. The fact that you used the word universelle
previously does imply Turing completeness, but not that it is equivalent
to a Turing machine. After all, I might be concerned if there was some
noncomputable part of the brain that was not captured by a Turing
machine, but could be built into a digital machine of some kind (eg by
accurate copying of the physical layout of the brain).

***
[SPK]

I think that the key here is something like a 1st person version of a
Turing Test: if you cannot tell a difference between one's world of
experience while in a meat machine - a brain - and in a digital machine, then it
is not a difference. Substantivalists will try to dispute this, but that is a
debate for another day.
This would seem to require only that there is sufficient expressiveness
within the digital machine's n-ary representation to encode all of the
fullness of all the 1st person experiences that whatever kind of machine -
meat or silicon or whatever - could have.

***
[RS]
Now I noticed you used the word indexical. What does this mean? (I
tend to skip over terms I don't understand, in the hope that I
understand the gist of the argument).

Anyway, the upshot of this was that I assumed that COMP was in fact
more general than computationalism. In fact I believe the first half
of your thesis (chapters 1-4) indeed still holds for this more general
interpretation of COMP (namely the necessity for subjective
indeterminism etc). True computationalism is perhaps only required for
the later sections where you invoke Theaetetus' (is that the correct
translation of Theetete?) theory of knowledge (connaissance). For
this, you need Goedel's theorem, which is applicable in the case of
Turing machines.

***
[SPK]

It seems to me that COMP is more general than computationalism, since it
seems to include certain unfalsifiable postulations that are independent of
computationalism per se - AR, to be specific. My own difficulties with
Bruno's thesis hinge on this postulation. I see it as an avoidance of a
fundamental difficulty in Foundation research, how to account for the 1st
person experience of time if one assumes that Existence in itself is
Time-less.

  Computationalism is really the modern digital version of Mechanism,
  a philosophy guessed by the early Hindu thinkers, Plato, ..., accepted for
  animals by Descartes, for humans by La Mettrie, Hobbes, etc. With Church's
  thesis, mechanism can lead to pretty mind/matter theories.
 
 [RS]
 If one accepts mechanisms that go beyond the Turing machine, then
 computationalism is a stricter assumption than mere mechanism (which I
 basically interpret as anti-vitalism).
 
 I would counter that a Geiger counter hooked up to a radioactive
 source is a mechanism, yet the output cannot be computed by a Turing
 machine. (Of course some people, such as Schmidhuber would disagree
 with that too, but that's another story).
 [BM]
 But no mechanism can compute the output of any self-duplication.
  With the Everett formulation of QM, a Geiger counter is emulable by a Turing
  machine, and the QM indeterminacy is just a first person comp
 indeterminacy.
  You cannot emulate with a Turing machine the *first person* knowledge
 he/she gets from looking at the Geiger counts, but no machine can
 predict the first person knowledge of a Washington/Moscow self-duplication
 either.


***
[SPK]

This is somewhere else that I trip over and fall in my thinking about your
work, Bruno. Is this "no mechanism can compute the output of any
self-duplication" a classical version of the no-cloning theorem?

Re: Tegmark is too physics-centric

2004-03-05 Thread Bruno Marchal
At 09:08 03/03/04 +1100, Russell Standish wrote:
On Tue, Mar 02, 2004 at 12:28:04PM +0100, Bruno Marchal wrote:
 
 RS As I understand it, COMP refers to the conjunction of:
 
 1) Arithmetic realism
 2) Church-Turing thesis
 3) Survivability of consciousness under duplication



 BM...and annihilation of the original (if not it could be trivial). I 
guess
 that's what you intended to mean.


and I add, digital duplication. (that's why Church thesis has to be 
called for)


How does COMP entail that I am a machine? I don't follow that step at all.


But comp *is* the assumption that I am a machine, even a digital machine.
My last formulation of it, easy to remember is that comp = YD + CT + RA
YD = "Yes doctor", it means you accept an artificial digital brain.
(and CT is Church thesis, and RA is some amount of arithmetical realism).
In conscience et mecanisme comp is called MEC-DIG-IND, DIG is for
digital, and IND is for indexical. It really is the doctrine that I am a digital
machine, or that I can be emulated by a digital machine.



 Computationalism is really the modern digital version of Mechanism,
 a philosophy guessed by the early Hindu thinkers, Plato, ..., accepted for animals by
 Descartes, for humans by La Mettrie, Hobbes, etc. With Church's
 thesis, mechanism can lead to pretty mind/matter theories.

If one accepts mechanisms that go beyond the Turing machine, then
computationalism is a stricter assumption than mere mechanism (which I
basically interpret as anti-vitalism).
I would counter that a Geiger counter hooked up to a radioactive
source is a mechanism, yet the output cannot be computed by a Turing
machine. (Of course some people, such as Schmidhuber would disagree
with that too, but that's another story).
But no mechanism can compute the output of any self-duplication.
With the Everett formulation of QM, a Geiger counter is emulable by a Turing
machine, and the QM indeterminacy is just a first person comp indeterminacy.
You cannot emulate with a Turing machine the *first person* knowledge
he/she gets from looking at the Geiger counts, but no machine can
predict the first person knowledge of a Washington/Moscow self-duplication
either.
Bruno



Re: Tegmark is too physics-centric

2004-03-02 Thread Bruno Marchal
At 09:14 02/03/04 +1100, Russell Standish wrote:


On Mon, Mar 01, 2004 at 03:00:30PM +0100, Bruno Marchal wrote:


 comp assumes only that the sequence 0, 1, 2, 3, 4, 5, 6, ... lives in
 Platonia. 3-person time apparently does not appear. 1-person time
 appears through the S4Grz logic.

Fair enough - I realised it was a consequence of your mind model.


OK. I should come back later on the relation between time,
consciousness, Brouwer intuitionist logic and the modal logic S4Grz.


 In terms of the above assumptions, 1) is a consequence of
 computationalism, which I take is a basis of your theory (although
 I've never understood how computationalism follows from COMP).



 ?   Wait a bit. COMP refers to  computationalism. I don't understand.

As I understand it, COMP refers to the conjunction of:

1) Arithmetic realism
2) Church-Turing thesis
3) Survivability of consciousness under duplication


...and annihilation of the original (if not it could be trivial). I guess
that's what you intended to mean.


Computationalism (as I understand it) is the strong AI principle -
that a program running on a Turing machine (or equivalent) is
sufficient to generate consciousness. A stronger version might be that
all conscious processes can be represented by a program. I can see how
3) follows from this stronger version - but I don't see how
computationalism follows from COMP.


Well, that's really a question of vocabulary. I prefer to say "Strong AI"
for ... the strong AI thesis. I guess also you intended to say that COMP
does not follow from the Strong AI thesis, because the fact that machines
can think does not entail that we are machines ("machines can think" does not
entail that *only* machines can think). But COMP entails the strong AI thesis,
because if I am a machine, then machines can think (accepting the
perhaps foolish idea that *I* can think :)
Computationalism is really the modern digital version of Mechanism,
a philosophy guessed by the early Hindu thinkers, Plato, ..., accepted for animals by
Descartes, for humans by La Mettrie, Hobbes, etc. With Church's
thesis, mechanism can lead to pretty mind/matter theories.
Bruno

http://iridia.ulb.ac.be/~marchal/



Re: Tegmark is too physics-centric

2004-03-02 Thread Stephen Paul King
Dear Bruno,
- Original Message - 
From: Bruno Marchal [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, March 02, 2004 6:28 AM
Subject: Re: Tegmark is too physics-centric


 At 09:14 02/03/04 +1100, Russell Standish wrote:
snip

 As I understand it, COMP refers to the conjunction of:
 
 1) Arithmetic realism
 2) Church-Turing thesis
 3) Survivability of consciousness under duplication
 [BM]
 ...and annihilation of the original (if not it could be trivial). I
guess
 that's what you intended to mean.

What about 3') Survivability of consciousness under quantum
teleportation?

Stephen




Re: Tegmark is too physics-centric

2004-03-02 Thread Russell Standish
On Tue, Mar 02, 2004 at 12:28:04PM +0100, Bruno Marchal wrote:
 
 As I understand it, COMP refers to the conjunction of:
 
 1) Arithmetic realism
 2) Church-Turing thesis
 3) Survivability of consciousness under duplication
 
 
 
 ...and annihilation of the original (if not it could be trivial). I guess
 that's what you intended to mean.
 
 
 
 Computationalism (as I understand it) is the strong AI principle -
 that a program running on a Turing machine (or equivalent) is
 sufficient to generate consciousness. A stronger version might be that
 all conscious processes can be represented by a program. I can see how
 3) follows from this stronger version - but I don't see how
 computationalism follows from COMP.
 
 
 
 Well, that's really a question of vocabulary. I prefer to say Strong AI
 for ... the strong AI thesis. I guess also you intended to say that COMP
 does not follow from the Strong AI thesis, because the fact that machines
 can think does not entail that we are machines ("machines can think" does not
 entail that *only* machines can think). But COMP entails the strong AI 
 thesis,
 because if I am a machine then machines can think. (accepting the
 perhaps foolish idea that *I* can think :)

How does COMP entail that I am a machine? I don't follow that step at all.


 Computationalism is really the modern digital version of Mechanism,
 a philosophy guessed by the early Hindu thinkers, Plato, ..., accepted for animals by
 Descartes, for humans by La Mettrie, Hobbes, etc. With Church's
 thesis, mechanism can lead to pretty mind/matter theories.
 

If one accepts mechanisms that go beyond the Turing machine, then
computationalism is a stricter assumption than mere mechanism (which I
basically interpret as anti-vitalism).

I would counter that a Geiger counter hooked up to a radioactive
source is a mechanism, yet the output cannot be computed by a Turing
machine. (Of course some people, such as Schmidhuber would disagree
with that too, but that's another story).

 Bruno
 
 http://iridia.ulb.ac.be/~marchal/

-- 



A/Prof Russell Standish  Director
High Performance Computing Support Unit, Phone 9385 6967, 8308 3119 (mobile)
UNSW SYDNEY 2052 Fax   9385 6965, 0425 253119 ()
Australia                    [EMAIL PROTECTED]
Room 2075, Red Centre        http://parallel.hpc.unsw.edu.au/rks
International prefix  +612, Interstate prefix 02





Re: Tegmark is too physics-centric

2004-03-01 Thread Bruno Marchal
At 10:33 28/02/04 +1100, Russell Standish wrote:
I deliberately leave vague what is in the theory of the mind, but
simply assume a small number of things about consciousness:
1) That there is a linear dimension called (psychological) time, in which the
conscious mind finds itself embedded
2) The observations are a form of a projection from the set of subsets of
possibilities onto the same set. We identify a QM state with a
subset of possibilities.
3) The Kolmogorov probability axioms
4) The anthropic principle
5) Sets of observers are measurable
Also I assume the existence of the set of all descriptions (which I
call the Schmidhuber ensemble, but perhaps more accurately should be
called the Schmidhuber I ensemble to distance it from later work of
his). This is roughly equivalent to your Arithmetic Realism, but
probably not identical. It is the form I prefer philosophically.
(I think this is the exhaustive set of assumptions - but I'm willing
to have others identified)
I only treat continuous time in Occam's Razor (hence the differential
equation) however I do reference the theory of timescales which would
provide a way of extending this to other types of time (discrete,
rationals etc). In any case, contact with standard QM is only achieved
for continuous time.
The justification for assuming time is that one needs time in order to
appreciate differences - and differences are the foundation of
information - so in order to know anything at all, one needs to
appreciate differences hence the need for a time dimension.
Note - computationalism requires time in order to compute mind -
therefore the assumption of time is actually a weaker assumption than
computationalism.




comp assumes only that the sequence 0, 1, 2, 3, 4, 5, 6, ... lives in
Platonia. 3-person time apparently does not appear. 1-person time
appears through the S4Grz logic.


In terms of the above assumptions, 1) is a consequence of
computationalism, which I take is a basis of your theory (although
I've never understood how computationalism follows from COMP).


?   Wait a bit. COMP refers to  computationalism. I don't understand.



2) corresponds to your 1-3 distinction. Indeed I refer to your work as
justification for assuming the projection postulate.


That is not clear for me.




3) Causes some people problems - however I note that some others
start from the Kolmogorov probability axioms also.


No problem at all with Kolmogorov proba axioms.




4) I know the Anthropic principle causes you problems - indeed I can
only remark that it is an empirical fact of our world, and leave it as
a mystery to be solved later on.




No problem with the so called Weak Anthropic Principle. Although
obviously I prefer a Turing-Universal-Machine--thropic principle ...




5) Measurability of observers. This is the part that was buried in the
derivation of linearity of QM, that caused you (and me too) some
difficulty in understanding what is going on. I spoke to Stephen King
on the phone yesterday, and this was one point he stumbled on
also. Perhaps this is another mystery like the AP, but appears
necessary to get the right answer (ie QM !)
Of course a more detailed theory of the mind should give a more
detailed description of physics. For example - we still don't know
where 3+1 spacetime comes from, or why everything appears to be close
to Newtonian dynamics.
Stephen King is cooking up some more ideas in this line which seem
interesting...


Thanks for your clarification,

Bruno

http://iridia.ulb.ac.be/~marchal/



Re: Tegmark is too physics-centric

2004-02-27 Thread Russell Standish
I deliberately leave vague what is in the theory of the mind, but
simply assume a small number of things about consciousness:

1) That there is a linear dimension called (psychological) time, in which the
conscious mind finds itself embedded
2) The observations are a form of a projection from the set of subsets of
possibilities onto the same set. We identify a QM state with a
subset of possibilities.
3) The Kolmogorov probability axioms
4) The anthropic principle
5) Sets of observers are measurable

Also I assume the existence of the set of all descriptions (which I
call the Schmidhuber ensemble, but perhaps more accurately should be
called the Schmidhuber I ensemble to distance it from later work of
his). This is roughly equivalent to your Arithmetic Realism, but
probably not identical. It is the form I prefer philosophically.

(I think this is the exhaustive set of assumptions - but I'm willing
to have others identified)

I only treat continuous time in Occam's Razor (hence the differential
equation) however I do reference the theory of timescales which would
provide a way of extending this to other types of time (discrete,
rationals etc). In any case, contact with standard QM is only achieved
for continuous time.

The justification for assuming time is that one needs time in order to
appreciate differences - and differences are the foundation of
information - so in order to know anything at all, one needs to
appreciate differences hence the need for a time dimension.

Note - computationalism requires time in order to compute mind -
therefore the assumption of time is actually a weaker assumption than
computationalism. 

In terms of the above assumptions, 1) is a consequence of
computationalism, which I take is a basis of your theory (although
I've never understood how computationalism follows from COMP).

2) corresponds to your 1-3 distinction. Indeed I refer to your work as
justification for assuming the projection postulate.

3) Causes some people problems - however I note that some others
start from the Kolmogorov probability axioms also.

4) I know the Anthropic principle causes you problems - indeed I can
only remark that it is an empirical fact of our world, and leave it as
a mystery to be solved later on.

5) Measurability of observers. This is the part that was buried in the
derivation of linearity of QM, that caused you (and me too) some
difficulty in understanding what is going on. I spoke to Stephen King
on the phone yesterday, and this was one point he stumbled on
also. Perhaps this is another mystery like the AP, but appears
necessary to get the right answer (ie QM !)

Of course a more detailed theory of the mind should give a more
detailed description of physics. For example - we still don't know
where 3+1 spacetime comes from, or why everything appears to be close
to Newtonian dynamics.

Stephen King is cooking up some more ideas in this line which seem
interesting... 

Cheers

On Fri, Feb 27, 2004 at 02:55:33PM +0100, Bruno Marchal wrote:
 At 09:19 25/02/04 +1100, Russell Standish wrote:
 I think that psychological time fits the bill. The observer needs a
 temporal dimension in which to appreciate differences between
 states.
 
 OK. That move makes coherent your attempt to derive physics,
 and makes it even compatible with the sort of approach I advocate,
 but then: would you agree that you should define or at least
 explain what is the psychological time. More generally:
 What is your psychology or your theory of mind? This is (imo)
 unclear in your Occam Paper (or I miss something).
 I find that assuming time, and the applicability of differential
 equation (especially with respect to a psychological time)
 is quite huge.
 
 Bruno
 
 
 
 
 Physical time presupposes a physics, which I haven't done in
 Occam.
 
 It is obviously a little more structured than an ordering. A space
 dimension is insufficient for an observer to appreciate differences,
 isn't it?
 
 Cheers
 
 On Tue, Feb 24, 2004 at 02:11:07PM +0100, Bruno Marchal wrote:
 
  Hi Russell,
 
  Let me try to be a little more specific. You say in your Occam paper
  at   http://parallel.hpc.unsw.edu.au/rks/docs/occam/node4.html
 
  The first assumption to be made is that observers will find themselves
  embedded in a temporal dimension. A Turing machine requires time to
  separate the sequence of states it occupies as it performs a computation.
  Universal Turing machines are models of how humans compute things, so 
 it is
  possible that all conscious observers are capable of universal 
 computation.
  Yet for our present purposes, it is not necessary to assume observers are
  capable of universal computation, merely that observers are embedded in
  time. 
 
 Do you mean physical time, psychological time, or just a (linear)
  order? I am just
  trying to have a better understanding.

-- 



Re: Tegmark is too physics-centric

2004-02-25 Thread Russell Standish
On Wed, Feb 25, 2004 at 12:08:43AM -0500, Stephen Paul King wrote:
 Dear Russell,
 
  Could we associate this psychological time with the orderings that
 obtain when considering successive measurements of various non-commutative,
 canonically conjugate (QM) states?

The word successive implies a time dimension already. I'm not sure
what you are proposing here.

 Also, re your Occam's razor paper, have you considered the necessity of
 a principle that applies between observers, more than that involved with the
 Anthropic principle? Something along the lines of: the allowable
 communications between observers is restrained to only those that are
 mutually consistent. We see hints of this in EPR situations. ;-)
 

No I haven't considered this second requirement. It would be
interesting to note whether it is a derivative concept (can be derived
from the standard QM principles say), or whether it needs to be added
in as a fundamental requirement (in which case comes the question of
why).

Cheers

 Kindest regards,
 
 Stephen
 
 - Original Message - 
 From: Russell Standish [EMAIL PROTECTED]
 To: Bruno Marchal [EMAIL PROTECTED]
 Cc: Russell Standish [EMAIL PROTECTED];
 [EMAIL PROTECTED]
 Sent: Tuesday, February 24, 2004 5:19 PM
 Subject: Re: Tegmark is too physics-centric
 
I think that psychological time fits the bill. The observer needs a
temporal dimension in which to appreciate differences between
 states.
 
 Physical time presupposes a physics, which I haven't done in
 Occam.
 
 It is obviously a little more structured than an ordering. A space
 dimension is insufficient for an observer to appreciate differences,
 isn't it?
 
  Cheers
 
 snip
 

-- 



A/Prof Russell Standish  Director
High Performance Computing Support Unit, Phone 9385 6967, 8308 3119 (mobile)
UNSW SYDNEY 2052 Fax   9385 6965, 0425 253119 ()
Australia                    [EMAIL PROTECTED]
Room 2075, Red Centre        http://parallel.hpc.unsw.edu.au/rks
International prefix  +612, Interstate prefix 02





Re: Tegmark is too physics-centric

2004-02-24 Thread Bruno Marchal
Hi Russell,

Let me try to be a little more specific. You say in your Occam paper
at   http://parallel.hpc.unsw.edu.au/rks/docs/occam/node4.html
The first assumption to be made is that observers will find themselves 
embedded in a temporal dimension. A Turing machine requires time to 
separate the sequence of states it occupies as it performs a computation. 
Universal Turing machines are models of how humans compute things, so it is 
possible that all conscious observers are capable of universal computation. 
Yet for our present purposes, it is not necessary to assume observers are 
capable of universal computation, merely that observers are embedded in time. 

Do you mean physical time, psychological time, or just a (linear)
order? I am just
trying to have a better understanding.

Bruno





At 18:00 23/02/04 +1100, Russell Standish wrote:
Comments interspersed.

On Sun, Jan 18, 2004 at 07:15:45AM -0500, Kory Heath wrote:

 I understand this perspective, but for what it's worth, I'm profoundly out
 of sympathy with it. In my view, computation universality is the real 
key -
 life and consciousness are going to pop up in any universe that's
 computation universal, as long as the universe is big enough and/or it
 lasts long enough. (And there's always enough time and space in the
 Mathiverse!)

Computational universality is not sufficient for open-ended evolution
of life. In fact we don't what is sufficient, as evidenced by it being
an open problem (see Bedau et al., Artificial Life 6, 363.)
I also suspect that it is not necessary for the evolution of SASes,
but this is obvious a debatable point.



Re: Tegmark is too physics-centric

2004-02-24 Thread Russell Standish
I think that psychological time fits the bill. The observer needs a
temporal dimension in which to appreciate differences between
states.

Physical time presupposes a physics, which I haven't done in
Occam.

It is obviously a little more structured than an ordering. A space
dimension is insufficient for an observer to appreciate differences,
isn't it?

Cheers

On Tue, Feb 24, 2004 at 02:11:07PM +0100, Bruno Marchal wrote:
 
 Hi Russell,
 
 Let me try to be a little more specific. You say in your Occam paper
 at   http://parallel.hpc.unsw.edu.au/rks/docs/occam/node4.html
 
 The first assumption to be made is that observers will find themselves 
 embedded in a temporal dimension. A Turing machine requires time to 
 separate the sequence of states it occupies as it performs a computation. 
 Universal Turing machines are models of how humans compute things, so it is 
 possible that all conscious observers are capable of universal computation. 
 Yet for our present purposes, it is not necessary to assume observers are 
 capable of universal computation, merely that observers are embedded in 
 time. 
 
 Do you mean physical time, psychological time, or just a (linear)
 order? I am just
 trying to have a better understanding.
 
 Bruno
 
 
 
 
 
 
 At 18:00 23/02/04 +1100, Russell Standish wrote:
 Comments interspersed.
 
 On Sun, Jan 18, 2004 at 07:15:45AM -0500, Kory Heath wrote:
 
  I understand this perspective, but for what it's worth, I'm profoundly 
 out
  of sympathy with it. In my view, computation universality is the real 
 key -
  life and consciousness are going to pop up in any universe that's
  computation universal, as long as the universe is big enough and/or it
  lasts long enough. (And there's always enough time and space in the
  Mathiverse!)
 
 Computational universality is not sufficient for open-ended evolution
 of life. In fact we don't know what is sufficient, as evidenced by it being
 an open problem (see Bedau et al., Artificial Life 6, 363.)
 
 I also suspect that it is not necessary for the evolution of SASes,
 but this is obviously a debatable point.

-- 



A/Prof Russell Standish  Director
High Performance Computing Support Unit, Phone 9385 6967, 8308 3119 (mobile)
UNSW SYDNEY 2052 Fax   9385 6965, 0425 253119 ()
Australia                    [EMAIL PROTECTED]
Room 2075, Red Centre        http://parallel.hpc.unsw.edu.au/rks
International prefix  +612, Interstate prefix 02





Re: Tegmark is too physics-centric

2004-02-24 Thread Stephen Paul King
Dear Russell,

Could we associate this psychological time with the orderings that
obtain when considering successive measurements of various non-commutative,
canonically conjugate (QM) states?
Also, re your Occam's razor paper, have you considered the necessity of
a principle that applies between observers, more than that involved with the
Anthropic principle? Something along the lines of: the allowable
communications between observers are restricted to only those that are
mutually consistent. We see hints of this in EPR situations. ;-)

Kindest regards,

Stephen

- Original Message - 
From: Russell Standish [EMAIL PROTECTED]
To: Bruno Marchal [EMAIL PROTECTED]
Cc: Russell Standish [EMAIL PROTECTED];
[EMAIL PROTECTED]
Sent: Tuesday, February 24, 2004 5:19 PM
Subject: Re: Tegmark is too physics-centric

I think that psychological time fits the bill. The observer needs a
temporal dimension in which to appreciate differences between
states.

Physical time presupposes a physics, which I haven't done in
Occam.

It is obviously a little more structured than an ordering. A space
dimension is insufficient for an observer to appreciate differences,
isn't it?

 Cheers

snip




Re: Tegmark is too physics-centric

2004-02-23 Thread Bruno Marchal
At 18:00 23/02/04 +1100, Russell Standish wrote:
Comments interspersed.

On Sun, Jan 18, 2004 at 07:15:45AM -0500, Kory Heath wrote:

 I understand this perspective, but for what it's worth, I'm profoundly out
 of sympathy with it. In my view, computation universality is the real 
key -
 life and consciousness are going to pop up in any universe that's
 computation universal, as long as the universe is big enough and/or it
 lasts long enough. (And there's always enough time and space in the
 Mathiverse!)

Computational universality is not sufficient for open-ended evolution
of life. In fact we don't know what is sufficient, as evidenced by it being
an open problem (see Bedau et al., Artificial Life 6, 363.)


How do you know, then, that comp universality is not sufficient?
(Given that comp universality entails the non-existence of a complete
theory of comp-universality; I mean, computer science is provably
not completely unifiable; there is no general theory for non-stopping
machines or non-stopping comp processes.)
Are you thinking about something specific which is lacking in
comp universality?

I also suspect that it is not necessary for the evolution of SASes,
but this is obviously a debatable point.


Are you saying that comp is entirely irrelevant for explaining
the origin of life, the origin of the universe(s)?
Bruno






Re: Tegmark is too physics-centric

2004-02-22 Thread Russell Standish
Comments interspersed.

On Sun, Jan 18, 2004 at 07:15:45AM -0500, Kory Heath wrote:
 
 I understand this perspective, but for what it's worth, I'm profoundly out 
 of sympathy with it. In my view, computation universality is the real key - 
 life and consciousness are going to pop up in any universe that's 
 computation universal, as long as the universe is big enough and/or it 
 lasts long enough. (And there's always enough time and space in the 
 Mathiverse!) 

Computational universality is not sufficient for open-ended evolution
of life. In fact we don't know what is sufficient, as evidenced by it being
an open problem (see Bedau et al., Artificial Life 6, 363.)

I also suspect that it is not necessary for the evolution of SASes,
but this is obviously a debatable point.

 (countably?) infinite. So why would I be more likely to find myself in one 
 of those universes rather than the other?
 
 -- Kory
 

The issue of where physics comes from is addressed in my paper "Why
Occam's Razor". Dynamics on complex-valued Hilbert spaces is the most
likely observed universe. I have just had another discussion with
Stephen King re why we should observe 3+1 spacetime. I am, like you, somewhat
unconvinced by the arguments put forward in Tegmark's paper
(which aren't due to him at all), but at present it's the best we
have. There should be an anthropic reason why 3+1 spacetime is
necessary, or even the most likely dimensionality seen by observers.

Cheers



A/Prof Russell Standish  Director
High Performance Computing Support Unit, Phone 9385 6967, 8308 3119 (mobile)
UNSW SYDNEY 2052 Fax   9385 6965, 0425 253119 ()
Australia                    [EMAIL PROTECTED]
Room 2075, Red Centre        http://parallel.hpc.unsw.edu.au/rks
International prefix  +612, Interstate prefix 02





Re: Tegmark is too physics-centric

2004-01-19 Thread Saibal Mitra
I don't think there are many intelligent beings per cubic Planck length in
our universe at all! In fact, string theorists don't know how to get to the
standard model from their favorite theory, yet they still believe in it.
Simple deterministic models could certainly explain our laws of physics, as
't Hooft explains in these articles:



Determinism beneath Quantum Mechanics:

http://arxiv.org/abs/quant-ph/0212095


Quantum Mechanics and Determinism:

http://arxiv.org/abs/hep-th/0105105

How Does God Play Dice? (Pre-)Determinism at the Planck Scale:

http://arxiv.org/abs/hep-th/0104219


- Original Message -
From: Kory Heath [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sunday, January 18, 2004 1:15 PM
Subject: Re: Tegmark is too physics-centric


 At 1/17/04, Hal Finney wrote:
 But let me ask if you agree that considering Conway's 2D
 Life world with simply-specified initial conditions as in your example,
 that conscious life would be extraordinarily rare?

 I certainly agree that it would be extraordinarily rare, in the sense
 that the size of the lattice would need to be very big, and the number of
 clock-ticks required would need to be very large. But "big" and "large" are
 such relative terms! Clearly, our own universe is very, very big. The
 question is, how can we sensibly determine whether life is more likely in
 our universe or in Conway's Life universe?

 I don't believe we have anywhere near enough data to answer this question,
 but I don't think it's unanswerable in principle. Fredkin actually
believes
 that our universe is a 3+1D cellular automaton, and if anyone ever found
 such a description of our physics (or some other fundamentally
 computational description), then we could directly compare it with
Conway's
 Life, determining for each one how big the lattice needs to be, and how
 many clock-ticks are required, for life to appear with (say) 90%
 probability. (Of course, this determination might be difficult even when
we
 know the rules of the CAs. But we can try.)

 One thing that you'd have to take into account is the complexity of the
 rules you're comparing, including the number of states allowed per cell.
 Not only are the rules to Conway's Life extremely simple, but the cells
are
 binary. All things being equal, I would expect that an increase in the
 complexity of the rules and the number of cell-states allowed would
 decrease the necessary lattice-size and/or number of clock-ticks required
 for SASs to grow out of a pseudo-random initial state. I mention this to
 point out a problem with our intuitions about our universe vs. Conway's
 Life: the description of our universe is almost certainly more complex
than
 the description of Conway's Life with a simple initial state. If Fredkin
 actually succeeds in finding a 3+1D CA which describes our universe, it
 will almost certainly require more than 2 cell-states, and its rules will
 certainly be more complex than those of the Life universe. We have to take
 this difference into account when trying to compare the two universes, but
 we have nowhere near enough data to quantify the difference currently. We
 really don't know what size of space in the Life universe is equivalent to
 (say) a solar system in this universe.

 In a way, this is all beside the point, since I have no problem believing
 that one CA can evolve SASs much more easily than some other CA whose
rules
 and initial state are exactly as complex. (In fact, this must be true,
 since for any CA that supports life at all, there's an equally complex one
 that isn't even computation universal.) I have no problem believing that
 the Life universe is, in some objective sense, not very conducive to SASs.
 Perhaps it's less conducive to SASs than our own universe, although I'm
not
 convinced. What I have a problem believing is that CAs as a class are
 somehow less conducive to observers than quantum-physical models as a
 class. In fact, I think it's substantially more likely that there are
 relatively simple CA models (and other computational models) that are much
 more conducive to SASs than either Conway's Life universe or our own.
 Models in which, for instance, neural-net structures arise much more
 naturally from the basic physics of the system than they do in our
 universe, or the Life universe.

 In many ways, our universe seems tailor made for creating observers.

 I understand this perspective, but for what it's worth, I'm profoundly out
 of sympathy with it. In my view, computation universality is the real
key -
 life and consciousness are going to pop up in any universe that's
 computation universal, as long as the universe is big enough and/or it
 lasts long enough. (And there's always enough time and space in the
 Mathiverse!) When I think about the insane, teetering, jerry-rigged
 contraptions that we call life in this universe - when I think about the
 tortured complexity that matter has to twist itself into just to give us
 single-celled replicators - and when I

Re: Tegmark is too physics-centric

2004-01-18 Thread Hal Finney
Kory Heath, [EMAIL PROTECTED], writes:
 It is very likely that even Conway's Life universe has this feature. Its 
 rules are absurdly simple, and we know that it can contain self-replicating 
 structures, which would be capable of mutation, and therefore evolution. We 
 can specify very simple initial conditions from which self-replicating 
 structures would be overwhelmingly likely to appear, as long as the lattice 
 is big enough. (The binary digits of many easily-computable real numbers 
 would work.)
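
(As an aside, the rules really are that simple; a minimal Python sketch of one Life step, with the lattice seeded from the binary digits of a computable real chosen arbitrarily, in the spirit of the suggestion above:)

# One step of Conway's Life on a toroidal lattice, plus a seed taken from the
# binary expansion of a computable real (sqrt(2) here, purely for illustration).
from decimal import Decimal, getcontext

def life_step(grid):
    h, w = len(grid), len(grid[0])
    new = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            nbrs = sum(grid[(i + di) % h][(j + dj) % w]
                       for di in (-1, 0, 1) for dj in (-1, 0, 1)
                       if (di, dj) != (0, 0))
            # birth on exactly 3 neighbours, survival on 2 or 3
            new[i][j] = 1 if nbrs == 3 or (grid[i][j] and nbrs == 2) else 0
    return new

def seed_from_sqrt2(h, w):
    getcontext().prec = h * w // 3 + 10   # enough decimal digits for h*w bits
    frac = Decimal(2).sqrt() - 1          # fractional part of sqrt(2)
    bits = []
    while len(bits) < h * w:              # peel off binary digits one at a time
        frac *= 2
        bits.append(int(frac >= 1))
        frac -= int(frac)
    return [[bits[i * w + j] for j in range(w)] for i in range(h)]

grid = seed_from_sqrt2(16, 16)
for _ in range(10):
    grid = life_step(grid)
print(sum(map(sum, grid)), "live cells after 10 steps")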

Yes, I see that that is true.  I think it points to a problem with some
of the simple conceptualizations of measure, about which I will say
more below.  But let me ask if you agree that, considering Conway's 2D
Life world with simply-specified initial conditions as in your example,
conscious life would be extraordinarily rare?

I want to say, vastly more rare than in our universe, but of course we
don't know how rare life actually is in our universe, so that may be a
hard claim to justify.  But the point is that our universe has stable
structures; it has atoms of dozens of different varieties, which can form
uncountable millions of stable molecules.  It has mechanisms to generate
varieties of these different molecules and collect them together in
environments where they can react in interesting ways.  We don't have a
full picture of how life and consciousness evolved, but looking around,
it doesn't seem like it should have been THAT hard, which is where the
Fermi paradox comes from.  In many ways, our universe seems tailor made
for creating observers.

In contrast, in the Life world there are no equivalents to atoms or
molecules, no chemical reactions.  It's too chaotic; there's not enough
structure.  Replicators and life seem to require a balance between
chaos and stasis, and Life is far too dynamic.  It just looks to me
like it would be almost impossible for replicators to arise naturally.
Almost impossible, but not absolutely impossible, so if you tried enough
initial conditions as you suggest, it would happen.  I won't belabor
this argument unless you disagree about the ease with which life might
arise in a Life universe, and consciousness evolve.

And the main point is that these are exactly the kinds of considerations
which Tegmark discusses.  Issues of stability of the building blocks
of life, of providing the right amounts and kinds of interactions.
These physics-like considerations are precisely the correct issues to
consider in looking at how easily observers will arise, and that is
Tegmark's point.

I haven't read Tegmark's paper in detail recently, and to the extent
that his arguments are based on string theory or QM then I would agree
that those are too parochial.  But as I recall he had a number of broad
arguments that would apply even to a Life-like universe.

Now I'll get back to the question above about measure.  There are
universes, as in your example, where life is intrinsically unlikely,
but if you make the universe large enough, and provide all possible
initial conditions for finite-sized regions, then in all that vastness,
somewhere life will exist.

The problem is, this is not too different from separately implementing
alternate, smaller versions of that universe, with different initial
conditions for each, so that all possible initial conditions are tried
in some universe.  A small fraction of those universes will have life.
To specify just one of the life-containing universes will typically
take a lot of information, while specifying all of the universes takes
less information.

This is analogous to the even broader picture of the universal
dovetailer (UD) program, the program that runs all programs (on all
initial conditions).  It's a very small program, yet it creates all
possible universes.  Even universes with incredibly complex laws of
physics and initial conditions are created by this extremely small
UD program.  Does this mean that all universes have the same measure,
and it is large, since this small program creates them?
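
(A toy dovetailer skeleton, just to show how small such a program can be; the "programs" here are a deliberately trivial stand-in, where a real UD would dispatch to a universal Turing machine:)

# A toy universal dovetailer: at stage k it gives each of the first k "programs"
# k more execution steps, so every program eventually gets arbitrarily many steps
# even though the loop is a single sequential process.
def step(i, state):
    """One execution step of toy program number i (a fixed affine update)."""
    return (state * (i + 2) + 1) % (10 ** 9)

def dovetail(stages):
    states = {}                                   # program index -> current state
    for k in range(1, stages + 1):
        for i in range(k):                        # programs 0 .. k-1
            states.setdefault(i, 0)
            for _ in range(k):                    # k more steps for each of them
                states[i] = step(i, states[i])
    return states

print(dovetail(5))                                # after 5 stages, 5 programs have run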

The answer has to be no.  It's not enough to find a small program which
generates a desired structure, somewhere in the vastness that it creates.
Otherwise all integers would have the same complexity because they are
all created by a simple counting program.

Wei Dai once suggested a heuristic that the measure of a structure ought
to have two components: the size of the program that creates it; and the
size of a program which locates it in the output of the first program.
By this argument, you could have a big program which output just the
structure in question, which was then located by a trivial one; or you
could have a small program which output the structure among a vastness,
which then required a big program to locate it.  Either way, the
structure ends up with a low measure.
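
(In symbols, and only as a paraphrase of that heuristic rather than a quotation of it:)

\[
m(x) \;\approx\; 2^{-(|p| + |q|)}
\]

where p is a shortest program generating an ensemble that contains x, and q is a shortest program locating x within p's output. A big p with a trivial q, or a tiny p (like the UD) with a big locating q, both leave the exponent large and the measure small.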

This was the motivation for the idea I proposed a few days ago, that
for applying anthropic reasoning, a universe should get a bonus if
it had a high density of observers, rather 

Re: Tegmark is too physics-centric

2004-01-17 Thread Hal Finney
Eric Hawthorne writes:
 2. SAS's which are part of a 3+1 space may not have higher measure than 
 SAS's in other spaces, but perhaps the SAS's
 in the other spaces wouldn't have a decent way to make a living. In 
 other words, maybe they'd have a hard time
 perceiving the things in their space, existing coherently physically 
 in it, being able to incrementally impact and survival-optimize
 their surroundings in the space etc.
 In other words they'd be inhabiting (and trying to perceive and act on)
 a world of NOISE, or of LIMITED DEGREES OF FREEDOM AND EVOLUTION,
 or of UNRULY, untameable hyperbolic  physical laws and functions.

I agree that this is what Tegmark is trying to say.  If we look at it
in terms of measure, there are (broadly speaking) two ways for creatures
to exist: artificial or natural.  By artificial I mean that there could
be some incredibly complex combination of laws and initial conditions
built into the simulated universe so that the creature's existence was in
effect pre-ordained.  (If we ever build a simulation containing conscious
entities, our first attempts will almost certainly be of this type,
where we have carefully crafted the program to create consciousness.)
By natural I mean that we could have simple laws of physics and initial
conditions in which the creatures evolve over a long period of time,
as we have seen in our universe.

Universes of the natural type would seem likely to have higher measure,
because they are inherently simpler to specify.  It is in those universes
where Tegmark's physics-based arguments come into play.  For creatures
to evolve, to become complex, to optimize for survival, things like
dimensionality are very relevant.  Tegmark goes into some detail on the
problems with other than 3+1 dimensional space.

Of course, there's always a risk in such arguments that we may be falling
victim to parochialism, thinking that our own way of life is the only
one possible.  It may be that there are some possible life forms that
exist in a very different mode than we have imagined, in a universe with
different dimensionality, or perhaps one where dimensionality doesn't
even make sense.  But I think overall Tegmark does a good job in avoiding
at least the most obvious flaws of parochialism and anthropomorphism.

Hal Finney



Re: Tegmark is too physics-centric

2004-01-17 Thread Eric Hawthorne


Kory Heath wrote:


Tegmark goes into some detail on the
problems with other than 3+1 dimensional space.


Once again, I don't see how these problems apply to 4D CA. His 
arguments are extremely physics-centric ones having to do with what 
happens when you tweak quantum-mechanical or string-theory models of 
our particular universe.

Well here's the thing: the onus is on you to produce a physical theory
that describes some subset of the computations of a 4D CA
and which can explain (or posit or hypothesize if you will) properties 
of  observers (in that kind of world), and properties of the space
that they observe, which would be self-consistent and descriptive of 
interesting, constrained, lifelike behaviour and interaction
with environment and sentient representation of environment aspects etc.

My guess is that that physical theory (and that subset of computations 
or computed states) would end up being proven to
be essentially equivalent to the physical theory of  OUR universe. In 
other words, I believe in parochialism, because
I believe everywhere else is a devilish, chaotic place.

You can't just say "there could be life and sentience in this
(arbitrarily weird) set of constraints" and then not bother to
define what you mean by life and sentience. They aren't self-explanatory 
concepts. Our definitions of them only apply
within universes that behave at least roughly as ours does.

You'll have to come up with the generalized criteria for generalized N-D 
SAS's (what would constitute one)
before saying they could exist.

Eric





Re: Tegmark is too physics-centric

2004-01-17 Thread CMR
 I agree that this is what Tegmark is trying to say.  If we look at it
 in terms of measure, there are (broadly speaking) two ways for creatures
 to exist: artificial or natural.  By artificial I mean that there could
 be some incredibly complex combination of laws and initial conditions
 built into the simulated universe so that the creature's existence was in
 effect pre-ordained.  (If we ever build a simulation containing conscious
 entities, our first attempts will almost certainly be of this type,
 where we have carefully crafted the program to create consciousness.)
 By natural I mean that we could have simple laws of physics and initial

I agree that consciousness (assuming our definitions of same
correspond) would likely result from a complex combination of laws and
initial conditions built into the simulated universe, but I submit that it
is just as likely to be an incidental emergent phenomenon of an ever more
complex, interconnected, distributed computational network as the result of
any planned process.

Would we even recognize such an entity, or it us? Possibly, but Wolfram
alludes to the challenges of perceiving the intelligence of beings whose
ecology operates on spatial and/or temporal scales foreign to our sensory
receptivity.

Of course, there's always a risk in such arguments that we may be falling
victim to parochialism, thinking that our own way of life is the only
one possible.  It may be that there are some possible life forms that
exist in a very different mode than we have imagined, in a universe with
different dimensionality, or perhaps one where dimensionality doesn't
even make sense.  But I think overall Tegmark does a good job in avoiding
at least the most obvious flaws of parochialism and anthropomorphism.

Indeed. The constraints to, and requirements for, terrestrial life have had
to be revised and extended of late, given thermophiles and the like. Though
they obviously share our dimensional requisites, they do serve to highlight
the risk of prematurely pronouncing the facts of life.

CMR