Re: Peculiarities of our universe

2004-01-10 Thread Saibal Mitra
- Original Message -
From: Hal Finney [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, January 10, 2004 12:24 AM
Subject: Peculiarities of our universe


 There are a couple of peculiarities of our universe which it would be
 nice if the All-Universe Hypothesis (AUH) could explain, or at least
 shed light on.

 One is the apparent paucity of life and intelligence in our universe.
 This was first expressed as the Fermi Paradox, i.e., where are the aliens?
 As our understanding of technological possibility has grown the problem
 has become even more acute.  It seems likely that our descendants
 will engage in tremendous cosmic engineering projects in order to take
 control of the very wasteful natural processes occurring throughout space.
 We don't see any evidence of that.  Similarly, proposals for von Neumann
 self reproducing machines that could spread throughout the cosmos at a
 large fraction of the speed of light appear to be almost within reach
 via nanotechnology.  Again, we don't see anything like that.

 So why is it that we live in a universe that has almost no observers?
 Wouldn't it be more likely on anthropic grounds to live in a universe
 that had a vast number of observers?

Assuming the validity of the AP, we should expect to find ourselves in the
most typical of circumstances. We should thus expect that most observers are
similar to us. So, most observers are not part of a very advanced
civilization. Maybe, as I wrote in the other posting, this is because those
civilizations consist of only one individual. This should follow from the
AUH, but it is not very clear how. If most observers are like us, then we
shouldn't expect to find much evidence of intelligent life, even if there
are hundreds of civilizations in our galaxy now.

Maybe the fact that we are in a situation in which we don't have much
control over our own bodies is a clue. This should again be a typical
situation observers find themselves in. They are on the verge of
understanding how the universe works, but they don't have a cure for deadly
diseases or old age. They don't have the capacity to design and build
observers like themselves. It should thus be the case that the moment they
do develop such capabilities, their numbers should decline dramatically.
This should be a universal property of civilizations evolving in a universe
with large measure.


 The second peculiarity is the seemingly narrow range of physical laws
 which could allow for our form of life to exist.  Tegmark writes about
 this at http://www.hep.upenn.edu/~max/toe.html.  He shows a chart of
 two physical constants and how if they had departed from their observed
 values by even a tiny percentage, life would be impossible.  In the
 full paper linked from there he offers many more examples of physical
 parameters which are fine-tuned for life.

 So why is this?  Why does it turn out that our form of life (or perhaps,
 any form of life) can exist for only a tiny range of variation?
 Why didn't it turn out that you could change many parameters a great
 deal and still have life form?

 I don't see anything a priori in the AUH that would have led to this
 prediction.  Now, it may just be one of those things that happens to
 happen, a fundamental mathematical property like the distribution of
 primes or the absence of odd perfect numbers.  Self-aware subsystems
 just mathematically turn out to only be possible in a very tiny region
 of parameter space.


 Now, you might be able to make the argument that tiny is not well
 defined, that there is no natural length scale for judging parameter
 ranges.  Tegmark could as easily have zoomed in on the appropriate region
 of his graph and shown a huge, enormous area where parameters could be
 moved around and life would still work.

 However I think there is a more natural way to put the question, which is,
 what fraction of computer programs would lead to simulated universes that
 include observers?  And here, if we follow Tegmark's ideas, the answer
 appears to be that it is a very small fraction.  (Of course, you still
 need to use your own judgement to decide whether that is tiny or not.)

I am not sure this is correct, but I do agree that there is a problem here.
Tegmark looks at what would happen if you change one or more parameters in
the standard model and then concludes that the parameter space for life is
very tiny. Most physicists believe that a fundamental theory with only a few
parameters, e.g. superstring theory, could be behind the standard model. The
standard model is what you get if you ''integrate out'' the as-yet-unknown
physics at the smallest length scales. Given that the fundamental
theory is supposed to have only a few parameters, it should have a much
larger measure than generic versions of the standard model. So, the problem
is actually worse: Why does life only emerge in a tiny fraction of programs
describing versions of the standard model? And of those programs that do
give

Re: Is the universe computable?

2004-01-10 Thread John M
Erick, thanks for your comments on my exchange with GeorgeQ.

Although I do not claim to have understood (digested?) all of your post,
I feel it may be in my line of thinking (pardon me the offense). I just
attach fewer connotations to 'time'-related phrases, as may be obvious from
below.

Over the years I have tried in several attempts to voice on this (and
other) lists that all our phys-math considerations are secondary, coming
from (and by) human understanding of something with/by human logic.
I see no evidence that existence (nature? everything) would follow
our approval - 'our' as part/product of it. Physical law is a model of
our thinking (I may be crucified for this) and fetishizing our understanding
is IMO narrow. Even the 'elephant/rabbit' excursions start from some
'random' arrangement of photons, which is 'our' interpretation of something
which may be interpreted quite differently by different mindsets.

This is the reason - I think - why GeorgeQ found my ideas mystical. In my
vocabulary mystical is what has not (yet?) been explained. I work with all
unknown/unknowables, trying to make sense of the so far 'undiscovered'
within the 'boundaries' of our mind. I call it my scientific agnosticism.
Time and space are our crutches (boundaries? see below).
Russell St. scolded me several times for my 'non-mathematical' stance as
improper, vague, undefinable etc. - he is right, I don't 'force' my (our)
understanding onto things beyond it. Equationally or not.

I appreciate your remark:
 as later will be mentioned, boviously perception play a big role in this
 value, is your definition of the univers from the perspective of a human
 being, being that self within it's self, as projected outwards from a finite
 continuum into a supposedly infinite continuum?
(whether 'boviously' is a typo for obviously, or a hint to the early style
on this list of calling adverse ideas bovine excrement).

Somebody speculated on the way of 'thinking' on Venus, where the clouds
prevent any info about the extra-Venusian world (cosmology, philosophy,
etc.). We are sitting closed in by a mental cloud of our understanding,
i.e. the boundaries of our mindset (epistemically steadily widening, however).

I believe 'computation' here goes beyond 'binary calculations' as well
as (maybe) temporal considerations. Life I consider differently; IMO
it is some natural function we overappreciate because we do it (cf. the
biology etc. in our reductionistic science system). 'Consciousness' I call
the acknowledgement (by anything) of, and response to (incl. storage of),
information - absolutely not restricted to functions we would deem 'life'.
So I have no problem with 'universes' (not?) containing 'live' products.
We muster a reductionistic mindset: using limited models of
observables, cut into (selected) boundaries in a world of (holistically)
interconnected interactions way beyond our cognitive inventory.

Regards

John Mikes



- Original Message -
From: Erick Krist [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; John M [EMAIL PROTECTED]
Sent: Tuesday, January 06, 2004 7:33 PM
Subject: Re: Is the universe computable?


  to your series of questions I would like to add one as first:
  What do you call universe?

 i think this question is most temporally cognitively perceptual in nature.
 as explained:

  as long as we do not make this identification, it is futile to
  speculate about its computability/computed state.

 as later will be mentioned, boviously perception play a big role in this
 value, is your definition of the univers from the perspective of a human
 being, being that self within it's self, as projected outwards from a finite
 continuum into a supposedly infinite continuum?
 or are you looking at the univers from the point of view of a rock which
 sits blindly in time without temporant perceptual motion?
 obviously there are many different perceptual universes, and any of them
 could be philosophically perceived by the mind, therefore any of them would
 be physically correct on a perceptual model of a temporant cyclical
 universe.

 we have to keep in mind that time itself may only be a function of the
 combined perceptual receptions of our own internally functioning senses,
 biologically, simultaneously, now.

  I see not too much value in assuming infinite memories
  and infinite time of computation, that may lead to a game

 and may i beg to ask: is a computer supposed, under any assumption, to
 compute a continuous value of infinity using binary logic as its base
 computational rate?

 -calling computation the object to be computed.

 this is quite naturally how the function of time works in the first place.
 time is the measure of the systematic computational functions of an
 internal system as measured by the temporant singularity of the external
 structures of that internal system as an alternatively functional singular
 temporant system of its own. .: the nature of a computationally temporant
 universe involves the notion of a

Re: Peculiarities of our universe

2004-01-10 Thread Eric Hawthorne


Hal Finney wrote:

One is the apparent paucity of life and intelligence in our universe.
This was first expressed as the Fermi Paradox, i.e., where are the aliens?
As our understanding of technological possibility has grown the problem
has become even more acute.  It seems likely that our descendants
will engage in tremendous cosmic engineering projects in order to take
control of the very wasteful natural processes occurring throughout space.
We don't see any evidence of that.  Similarly, proposals for von Neumann
self reproducing machines that could spread throughout the cosmos at a
large fraction of the speed of light appear to be almost within reach
via nanotechnology.  Again, we don't see anything like that.
 

So why is it that we live in a universe that has almost no observers?
Wouldn't it be more likely on anthropic grounds to live in a universe
that had a vast number of observers?
Could be that
1. It's extremely rare to have a window for biological evolution to our
level. (I highly recommend the well-written, basic-level but accurate and
comprehensive new book Origins of Existence by Fred Adams, ISBN
0-7432-1262-2, which gives a complete summary of what had to happen for
our emergence, and all the many ways things could have gone differently,
very few of which would lead to life anything like we know it.)

2. We're a distinguished member of the 'successful evolvers in the first
available window-of-opportunity' club.

3. If you believe 1 and 2, then note that we ourselves have not yet made
galactically observable construction projects or self-replicating
space-probes. Sure, we talk, but we haven't put our money where our mouth
is yet. The (few, lucky to have emerged unscathed) other intelligent
lifeforms in our observable universe may also not have done this within
our lightcone (space-time horizon) of observability yet.





Re: Why no white talking rabbits?

2004-01-10 Thread Jesse Mazer
Eric Hawthorne wrote:

So the answer to *why* it is true that our universe conforms to simple
regularities and produces complex yet ordered systems governed (at some
levels) by simple rules is: because that's the only kind of universe that
an emerged observer could have emerged in, so that's the only kind of
universe that an emerged observer ever will observe.
That's not true--you're ignoring the essence of the white rabbit problem! A 
universe which follows simple rules compatible with the existence of 
observers in some places, but violates them in ways that won't be harmful to 
observers (like my seeing the wrong distribution of photons in the 
double-slit experiment, but the particles in my body still obeying the 
'correct' laws of quantum mechanics) is by definition just as compatible 
with the existence of observers as our universe is. So you can't just use 
the anthropic principle to explain why we don't find ourselves in such a 
universe, assuming you believe such universes exist somewhere out there in 
the multiverse.

Jesse




Maximization of the gradient of order as a generic constraint?

2004-01-10 Thread Georges Quenot
In a previous post in reply to Hal Finney, I suggested the use
of a particular case of additional conditions to the hypothetical
set of equations that would rule our universe. This is an attempt
to clarify it while taking it out of the computational perspective,
with which it has nothing to do.

Considering the kinds of sets of equations we have figured out up to now,
completely specifying our universe from them seems to require
two additional things:

1) The specification of boundary conditions (or any other equivalent
   additional constraint).
2) The selection of a set of global parameters.

My suggestion is that for 1), instead of specifying initial
conditions (which might be problematic for a number of reasons),
one could use another form of additional high-level constraint,
namely that the solution universe should be as much as possible
more ordered on one side than on the other. Of course, this relies
on the possibility of giving this a sound sense, which implies being
able to find a canonical way to tell whether one solution of the set
of equations is more ordered on one side than on the other, compared
with another solution.

This is a way to narrow down the set of solutions that offers
several advantages:

a) It removes the asymmetry in the choice of initial versus
   final (or any other combination of) conditions.
b) It is consistent with boundaryless universes as proposed by
   Stephen Hawking for instance.
c) It is able to make the flow of time appear as an emergent
   property instead of being postulated and built upon.
d) This kind of condition is well suited to selecting those
   universes in which SASs have a chance to emerge.

This condition alone does not seem enough to define a unique
mathematical structure, but there might be a small number of
ways in which the remaining symmetries could be
canonically broken.


It might well be that this additional constraint can also be
used for selecting the appropriate set of global parameters for
the set of equations considered in 2). It does not seem
counter-intuitive that the sets of global parameters that
allow for the maximization of the gradient of order among all
possible solutions, considering all possible values for the global
parameters, are precisely those for which SASs emerge and
are therefore those we see in our universe: universes not able to
generate complex enough substructures to be self-aware would
probably equally fail to exhibit large gradients of order, and
vice versa.

The hypothesis of the maximization of the gradient of order even seems
Popper-falsifiable. At least one prediction can be made:

Given the set of equations that describes our universe and the
corresponding set of global parameters, if we can find a canonical
way to compare the relative global gradient of order within the
universes that satisfy this set of equations:

1) It could be possible to determine the subset of universes
   that maximize the gradient for each set of global parameters
   (comparing all possible universes for a given set of global
   parameters), these being called optimal for this set of
   global parameters.

2) It could be possible to determine the sets of global parameters
   that maximize the gradient in an absolute way (comparing
   optimal universes for all possible sets of global parameters).

The prediction is that the set of global parameters that we observe
is one of those that maximize the gradient of order within the
corresponding optimal universes.
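
Schematically, the two-level procedure might be sketched as follows
(every name here is a hypothetical placeholder: 'solutions_of' stands for
the universes satisfying the equations under given global parameters, and
'order_gradient' for the yet-to-be-found canonical measure of order):

# Illustrative sketch only; not a worked-out proposal.
def optimal_universes(params, solutions_of, order_gradient):
    # Step 1: for one set of global parameters, keep the universes
    # that maximize the gradient of order.
    candidates = solutions_of(params)
    best = max(order_gradient(u) for u in candidates)
    return [u for u in candidates if order_gradient(u) == best]

def optimal_parameter_sets(parameter_sets, solutions_of, order_gradient):
    # Step 2: across all parameter sets, keep those whose optimal
    # universes achieve the largest gradient overall.
    score = {p: order_gradient(optimal_universes(p, solutions_of,
                                                 order_gradient)[0])
             for p in parameter_sets}
    top = max(score.values())
    return [p for p, s in score.items() if s == top]

# Toy stand-ins, just to make the skeleton executable:
solutions_of = lambda p: [(p, seed) for seed in range(10)]
order_gradient = lambda u: (u[0] * 7 + u[1] * 3) % 11  # arbitrary toy score
print(optimal_parameter_sets(range(5), solutions_of, order_gradient))

Finding a canonical 'order_gradient' is of course the whole difficulty.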

A prediction from a weaker version of 2) would be that the set
of global parameters that we observe must be consistent with any
constraints we can derive from the maximization constraint.

It might be possible to solve problem 2) (finding the optimal
sets of global parameters, or some constraints on them) from high-level
considerations without being able to solve problem 1)
(finding the corresponding optimal universes).

Maybe the constraint could also be used at a third level, if it
can remain consistent, as a means to select the appropriate set of
equations.

Finally, the hypothesis of the maximization of the gradient of
order within universes could offer these additional advantages:

e) It does not involve any arbitrary parameter.
f) It might help avoid requiring that a choice be made arbitrarily
   within an infinite set.

Does all of this make sense? Has it already been considered?

Georges Quénot.



Re: Maximization of the gradient of order as a generic constraint?

2004-01-10 Thread Hal Finney
Georges Quenot writes:
 Considering the kinds of sets of equations we have figured out up to now,
 completely specifying our universe from them seems to require
 two additional things:

 1) The specification of boundary conditions (or any other equivalent
    additional constraint).
 2) The selection of a set of global parameters.

 My suggestion is that for 1), instead of specifying initial
 conditions (which might be problematic for a number of reasons),
 one could use another form of additional high-level constraint,
 namely that the solution universe should be as much as possible
 more ordered on one side than on the other. Of course, this relies
 on the possibility of giving this a sound sense, which implies being
 able to find a canonical way to tell whether one solution of the set
 of equations is more ordered on one side than on the other, compared
 with another solution.


I think this is a valid approach, but I would put it into a larger
perspective.  The program you describe, if we were to actually implement
it, would have these parts: It has a certain set of laws of physics; it
has a certain order-measuring function (perhaps equivalent to what we know
as entropy); and it has a goal of finding conditions which maximize the
difference in this function's values from one side to the other of some
data structure that it is modifying or creating, and which represents
the universe.  It would not be particularly difficult to implement a
toy version of such a program based on some simple laws of physics, and
perhaps as you suggest our own universe might be the result of an instance
of such a program which is not all that much larger or more complex.
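
For instance, a minimal sketch of such a toy version might look like the
following (my illustration only, with assumptions throughout: an elementary
cellular automaton as the 'laws of physics', block entropy as the
order-measuring function, and random search as the optimizer):

import random
from collections import Counter
from math import log2

RULE = 110            # the toy 'laws of physics': elementary CA rule 110
WIDTH, STEPS = 64, 64

def step(cells):
    # One synchronous update of the elementary cellular automaton.
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1 for i in range(n)]

def block_entropy(cells, k=3):
    # Shannon entropy of length-k blocks; low entropy ~ high order.
    blocks = [tuple(cells[i:i + k]) for i in range(len(cells) - k + 1)]
    counts = Counter(blocks)
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

def order_gradient(initial):
    # Evolve the 'universe', then compare the order of its two halves;
    # positive means the left half ended up more ordered than the right.
    cells = initial
    for _ in range(STEPS):
        cells = step(cells)
    left, right = cells[:WIDTH // 2], cells[WIDTH // 2:]
    return block_entropy(right) - block_entropy(left)

# Crude optimization: random search over initial conditions for the
# candidate universe with the largest order gradient.
best_gradient = float('-inf')
for _ in range(500):
    candidate = [random.randint(0, 1) for _ in range(WIDTH)]
    best_gradient = max(best_gradient, order_gradient(candidate))
print("best order gradient found:", round(best_gradient, 3))

Replacing the random search with a real optimizer, or block entropy with a
better order measure, would not change the shape of the program.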

In the context of the All Universe Principle as interpreted by
Schmidhuber, all programs exist, and all the universes that they generate
exist.  This program that you describe is one of them, and the universe
that is thus generated is therefore part of the multiverse.

So to first order, there is nothing particularly surprising or
problematical in envisioning programs like this as contributing to the
multiverse, along with the perhaps more naively obvious programs which
perform sequential simulation from some initial conditions.  All programs
exist, including ones which create universes in even more strange or
surprising ways than these.

By the way, Wolfram's book (wolframscience.com) does consider some
non-sequential simulations as models for simple 1- and 2-dimensional
universes.  These are what he calls 'Systems Based on Constraints',
discussed in his Chapter 5.

Where I think your idea is especially interesting is the possibility that
the program which creates our universe via this kind of optimization
technique (maximizing the difference in complexity) might be much
shorter than a more conventional program which creates our universe
via specifying initial conditions.  Shorter programs are considered
to have larger measure in the Schmidhuber model, hence it is of great
importance to discover the shortest program which generates our universe,
and if optimization rather than sequential simulation does lead to a
much shorter program, that means our universe has much higher measure
than we might have thought.

However, I don't think we can evaluate this possibility in a meaningful
way until we have a better understanding of the physics of our own
universe.  I am somewhat skeptical that this particular optimization
principle is going to work, because our universe's disorder gradient is
dominated by the Big Bang's decay to heat death, and these cosmological
phenomena don't necessarily seem to require the kinds of atomic and
temporal structures that lead to observers.  If you look at Tegmark's
paper http://www.hep.upenn.edu/~max/toe.html which lists a number of the
physical-constant coincidences necessary for life, not all of them would
have cosmological importance and change the order-to-disorder gradient
of the universe.


 It might well be that this additional constraint can also be
 used for selecting the appropriate set of global parameters for
 the set of equations considered in 2). It does not seem
 counter-intuitive that the sets of global parameters that
 allow for the maximization of the gradient of order among all
 possible solutions, considering all possible values for the global
 parameters, are precisely those for which SASs emerge and
 are therefore those we see in our universe: universes not able to
 generate complex enough substructures to be self-aware would
 probably equally fail to exhibit large gradients of order, and
 vice versa.

Certainly an interesting suggestion.  Again, when we look at the larger
view of all possible programs, we have optimization programs which
have some parameters fixed; and optimization programs which allow the
parameters to vary as part of the optimization process.  The latter
programs would tend to be smaller since they don't have to store the
value of the fixed parameters; but on the other hand the need to allow
for varying the 

Re: Why no white talking rabbits?

2004-01-10 Thread Jesse Mazer
Hal Finney wrote:
Jesse Mazer writes:
 Hal Finney wrote:
 However, I prefer a model in which what we consider equally likely is
 not patterns of matter, but the laws of physics and initial conditions
 which generate a given universe.  In this model, universes with simple
 laws are far more likely than universes with complex ones.

 Why? If you consider each possible distinct Turing machine program to be
 equally likely, then as I said before, for any finite complexity bound
 there will be only a finite number of programs with less complexity than
 that, and an infinite number with greater complexity, so if each program
 had equal measure we should expect the laws of nature are always more
 complex than any possible finite rule we can think of. If you believe in
 putting a measure on universes in the first place (instead of a measure
 on first-person experiences, which I prefer), then for your idea to work
 the measure would need to be biased towards smaller programs/rules, like
 the universal prior or the speed prior that have been discussed on this
 list by Juergen Schmidhuber and Russell Standish (I think you were around
 for these discussions, but if not see
 http://www.idsia.ch/~juergen/computeruniverse.html and
 http://parallel.hpc.unsw.edu.au/rks/docs/occam/occam.html for more
 details)

No doubt I am reiterating our earlier discussion, but I can't easily find
it right now.  I claim that the universal measure is equivalent to the
measure I described, where all programs are equally likely.
Feed a UTM an infinite-length random bit string as its program tape.
It will execute only a prefix of that bit string.  Let L be the length
of that prefix.  The remainder of the bits are irrelevant, as the UTM
never gets to them.  Therefore all infinite-length bit strings which
start with that L-bit prefix represent the same (L-bit) program and will
produce precisely the same UTM behavior.
Therefore a UTM running a program chosen at random will execute a
program of length L bits with probability 1/2^L.  Executing a random
bit string on a UTM automatically leads to the universal distribution.
Simpler programs are inherently more likely, QED.
I don't follow this argument (but I'm not very well-versed in computational 
theory)--why would a UTM operating on an infinite-length program tape only 
execute a finite number of bits? If the UTM doesn't halt, couldn't it 
eventually get to every single bit?
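
A toy sketch may help make Hal's sampling argument concrete (assuming, as
such arguments implicitly do, a prefix-free program set, so that no valid
program is a prefix of another; the bit strings and 'universes' below are
purely illustrative):

import random

# A hypothetical prefix-free program set: no codeword prefixes another.
PROGRAMS = {"0": "universe A", "10": "universe B",
            "110": "universe C", "111": "universe D"}

def run_random_tape(tape_len=16):
    # Draw a random tape; the 'program executed' is the unique matching
    # prefix, so the machine never reads past it.
    tape = "".join(random.choice("01") for _ in range(tape_len))
    for length in range(1, tape_len + 1):
        if tape[:length] in PROGRAMS:
            return tape[:length]

TRIALS = 100_000
tallies = {p: 0 for p in PROGRAMS}
for _ in range(TRIALS):
    tallies[run_random_tape()] += 1

for program in sorted(PROGRAMS):
    print(f"program {program!r}: observed {tallies[program] / TRIALS:.3f},"
          f" predicted {2.0 ** -len(program):.3f}")

On this picture a length-L program is executed with probability 2^-L,
which is Hal's claim; Jesse's question is whether a UTM that never halts
really reads only such a finite prefix.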

 If the everything that can exist does exist idea is true, then every
 possible universe is in a sense both an outer universe (an independent
 Platonic object) and an inner universe (a simulation in some other
 logically possible universe).
This is true.  In fact, this may mean that it is meaningless to ask
whether we are an inner or outer universe.  We are both.  However it
might make sense to ask what percentage of our measure is inner vs outer,
and as you point out to consider whether second-order simulations could
add significantly to the measure of a universe.
What do you mean by 'add significantly to the measure of a universe', if
you're saying that all programs have equal measure?

 If you want a measure on universes, it's
 possible that universes which have lots of simulated copies running in
 high-measure universes will themselves tend to have higher measure;
 perhaps you could bootstrap the global measure this way...but this would
 require an answer to the question I keep mentioning from the Chalmers
 paper, namely deciding what it means for one simulation to contain
 another. Without an answer to this, we can't really say that a computer
 running a simulation of a universe with particular laws and initial
 conditions is contributing more to the measure of that possible universe
 than the random motions of molecules in a rock are contributing to its
 measure, since both can be seen as isomorphic to the events of that
 universe with the right mapping.

We have had some discussion of the implementation problem on this list,
around June or July, 1999, with the thread title implementations.
I would say the problem is even worse, in a way, in that we not only
can't tell when one universe simulates another; we also can't be certain
(in the same way) whether a given program produces a given universe.
So on its face, this inability undercuts the entire Schmidhuberian
proposal of identifying universes with programs.
However I believe we have discussed on this list an elegant way to
solve both of these problems, so that we can in fact tell whether a
program creates a universe, and whether a second universe simulates the
first universe.  Basically you look at the Kolmogorov complexity of a
mapping between the computational system in question and some canonical
representation of the universe.  I don't have time to write more now
but I might be able to discuss this in more detail later.
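
As a rough illustration of that idea (my sketch only, not the worked-out
proposal Hal refers to; compressed size via zlib is a crude, computable
stand-in for the uncomputable Kolmogorov complexity):

import random
import zlib

def proxy_complexity(data: bytes) -> int:
    # Compressed length as a stand-in for Kolmogorov complexity.
    return len(zlib.compress(data, 9))

def mapping_cost(system_states, universe_states):
    # Complexity of the explicit state-to-state dictionary needed to read
    # the system as an implementation of the universe.
    table = "".join(f"{s}->{u};" for s, u in zip(system_states,
                                                 universe_states))
    return proxy_complexity(table.encode())

universe = [str(i) for i in range(200)]
# A lawful simulation tracks the universe by a simple rule (s = u + 1),
# so its mapping table is regular and compresses well.
simulation = [str(i + 1) for i in range(200)]
# A 'rock': random states mappable onto the universe only by brute force,
# so its mapping table is incompressible noise.
rock = [str(random.randrange(10 ** 6)) for _ in range(200)]

print("simulation mapping cost:", mapping_cost(simulation, universe))
print("rock mapping cost:      ", mapping_cost(rock, universe))

A low mapping cost relative to the system's size would count as a genuine
implementation; the rock fails because the mapping itself carries all the
information.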
Thanks for the pointer to the implementations thread, I found it in the 
archives here: