Re: Many worlds theory of immortality

2005-04-15 Thread Hal Finney
Jesse Mazer writes:
 Would you apply the same logic to copying a mind within a single universe 
 that you would to the splitting of worlds in the MWI? If so, consider the 
 thought-experiment I suggested in my post at 
 http://www.escribe.com/science/theory/m4805.html --

Generally, I don't think the same logic applies to copying a mind in a
single universe as to the splitting of worlds in the MWI.  Copying a mind
will double its measure, while splitting one leaves it alone.  That is a
significant practical and philosophical difference.

Practically, copying a mind leaves it with half as many resources per
resulting mind, while splitting it leaves it with the same resources
per mind.  This means that you might take very different practical
actions if you knew that your mind was going to be copied than if you
knew it was about to split.

Philosophically, the measure of the observer-moments associated with a
copied mind is twice as great as the measure of the observer-moments
associated with a split one.  Obviously 2 is not equal to 1.  This puts
the burden of proof on those who would claim that this difference is
philosophically irrelevant in considering issues of consciousness.

Hal Finney



RE: many worlds theory of immortality

2005-04-16 Thread Hal Finney
I agree with Brent's comment:

 I essentially agree.  If we say "2+2=5" then we have failed to describe
 anything because we have contradicted our own semantics.  Logic is not a
 constraint on the world, but only on our use of language to describe it.  But
 that doesn't mean that any world for which we make up a description can exist.
 Logic doesn't constrain reality, either by prohibiting it or by making it
 possible.

It's not that logically impossible worlds don't (or can't) exist; it's
that if we use a logical contradiction, we have failed to describe
a world.

Consider a specific example that captures some of the sense of the
proposed logically impossible world where an electron is omniscient.
Consider a 2-D cellular automaton world like Conway's Life.  Every cell
is either occupied or unoccupied.  It has one of two states.  Now let
us consider such a world in which one cell holds much more than one
bit of information.  Suppose it holds a million bits.  This one cell
is tiny like an electron; yet it holds a great deal of information,
like an omniscient entity.

This description is logically contradictory.  A system with only two
states cannot hold a million bits of information.  That is an elementary
theorem of mathematical information theory.
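That elementary theorem can be made concrete in a couple of lines (a sketch; the cell and bit counts are just the ones from the example above):

```python
import math

# The information capacity of a system is log2(its number of
# distinguishable states).
def capacity_bits(num_states: int) -> float:
    return math.log2(num_states)

# A Life cell is occupied or unoccupied: two states, hence exactly one bit.
print(capacity_bits(2))  # 1.0

# Holding a million bits would require 2**1_000_000 distinguishable states,
# so no two-state cell can do it.
states_needed = 2 ** 1_000_000
print(capacity_bits(states_needed))  # 1000000.0
```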

The problem is not specific to a world.  The problem is with the concept
that a two state system can hold a million bits.  That concept is
inherently contradictory.  That makes it meaningless.  Trying to apply
it to a world or to anything else is going to produce meaningless results.

Rather than say that such a world cannot exist because it is logically
contradictory, it makes more sense to say that logically contradictory
descriptions fail to describe worlds, because they fail to describe
anything in a meaningful way.

Hal Finney



RE: many worlds theory of immortality

2005-04-17 Thread Hal Ruhl
I know of no reason to assume that the various branches of MWI run 
concurrently.

If they do not run concurrently then the only way I see for immortality is 
to be in a branch where immortality is already a possibility inherent in 
that branch.

Hal Ruhl



Re: Free Will Theorem

2005-04-18 Thread Hal Finney
On Apr 11, 2005, at 11:11 PM, Russell Standish wrote:
 I'm dealing with these questions in an artificial life system - Tierra
 to be precise. I have compared the original Tierra code, with one in
 which the random no. generator is replaced with a true random
 no. generator called HAVEGE, and another simulation in which the RNG
 is replaced with a cryptographically secure RNG called ISAAC. The
 results to date (and this _is_ work in progress) is that there is a
 distinct difference between the original Tierra PRNG, and the other
 two generators, but that there is little difference between HAVEGE and
 ISAAC. This seems to indicate that algorithmic randomness can be good
 enough to fool learning algorithms.

It definitely should be.  At least certain types of cryptographic random
number generators are reducible to factoring.  That means that if any
program can distinguish the output of the crypto RNG from the output
of a true RNG, you could factor a large number such as an RSA modulus.
This would be an important and completely unexpected cryptographic result.

Assuming that factoring is truly intractable, crypto RNGs are as good
as real ones, and deterministic universes are indistinguishable from
nondeterministic ones.
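The cleanest example of a generator whose security reduces to factoring is Blum Blum Shub; a toy sketch (the primes below are illustrative only, far too small for real use, and note that ISAAC, mentioned above, is a different, non-factoring-based design):

```python
# Toy Blum Blum Shub generator: x_{i+1} = x_i^2 mod n; output the low bit
# of each state.  Distinguishing its output from true coin flips is
# provably as hard as factoring n, for n = p*q with p, q prime and
# p ≡ q ≡ 3 (mod 4).
def bbs_bits(seed: int, n: int, count: int) -> list[int]:
    x = seed
    out = []
    for _ in range(count):
        x = (x * x) % n        # square-and-reduce step
        out.append(x & 1)      # emit the least-significant bit
    return out

p, q = 499, 547                # both ≡ 3 (mod 4); toy values
n = p * q
stream = bbs_bits(seed=159201, n=n, count=16)
print(stream)                  # deterministic, yet unpredictable for large n
```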

Hal



Re: Implications of MWI

2005-04-27 Thread Hal Finney
Mark Fancey writes:
 Did accepting and understanding the MWI drastically alter your
 philosophical worldview? If so, how?

I don't know if I would describe it as a drastic alteration, but I do
tend to think of my actions as provoking a continuum of results rather
than a single result.  For example, sometimes when I drive too fast and
nothing happens, I think about how I have fractionally killed people by
my actions.  Somewhere in the multiverse, kids ran out in front of my
car and I was not able to stop due to my speed.  Even though I didn't see
it, I killed those people just as surely as a pilot who drops a bomb and
doesn't stick around to see the results.  My decision to take that risky
action reduced the measure of those people's existence in the multiverse.

This has made me a little more careful; I no longer think back to all
the times when nothing happened and assume that the same will hold true
in the future.  I know that even though I SAW nothing happen, bad things
did happen as a result of my actions.  They were out of sight but they
happened anyway.  My actions have consequences even beyond those that
I see.

Another way it has influenced my thinking is about future indeterminacy.
I now believe, for example, that there is no meaning to certain questions
that people ask about future conditions.  For example, who will be the
next president?  I don't think this question is meaningful.  Many people
will be the next president.  My consciousness spans multiple universes
where different people will be president.

Any question like this which presupposes only one future has a similar
problem.  Another one we often hear is, are we in a speculative bubble
in real estate (or stocks, or whatever).  That's a meaningless question.
Bubbles can only be defined retrospectively.  If prices fall, then we
were in a bubble; if they don't, then we weren't.  But both futures exist.
I live in worlds where we are in a bubble and worlds where we are not in
a bubble.  The question has no answer.

Hal Finney



Re: Memory erasure

2005-05-01 Thread Hal Finney
Saibal Mitra writes:
 Although (quantum) suicide experiments can never be successful, memory
 erasure could still work. Suppose you are an artificially intelligent
 machine and you can erase any part of your memory. One day you receive
 news that an asteroid is on its way to earth which will completely
 destroy the planet. If you just erase this from your memory and check
 the news again you will very likely not hear about this anymore.

That's a cool idea.  Once you erase your memory, your consciousness spans
(i.e. is instantiated in) all universes where the event either happens or
does not. If it is an unlikely event, then the fraction of the multiverse
where that event happens is a small fraction of the set of universes which
share your consciousness. So the chance that you will then discover the
bad news is similarly small.

Believers in quantum suicide may for similar reasons argue that memory
erasure is impossible. There is a universe where your memory erasure
attempt fails, and your conscious train of thought proceeds unchanged in
those universes. Universes where you do erase your memory terminate your
train of thought and, from a narrow perspective, kill your current-moment
conscious mind. Hence your mind will proceed only down the pathways
where its consciousness continues to exist, and from your conscious
perspective it will appear that the memory erasure never works.

Arguing against this is that every night you fall asleep, a similar
loss of consciousness occurs (often with erasure of the memory of the
last few thoughts before sleep). This theory would predict that each night you
only experience universes where, for whatever reason, you never again
lose consciousness.

You can turn this whole chain of logic around and make it an argument
against QS. Sleep proves that loss of consciousness is possible,
and that memory erasure is possible. Imagine memory erasure becoming
so complete that it erases your entire life. Is that possible? If so,
isn't it essentially the same as suicide? Or if it's not possible, where
is the dividing line between the amount of memory erasure that is and
is not possible?

Hal Finney



Re: Many worlds theory of immortality

2005-05-04 Thread Hal Finney
I would add another point with regard to observer-moments and continuity:
probably there is no unique "next" or "previous" relationship among
observer-moments.

The case of non-unique next observer-moments is uncontroversial, as it
relates to the universe splitting predicted by the MWI or the analogous
effect in more general multiverse theories.  Non-unique previous
observer-moments can probably happen as well due to the finite precision
of memory.  Any time information is forgotten we would have mental states
merge.  This requires a general multiverse theory, or at least a model
of mental states that span MWI branches; the conventional MWI does not
merge branches which have diverged through irreversible measurements.

In this view, then, we can chain observer-moments together to form
observer-paths, or more simply, observers.  But the chains are non-unique;
observers can intersect (share observer-moments and then diverge), or
even braid together in interesting ways.  That means that there is no
unique sense in which you are a particular observer, at any moment;
rather, you can be thought of as any of the observers who share your
current observer-moment.

Hal Finney



Re: Everything Physical is Based on Consciousness

2005-05-06 Thread Hal Finney
Bruno Marchal writes:
 The problem is that from the first person point of view, the arbitrary
 delays, introduced by the UD, cannot be taken into account, so that
 big programs cannot be evacuated so easily. This does not mean little
 programs, or some little programs, does not win in the limit, just that
 something more must be said.  Sorry for not explaining more because I
 have a lot of work to finish (I'm still not connected at home, but in this
 case I'm not sure I will find the time to go home...).  See you next week.

As I understand the explanation, what happens is that as the UD generates
programs, it inherently and unintentionally generates multiple copies of
each program. The reason is that each program has only a finite size.
If we think of the program as written in some kind of programming
language, only a finite number of the characters will be used.
Any characters beyond this ending point are never executed.

This means that as the UD generates all character strings and begins
to execute them, a program which is n bits long gets created any time
the UD generates a string that starts with those particular n bits.
Each string which has those n bits as its prefix (the bits starting at
the beginning) will represent the same program.

The UD can't detect this because a priori there is no way to tell how
long a particular program will turn out to be.  This is a corollary of
the Halting Problem: there is no way to tell by inspection how long a
program is.  So it has no choice but to multiply create programs in this
way.

The result is that if we compare two programs, one n bits long and one
m bits long, where m > n, we will find that the n bit one gets created
proportionally more times than the m bit one.  And in fact the constant
of proportionality is 2^(m-n).  This lets us define a measure for each
program that tells what fraction of all programs the UD creates are that
particular program.  That measure is 2^(-n) for an n bit program.

Therefore, each program gets created an infinite number of times by the
UD, and the fraction of the whole ensemble of programs which consists
of a particular n bit program is 2^(-n).
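The counting behind that 2^(-n) measure can be verified by brute force (a sketch; the 3-bit and 6-bit "programs" and the 10-bit string length are arbitrary toy values chosen so exhaustive enumeration is feasible):

```python
from itertools import product

# Count how many bit strings of a given total length start with `prefix`.
# Each such string is one more "creation" of the program the prefix encodes.
def creations(prefix: str, total_len: int) -> int:
    return sum(
        "".join(bits).startswith(prefix)
        for bits in product("01", repeat=total_len)
    )

TOTAL = 10
short = creations("010", TOTAL)      # a 3-bit program: created 2^(10-3) times
long_ = creations("010011", TOTAL)   # a 6-bit program: created 2^(10-6) times
print(short, long_)                  # 128 16
print(short // long_)                # 8, i.e. the constant 2^(6-3)
print(short / 2 ** TOTAL)            # 0.125, i.e. 2^-3, the program's measure
```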

Note that this does not depend on any slowdown effect for larger programs;
that effect is not considered significant in this model because it is only
noticeable from outside and not from within the program being executed.
Rather, it depends on multiple creations of each program, and the fraction
of the whole infinite ensemble that each program represents.

Hal Finney



Re: Bitstrings, Ontological Status and Time

2005-05-07 Thread Hal Ruhl
Hi Stephen:
At 04:37 PM 5/6/2005, you wrote:
Dear Hal,
   No, I disagree. The mere a priori existence of bit strings is not 
enough to imply the necessity of what we experience as 1st person viewpoints. 
At best it allows the possibility that the bit strings could be 
implemented. You see, the problem is that it is impossible to derive 
Change or Becoming from Being.
Which is why I have focused my efforts in this venue on finding a simple 
system that has a natural dynamic.

The fact that my result so far is a random dynamic does not prevent long 
sequences where reality visits information kernels [bit strings] such that 
the trace can be encoded in a set of rules [including simple ones] such as 
those we call physics [whatever they may actually be if we do not know them 
now].

Hal Ruhl



Re: Bitstrings, Ontological Status and Time

2005-05-07 Thread Hal Finney
Time is just a coordinate, in relativity theory.  The time coordinate
has an opposite sign to the space coordinates, and that subtle difference
is responsible for all of the enormous apparent difference between space
and time.

Granted, relativity theory is not a complete and accurate specification
of the world in which we live (that requires QM to be incorporated),
but it is still a self-consistent model which illustrates how time can
be dealt with mathematically in a uniform way with space.  Time and
space are not fundamentally different in relativity; they shade into
one another and can even change places entirely, if you cross the event
horizon of a black hole.

In fact, one can construct models in which there are more than one
dimension of time, just as we have more than one dimension of space.
How would your Renaissance philosophers deal with two dimensions of time?
I think their ideas are obsolete and have no reference or value given
our much deeper modern understanding of these issues.

Hal Finney



Re: Bitstrings, Ontological Status and Time

2005-05-07 Thread Hal Finney
Stephen Paul King writes:
 I would agree that Time is just a coordinate (system), or as Leibniz 
 claimed an order of succession, if we are considering only events in 
 space-time that we can specify, e.g. take as a posteriori. What I am trying 
 to argue is that we can not do this in the a priori case for reasons that 
 have to do with Heisenberg's Uncertainty Principle. Since it is impossible 
 to construct a space-time hypersurface where each point has associated with 
 it all of the physical variables that we need to compute the entire global 
 manifold, from initial Big Bang singularity to the, possibly, infinite future, 
 it is a mistake to think of time simply as a coordinate. OTOH, it is 
 consistent if we are dealing with some particular situation and using 
 Special (or General) Relativity theory to consider the behaviour of clocks 
 and rulers. ;-)

I agree that in our particular universe the role of time is complex.
Since we don't have a unified theory yet, we really can't say anything
definitive about what time will turn out to be.  It's entirely possible
that time may yet turn out to be a simple coordinate.  Wolfram is pushing
ideas where the universe is modeled as a cellular automaton (CA) which
has discrete units of space and time.  Of course his theories don't
quite work yet, but then, nobody else's do, either.

 I am trying to include the implications of QM in my thinking and hence 
 my point about time and my polemics against the idea of block space-time. 
 I do not care how eminent the person is that advocates the idea of Block 
 space-time, they are simply and provably wrong.

In this universe, perhaps so, although as I argued above absent a true
and accurate theory of physics I don't agree that we can so assertively
say that block models are disproven.  But I do agree that a simple,
relativity-based block model (if such exists) is incomplete as a model
for our universe since it does not include QM.

BTW there is also a block-universe type construction possible in QM.
Let phi(t) represent the state function of the entire universe at time t.
Then Schrodinger's equation H(phi) = i hbar d/dt(phi) shows how the
wave function will evolve.  It is deterministic and in a many worlds
interpretation this is all there is to the underlying physics.  So this
is a block-universe interpretation of QM.

However, it is non relativistic.  From what I understand, a full marriage
of QM and special relativity requires quantum field theory, which is
beyond my knowledge so I don't know whether it can be thought of as a
block universe.  And then of course that still leaves gravitation and
the other phenomena of general relativity, where we have no theory at
all that works.  Whether it will be amenable to a block universe view
is still unknown as far as I understand.

I don't see why you are so bound on rejecting block universes.  You just
don't like them?

 If you look around in the journals and books you will find discussion of 
 the implications of multiple-time dimensions.  For example:

Sure, in fact I first learned of the idea from one of Tegmark's
papers; he is unknowingly one of the founding fathers of this list.
http://space.mit.edu/home/tegmark/dimensions.html describes his ideas
for why universes with 2 or more time dimensions are unlikely to have
observers.  The point is, you can't go quoting Leibniz about
this stuff.  We've left him far behind.

Hal Finney



Re: Everything Physical is based on Consciousness - A question

2005-05-08 Thread Hal Ruhl


Hi Jeanne:
It is much the same thing. More or less the first person is the one
standing in Bruno's transporter and the third person is the one operating
it. 
Several years ago I started a FAQ for this list but lacked the necessary
time to finish.
Hal Ruhl 

At 02:54 PM 5/8/2005, you wrote:

I am a mere layperson who follows your discussions with great interest,
so forgive me if I'm about to ask a question whose answer is apparent to
all but me. I am very familiar with the first person and third person
concept in everyday life and literature, but I am a little unclear about
the specific meaning that it holds in these discussions; I feel like I'm
missing something important that is blocking my understanding of how you
are applying first and third person to your work in terms of multiverses
and MWI. Could someone please direct me to some links that could help me
better understand these perspectives as they apply to the discussions?
Thank you.

Jeanne


- Original Message -
From: Stephen Paul King
To: everything-list@eskimo.com
Sent: Sunday, May 08, 2005 11:35 AM
Subject: Re: Everything Physical is based on Consciousness

Dear Norman,

     You make a very interesting point (the first point) and I think that
we could all agree upon it as it is, but I notice that you used two words
that put a sizable dent in the COMP idea: "snapshot" and "precisely
represented". It seems that we might all agree that we would be hard
pressed to find any evidence at all in a single snapshot of an entity to
lead us to believe that it somehow has or had some form of 1st person
viewpoint, a subjective experience.
     Even if we were presented with many snapshots, portraits of moments
frozen in time like so many insects in amber, we would do no better; and
we have to deal with the same criticism that eventually brought Skinnerian
behaviorism down: models that only access a 3rd person view and disallow
for a person making the 3rd person view will, when examined critically,
fail to offer any explanation of even an illusion of a 1st person
viewpoint! And we have not even dealt with the "representable by a
string of zeroes and ones" part.

     Bitstring representability only gives us a means to ask questions
like: is it possible to recreate a 3rd person view? Examples that such are
possible are easy to find; go to your nearest Blockbuster and rent a
DVD... But again, unless we include the fact that we each, as individuals,
have some 1st person view that somehow can not be known by others without
also converging the 1st person viewpoints of all involved, we are missing
the obvious. A representation of X is not necessarily 3rd person identical
to X even though it might be 1st person indistinguishable!

     About the multiverse being infinite in space-time: You seem to be
thinking of space-time as some kind of a priori existing container, like a
fish bowl, wherein all universes exist, using the word "exists" as if it
denoted "being there and not somewhere else". This is inconsistent with
accepted GR and QM in so many ways! GR does not allow us to think of
space-time as some passive fishbowl! Space-time is something that can be
changed - by changing the distributions of momentum-energy - and the
alterable metrics of space-time can change the distributions of
momentum-energy - otherwise known as matter - stuff that makes up planets,
people, amoeba, etc.
     QM, as interpreted by Everett et al, tells us that each eigenstate(?)
of a QM system is separate from all others, considered as representing
entirely separate distributions of matter/momentum-energy, and thus having
entirely different and unmixed space-times associated. The word "parallel"
as used in MWI should really be "orthogonal" since that is a more accurate
description of the relationships that the Many Worlds have with each
other.

     Now, what are we to make of these two statements taken together? I
don't know yet. ;-)

Stephen

- Original Message -
From: Norman Samish
To: everything-list@eskimo.com
Cc: everything-list@eskimo.com
Sent: Sunday, May 08, 2005 3:14 AM
Subject: Everything Physical is based on Consciousness

Gentlemen,
I think that we all must be zombies who behave as if they are conscious,
in the sense that a snapshot of any of us could, in principle, be
precisely represented by a string of zeroes and ones.

If it is true that the multiverse is infinite in space-time, is it not
true that anything that can exist must exist? If so, then, in infinite
space-time, there are no possible universes that do not exist.

Norman Samish
~~
- Original Message -
From: Stathis Papaioannou [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: everything-list@eskimo.com
Sent: Saturday, May 07, 2005 10:47 PM
Subject: Re: Everything Physical is Based on Consciousness


Dear Stephen,

COMP is basically a variant of the familiar Problem of Other Minds, which
is not just philosophical esoterica but something we have to deal with in
everyday life. How

RE: many worlds theory of immortality

2005-05-09 Thread Hal Finney
Jonathan Colvin writes:
 Pondering on this, it raises an interesting question. Can we differentiate
 between worlds that are (or appear to be) rule-based, and those that are
 purely random? 

The usual approach is that a system which is algorithmically compressible
is defined as random.  A rule-based universe has a short program that
determines its evolution, or creates its state.  A random universe has
no program much smaller than itself which can encode its information.

Hal Finney



RE: many worlds theory of immortality

2005-05-09 Thread Hal Finney
The usual approach is that a system which is algorithmically 
compressible is defined as random.  A rule-based universe has 
a short program that determines its evolution, or creates its 
state.  A random universe has no program much smaller than 
itself which can encode its information.

Hal Finney

Jonathan Colvin replies:
 I think you meant algorithmically *in*compressible.

Yes, I did.

 The relevance was, I was thinking that those universes where we become
 immortal under MWI are not the conventional rule-based universes such as we
 appear to live in, but a different class of stochastic random ones (which
 require very unlikely strings of random coincidences to instantiate). The
 majority of such universes, being essentially random, are probably not very
 pleasant places to live.

You could look at it from the point of view of observer-moments.  Among
all observer-moments which remember your present situation and which also
remember very long lifetimes, which ones have the greatest measure?
It should be those which have the simplest explanations possible.
As time goes on, the explanations will presumably have to be more and more
complex, but it doesn't necessarily have to be extreme.  It could just be,
"great scientist invents immortality in the year 2006."  Then, next year,
it will be "great scientist invents immortality in the year 2007," etc.

Once you're lying on your death bed and each breath could be your last,
it starts to get a little more difficult.  Maybe it will be like those
movies where the condemned man is in the death chamber and they are about
to throw the switch, as the lawyer rushes to the prison with news from
the governor of a last-minute pardon.  You'll be taking your last breath,
and someone will rush in with a miraculous cure that was just discovered,
or some such.

Hal Finney



RE: many worlds theory of immortality

2005-05-09 Thread Hal Finney
Jonathan Colvin writes:
 That's putting it mildly. I was thinking that it is more likely that a
 universe tunnels out of a black hole that just randomly happens to contain
 your precise brain state at that moment, and for all of future eternity. But
 the majority of these random universes will be precisely that; random. In
 most cases you will then find that your immortal experience is of a purely
 random universe, which is likely a good definition of hell.

But it's not all that unlikely that someone in the world, unbeknownst
to you, has invented a cure; whereas for a universe with your exact
mind in it to be created purely de novo is astronomically unlikely.

Look at the number of atoms in your brain, 10^25 or some such, and imagine
how many arrangements there are of those atoms that aren't you, compared
to the relative few which are you.  The odds against that happening by
chance are beyond comprehension.  Whereas the odds of some lucky accident
saving you as you are about to die are more like lottery-winner long,
like one in a billion, not astronomically long, like one in a googolplex.

Especially if you accept that it is possible in principle for medicine
to give us an unlimited healthy lifespan, then all you really need to do
is to live in a universe where that medical technology is discovered,
and then avoid accidents.  Neither one seems all that improbable from
the perspective of people living in our circumstances today.  It's harder
to see how a cave man could look forward to a long life span.

I should add that I don't believe in QTI, I don't believe that we are
guaranteed to experience such outcomes.  I prefer the observer-moment
concept in which we are more likely to experience observer-moments where
we are young and living within a normal lifespan than ones where we are
at a very advanced age due to miraculous luck.

Hal Finney



Re: FW: Everything Physical is Based on Consciousness

2005-05-09 Thread Hal Finney
From: Hal Finney [mailto:[EMAIL PROTECTED]
Another way to think of it is that all bit strings
exist, timelessly; and some of them implicitly specify computer programs;
and some of those computer programs would create universes with observers
just like us in them.  You don't necessarily need the machinery of
the computer to run the program; it could be that the existence of the
program itself is sufficient for what we think of as reality.

Brent Meeker replies:

 In what sense does the program exist if not as physical tokens?
 Is it enough that you've thought of the concept?  The same program,
 i.e. bit-string, does different things on different computers.  So how
 can the program instantiate reality independent of the computer?

Yes, I think it is enough that I have thought of the concept!  Or more
accurately, I think it is enough that the concept is thinkable-of.

What I mean is, a bit string plus the concept of a computer is enough
to imply a universe with a time coordinate (or more than one!)  and all
the complexity we perceive.  In the Platonic sense both the bits and
the computer-concept exist in the abstract, as both are informational
entities.  So in principle that should be enough.

Now, as to the problem of which computer to use to interpret a given
bit string, what I think you also need to do is to imagine all possible
(abstract) computers as well as all possible bit strings.  Then these
produce all possible universes.

So this exposes a weakness, which is that we seem to need a measure over
all the computers, in order to compute a measure over the universes they
compute.  And if we look at this more closely, we see that I have glossed
over another assumption, which is an implied measure over bit strings.
Robin Hanson on another list challenged me on this point: we need to know
which bit strings are more likely in order to deduce which universes are
more likely.

In the case of the bit strings, there is an obvious symmetric choice,
which is that each bit has independent, equal probability of 1/2.
This is not the only possibility, though, and we might get different
probability assignments for our universes if we assume a different measure
over bit strings.  But this measure is certainly staring us in the face
and has an obvious appeal.

For computers, it is much harder to come up with a natural measure which
will tell us that some computers are more likely, have more measure,
than others.  My hope is that with further understanding and philosophical
exploration of this issue, we will either come up with an obvious measure
(as in the case of bit strings) or decide that it doesn't matter.

The theory of algorithmic complexity shows that, from a sufficiently
removed perspective, almost all computers compute essentially the
same complexity for a universe.  These wiggle words "almost all" and
"essentially" do leave room for the fact that certain computers compute
completely different complexities for a given universe.  But that's
only a small fraction of computers - at least, I'd like to say that,
but that again seems to require a measure over computers.
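The result from algorithmic information theory behind those wiggle words is the invariance theorem: for any two universal computers U and V there is a constant c_{U,V}, independent of the string x being described, such that

```latex
K_U(x) \;\le\; K_V(x) + c_{U,V}
```

So the complexities assigned by any two universal machines differ by at most an additive constant; that constant can be large for perverse machines, which is exactly the room being described here.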

So I do think this is an area where some work is needed, but certainly
this result from AIT gives hope for a solution.  It cannot be a
coincidence that almost all computers work almost the same on almost
all universes.  That comes really close to letting us say, "the computer
doesn't matter."  Maybe future philosophers will give us better grounds for
letting us be agnostic about the choice of computers.  And maybe we will
even find that the question of bit string measure is similarly irrelevant.

Hal Finney



Re: many worlds theory of immortality

2005-05-09 Thread Hal Finney
Norman Samish writes:
 If the multiverse is truly infinite in space-time, then all possible 
 universes must eventually appear in it, including an infinite number with 
 all 10^80 particles in it identical to those in our universe.

Yes, Tegmark calls this the Level I concept of a multiverse.  It's not
so much that the multiverse must be truly infinite; it is enough if our own
mundane universe that we see around us is spatially infinite, as predicted
by inflation theory.  See http://it.arxiv.org/abs/astro-ph/0302131 for a
slightly more technical version of Tegmark's Scientific American article
on the topic.  Or the SciAm cover story, "Infinite Earths in PARALLEL
UNIVERSES Really Exist", May 2003.

Hal Finney



RE: FW: Everything Physical is Based on Consciousness

2005-05-09 Thread Hal Finney
Brent Meeker writes:
 From: Hal Finney [mailto:[EMAIL PROTECTED]
 Yes, I think it is enough that I have thought of the concept!  Or more
 accurately, I think it is enough that the concept is thinkable-of.

 Why bother with the computer at all.  Since you're just conceptualizing the
 computer (it isn't actually going to do anything) and all the computer would do
 would be to produce some other bit-string, given the input bit-string; why not
 just think of all possible bit-strings. Isn't that what Bruno's UDA does -
 generate all possible bit strings.

 But since they only have to be thinkable-of, it seems all this talk about
 bit-strings and computers is otiose.  The universe is thinkable-of, 
 therefore
 it exists.

Yes, I think that is true too.  But the bit strings, interpreted as
programs, are crucial for the whole theory to be able to make predictions.

The idea is that the bit string is a compressed representation of the
universe.  Only universes which are lawful are compressible.  Hence,
lawful universes can be represented by small bit strings, which have
greater measure.
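As a toy illustration of this compressibility argument (my own sketch, not anything from the original exchange), one can compare a "lawful" history with a random one, using an off-the-shelf compressor as a crude stand-in for minimal program size:

```python
import random
import zlib

N = 100_000  # length of each toy "universe history" in bytes

# A lawful universe: generated by a simple rule (here, a short
# repeating pattern standing in for a physical law).
lawful = b"0110" * (N // 4)

# A lawless universe: essentially incompressible random bytes.
rng = random.Random(42)
random_u = bytes(rng.getrandbits(8) for _ in range(N))

# Compressed length is an upper bound on minimal program size.
k_lawful = len(zlib.compress(lawful, 9))
k_random = len(zlib.compress(random_u, 9))

# The lawful history collapses to a tiny program-like description,
# so under a 2^-length measure it vastly outweighs the random one.
assert k_lawful < k_random
```

The compressor is only a stand-in: true minimal program size is uncomputable, but any computable compressor already separates lawful from random histories by orders of magnitude.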

You are right that our universe exists, as its literal, expansive,
redundant self; but it also exists in the form of the many different
programs that would generate it.  Only the shortest such programs make
a significant contribution to the measure, so the long-form, literal
representation of the universe doesn't even matter.

Without the concept of bit strings and computers, we have no basis for
saying that more lawful universes have greater measure than random and
incompressible ones.  This would eliminate one of the great potential
strengths of the all-universe hypothesis (AUH), that it offers an
explanation for why we live in a lawful universe.

Hal Finney



RE: FW: Everything Physical is Based on Consciousness

2005-05-10 Thread Hal Finney
[I will assume that Brent meant to forward this to the list, his
mailer often seems to send replies only to me.]

Brent wrote:
 -Original Message-
 From: Hal Finney [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, May 10, 2005 5:06 AM
 To: everything-list@eskimo.com
 Subject: RE: FW: Everything Physical is Based on Consciousness
 
 
 Brent Meeker writes:
  From: Hal Finney [mailto:[EMAIL PROTECTED]
  Yes, I think it is enough that I have thought of the concept!  Or more
  accurately, I think it is enough that the concept is thinkable-of.
 
  Why bother with the computer at all.  Since you're just conceptualizing the
  computer (it isn't actually going to do anything) and all the computer would 
  do
  would be to produce some other bit-string, given the input
 bit-string; why not
  just think of all possible bit-strings. Isn't that what Bruno's UDA does -
  generate all possible bit strings.
 
  But since they only have to be thinkable-of, it seems all this talk about
  bit-strings and computers is otiose.  The universe is
 thinkable-of, therefore
  it exists.
 
 Yes, I think that is true too.  But the bit strings, interpreted as
 programs, are crucial for the whole theory to be able to make predictions.
 
 The idea is that the bit string is a compressed representation of the
 universe.  Only universes which are lawful are compressible.
 Hence,
 lawful universes can be represented by small bit strings, which have
 greater measure.
 
 You are right that our universe exists, as its literal, expansive,
 redundant self; but it also exists in the form of the many different
 programs that would generate it.  Only the shortest such programs make
 a significant contribution to the measure, so the long-form, literal
 representation of the universe doesn't even matter.

 But it's the idea of representation that bothers me.  I think representation
 is a trinary relation, Rxyz = x represents y to z; not a binary one.
 Also, to apply Chaitin's idea of algorithmic compression requires that the
 universe be infinite - all finite sequences are equally compressible.

I'm not sure how to interpret the z in x represents y to z.  If a
computer generates string y from string x, is the computer the z?

And as for Chaitin's algorithmic complexity, I am afraid that you have
it backward, that it does apply to finite strings.  I'm not even sure
how to begin to apply it to infinite strings.

For example, consider all 1,000,000-bit strings.  Only about 2^1,000
of them could be represented by a 1,000-bit program, since there are
only 2^1,000 such programs.  Most million-bit strings can't be created
by programs of substantially less than a million bits in size, because
there just aren't enough short programs.
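The counting here can be made exact.  A minimal sketch (my illustration, not part of the original exchange):

```python
# Counting argument behind incompressibility: there simply aren't
# enough short programs to generate most long strings.

def programs_shorter_than(n_bits: int) -> int:
    """Number of binary programs of length 0 .. n_bits-1."""
    return 2**n_bits - 1  # sum of 2^k for k = 0 .. n_bits-1

n = 1_000_000   # million-bit strings
k = 1_000       # candidate "short program" length

# At most 2^1000 - 1 strings of any kind can be output by programs
# under 1000 bits, out of 2^1000000 million-bit strings -- a fraction
# smaller than 2^-999000.
assert programs_shorter_than(k) < 2**k
assert programs_shorter_than(k) * 2**(n - k) < 2**n
```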


 Without the concept of bit strings and computers, we have no basis for
 saying that more lawful universes have greater measure than random and
 incompressible ones.  This would eliminate one of the great potential
 strengths of the all-universe hypothesis (AUH), that it offers an
 explanation for why we live in a lawful universe.

 Why not?  In the infinite sequence of all bit-strings there are more short
 strings than long ones.

So if we had only a simple, literal equivalence between bit strings and
universes, what would that mean, there are more small universes than
big ones?  That doesn't seem to be either a very useful or accurate
prediction.

Hal Finney



RE: many worlds theory of immortality

2005-05-10 Thread Hal Finney
Stathis Papaioannou writes:
 Hal,
 I should add that I don't believe in QTI, I don't believe that we are
 guaranteed to experience such outcomes.  I prefer the observer-moment
 concept in which we are more likely to experience observer-moments where
 we are young and living within a normal lifespan than ones where we are
 at a very advanced age due to miraculous luck.

 Aren't the above two sentences contradictory? If it is guaranteed that 
 somewhere in the multiverse there will be a million year old Hal 
 observer-moment, doesn't that mean that you are guaranteed to experience 
 life as a million year old?

I don't think there are any guarantees in life!

I don't see a well-defined meaning in the claim that I am guaranteed
to experience anything.

I am influenced by Wei Dai's approach to the fundamental problem of what
our expectations should be in the multiverse.  He focused not on knowledge
and belief, but on action.  That is, he did not ask what we expect, he asked
what we should do.

How should we behave?  What are the optimal and rational actions to take
in any given circumstances?  These questions are the domain of a field
which, like game theory, is a cross between mathematics, philosophy and
economics: decision theory.

Classical decision theory is uninformed by the AUH, but it does include
similar concepts.  You consider that you inhabit one of a virtually
infinite number of possible worlds, which in this theory are not real but
rather represent your uncertainty about your situation.  For example,
in one possible world Bigfoot has sneaked up behind you but you don't
know it, and in other worlds he's not there.  You then use this world
concept to set up a probability distribution, and make your decision
based on optimal expected outcome over all possible worlds.

Incorporating the multiverse can be done in a couple of ways.  I think Wei
proposed just to add the entire multiverse as among the possible worlds.
Maybe we live in a multiverse, maybe we don't.  The hard part is then,
supposing that we do, how do we rank the expected outcomes of our actions?
Each action affects the multiverse in a complex way, being beneficial
in some branches and harmful in others.  How do we weight the different
branches?  Wei proposed to treat that weighting as an arbitrary part of
the user's utility function; in effect, making it a matter of taste and
personal preference how to weight the multiverse branches.

I would aim to get a little more guidance from the theory than that.
I would first try to incorporate the measure of the various branches
which my actions influence, and pay more attention to the branches with
higher measure.  Then, I think I would pay more attention to the effects
in those branches on observers (or observer-moments) which are relatively
similar to me.  However, that does not mean I would ignore the effects
of my actions on high-measure branches where there are no observers
similar to me (i.e. branches where I have died).  I might still take
measures such as buying life insurance for my children, because I care
about their welfare even in branches where I don't exist.  Similarly,
if I were a philanthropist, I might take care to donate my estate to
good causes if I die.
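As a toy numeric sketch of this measure-weighted reasoning (the branch measures and utilities below are invented purely for illustration), the life-insurance decision might look like:

```python
# Toy decision over multiverse branches: weight each branch's outcome
# by its measure, including high-measure branches where the
# decision-maker has died.  All numbers are made up for illustration.

branches = [
    # (measure, utility if insured, utility if uninsured)
    (0.90, 100.0, 105.0),   # I survive the year: the premium is a small loss
    (0.10,  80.0,  10.0),   # I die: my family's welfare still counts to me
]

def expected_utility(insured: bool) -> float:
    idx = 1 if insured else 2
    return sum(measure * branch[idx]
               for *_, in [()] or ()
               for measure, branch in [(b[0], b) for b in branches])

def expected_utility(insured: bool) -> float:
    idx = 1 if insured else 2
    return sum(b[0] * b[idx] for b in branches)

u_insured = expected_utility(True)     # 0.9*100 + 0.1*80  = 98.0
u_uninsured = expected_utility(False)  # 0.9*105 + 0.1*10  = 95.5

# Caring about the high-measure branch where "I" am absent is exactly
# what makes insurance the rational choice here.
assert u_insured > u_uninsured
```

Under the RSSA/QTI view, which ignores branches where I die, only the first branch would count and insurance would look irrational; weighting all branches by measure reverses that.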

These considerations suggest to me an optimal course of action in a
multiverse, or even in a world where we are not sure if we live in a
single universe or a multiverse, which is arguably the situation we
all face.  It rejects the simplicity of the RSSA and QTI by recognizing
that our actions influence even multiverse branches where we die, and
taking into consideration the effects of what we do on such worlds.
There is still an element of personal preference in terms of how much we
care about observers who are very similar to ourselves vs those who are
more different, which gives room for various philosophical views along
these lines.

And in terms of your question, I would not act as though I expected to
be guaranteed a very long life span, because the measure of that universe
is so low compared to others where I don't survive.

Hal Finney



Re: many worlds theory of immortality

2005-05-10 Thread Hal Finney
Quentin Anciaux writes:
 but by definition of what being alive means (or being conscious), which is to 
 experience observer moments, even if the difference of the measure where you 
 have a long life compared to where you don't survive is enormous, you can 
 only experience worlds where you are alive...

The way I would say it is that you can only experience worlds where
you are conscious.  Being alive is not enough.  But really, this is a
tautology: you can only be conscious in worlds where you are conscious.
It sheds exactly zero light on any interesting questions IMO.

 And to continue, I find it very 
 difficult to imagine what it could mean to be unconscious forever (what you 
 suggest to be likely).

Yet you have already been unconscious forever, before your birth (if we
pretend/assume that the universe is infinite in both time directions).
Can you imagine that?  Why can it happen in one direction but not
the other?

And what do you think of life insurance?  Suppose you have young children
whom you love dearly, for whom you are the sole support, and who will
suffer greatly if you die without insurance?  Would you suggest that QTI
means that you should not care about their lives in universe branches
where you do not survive, that you should act as though those branches
don't exist?

Hal Finney



Re: Which is Fundamental?

2005-05-10 Thread Hal Finney
Lee Corbin writes:
 Why not instead adopt the scientific model?  That is, that
 we are three-dimensional creatures ensconced in a world 
 governed by the laws of physics, or, what I'll call the
 atoms and processes model. About observer-moments, I would
 say what Laplace answered to Napoleon about a deity:
 I have no need of that hypothesis.

Observer moments are more than a hypothesis, they are our raw experiences
of the world.  It is more the world that is the hypothesis to explain
the observer moments, than vice versa.  I think, therefore I am... an
observer moment, as Descartes meant to say.

However, given the strong evidence we have for the world's existence and
the explanatory power it gives for our experience, I don't think there
is a problem with treating it as fundamental.  This leads to a model of
world -> observers -> observer-moments.  A world maintains and holds
one or more observers, which can themselves be thought of as composed
of multiple observer moments.

A key point is that the mapping is not just one-to-many.  It is
many-to-many.  That is, an observer moment is shared among multiple
observers; and an observer exists in multiple worlds.

To explain the first point, observers merge whenever information is
forgotten.  And they diverge whenever information is learned.  This means
that each observer-moment is a nexus of intersection of many observers.
The observer moment has multiple pasts and multiple futures.

To explain the second point, observers exist in any world which is
consistent with their observations.  The amount of information in
an observer is much less than the amount of information in a world
(at least, for observers and worlds like our own).  So there are many
worlds which are consistent with the information in an observer, and
the observer can be thought of as occupying all of those worlds.

In this sense, the mapping above might be better expressed as world <->
observers <-> observer-moments.  It is many-to-many in both directions.

I see both views - worlds as primary, or observers and observer-moments as
primary - as playing an important role in understanding our relationship
to the multiverse.  In terms of choosing actions or making predictions,
we need methods for making quantitative estimates of what is likely
to happen.  This requires us to take into consideration the set of
worlds which our observer-moments span, and the set of possible future
observer-moments which we care about.

To calculate, we need a measure over observer-moments.  Then we can have a
greater expectation of experiencing observer moments with higher measures.
So how do we do this calculation?

I start by calculating the measure of universes.  Using an arbitrary,
simple, universal computer, I would calculate the minimum program size
for creating a given universe.  (Yes, I know this is non-computable, but
we can approximate it and use that for our estimates.)  The program size
gives the measure of that universe, and then that measure should lead
us to a measure for the observers and observer-moments in that universe.

This step of going from universe-measure to observer-measure seems a bit
problematic to me and I don't have a completely satisfactory solution,
but I won't go into the details of the problems right now.

Anyway, once you have the measure for an observer moment in a given
universe, you can sum the measures over all universes that generate that
particular observer moment, to get the measure of the observer moment.
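The recipe in the last few paragraphs can be sketched as toy code (my construction: the universe descriptions, the compressor standing in for minimal program size, and the crude substring test for "this universe generates that observer-moment" are all invented stand-ins):

```python
import zlib

# Toy version of the measure recipe: approximate each universe's
# minimal program size with a compressor, assign it measure
# 2^-(program bits), and sum over all universes that contain a
# given observer-moment.

def approx_program_bits(universe: bytes) -> int:
    # Compressed length is only an upper bound on minimal program
    # size (true Kolmogorov complexity is uncomputable).
    return 8 * len(zlib.compress(universe, 9))

def om_measure(observer_moment: bytes, universes: list[bytes]) -> float:
    return sum(
        2.0 ** -approx_program_bits(u)
        for u in universes
        if observer_moment in u   # crude "this OM occurs here" test
    )

# Three toy universes; the OM "abab" occurs in only the first.
universes = [b"abab" * 500, b"abcd" * 500, b"xyz" * 700]
m = om_measure(b"abab", universes)
assert m > 0.0
```

The important structural point survives the toy simplifications: lawful (compressible) universes contribute exponentially more to an observer-moment's measure than random ones that happen to contain the same bits.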

Then, with a measure over observer-moments, you can take your current
observer moment, look at ones that you identify with in the future, and
consider the measure of those observer moments in helping you to choose
what actions to take.  This is how one should behave in a multiverse.

This approach seems to require acknowledgement of the fundamental
importance both of worlds and observer-moments.  We use worlds to
calculate measure; we use observer-moments to constrain the set of worlds
that we occupy, now and in the future.  A world-only approach would seem
to pin us to a single world and not recognize that we span all worlds
which contain identical observer-moments; an observer-only approach does
not seem to give us grounds to estimate measure a priori (although I
think Bruno may have a method which is supposed to do that).  We have
to use both concepts to get a complete picture of what is going on.

Hal Finney



Re: where is the harmonic oscillatorness?

2005-05-10 Thread Hal Finney
Eric Cavalcanti writes:
 Let's define a turing machine M with a set of internal states Q,
 an initial state s, a binary alphabet G={0,1}. The transition
 function is f: Q X G - Q X G X {L,R} , i.e., the function
 determines from the internal state and the symbol at the pointer
 which symbol to write and which direction (left or right) to
 move. 

 Write a program in M that calculates the evolution of a harmonic
 oscillator (HO). The solutions are to be N pairs of position and
 momentum of a HO, with time step T and d decimal digits. Let this
 set of pairs be P.

 The program will eventually halt and the tape will display a string
 S.
 ...
 Is there some harmonic oscillatorness in S?

Yes, potentially there is.  The first thing you need to do is to
define a harmonic oscillator.  Obviously you can't ask whether there
is X-ness in something if you don't have a definition of X.

So let us write a definition of a harmonic oscillator.  Express it as a
program which, when given some input that claims to describe a harmonic
oscillator, returns true if it is one, and false if it is not.  This
input can be required to be in some canonical form.

Now, if string S truly contains a harmonic oscillator, we should be
able to write a simple program which translates S into the form needed
for input to our test program, and which will then cause the test
program to return true.

The key is that the translation program must be simple.  The simpler it
is, the greater the degree to which we can say that S contains a harmonic
oscillator.  The more complex it is, the more the harmonic oscillator
resides in the mapping rather than in S.
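The test-program idea can be made concrete.  A toy sketch (my own construction: the canonical form, the discretization rule, and the tolerances are all illustrative assumptions, not anything specified in the thread):

```python
# Toy "harmonic oscillator tester": given data in a canonical form
# (a list of (position, momentum) pairs with time step T), return True
# iff it follows the discretized dynamics x' = p/m, p' = -k*x.

def is_harmonic_oscillator(pairs, T, m=1.0, k=1.0, tol=1e-3):
    for (x0, p0), (x1, p1) in zip(pairs, pairs[1:]):
        # One symplectic-Euler step of the oscillator equations.
        p_pred = p0 - k * x0 * T
        x_pred = x0 + (p_pred / m) * T
        if abs(x1 - x_pred) > tol or abs(p1 - p_pred) > tol:
            return False
    return True

# A trajectory generated by the same rule: the tester accepts it.
T = 0.01
traj = [(1.0, 0.0)]
for _ in range(1000):
    x, p = traj[-1]
    p = p - x * T
    traj.append((x + p * T, p))

assert is_harmonic_oscillator(traj, T)
assert not is_harmonic_oscillator([(0.0, 0.0), (5.0, 5.0)], T)
```

A string S would then "contain" a harmonic oscillator to the degree that a short translation program can map S into this canonical pair-list form and make the tester return True.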

This argument gains strength when we are dealing with an object more
complex than a harmonic oscillator.  If the object we are testing for
is so complex that it takes billions of bits to specify, then as long
as the mapping program is substantially smaller than that size, we have
an excellent reason to believe that the object is really in S.

Now, I have cheated in one regard.  I don't know of an objective way of
judging whether the mapping program is simple.  There are some results
in algorithmic information theory which go part way in this direction,
but there seem to be loopholes that are hard to avoid.  So things are not
quite as simple as I have said, but I think the thrust of the argument
shows the direction to pursue.

Hal Finney



Re: Olympia's Beautiful and Profound Mind

2005-05-13 Thread Hal Finney
We had some discussion of Maudlin's paper on the everything-list in 1999.
I summarized the paper at http://www.escribe.com/science/theory/m898.html .
Subsequent discussion under the thread title implementation followed
up; I will point to my posting at
http://www.escribe.com/science/theory/m962.html regarding Bruno's version
of Maudlin's result.

I suggested a flaw in Maudlin's argument at
http://www.escribe.com/science/theory/m1010.html with followup
http://www.escribe.com/science/theory/m1015.html .

In a nutshell, my point was that Maudlin fails to show that physical
supervenience (that is, the principle that whether a system is
conscious or not depends solely on the physical activity of the system)
is inconsistent with computationalism.  What he does show is that you
can change the computation implemented by a system without altering it
physically (by some definition).  But his desired conclusion does not
follow logically, because it is possible that the new computation is
also conscious.

(In fact, I argued that the new computation is very plausibly conscious,
but that doesn't even matter, because it is sufficient to consider that
it might be, in order to see that Maudlin's argument doesn't go through.
To repair his argument it would be necessary to prove that the altered
computation is unconscious.)

You can follow the thread and date index links off the messages above
to see much more discussion of the issue of implementation.

Hal Finney



Re: Tipler Weighs In

2005-05-16 Thread Hal Finney
Lee Corbin points to
Tipler's March 2005 paper, "The Structure of the World From Pure Numbers":
http://www.iop.org/EJ/abstract/0034-4885/68/4/R04

I tried to read this paper, but it was 60 pages long and extremely
technical, mostly over my head.  The gist of it was an updating of
Tipler's Omega Point theory, advanced in his book, The Physics of
Immortality.  Basically the OP theory predicts, based on the assumption
that the laws of physics we know today are roughly correct, that the
universe must re-collapse in a special way that can't really happen
naturally, hence Tipler deduces that intelligent life will survive
through and guide the ultimate collapse, during which time the information
content of the universe will go to infinity.

The new paper proposes an updated cosmological model that includes a
number of new ideas.  One is that the fundamental laws of physics for the
universe are infinitely complex.  This is where his title comes from; he
assumes that the universe is based on the mathematics of the continuum,
i.e. the real numbers.  In fact Tipler argues that the universe must
have infinitely complex laws, basing this surprising conclusion on the
Lowenheim-Skolem paradox, which says that any countable set of first-order
axioms can be satisfied by a mathematical object that is only countable
in size.
Hence technically we can't really describe the real numbers without an
infinite number of axioms, and therefore if the universe is truly based
on the reals, it must have laws of infinite complexity.  (Otherwise the
laws would equally well describe a universe based only on the integers.)

Another idea Tipler proposes is that under the MWI, different universes
in the multiverse will expand to different maximum sizes R before
re-collapsing.  The probability measure however works out to be higher
with larger R, hence for any finite R the probability is 1 (i.e. certain)
that our universe will be bigger than that.  This is his solution to why
the universe appears to be flat - it's finite in size but very very big.

Although Tipler wants the laws to be infinitely complex, the physical
information content of the universe should be zero, he argues, at the
time of the Big Bang (this is due to the Bekenstein bound).  That means
among other things there are no particles back then, and so he proposes
a special field called an SU(2) gauge field which creates particles
as the universe expands.  He is able to sort of show that it would
preferentially create matter instead of antimatter, and also that this
field would be responsible for the cosmological constant which is being
observed, aka negative energy.

In order for the universe to re-collapse as Tipler insists it must,
due to his Omega Point theory, the CC must reverse sign eventually.
Tipler suggests that this will happen because life will choose to do so,
and that somehow people will find a way to reverse the particle-creation
effect, catalyzing the destruction of particles in such a way as to
reverse the CC and cause the universe to begin to re-collapse.

Yes, he's definitely full of wild ideas here.  Another idea is that
particle masses should not have specific, arbitrary values as most
physicists believe, but rather they should take on a full range of values,
from 0 to positive infinity, over the history of the universe.  There is
some slight observational evidence for a time-based change in the fine
structure constant alpha, and Tipler points to that to buttress his theory
- however the actual measured value is inconsistent with other aspects,
so he has to assume that the measurements are mistaken!

Another testable idea is that the cosmic microwave background radiation
is not the cooled-down EM radiation from the big bang, but instead is the
remnants of that SU(2) field which was responsible for particle creation.
He shows that such a field would look superficially like cooled down
photons, but it really is not.  In particular, the photons in this special
field would only interact with left handed electrons, not right handed
ones.  This would cause the photons to have less interaction with matter
in a way which should be measurable.  He uses this to solve the current
puzzle of high energy cosmic rays: such rays should not exist due to
interaction with microwave background photons.  Tipler's alternative does
not interact so well and so it would at least help to explain the problem.

Overall it is quite a mixed bag of exotic ideas that I don't think
physicists are going to find very convincing.  The idea of infinitely
complex natural laws is going to be particularly off-putting, I would
imagine.  However the idea that the cosmic microwave background interacts
differently with matter than ordinary photons is an interesting one and
might be worth investigating.  It doesn't have that much connection to
the rest of his theory, though.

Hal Finney



Re: Many Pasts? Not according to QM...

2005-05-18 Thread Hal Finney
Patrick Leahy writes:
 I've recently been reading the archive of this group with great interest 
 and noted a lot of interesting ideas. I'd like to kick off my contribution 
 to the group with a response to a comment made in numerous posts that a 
 single observer-moment can have multiple pasts, including macroscopically 
 distinct pasts, e.g. in one memorable example, pasts which differ only 
 according to whether a single speck of dust was or was not on a 
 confederate soldier's boot in 1863.

 Does anybody believe that this is consistent with the many-worlds 
 interpretation of QM?

First, welcome to the list.

You are right that in the strict MWI, if we define an observer-moment
to be restricted to one branch, then observer moments do not merge.

I might mention that there is some disagreement among aficionados of
the MWI as to what constitutes a branch.  Some reserve the concept of a
unique branch, and branch splitting, to an irreversible measurement-like
interaction, as you are doing.  Others say that even reversible operations
create new branches, in which sense it is OK to say that branches can
merge.  David Deutsch does this, for example, when he says that quantum
computers use the resources of many branches of the MWI (and hence prove
the reality of the MWI!).

However, particularly as we look to larger ensembles than just the MWI,
it becomes attractive to define observers and observer-moments based
solely on their internal information.  If we think of an observer as
being a particular kind of machine, then if we have two identical such
machines with identical states, they represent the same observer-moment.

From the first-person perspective of that observer-moment, there is no
fact of the matter as to which of the infinite number of possible
implementations and instantiations of that observer moment is the real
one.  They are all equally real.  From the inside view, the outside is
a blur of all of the possibilities.

If we apply that concept to the MWI, then we retrieve the concept of an
observer-moment that spans multiple branches.  As long as the information
state of the OM is consistent between the various branches, there is
no fact of the matter as to which branch it is really in.  That is the
sense in which we can say that observers merge and that observer moments
have multiple pasts.

Hal Finney



Re: WHY DOES ANYTHING EXIST

2005-05-19 Thread Hal Finney
Russell Standish writes:
 Alternatively, it is the recognition that Nothing and Everything are
 mathematically the same object (This is a little more subtle that
 Pearce's summing to zero, but it is essentially the same
 argument). Now either Nothing exists, or something exists. Since Nothing and
 Everything exhaust the possibilities, and they are identical, there is
 no question left to answer.

I don't follow the logic here.  Let's suppose we accept that
Nothing and Everything are the same.  Now either Nothing exists, or
something exists.  But where does the next phrase come from: "Nothing
and Everything exhaust the possibilities"?  That doesn't seem right.
The two possibilities weren't Nothing and Everything, they were Nothing
and something.

Even if we accept that Nothing and Everything are the same, we then
have to explain or decide whether Everything exists or merely some
things exist.

The question then becomes not, why is there something instead of nothing;
but, why is there something instead of everything.

Now, we don't know yet if everything exists, or merely some things exist,
so it is an open question.  But it does seem to be a question that needs
to be answered.

Hal Finney



Re: White Rabbit vs. Tegmark

2005-05-22 Thread Hal Finney
Patrick Leahy writes:
 Sure enough, you came up with my objection years ago, in the form of the 
 White Rabbit paradox. Since usage is a bit vague, I'll briefly re-state 
 it here. The problem is that worlds which are law-like, that is which 
 behave roughly as if there are physical laws but not exactly, seem to 
 vastly outnumber worlds which are strictly lawful. Hence we would expect 
 to see numerous departures from laws of nature of a non-life-threatening 
 kind.

I think the question is whether we can assume the existence of a measure
over the mathematical objects which compose Tegmark's ensemble.  If so
then we can use the same argument as we do for Schmidhuber, namely that
the simpler objects would have greater measure.  Hence we would predict
that our laws of nature would be among the simplest possible in order
to allow for life to exist.

If we assume that a mathematical object (it was never clear what that meant)
corresponds to a formal axiomatic system, then we could use a measure
based on the size of the axiomatic description.  I don't remember now
whether Tegmark considered his mathematical objects to be the same as
formal systems or not.

Hal Finney



Re: White Rabbit vs. Tegmark

2005-05-22 Thread Hal Finney
Regarding the nature of Tegmark's mathematical objects, I found some
old discussion on the list, a debate between me and Russell Standish,
in which Russell argued that Tegmark's objects should be understood as
formal systems, while I claimed that they should be seen more as pure
Platonic objects which can only be approximated via axiomatization.

The discussions can generally be found at
http://www.escribe.com/science/theory/index.html?by=Date&n=25 under
the title Tegmark's TOE  Cantor's Absolute Infinity.  In particular

http://www.escribe.com/science/theory/m4034.html
http://www.escribe.com/science/theory/m4045.html
http://www.escribe.com/science/theory/m4048.html

In this last message I write:

 I have gone back to Tegmark's paper, which is discussed informally
 at http://www.hep.upenn.edu/~max/toe.html
 and linked from

 http://arXiv.org/abs/gr-qc/9704009.

 I see that Russell is right, and that Tegmark does identify mathematical
 structures with formal systems.  His chart at the first link above shows
 Formal Systems as the foundation for all mathematical structures.
 And the discussion in his paper is entirely in terms of formal systems
 and their properties.  He does not seem to consider the implications,
 if any, of Godel's theorem.

Note, Tegmark's paper has moved to
http://space.mit.edu/home/tegmark/toe_frames.html .

See also http://www.escribe.com/science/theory/m4038.html where Wei Dai
argues that even unaxiomatizable mathematical objects, even infinite
objects that are too big to be sets, can have a measure meaningfully
applied to them.  However I do not know enough math to fully understand
his proposal.

Hal Finney



Decoherence and MWI

2005-05-23 Thread Hal Finney
I'd like to take advantage of having a bona fide physicist on the list to
ask a question about decoherence and its implications for the MWI.

Paddy Leahy wrote:
 The crucial point, which is not taught in introductory QM 
 classes, is the theory of Quantum decoherence, for which see the wikipedia 
 article and associated references (e.g. the Zurek quant-ph/0306072).

 This shows that according to QM, the decay time for quantum decoherence is 
 astonishingly fast if the product ((position shift)^2 * mass * 
 temperature) is much bigger than the order of a single atom at room 
 temperature. Moreover, the theory has been confirmed experimentally in 
 some cases.

 Since coherence decays exponentially, after say 100 decay times there is 
 essentially no chance of observing interference phenomena, which is the 
 *only* way we can demonstrate the existence of other branches. No chance 
 meaning not once in the history of the universe to date.

I understand that there is research into attempts to measure decoherence,
using special conditions and experimental arrangements.  As you say, in
ordinary situations, environmental influences make observing any effects
from other branches effectively impossible.  But it is possible to set
up experiments that decohere gradually and where they can measure any
residual interference effects, comparing the results to the predictions
of QM.
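As a back-of-envelope illustration of the quoted scaling (my own toy
numbers, not Zurek's), one can compare the dimensionless product
(position shift)^2 * mass * temperature for a micron-scale dust grain
against an atom-scale reference:

```python
# Toy comparison of the decoherence scaling parameter quoted above:
# (position shift)^2 * mass * temperature, measured in units of the
# same product for an atom-scale system at room temperature.  The
# reference values (Bohr radius, electron mass) are illustrative
# choices, not figures taken from the Zurek paper.

BOHR_RADIUS = 5.29e-11    # m
ELECTRON_MASS = 9.11e-31  # kg
ROOM_TEMP = 300.0         # K

def scaling_parameter(dx, mass, temp):
    """Dimensionless (dx^2 * m * T) relative to the atomic reference."""
    ref = BOHR_RADIUS**2 * ELECTRON_MASS * ROOM_TEMP
    return (dx**2 * mass * temp) / ref

# A micron-sized dust grain displaced by a micron, at room temperature:
grain = scaling_parameter(dx=1e-6, mass=1e-15, temp=300.0)

# An electron displaced by one Bohr radius, at room temperature:
electron = scaling_parameter(dx=BOHR_RADIUS, mass=ELECTRON_MASS, temp=300.0)

print(f"grain / atomic reference:    {grain:.2e}")     # enormously > 1
print(f"electron / atomic reference: {electron:.2e}")  # exactly 1 by construction
```

By the quoted criterion, the grain's parameter exceeds the atomic
reference by more than twenty orders of magnitude, which is why its
branches decohere essentially instantly.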

As far as I know, these experiments so far are consistent with QM theory.
Of course it's always possible that departures would show up eventually,
which is part of why they do the experiments.  But I would assume that
most physicists would not be astonished to find that QM continued to
work correctly no matter how far out these experiments were pushed.
In fact, that would presumably be the expected result, and any confirmed
departures from QM predictions would be surprising and even revolutionary.

If this is true, then how can a physicist not accept the MWI?  Isn't that
just a matter of taking this decoherence phenomenon to a (much) larger
degree?  Either you have to believe that at some point decoherence stops
following the rules of QM, or you have to believe that the mathematics
describes physical reality.  And the mathematical equations predict the
theoretical existence of the parallel yet unobservable branches.

Of course, given that they are in practice unobservable, a degree of
agnosticism is perhaps justifiable for the working physicist.  He doesn't
have to trouble himself with such difficult questions, in practice.
But still, if he believes the theory, and he applies it in his day to
day work, shouldn't he believe the implications of the theory?

To me, it almost requires believing a contradiction to expect that
decoherence experiments will follow the predictions of QM, without also
expecting that the more extreme versions of those predictions will be
true as well, which would imply the reality of the MWI.  You either have
to believe that a sufficiently accurate decoherence experiment would
find a violation of QM, or you have to believe in the MWI.

Don't you?

Hal Finney



Re: White Rabbit vs. Tegmark

2005-05-23 Thread Hal Finney
Paddy Leahy writes:
 Let's suppose with Wei Dai that a measure can be applied to Tegmark's 
 everything. It certainly can to the set of UTM programs as per Schmidhuber 
 and related proposals.  Obviously it is possible to assign a measure which 
 solves the White Rabbit problem, such as the UP.  But to me this procedure 
 is very suspicious.  We can get whatever answer we like by picking the 
 right measure.  While the UP and similar are presented by their proponents 
 as natural, my strong suspicion is that if we lived in a universe that 
 was obviously algorithmically very complex, we would see papers arguing 
 for natural measures that reward algorithmic complexity. In fact the 
 White Rabbit argument is basically an assertion that such measures *are* 
 natural.  Why one measure rather than another? By the logic of Tegmark's 
 original thesis, we should consider the set of all possible measures over 
 everything. But then we need a measure on the measures, and so ad 
 infinitum.

I agree that this is a potential problem and an area where more work
is needed.  We do know that the universal distribution has certain
nice properties that make it stand out, that algorithmic complexity
is asymptotically unique up to a constant, and similar results which
suggest that we are not totally off base in granting these measures the
power to determine which physical realities we are likely to experience.
But certainly the argument is far from iron-clad and it's not clear how
well the whole thing is grounded.  I hope that in the future we will
have a better understanding of these issues.

I don't agree however that we are attracted to simplicity-favoring
measures merely by virtue of our particular circumstances.  The universal
distribution was invented decades ago as a mathematical object of study,
and Chaitin's work on algorithmic complexity is likewise an example of
pure math.  These results can be used (loosely) to explain and justify
the success of Occam's Razor, and with more difficulty to explain why
the universe is as we see it, but that's not where they came from.

Besides, it's not all that clear that our own universe is as simple as
it should be.  CA systems like Conway's Life allow for computation and
might even allow for the evolution of intelligence, but our universe's
rules are apparently far more complex.  Wolfram studied a variety of
simple computational systems and estimated that from 1/100 to 1/10 of
them were able to maintain stable structures with interesting behavior
(like Life).  These tentative results suggest that it shouldn't take
all that much law to create life, not as much as we see in this universe.
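For concreteness, here is a minimal sketch of one generation of
Conway's Life (my own illustration, not Wolfram's survey code),
showing the kind of stable structure mentioned above:

```python
# One generation of Conway's Life on an unbounded grid, with the live
# cells stored as a set of (x, y) pairs.

from collections import Counter

def life_step(live):
    """Apply the Life rule once: birth on 3 neighbours, survival on 2 or 3."""
    neighbours = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbours.items()
        if n == 3 or (n == 2 and cell in live)
    }

# The "blinker": a row of three cells flips to a column and back,
# a stable structure with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
step1 = life_step(blinker)   # the column {(1, 0), (1, 1), (1, 2)}
step2 = life_step(step1)     # back to the original row
print(step2 == blinker)      # True
```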

I take from this a prediction of the all-universe hypothesis to be that
it will turn out either that our universe is a lot simpler than we think,
or else that these very simple universes actually won't allow the creation
of stable, living beings.  That's not vacuous, although it's not clear
how long it will be before we are in a position to refute it.

 I've overlooked until now the fact that mathematical physics restricts 
 itself to (almost-everywhere) differentiable functions of the continuum. 
 What is the cardinality of the set of such functions? I rather suspect 
 that they are denumerable, hence exactly representable by UTM programs.
 Perhaps this is what Russell Standish meant.

The cardinality of such functions is c, the same as the continuum.
The existence of the constant functions alone shows that it is at least c,
and my understanding is that continuous, let alone differentiable, functions
have cardinality no more than c.
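For reference, the standard counting argument: a continuous function
is determined by its values on the countable dense set of rationals
(two continuous functions agreeing on Q agree everywhere), so

```latex
\[
  c \;\le\; |C(\mathbb{R})| \;\le\; |\mathbb{R}|^{|\mathbb{Q}|}
  \;=\; c^{\aleph_0}
  \;=\; \left(2^{\aleph_0}\right)^{\aleph_0}
  \;=\; 2^{\aleph_0 \cdot \aleph_0}
  \;=\; 2^{\aleph_0} \;=\; c,
\]
```

with the lower bound supplied by the constant functions alone.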

 I must insist though, that there exist mathematical objects in platonia 
 which require c bits to describe (and some which require more), and hence 
 can't be represented either by a UTM program or by the output of a UTM.
 Hence Tegmark's original everything is bigger than Schmidhuber's.  But 
 these structures are so arbitrary it is hard to imagine SAS in them, so 
 maybe it makes no anthropic difference.

Whether Tegmark had those structures in mind or not, we can certainly
consider such an ensemble - the name is not important.  I posted last
Monday a summary of a paper by Frank Tipler which proposed that in fact
our universe's laws do require c bits to describe them, and a lot of
other crazy ideas as well,
http://www.iop.org/EJ/abstract/0034-4885/68/4/R04 .  I don't think it
was particularly convincing, but it did offer a way of thinking about
infinitely complicated natural laws.  One simple example would be the fine
structure constant, which might turn out to be an uncomputable number.
That wouldn't be inconsistent with our existence, but it is hard to see
how our being here could depend on such a property.

Hal Finney



Re: White Rabbit vs. Tegmark

2005-05-23 Thread Hal Finney
Paddy Leahy writes:
 Oops, mea culpa. I said that wrong. What I meant was, what is the 
 cardinality of the data needed to specify *one* continuous function of the 
 continuum. E.g. for constant functions it is blatantly aleph-null. 
 Similarly for any function expressible as a finite-length formula in which 
 some terms stand for reals.

I think it's somewhat nonstandard to ask for the cardinality of the
data needed to specify an object.  Usually we ask for the cardinality
of some set of objects.

The cardinality of the reals is c.  But the cardinality of the data
needed to specify a particular real is no more than aleph-null (and
possibly quite a bit less!).

In the same way, the cardinality of the set of continuous functions
is c.  But the cardinality of the data to specify a particular
continuous function is no more than aleph null.  At least for infinitely
differentiable ones, you can do as Russell suggests and represent it as
a Taylor series, which is a countable set of real numbers and can be
expressed via a countable number of bits.  I'm not sure how to extend
this result to continuous but non-differentiable functions but I'm pretty
sure the same thing applies.
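A small sketch of the Taylor-series point (my own illustration): the
countable list of coefficients 1/n! pins down exp exactly, and finite
prefixes of that list recover its value anywhere to any desired
precision:

```python
# Specifying a function by countable data: exp(x) is determined by the
# countable sequence of Taylor coefficients a_n = 1/n!, and partial
# sums of the series converge to its value at any point.

import math

def exp_from_coefficients(x, terms):
    """Evaluate exp(x) from its first `terms` Taylor coefficients."""
    return sum(x**n / math.factorial(n) for n in range(terms))

approx = exp_from_coefficients(1.0, 20)
print(abs(approx - math.e) < 1e-12)  # True: twenty coefficients suffice here
```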

Hal Finney



RE: White Rabbit vs. Tegmark

2005-05-24 Thread Hal Finney
Lee Corbin writes:
 Russell writes
  You've got me digging out my copy of Kreyszig Intro to Functional
  Analysis. It turns out that the set of continuous functions on an
  interval C[a,b] form a vector space. By application of Zorn's lemma
  (or equivalently the axiom of choice), every vector space has what is
  called a Hamel basis, namely a linearly independent countable set B
  such that every element in the vector space can be expressed as a
  finite linear combination of elements drawn from the Hamel basis

 I can't follow your math, but are you saying the following
 in effect?

 Any continuous function on R or C, as we know, can be
 specified by countably many reals R1, R2, R3, ... But
 by a certain mapping trick, I think that I can see how
 this could be reduced to *one* real.  It depends for its 
 functioning---as I think your result above depends---
 on the fact that each real encodes infinite information.

I don't think that is exactly how the result Russell describes works, but
certainly Lee's construction makes his result somewhat less paradoxical.
Indeed, a real number can include the information from any countable
set of reals.

Nevertheless I'd be curious to see an example of this Hamel basis
construction.  Let's consider a simple Euclidean space.  A two dimensional
space is just the Euclidean plane, where every point corresponds to
a pair of real numbers (x, y).

We can generalize this to any number of dimensions, including a countably
infinite number of dimensions.  In that form each point can be expressed
as (x0, x1, x2, x3, ...).  The standard orthonormal basis for this vector
space is b0=(1,0,0,0...), b1=(0,1,0,0...), b2=(0,0,1,0...), ...

With such a basis the point I showed can be expressed as x0*b0+x1*b1+...
I gather from Russell's result that we can create a different, countable
basis such that an arbitrary point can be expressed as only a finite
number of terms.  That is pretty surprising.

I have searched online for such a construction without any luck.
The Wikipedia article, http://en.wikipedia.org/wiki/Hamel_basis has an
example of using a Fourier basis to span functions, which requires an
infinite combination of basis vectors and is therefore not a Hamel basis.
They then remark, "Every Hamel basis of this space is much bigger than
this merely countably infinite set of functions."  That would seem to
imply, contrary to what Russell writes above, that the Hamel basis is
uncountably infinite in size.

In that case the Hamel basis for the infinite dimensional Euclidean space
can simply be the set of all points in the space, so then each point
can be represented as 1 * the appropriate basis vector.  That would be
a disappointingly trivial result.  And it would not shed light on the
original question of proving that an arbitrary continuous function can
be represented by a countably infinite number of bits.

Hal



Re: Plaga

2005-05-24 Thread Hal Finney
We discussed Plaga's paper back in June, 2002.  I reported some skeptical
analysis of the paper by John Baez of sci.physics fame, at
http://www.escribe.com/science/theory/m3686.html .  I also gave some
reasons of my own why arbitrary inter-universe quantum communication
should be impossible.

Hal Finney



Re: White Rabbit vs. Tegmark

2005-05-26 Thread Hal Finney
Paddy Leahy writes:
 For the continuum you can restore order by specifying a measure which just 
 *defines* what fraction of real numbers between 0 and 1 you consider to lie 
 in any interval. For instance the obvious uniform measure is that there 
 are the same number between 0.1 and 0.2 as between 0.8 and 0.9 etc. 
 Why pick any other measure? Well, suppose y = x^2. Then y is also between 
 0 and 1. But if we pick a uniform measure for x, the measure on y is 
 non-uniform (y is more likely to be less than 0.5). If you pick a uniform 
 measure on y, then x = sqrt(y) also has a non-uniform measure (more likely 
 to be > 0.5).

 A measure like this works for the continuum but not for the naturals 
 because you can map the continuum onto a finite segment of the real line.
 In m6511 Russell Standish describes how a measure can be applied to the 
 naturals which can't be converted into a probability. I must say, I'm not 
 completely sure what that would be good for.

I think it still makes sense to take limits over the integers.
The fraction of integers less than n that is prime has a limit of 0
as n goes to infinity.  The fraction that are even has a limit of 1/2.
And so on.
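These limit densities are easy to check numerically; a small Python
sketch (my own illustration):

```python
# Natural densities over initial segments of the integers: the
# fraction of integers up to n satisfying a predicate.

def density(predicate, n):
    """Fraction of integers in [1, n] satisfying predicate."""
    return sum(1 for k in range(1, n + 1) if predicate(k)) / n

def is_prime(k):
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k**0.5) + 1))

for n in (10**3, 10**4, 10**5):
    print(n, density(lambda k: k % 2 == 0, n), density(is_prime, n))
# The even density stays at 1/2; the prime density (~ 1/ln n)
# drifts toward its limit of 0.
```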

When you apply a measure to the whole real line, it has to be non-uniform
and has to go asymptotically to zero as you go out to infinity.
This happens implicitly when you map it to (0,1) even before you put
a measure on that segment.  The same thing can be done to integers.
The universal distribution assigns probability to every integer such
that they all add to 1.  The probability of an integer is based on the
length of the smallest program in a given Universal Turing Machine which
outputs that integer.  Specifically it equals the sum of 1/2^l where
l is the length of each program that outputs the integer in question.
Generally this will give higher measure to smaller numbers, although a few
big numbers will have relatively high measure if they have small programs
(i.e. if they are simple).  Of course this measure is non-uniform and
goes asymptotically to zero, as any probability measure must.

One problem with the UD is that the probability that an integer is even
is not 1/2, and that it is prime is not zero.  Probabilities in general
will not equal those defined based on limits as in the earlier paragraph.
It's not clear which is the correct one to use.
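As a concrete illustration of the mismatch (mine, using a simple
geometric measure rather than the uncomputable universal
distribution): under the measure m(k) = 1/2^k on the positive
integers, the evens get total weight 1/3, not the density value 1/2:

```python
# Under the probability measure m(k) = 1/2^k on the positive integers,
# P(even) = 1/4 + 1/16 + 1/64 + ... = 1/3, which differs from the
# natural density 1/2 computed as a limit over initial segments.

def measure_of(predicate, terms=200):
    """Sum of 1/2^k over k in [1, terms] satisfying predicate."""
    return sum(2.0**-k for k in range(1, terms + 1) if predicate(k))

total = measure_of(lambda k: True)       # ~1.0: a genuine probability measure
even = measure_of(lambda k: k % 2 == 0)  # ~1/3, not 1/2
print(total, even)
```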

Going back to Alistair's example, suppose we lived in a spatially infinite
universe, Tegmark's level 1 multiverse.  Of course our entire Hubble
bubble is replicated an infinite number of times, to any desired degree
of precision.  Hence we have an infinite number of counterparts.

Do you see a problem in drawing probabilistic conclusions from this?
Would it make a difference if physics were ultimately discrete and all
spatial positions could be described as integers, versus ultimately
continuous, requiring real numbers to describe positions?

Note that in this case we can't really use the UD or a line-segment
measure because there is no natural starting point which distinguishes the
origin of space.  We can't have a non-uniform measure in a homogeneous
space, unless we just pick an origin arbitrarily.  So in this case the
probability-limit concept seems most appropriate.

Hal Finney



Re: White Rabbit vs. Tegmark

2005-05-27 Thread Hal Finney
Bruno Marchal writes:
 Le 26-mai-05, à 18:03, Hal Finney a écrit :

  One problem with the UD is that the probability that an integer is even
  is not 1/2, and that it is prime is not zero.  Probabilities in general
  will not equal those defined based on limits as in the earlier 
  paragraph.
  It's not clear which is the correct one to use.

 It seems to me that the UDA showed that the (relative) measure on a 
 computational state is determined by the (absolute?) measure on the 
 infinite computational histories going through that state. There is a 
 continuum of such histories, from the first person point of view 
 which can not be aware of any delay in the computations emulated by the 
 DU (= all with Church's thesis), the first persons must bet on the 
 infinite union of infinite histories.

Sorry, I was not clear: by UD I meant the Universal Distribution, aka
the Universal Prior, not the Universal Dovetailer, which I think you
are talking about.  The UDA is the Universal Dovetailer Argument, your
thesis about the nature of first and third person experience.

I simply meant to use the Universal Distribution as an example probability
measure over the integers, where, given a particular Universal Turing
Machine, the measure for integer k equals the sum of 1/2^n where n is
the length of each program that outputs integer k.  Of course this is an
uncomputable measure.  A much simpler measure is 1/2^k for all positive
integers k.

I don't know whether there would be any probability measures over the
integers such that the probability of every event E equals the limit as n
approaches infinity of the probability of E for all integers less than n.

Actually on further thought, it's clear that the answer is no.  Consider
the set Ex = all integers less than x.  Clearly the probability of Ex
being true for integers less than n goes to zero as n goes to infinity.
But the only way a probability measure can give 0 for a set is to give
0 for every element of the set.  That means that the measure for all
elements less than x must be zero, for any x.  And that implies that
the measure must be 0 for all finite x, which rules out any meaningful
measure.

Hal Finney



RE: White Rabbit vs. Tegmark

2005-05-27 Thread Hal Finney
Brent Meeker writes:
 I doubt that the concept of logically possible has any absolute meaning.  It
 is relative to which axioms and predicates are assumed.  Not long ago the
 quantum weirdness of Bell's theorem, or special relativity would have been
 declared logically impossible.  Is it logically possible that Hamlet doesn't
 kill Polonius?  Is it logically possible that a surface be both red and green?

I agree.  We went around on this logically possible stuff a few
weeks ago.  A universe is not constrained by logical possibility.
Our understanding of what is or is not a possible universe is constrained
by our mental abilities, which include logic as one of their components.

If I say a universe exists where glirp glorp glurp, that is not
meaningful.  But it doesn't constrain or limit any universe, it is
simply a non-meaningful description.  It is a problem in my mind and my
understanding, not a problem in the nature of the multiverse.

If I say a universe exists where p and not p, that has similar problems.
It is not a meaningful description.

Similarly if I say a universe exists where pi = 3.  Saying this
demonstrates an inconsistency in my mathematical logic.  It doesn't
limit any universes.

More complex descriptions, like whether green can be red, come down
to our definitions and what we mean.  Maybe we are inconsistent in
our minds and failing to describe a meaningful universe; maybe not.
But again it does not limit what universes exist.

To summarize, logic is not a property of universes.  It is a tool that
our minds use to understand the world, including possible universes.
We may fail to think clearly or consistently or logically about what
can and cannot exist, but that doesn't change the world out there.

Rather than expressing the AUH as the theory that all logically possible
universes exist, I would just say that all universes exist.  And of course
as we try to understand the nature of such a multiverse, we will attempt
to be logically consistent in our reasoning.  That's where logic comes in.

Hal Finney



Re: Plaga

2005-05-27 Thread Hal Finney
Paddy Leahy writes:
 As an exercise I've been trying to pinpoint exactly what is wrong with
 Plaga's paper ... On careful reading, the paper is just littered with
 confusions and errors ... Hence, if we saw what he predicted, we would
 actually *disprove* MWI QM, not confirm it as he thinks.

Thanks for looking at this.  It seemed clear to me that it could not work
but it is good to see a detailed analysis of where Plaga goes wrong.

Seems that his result would do more than disprove the MWI, it would
actually disprove QM in general.  As you have shown, he effectively has to
assume nonlinear state evolution (although he does not do so explicitly,
he claims to be working in orthodox QM).  Bruno noted that Steven Weinberg
has done work with possible nonlinear versions of QM.  Some researchers
have found that his model would allow for faster-than-light signalling.
Probably communicating with the Everett branches would be possible
as well.

Hal Finney



Re: Many Pasts? Not according to QM...

2005-05-28 Thread Hal Finney
Stathis Papaioannou writes:
 More generally, if a person has N OM's available to him at time t1 and kN at 
 time t2, does this mean he is k times as likely to find himself experiencing 
 t2 as t1? I suggest that this is not the right way to look at it. A person 
 only experiences one OM at a time, so if he has passed through t1 and t2 
 it will appear to him that he has spent just as much time in either interval 
 (assuming t1 and t2 are the same length). The only significance of the fact 
 that there are more OM's at t2 is that the person can expect a greater 
 variety of possible experiences at t2 if the OM's are all distinct.

It's a good puzzle.  Some time back I expressed it as follows: suppose
the measure of the even days of my life were arranged to be twice as
good as the measure of the odd days.  How would I notice this?  Would I
somehow be more likely to experience an even day?  Should I arrange
to have good things happen on even days and bad things on odd days?
I don't see how I would notice any difference.

Now, I lean more to a favorable answer to these questions.  In fact I
would say, yes, I should arrange to have good things happen on even days.
Even though the difference is not directly perceptible, I believe I
would be making the universe a better place.

Here are a chain of examples.  I won't try to offer much justification
at each step, I am just sketching an argument.

First, consider 10 people.  We can either give 9 of them a good experience
and 1 of them a bad one, or 9 of them bad and 1 of them good.  It is
clear that it is better to give the 9 good and 1 bad.

Now, consider 2 people.  We are going to give the first a good experience
and the second a bad one.  But we can make 9 copies of the first, or
9 copies of the second, as we do it.  I claim it is better to make 9
copies of the first, the one who is having a good experience.

Now, consider a person who goes through life but who has a problem with
his short term memory that makes him forget what happens every day.
(Fictional examples can be seen in the movies Memento and Fifty First
Dates, although I don't know how realistic they are.  Keep in mind this
is just a thought experiment and not dependent on any actual details of
human pathology.)  We can either give him 9 days of good experiences
and 1 bad, or vice versa.  I claim it is better to make the 9 days be
good experiences and 1 day bad, rather than the other way around.

And finally consider an ordinary person who remembers things from one
day to the next.  On day 1 something good happens and on day 2 something
bad happens.  We can either make him have 9 times the measure on day 1
or on day 2.  I claim that it is better to give him 9 times the measure
on day 1, when the good thing happens.

Now, you may be saying, where is the argument?  These are just examples
with unsupported claims.  The point is to show that in all these examples
the people are unaware of the changes in measure and numbers of good
and bad experiences.  But that doesn't change the fact that it is still
better to cause more good experiences in the world than bad.

Would we say that it is OK to mistreat a person with lack of short
term memory just because they won't remember it?  I don't think so.
It still causes genuine pain and suffering.  Giving them good experiences
causes joy.  The 50 First Dates movie expresses this in a poignant and
moving manner.  People are willing to sacrifice to bring happiness to
someone they love who suffers such a condition.  I thought this was
an excellent movie BTW, although you have to overlook some extremely
juvenile humor.  Memento was also interesting but much darker in tone.

It is the same with all the examples.  Causing more experiences of
joy is better than causing more experiences of sadness.  Even with
the one person who lives from day to day, it still applies.  He is not
subjectively aware of his measure changing, but if he or anyone else has
objective awareness of the circumstance, the same logic that applies in
the other examples works here as well.  Give more happiness to the days
with greater measure.  That makes the world a better place.

Now for an interesting twist.  Our measure decreases steadily in life.
Every day we have a certain probability of dying, and our measure
decreases by that fraction.  The reasoning in the examples above would
imply that it is better to have happiness when our measure is high, which
is when we are young.  Unhappiness in old age has less impact.  So if
you are putting off some happiness, do it today, don't procrastinate.

(Of course, you get much the same result in a non-multiverse model,
where putting off a reward makes you risk dying before you get to
experience it.)

Hal Finney



RE: White Rabbit vs. Tegmark

2005-05-29 Thread Hal Finney
Jonathan Colvin writes:
 That's rather the million-dollar question, isn't it? But isn't the
 multiverse limited in what axioms or predicates can be assumed? For
 instance, can't we assume that in no universe in Platonia can (P AND ~P) be
 an axiom or predicate?

No, I'd say that you could indeed have a mathematical object which had P
AND ~P as one of its axioms.  The problem is that from a contradiction,
you can deduce any proposition.  Therefore this mathematical system can
prove all well formed strings as theorems.

As Russell mentioned the other day, everything is just the other side of
the coin from nothing.  A system that proves everything is essentially
equivalent to a system that proves nothing.  So any system based on P AND
~P has essentially no internal structure and is essentially equivalent
to the empty set.

One point is that we need to distinguish the tools we use to analyze
and understand the structure of a Tegmarkian multiverse from the
mathematical objects which are said to make up the multiverse itself.
We use logic and other tools to understand the nature of mathematics;
but mathematical objects themselves can be based on nonstandard logics
and have exotic structures.  There are an infinite number of formal
systems that have nothing to do with logic at all.  Formal systems are
just string-manipulation engines and only a few of them have the axioms
of logic as a basis.  Yet they can all be considered mathematical objects
and some of them might be said to be universes containing observers.

Hal Finney



Re: objections to QTI

2005-05-30 Thread Hal Finney
Let me pose the puzzle like this, which is a form we have discussed
before:

Suppose you found yourself extremely old, due to a near-miraculous set
of circumstances that had kept you alive.  Time after time when you were
about to die of old age or some other cause, something happened and you
were able to continue living.  Now you are 1000 years old in a world
where no one else lives past 120.  (We will ignore medical progress for
the purposes of this thought experiment.)

Now, one of the predictions of QTI is that in fact you will experience
much this state, eventually.  But the question is this: given that you
find yourself in these circumstances, is this fact *evidence* for the
truth of the QTI?  In other words, should people who find themselves
extremely old through miraculous circumstances take it as more likely
that the QTI is true?
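One way to make the question precise is as a Bayes-rule update.  The
sketch below uses purely hypothetical placeholder numbers; the whole
dispute is over what the no-QTI likelihood should be once observer
selection is taken into account:

```python
# A bare Bayes-rule sketch of the evidence question.  The prior and
# both likelihoods are hypothetical placeholders, not claims: the
# disputed quantity is P(find yourself at 1000 | no QTI) once
# observer-selection effects are included.

def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """P(H | observation) by Bayes' rule for a binary hypothesis."""
    num = prior * p_obs_given_h
    return num / (num + (1.0 - prior) * p_obs_given_not_h)

prior_qti = 0.5           # hypothetical prior on QTI
p_old_given_qti = 1.0     # QTI predicts you (first-personally) reach 1000
p_old_given_not = 1e-6    # hypothetical: a near-miraculous fluke otherwise

print(posterior(prior_qti, p_old_given_qti, p_old_given_not))
# Close to 1 under these placeholders -- but if selection effects push
# the second likelihood toward 1 as well, the update vanishes.
```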

Hal Finney



My model, comp, and the Second Law

2017-01-27 Thread hal Ruhl

Hi Everyone:

It's been a while since I posted.

I would like to start a thread to discuss the Second Law of Thermodynamics
and the possibility that its origins can be found in perhaps my model, or 
comp, or their combination.

The references I will start with are:

"Time's Arrow: The Origin of Thermodynamic Behavior",
1992 by Michael Mackey

"Microscopic Dynamics and the Second Law of Thermodynamics"
2001 by Michael Mackey.

my model as it appears in my posts of March and April of 2014.

My idea comes from the fact that almost all the real numbers fail to be 
computable and this
causes computational termination and/or computational precision issues.

This should make the operable phase space grainy.  This ambiguity causes 
entropy [system configuration uncertainty] to increase or stay the same 
at each evolutionary [trajectory] step.

The system should also not be reversible for the same reason. 
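As a toy illustration of the graininess claim (my own sketch, not a
derivation of the Second Law): rounding a simple chaotic map to finite
precision makes it many-to-one, so distinct states merge and the
dynamics cannot be run backwards:

```python
# Iterating the logistic map with values rounded to 3 decimal places.
# Rounding makes the map many-to-one: many distinct inputs land on the
# same grainy output cell, so the truncated dynamics is not reversible.
# This only illustrates finite-precision effects; it does not derive
# the Second Law from them.

def grainy_step(x, digits=3):
    """One logistic-map step, truncated to a grainy phase space."""
    return round(3.9 * x * (1.0 - x), digits)

states = [round(k / 10000, 4) for k in range(1, 10000)]  # 9999 distinct inputs
images = {grainy_step(x) for x in states}

print(len(states), len(images))
# Far fewer distinct outputs than inputs: information is lost each step.
```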

If correct, would [my Model,Comp] be observationally verified?

Hal



-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


RE: My model, comp, and the Second Law

2017-08-07 Thread Hal Ruhl
Hi everyone:

 

Unfortunately I have been very ill for the last 15 months or so.

 

I am working on this project again and hope to post soon.

 

Hal Ruhl 

 

From: everything-list@googlegroups.com 
[mailto:everything-list@googlegroups.com] On Behalf Of auxon
Sent: Thursday, February 9, 2017 3:08 PM
To: Everything List <everything-list@googlegroups.com>
Subject: Re: My model, comp, and the Second Law

 

I can't wait to dig into this.  



email function check

2017-11-23 Thread Hal Ruhl
Hello Everyone:

Just a check of my new email account so I can resume participation.

Hal 


