Boltzmann Brains, consciousness and the arrow of time

2008-12-31 Thread Hal Finney
 towards the first interpretation, for the following reason. If
consciousness really was able to somehow distinguish the forward from
reverse phases in a Boltzmann fluctuation, it would be quite remarkable.
Given that the fundamental laws of physics are time symmetric, nothing
should be able to do that, to deduce a true implicit arrow of time that
goes beyond the superficial arrow of time caused by entropy differences.
The whole point of time symmetry, the very definition, is that there
should be no such implicit arrow of time.  This suggestion would seem
to give consciousness a power that it should not have, allowing it
to do something that is impossible.

And if the first interpretation is correct, it seems to call into question
the very nature of causality, and its possible role in consciousness. If
we are forced to attribute consciousness to sequences of events which
occur purely by luck, then causality can't play a significant role. This
is the rather surprising conclusion which I reached from these musings
on Boltzmann Brains.

Hal Finney




Re: against UD+ASSA, part 1

2007-09-26 Thread Hal Finney

Wei Dai writes:
 I promised to summarize why I moved away from the philosophical position
 that Hal Finney calls UD+ASSA. Here's part 1, where I argue against ASSA.
 Part 2 will cover UD.

 Consider the following thought experiment. Suppose your brain has been
 destructively scanned and uploaded into a computer by a mad scientist. Thus
 you find yourself imprisoned in a computer simulation. The mad scientist
 tells you that you have no hope of escaping, but he will financially support
 your survivors (spouse and children) if you win a certain game, which works
 as follows. He will throw a fair 10-sided die with sides labeled 0 to 9. You
 are to guess whether the die landed with the 0 side up or not. But here's a
 twist, if it does land with 0 up, he'll immediately make 90 duplicate
 copies of you before you get a chance to answer, and the copies will all run
 in parallel. All of the simulations are identical and deterministic, so all
 91 copies (as well as the 9 copies in the other universes) must give the
 same answer.

This is an interesting experiment, but I have two comments. First,
you could tighten the dilemma by having the mad scientist flip a biased
coin with say a 70% chance of coming up heads, but then he duplicates
you if it comes up tails. Now the different styles of reasoning lead to
opposite actions, whereas in the original you might as well pick 0 in
any case.
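
To make the conflict concrete, here is a quick sketch (Python, purely
illustrative; the branch and copy counts are taken from the setup above)
that computes the two quantities side by side: the outcome probability
that the die shows 0, and the copy-weighted subjective probability of
finding yourself in the 0-branch.

    # Illustrative sketch of the two counting rules in the die game.
    # Assumption restated from the setup: one copy runs in each of the
    # nine non-zero branches, 91 copies run in the zero branch.

    def outcome_probability(zero_branches=1, other_branches=9):
        """Probability that the die actually lands 0."""
        return zero_branches / (zero_branches + other_branches)

    def subjective_probability(copies_in_zero=91, copies_elsewhere=9):
        """Copy-counting (ASSA-style) chance of being in the 0-branch."""
        return copies_in_zero / (copies_in_zero + copies_elsewhere)

    print(outcome_probability())     # 0.10
    print(subjective_probability())  # 0.91
    # Guessing "not 0" maximizes the chance your survivors get paid
    # (0.90), even though copy-counting says you are probably (0.91)
    # one of the 91 duplicates in the 0-branch.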

Second, why the proviso that the simulations are identical and
deterministic?  Doesn't the reasoning (and dilemma) go through just as
strongly if they are allowed to diverge? You will still be faced with a
conflict where one kind of reasoning says you have your 91% subjective
probability of it coming up a certain way, while logic would seem to
suggest you should pick the other one.

But, in the case where your instances diverge, isn't the subjective-
probability argument very convincing? In particular if we let you run
for a while after the duplication - minutes, hours or days - there might
be quite a bit of divergence.  If you have 91 different people in one
case versus 1 in the other, isn't it plausible - in fact, compelling -
to think that you are in the larger group?

And again, even so, wouldn't you still want to make your choice on the
basis of ignoring this subjective probability, and pick the one that
maximizes the chances for your survivors: as you say, the measure of
the outcomes that you care about?

If so, then this suggests that the thought experiment is flawed because
even in a situation where most people would agree that subjective
perception is strongly skewed, they would still make a choice ignoring
that fact. And therefore its conclusions would not necessarily apply
either when dealing with the simpler case of a deterministic and
synchronous duplication.

Hal Finney




Re: Conscious States vs. Conscious Computations

2007-09-26 Thread Hal Finney

Jason writes:
 A given piece of data can represent an infinite number of different
 things depending on the software that interprets it.  What may be an
 mp3 file to one program may look like snow to an image editor.

I'm doubtful that you could find a string of any significant length which
both sounds like sensible music and looks like a realistic picture. I'm
even more doubtful that the enormous length of the data that would
represent the brain activity associated with an observer-moment could
be meaningfully interpreted as anything else.

My guess is that sufficiently long, meaningful data strings have
their meaning implicitly within themselves, because there is no
reasonable-length program that can interpret them as anything else.
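
As a rough illustration of why compressed data "looks like snow" (my own
sketch, not part of the original exchange): neighboring pixels in a real
picture are strongly correlated, while the bytes of a compressed stream
are close to random. Here random bytes stand in for an mp3 payload, and
a smooth gradient stands in for a realistic picture; both choices are
simplifying assumptions.

    import numpy as np

    # How "image-like" is a byte string? Measure the correlation between
    # horizontally adjacent pixels: near 1.0 for real images, near 0.0
    # for compressed (near-random) data.

    rng = np.random.default_rng(0)
    fake_mp3 = rng.integers(0, 256, size=(256, 256)).astype(float)

    x = np.linspace(0, 255, 256)
    fake_photo = np.tile(x, (256, 1))

    def adjacent_correlation(img):
        """Correlation between each pixel and its right-hand neighbor."""
        return np.corrcoef(img[:, :-1].ravel(), img[:, 1:].ravel())[0, 1]

    print(adjacent_correlation(fake_mp3))    # ~ 0: snow
    print(adjacent_correlation(fake_photo))  # ~ 1: structured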

Hal Finney




New Scientist: Parallel universes make quantum sense

2007-09-24 Thread Hal Finney

New Scientist has an article on parallel universes:

 David Deutsch at the University of Oxford and colleagues have shown
 that key equations of quantum mechanics arise from the mathematics of
 parallel universes. "This work will go down as one of the most important
 developments in the history of science," says Andy Albrecht, a physicist
 at the University of California at Davis. In one parallel universe,
 at least, it will - whether it does in our one remains to be seen.

It is behind a paywall at
http://space.newscientist.com/article/mg19526223.700-parallel-universes-make-quantum-sense.html
but I found a copy on Google Groups:
http://groups.google.com/group/alt.kq.p/browse_thread/thread/9631b2e37ba5e7a2/fb3202c9c5b71228?lnk=stq=%22new+scientist%22+deutsch+albrechtrnum=1#fb3202c9c5b71228

It has a great quote from Tegmark: "The critique of many worlds is
shifting from 'it makes no sense and I hate it' to simply 'I hate it'."

The thrust of the article is about recent work to fix the two perceived
problems in the MWI: non-uniqueness of basis (the universe splits in all
different ways) and recovering the Born rule. The basis problem is now
considered (by supporters) to be resolved via improved understanding
of decoherence. This work (which was not particularly focused on the
MWI) generally seems to lead to a unique basis for measurement-like
interactions, hence there is no ambiguity in terms of which way the
universe splits.

As for the Born rule, the article points to the effort begun by Deutsch in
1999 to base things on decision theory. The idea is that we fundamentally
care about probability insofar as it influences the decisions and choices
we make, so if we can recover a sensible decision theory in the MWI, we
have basically explained probability. I've seen a number of critiques of
Deutsch's paper but according to this article, subsequent work by David
Wallace and Simon Saunders has extended it to the point where things
are pretty solid.

Hence the two traditional objections to the MWI are now at least arguably
dealt with, and given its advantage in terms of formal simplicity (fewer
axioms), supporters argue that it should be considered the leading
model for QM. This is where we get claims about it being among the most
important discoveries in the history of mankind, etc.

It's interesting to see the resistance of the physics community to
multiverse concepts. It all comes back to the tradition of experimental
verification I suppose, which is still pretty much impossible. Really
it is more a question of philosophy than of physics as we currently
understand these disciplines.

We see the same thing happening all over again in string theory. I
don't know if you guys are following this at all. String theory is
going through a crisis as it has turned out in the past few years that
it does not predict a single universe, rather a multiverse where there
is a landscape of possible sets of parameters, each of which would
correspond to a universe. The big problem is that there is no natural
or accepted measure (unlike with QM where everyone knew all along that
the measure had to be the Born rule and it was just a matter of how
many hoops you had to jump through to pull it out of your model).  As a
result it looks like it might be impossible to get even probabilistic
predictions out of the string theory landscape.

AFAIK no one within the community has followed our path and looked
at algorithmic complexity as a source of measure (i.e. the Universal
Distribution, which says that the simplest theories have higher measure).
Granted, even if that direction were pursued it would probably be
computationally intractable so they still would not be able to pull much
out in the way of predictions. Nevertheless physicists are skilled at the
use of approximation and assumptions to get plausible predictions out of
even rather opaque mathematics so it's possible they might get somewhere.

But at this point it looks like the resistance is too strong. Rather
than string theory making the multiverse respectable as we might hope,
it seems likely that the multiverse will kill string theory.

Hal Finney




Re: The physical world is real

2007-09-24 Thread Hal Finney

Youness Ayaita writes:
 It's a very trivial fact though that the two approaches are not
 equivalent. Nonetheless it's interesting to note it. I argue that we
 have good reasons to discard the second approach. The fundamental role
 will be assigned to the physical worlds (hence the title of this
 message). The difference between the two approaches leads to different
 expectations regarding the question "What will I experience next?".
 Consequently it can be measured empirically. We find this result by
 observing that different physical worlds may produce the same observer
 moment (e.g. if the physical worlds differ in a detail not perceivable
 by the observer). This assigns a higher probability to the observer
 moment when chosen randomly in order to answer the question (it's
 multiply counted because it appears more than once in the everything
 ensemble). Opposed to this, every observer moment (in the RSSA within
 a given reference class) would have an equal probability to be
 selected if we used the second approach.

I don't see why taking OMs as primary implies that they would all have
equal probability. If two physical worlds instantiate the same OM,
that may cause the OM to have higher measure. In the UDASSA model that
I prefer, OM measure is essentially the sum of the measures of all
programs that output that OM. If two universes instantiate it, both
contribute measure to it (as do Boltzmann brains, demons with boxes,
Matrixes and other simulators, etc.).

Hal Finney




Re: One solution to the Measure Problem: UTM outputs a qualia, not a universe

2007-09-20 Thread Hal Finney

[By the way, I notice that I do not receive my own postings back in email,
which makes my archive incomplete. Does anyone know if there is a way to
configure the mailing list reflector to give me back my own messages?]

Russell Standish wrote:
 On Wed, Sep 19, 2007 at 12:10:33PM -0700, Hal Finney wrote:
  The lifetime formulation also captures the intuition many people have
  that consciousness should not jump around as observer moments are
  created in the various simulations and scenarios we imagine in our
  thought experiments. That was the conclusion I reached in the posting
  referenced above, that teleportation might in some sense not work
  even though someone walks out of the machine thousands of miles away
  who remembers walking into it. The measure of such a lifetime would be
  substantially less than that of a similar person who never teleports.
  
  Hal Finney

 I note that you have identified yourself with the ASSA camp in the
 past (at least I say so in my book, so it must be true, right! :). What
 you are proposing above is an anti-functionalist position. The question is
 does functionalism necessarily imply RSSA, and antifunctionalism imply
 the ASSA? I.e., does this whole RSSA/ASSA debate turn on the question of
 functionalism?

The distinction I am drawing seems somewhat orthogonal to the RSSA/ASSA
debate. Suppose someone is about to die in a terrible accident. From
the 1st person perspective, RSSA would say that he expects to survive
through miraculous good luck. ASSA would say that he expects to die and
never experience anything again. Now suppose that in most universes an
advanced, benevolent human/AI civilization later recreates his mental
state and in effect resurrects him in a sort of heaven. Both ASSA and
RSSA might now say that his expectation prior to the accident should be to
wake up in this heaven, that that is his most likely next experience.

My argument suggests otherwise, that the chance of this being his next
experience would be rather low. However it basically leaves the RSSA/ASSA
distinction intact. We would go back to the situation where RSSA predicts
a miraculously lucky survival of the accident while ASSA predicts death.

But actually my analysis is supportive of the ASSA in this form, in that
the measure of a lifetime which ends in the accident is much higher than
the measure of one which survives.

As far as functionalism, I agree that this kind of analysis argues
against it.  Indeed the post from Wei Dai which introduced this concept,
which I quote here, http://www.udassa.com/origins.html (apologies for the
incompleteness of this web site), suggests that the size of a computer
would affect measure, contradicting functionalism.

Frankly I suspect that Bruno's analysis would or should lead to the same
kind of conclusion. I wonder if he supports strict functionalism? Would
he say yes doctor to any and all functional brain replacements? Or
would some additional investigation be appropriate?


 I wonder where this leaves Mallah, who admits to computationalism, yet
 is dyed-in-the-wool ASSA?

Indeed I have often wondered where in the world is Jacques Mallah,
who was so influential on this list in the past but who seems to have
vanished utterly from the net. Actually, I wrote that sentence based
on previous Google searches, but just now I discovered that as of
two weeks ago he has published his first communication in many years:
http://arxiv.org/abs/0709.0544 . Here is his abstract, which seems similar
in its goals to your own work:

: The Many Computations Interpretation (MCI) of Quantum Mechanics
: Authors: Jacques Mallah
: (Submitted on 4 Sep 2007)
: 
: Abstract: Computationalism provides a framework for understanding
: how a mathematically describable physical world could give rise to
: conscious observations without the need for dualism. A criterion
: is proposed for the implementation of computations by physical
: systems, which has been a problem for computationalism. Together
: with an independence criterion for implementations this would allow,
: in principle, prediction of probabilities for various observations
: based on counting implementations. Applied to quantum mechanics,
: this results in a Many Computations Interpretation (MCI), which is
: an explicit form of the Everett style Many Worlds Interpretation
: (MWI). Derivation of the Born Rule emerges as the central problem for
: most realist interpretations of quantum mechanics. If the Born Rule
: is derived based on computationalism and the wavefunction it would
: provide strong support for the MWI; but if the Born Rule is shown not
: to follow from these to an experimentally falsified extent, it would
: indicate the necessity for either new physics or (more radically)
: new philosophy of mind.

I am looking forward to reading this!

Hal


Re: One solution to the Measure Problem: UTM outputs a qualia, not a universe

2007-09-20 Thread Hal Finney

Stathis Papaioannou writes:
 On 20/09/2007, Hal Finney [EMAIL PROTECTED] wrote:

  The lifetime formulation also captures the intuition many people have
  that consciousness should not jump around as observer moments are
  created in the various simulations and scenarios we imagine in our
  thought experiments. That was the conclusion I reached in the posting
  referenced above, that teleportation might in some sense not work
  even though someone walks out of the machine thousands of miles away
  who remembers walking into it. The measure of such a lifetime would be
  substantially less than that of a similar person who never teleports.

 I have great conceptual difficulty with this idea. It seems to allow
 that I could have died five minutes ago even though I still feel that
 I am alive now. (This is OK with me because I think the best way to
 look at ordinary life is as a series of transiently existing OM's
 which create an illusion of a self persisting through time, but I
 don't think this is what you were referring to.)

You will probably agree that there are some branches of the multiverse
where you did indeed die five minutes ago, and perhaps people are
standing around staring in shock at your dead body. And supposing that
you had just had a narrow escape from a perilous situation, you might
even consider that those branches where you died are of greater measure
than those where you survived. That's basically all my analysis says,
as far as normal life. The main novelty is what it has to say about
exotic thought experiments like teleportation and resurrection.

Hal Finney




Re: One solution to the Measure Problem: UTM outputs a qualia, not a universe

2007-09-19 Thread Hal Finney

[I want to first note for the benefit of readers that I am Hal Finney
and no relation to Hal Ruhl - it can be confusing having two Hal's on
the list!]

Rolf Nelson writes:
 UDASSA (if I'm interpreting it right, Hal?) says:

 1. The measure of programs that produce the OM "I am experiencing A, and
 I remember my previous experience as B" as their single output,
 compared to the measure of programs that produce the OM "I am not
 experiencing A, and I remember my previous experience as B" as their
 single output, is what we perceive as the likelihood of A following
 B, rather than A not following B.

I think you mean, the likelihood of A following B rather than not-A
following B. That's probably reasonable, although I suggested a somewhat
different approach in this (as usual) somewhat overly long posting:

http://www.nabble.com/Teleportation-thought-experiment-and-UD%2BASSA-tf3057020.html#a8498222

Imagine that we could write down a description of a person's mental
states for his whole lifetime, from birth to death. Every possible such
sequence would be a possible lifetime and would exist in the universe
of all information patterns. Some would have higher measure than others.
As usual, it is plausible that the highest-measure such lifetimes would
be those which exist as parts of universes that have reasonably simple
descriptions.

Then we can get at your question of what is the likelihood of A following
B by asking, what is the measure of all lifetimes which experience event
B followed by event A, compared to the measure of all lifetimes which
experience event B not followed by event A.

The difference from what you expressed would be, for example, if some
future civilization creates simulated OMs which remember B followed by
A, while in the real world B did not get followed by A. Your OM based
formulation might have those future OMs add quite a bit of measure to
B-then-A, while the lifetime based formulation would consider those
as less important, because of the discontinuity between the original
lifetime and the future simulation of B-then-A.

The lifetime formulation also captures the intuition many people have
that consciousness should not jump around as observer moments are
created in the various simulations and scenarios we imagine in our
thought experiments. That was the conclusion I reached in the posting
referenced above, that teleportation might in some sense not work
even though someone walks out of the machine thousands of miles away
who remembers walking into it. The measure of such a lifetime would be
substantially less than that of a similar person who never teleports.

Hal Finney




Re: One solution to the Measure Problem: UTM outputs a qualia, not a universe

2007-09-16 Thread Hal Finney

Rolf writes:
 World-Index-Compression Postulate: The most probable way for the
 output of a random UTM program to be a single qualia, is through
 having a part of the program calculate a Universe, U, that is similar
 to the universe we currently are observing; and then having another
 part of the program search through the universe and pick out a
 substring by using a search algorithm SA(U) that tries to find a
 random sentient being in U and emit his qualia as the final output.

Yes, as you note later this is very similar to the concept I called
UD+ASSA or just UDASSA and described in a series of postings to this
list back in 2005. It was not original with me but actually was based
on an idea of Wei Dai, who founded this list way back in 1998. I was
working at one point on the udassa.com site to bring the ideas together
but never finished it. I'm surprised that guy found it; I don't recall
mentioning that URL. Must have let it slip sometime!

You might enjoy this old post where I tried to work out in some plausible
detail the size of a program to output a mental state, or as you say a
quale, and came up with an answer in the 10s of kilobits, not far from
your estimate.

http://www.nabble.com/UDist-and-measure-of-observers-tf3056759.html

Hal




Re: How would a computer know if it were conscious?

2007-06-03 Thread Hal Finney

Part of what I wanted to get at in my thought experiment is the
bafflement and confusion an AI should feel when exposed to human ideas
about consciousness.  Various people here have proffered their own
ideas, and we might assume that the AI would read these suggestions,
along with many other ideas that contradict the ones offered here.
It seems hard to escape the conclusion that the only logical response
is for the AI to figuratively throw up its hands and say that it is
impossible to know if it is conscious, because even humans cannot agree
on what consciousness is.

In particular I don't think an AI could be expected to claim that it
knows that it is conscious, that consciousness is a deep and intrinsic
part of itself, that whatever else it might be mistaken about it could
not be mistaken about being conscious.  I don't see any logical way it
could reach this conclusion by studying the corpus of writings on the
topic.  If anyone disagrees, I'd like to hear how it could happen.

And the corollary to this is that perhaps humans also cannot legitimately
make such claims, since logically their position is not so different
from that of the AI.  In that case the seemingly axiomatic question of
whether we are conscious may after all be something that we could be
mistaken about.

Hal




How would a computer know if it were conscious?

2007-06-02 Thread Hal Finney

Various projects exist today aiming at building a true Artificial
Intelligence.  Sometimes these researchers use the term AGI, Artificial
General Intelligence, to distinguish their projects from mainstream AI
which tends to focus on specific tasks.  A conference on such projects
will be held next year, agi-08.org.

Suppose one of these projects achieves one of the milestone goals of
such efforts; their AI becomes able to educate itself by reading books
and reference material, rather than having to have facts put in by
the developers.  Perhaps it requires some help with this, and various
questions and ambiguities need to be answered by humans, but still this is
a huge advancement as the AI can now in principle learn almost any field.

Keep in mind that this AI is far from passing the Turing test; it is able
to absorb and digest material and then answer questions or perhaps even
engage in a dialog about it.  But its complexity is, we will suppose,
substantially less than the human brain.

Now at some point the AI reads about the philosophy of mind, and the
question is put to it: are you conscious?

How might an AI program go about answering a question like this?
What kind of reasoning would be applicable?  In principle, how would
you expect a well-designed AI to decide if it is conscious?  And then,
how or why is the reasoning different if a human rather than an AI is
answering them?

Clearly the AI has to start with the definition.  It needs to know what
consciousness is, what the word means, in order to decide if it applies.
Unfortunately such definitions usually amount to either a list of
synonyms for consciousness, or use the common human biological heritage
as a reference.  From the Wikipedia: Consciousness is a quality of the
mind generally regarded to comprise qualities such as subjectivity,
self-awareness, sentience, sapience, and the ability to perceive the
relationship between oneself and one's environment.  Here we have four
synonyms and one relational description which would arguably apply to
any computer system that has environmental sensors, unless perceive
is also merely another synonym for conscious perception.

It looks to me like AIs, even ones much more sophisticated than I am
describing here, are going to have a hard time deciding whether they
are conscious in the human sense.  Since humans seem essentially unable
to describe consciousness in any reasonable operational terms, there
doesn't seem any acceptable way for an AI to decide whether the word
applies to itself.

And given this failure, it calls into question the ease with which
humans assert that they are conscious.  How do we really know that
we are conscious?  For example, how do we know that what we call
consciousness is what everyone else calls consciousness?  I am worried
that many people believe they are conscious simply because as children,
they were told they were conscious.  They were told that consciousness
is the difference between being awake and being asleep, and assume on
that basis that when they are awake they are conscious.  Then all those
other synonyms are treated the same way.

Yet most humans would not admit to any doubt that they are conscious.
For such a slippery and seemingly undefinable concept, it seems odd
that people are so sure of it.  Why, then, can't an AI achieve a similar
degree of certainty?  Do you think a properly programmed AI would ever
say, "Yes, I am conscious, because I have subjectivity, self-awareness,
sentience, sapience, etc., and I know this because it is just inherent in
my artificial brain"?  Presumably we could program the AI to say this,
and to believe it (in whatever sense that word applies), but is it
something an AI could logically conclude?

Hal




Re: Boltzmann brains

2007-06-01 Thread Hal Finney

Stathis Papaioannou [EMAIL PROTECTED] writes:
 On 01/06/07, Hal Finney [EMAIL PROTECTED] wrote:
  The reference to Susskind is a paper we discussed here back
  in Aug 2002, Disturbing Implications of a Cosmological Constant,
  http://arxiv.org/abs/hep-th/0208013 .  The authors argued that in current
  cosmological models the universe dies a heat death and falls into a steady
  state of exponential expansion which goes on forever.  In that state,
  quantum gravity fluctuations will eventually cause macroscopic objects
  to appear.  This is extremely rare but still with infinite time to work
  with, every object will appear an infinite number of times.  That includes
  disembodied brains, the so-called Boltzmann brains, as well as planets and
  whole universes.  But the smaller objects are vastly more common, hence it
  is most likely that our experiences are due to us being a Boltzmann brain.

 It isn't generally the case that given a non-zero probability of an event E
 occurring per trial (or per unit time period), then as the number of trials
 n approaches infinity the probability of E occurring approaches 1. For
 example, if Pr(E) = 1/2^n, then even though Pr(E) is always non-zero, the
 probability of ~E as n approaches infinity is given by the infinite
 product of (1 - 1/2^n), which converges to approximately 0.288788, not
 zero. So if the exponential
 expansion is associated with a continuous decrease in the probability that
 an event of interest will occur during a unit time period, that event may
 still never occur given infinite time, even though at no point can the event
 be said to be impossible.

Right, but apparently the physics doesn't work this way.  The papers
just seem to take the size of the necessary object in Planck units and
say the probability of it popping into existence is 1/e^size.  This is
constant and therefore it will happen an infinite number of times.
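
Stathis' figure is easy to check numerically; here is a quick sketch
(mine) that multiplies out the first couple of hundred factors of the
product, which converge to 1 fast enough that the truncation error is
negligible.

    # Numerical check of prod_{n>=1} (1 - 1/2^n), the probability that
    # E never occurs when Pr(E on trial n) = 1/2^n.

    p_never = 1.0
    for n in range(1, 200):
        p_never *= 1.0 - 0.5 ** n

    print(round(p_never, 6))  # 0.288788, as quoted above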


  This has a few bad implications; one is that our perceptions should end
  and not continue (but they do continue) and another is that brains would
  be just as likely to (falsely) remember chaotic universes as lawful ones
  (but we only remember lawful ones).  So this model is not considered
  consistent with our experiences.

 Another possibility is that Boltzmann Brains arising out of chaos are the
 observer moments which associate to produce the first person appearance of
 continuity of consciousness and an orderly universe. Binding together
 observer moments thus generated is no more difficult than binding together
 observer moments generated in other multiverse theories.

So how would this explain why we see an orderly universe?  I think we
would have to say that Boltzmann brains that remember an orderly universe
are substantially smaller (take up fewer Planck units) than those that
remember chaotic ones.

I considered this possibility but I couldn't come up with a good
justification.  Now, keep in mind that the Boltzmann brain does not have
to literally be a brain, with lobes and neurotransmitters and blood;
it could be any equivalent computational system.  Chances are that true
Boltzmann brains would be small solid-state computers that happen to
hold programs that are conscious.  Shrinking the brain even a little
increases its probability of existence tremendously.

(I am assuming that probability makes sense even though we are speaking of
events that happen a countably infinite number of times; both Boltzmann
brains and whole universes like ours will appear infinitely often in
the de Sitter state, but the smaller systems will be far more frequent.
I assume that this means that we would be more likely to experience
being the small systems than the big ones, even though both happen an
infinite number of times.)

So to explain the lawfulness we would have to argue that Boltzmann brains
that remember lawful universes can be designed to be smaller than those
that remember chaotic universes, as well as slightly lawless flying-rabbit
universes.  It's not completely implausible that the greater simplicity
of a lawful universe would allow the memory store of the Boltzmann
brain to be made smaller, as it would allow clever coding techniques to
compress the data.  However one would think that memories of universes
even simpler than our own would then be that much more likely, as would
memories of shorter lifetimes and other possibilities to simplify and
shrink the device.  This explanation doesn't really seem to work.

Hal




Re: Boltzmann brains

2007-05-31 Thread Hal Finney
 since the Big
Bang would be vastly greater than for brains like ours existing in the
relative youth of the universe.  A measure concept related to information
might therefore reduce the measure of such brains to insignificance.

Hal Finney




Re: computationalism and supervenience

2006-09-06 Thread Hal Finney

Russell Standish writes:
 Or my point that in a Multiverse, counterfactuals are instantiated
 anyway. Physical supervenience and computationalism are not
 incompatible in a multiverse, where physical means the observed
 properties of things like electrons and so on.

I'd think that in the context of a multiverse, physical supervenience
would say that whether consciousness is instantiated would depend only
on physical conditions here, at this point in the multiverse, and would
not depend on conditions elsewhere.  It would be a sort of locality
condition for the multiverse.  In that case it seems you still have
a problem because even if counterfactuals are tested elsewhere in the
multiverse, whether they are handled correctly will not be visible
locally.

So you'd still have a contradiction, with supervenience saying
that consciousness depends only on local physical conditions, while
computationalism would say that consciousness depends on the results of
counterfactual tests done in other branches or worlds of the multiverse.

Hal Finney




Re: computationalism and supervenience

2006-09-05 Thread Hal Finney

I am sorry that I have not been able to keep up with the list lately.
I can only peek in occasionally.

My interpretation of the question of computationalism vs supervenience
can be put succinctly.  Computationalism says that consciousness depends
both on actual behavior and on counterfactuals.  Therefore, it depends
both on what happens and on what doesn't happen.  Supervenience says
that consciousness depends only on physical behavior; hence it depends
only on what happens.

Since computationalism says that consciousness depends on what doesn't
happen, while supervenience says that it depends only on what happens,
the two doctrines are inconsistent.

To marry them would require altering computationalism so that it no
longer depended on counterfactuals, which then requires us to say that
all systems implement all calculations.


As far as this latter question, the framework I adopted and adapted from
Wei Dai, which I call UD+ASSA, suggests that there is a sense in which
it is true, but it is not significant or meaningful.

The UDASSA framework seeks to calculate the measure of a conscious
experience.  We start by thinking of a conscious experience as something
that can be described as an abstract information pattern.  Any physical
system which instantiates that information pattern can be said to
contribute to the measure of that conscious experience.

The Universal Distribution defines the measure of an information pattern
as the fraction of all programs that output that pattern.  Equivalently,
the measure is the sum of 1/2^L_n, where L_n is the length in bits of
the nth program that outputs the pattern.  Short programs have higher
measure than long ones; hence to a good approximation the measure depends
on the length of the shortest program that outputs it.
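
In code, the definition reads as follows (a minimal sketch; the program
lengths are invented inputs, since actually enumerating the programs
that output a given pattern is uncomputable in general).

    # Universal Distribution measure of an information pattern: the sum
    # of 1/2^L_n over the bit-lengths L_n of every program that outputs
    # it. The lengths below are made up purely for illustration.

    def measure(program_bit_lengths):
        return sum(2.0 ** -length for length in program_bit_lengths)

    simple_pattern = measure([20, 35, 40])       # shortest program: 20 bits
    complex_pattern = measure([30, 31, 32, 33])  # shortest program: 30 bits
    print(simple_pattern / complex_pattern)      # ~ 550: the shortest
                                                 # program dominates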

If we consider all programs, some of them instantiate or simulate
physical universes.  They have their own laws of physics and initial
conditions.  Some are complex, some simple.  In those universes we
may find physical systems which we would naively view as instantiating
particular computations, even conscious computations.  We would like to
say that such universes add to the measure of those computations.

In the UDASSA framework, this is handled by imagining a two part program.
The first part creates and runs the universe; the second part scans the
output of the first part and outputs the abstract information pattern
that represents the conscious experience.  The size of this two part
program is the sum of the size of its parts.  Only if both parts are
small will the contribution to the measure of the experience be large.

As has been discussed here, in principle you can find a mapping from
any physical system to any computation.  This threatens to lead to the
conclusion that every consciousness is instantiated by every physical
system.  Traditionally, computationalism opposed that conclusion by
insisting on support for counterfactuals.  But the UDASSA framework
handles it in a different way.

In the UDASSA framework, a mapping from, say, a solid rock to an abstract
information pattern representing a moment of human consciousness would
be a very large program.  In truth, this mapping program wouldn't even
need to use the rock.  It could output the human information pattern just
as easily without the help of information from the rock.  The mapping
program will need to include all of the information and programming
needed to generate the human consciousness information pattern essentially
from scratch.  That's going to be a very large program.

In contrast, a mapping program that goes from a human brain to an abstract
information pattern representing a moment of human consciousness can
be quite compact.  It would use the physical information from the human
brain state and translate that to whatever form was used to abstractly
specify the computational state.  While this might be modestly complex,
it would be far, far simpler than the nearly-astronomical complexity
needed to output a human brain experience from a rock.

The result is that physical systems which have plausible naive
interpretations as implementing certain computations will contribute
significantly to the measure of such computations; while physical systems
where we would need a contrived and complex mapping will contribute
negligibly to such measure.

This provides a reason, within this framework, to neglect the possible
existence of conscious entities created by non-conscious computations.
Any mapping which could specify such an entity will be enormous and will
not contribute meaningfully to the measure of such entities.

Hal Finney


Re: Bruno's argument

2006-08-02 Thread Hal Finney

A useful model of computation is the Turing Machine.  A TM has a tape
with symbols on it; a head which moves along the tape and which can read
and write symbols, and a state machine with a fixed number of states
that controls head movement and symbol writing based on the current
state and the symbol at the head's current location.  It has been shown
that this relatively simplistic model is able to do anything that more
sophisticated computer models can do.

We can consider the state of a TM to be made up of the conjunction of
three things: the current state of the tape (i.e. the string of symbols
written there); the position of the head; and the state of the internal
state machine.  Maybe it would be best to call this the "superstate"
because normally the state of the TM just refers to its internal state
machine state.  The TM can then be said to advance from superstate to
superstate according to its internal rules and the contents of the tape.

If a TM ever gets into the same superstate twice, it is in an infinite
loop.  This is because the TM is fully deterministic and so it will
always go into the same successor superstate from a given superstate.
Halting TM's never get into the same superstate twice.  Therefore halting
TM's go through a unique succession of superstates, from the first to
the last.

We can map or label a TM's superstates with successive integers,
corresponding to the order that it goes through the superstates of a
computation.  In this mapping, the only difference between two different
computations is their length.  If two computations had the same length N,
they would both go through states labeled 0, 1, 2, ..., N.
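
The superstate idea translates directly into code. Below is a minimal
sketch (a toy machine of my own, not from the post) that runs a tiny
TM, records each (tape, head position, internal state) superstate, and
reports a loop if one ever repeats.

    # Toy Turing machine. A "superstate" is (tape, head, state); a
    # deterministic TM that revisits a superstate loops forever, while
    # a halting run passes through each superstate exactly once.

    def run_tm(rules, tape, state="A", max_steps=10_000):
        head, seen = 0, set()
        for step in range(max_steps):
            if state == "HALT":
                return "halted", step
            superstate = ("".join(tape), head, state)
            if superstate in seen:
                return "loops forever", step
            seen.add(superstate)
            if head >= len(tape):
                tape.append("_")
            write, move, state = rules[(state, tape[head])]
            tape[head] = write
            head = max(0, head + move)
        return "gave up", max_steps

    # Example: walk right over 1s and halt at the first blank.
    rules = {("A", "1"): ("1", +1, "A"),
             ("A", "_"): ("_", 0, "HALT")}
    print(run_tm(rules, list("111_")))  # ('halted', 4)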

What is a computation?  A TM computation has two parts.  One is the
initial conditions: the initial value on the tape, the initial head
position.  The other is the set of rules used, the internal state machine
that controls the machine.  Together these two parts define a trajectory
of the TM through a sequence of superstates.

We often think of the internal state machine as being like the program
and the initial contents of tape as being the data.  However, as
Turing was the first to recognize, this distinction is not always useful,
and sometimes it makes more sense to think of at least part of the tape
contents as being program rather than data.  In particular, the Universal
TM treats part of the tape as a specification for a specific other TM
that it will emulate, and the remainder of the tape is then the input
to that TM.

Generally, when we think of counterfactuals in a TM computation we mean
to change the data, not the program.  We don't mean to ask, what would
happen if you ran a different program on the same data.  Rather, we
mean, what would happen if you ran the same program on different data.
We want to say that two computations are equivalent only if they have
the same counterfactual behavior - that is, if the programs would behave
the same on all data.

One problem with this is noted above, that we cannot always cleanly
distinguish program and data.  In the case of the UTM, is the prefix part
of the tape, that defines the particular TM to emulate, program or data?
If it is program, we would not try to vary it in considering whether
two computations are equivalent.  If it is data, we should consider
such variations.  In general, I don't think we can always distinguish
these cases cleanly.  UTMs can be nested to any desired degree.  What is
program to one is data to another.  More complex UTM computations may be
aided by certain patterns on the tape which will disrupt the computation
if they are changed.

Another problem is that a more complex mapping may be able to be set up
between two different computations even if we consider counterfactuals
as all different initial tape configurations.  We could make the mapping
be a function of the superstate as defined above.  Two computations with
different initial tapes will start in different superstates, hence the
mapping is still unique.  And it will be robust over all possible inputs
and hence all possible counterfactual computations.

On these considerations, it seems to me that there are problems
with basing the distinction between computations on support for
counterfactuals.  TMs make the very notion of counterfactuals rather
fuzzy, and still admit the possibility of mappings between computations
that remain robust even in the face of counterfactuals.

My preferred view is to focus on the algorithmic complexity of the
mapping between two computations, and to ask whether the information
needed to specify the mapping is less than the information needed to
write down the computation from scratch.  If not, if the mapping is
substantially bigger than the computation it purports to describe,
then the correspondence is an illusion and is not real.
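
As a crude, concrete proxy for this criterion (my sketch; compressed
length stands in for algorithmic complexity, which is uncomputable),
one can compare the size of a claimed mapping against the size of the
computation it purports to describe.

    import zlib

    # A correspondence is suspect when describing the mapping takes at
    # least as many bits as describing the target computation outright.

    def proxy_complexity(data: bytes) -> int:
        return len(zlib.compress(data, level=9))

    computation = b"trace:" + bytes(range(200))       # the target
    honest_map = b"shift each state index by one"     # short description
    contrived_map = bytes(reversed(computation)) * 2  # bigger than target

    print(proxy_complexity(computation))    # baseline
    print(proxy_complexity(honest_map))     # much smaller: plausible
    print(proxy_complexity(contrived_map))  # comparable or larger: illusory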

Hal Finney


Re: Interested in thoughts on this excerpt from Martin Rees

2006-07-28 Thread Hal Finney
 we
discussed back in 2002:

 Dyson, L., Kleban, M. & Susskind, L. Disturbing implications of a
 cosmological constant. Preprint http://xxx.lanl.gov/abs/hep-th/0208013
 (2002).

They used a slightly different physics model but came up with the same
idea, that most OMs should be in the distant future, contradicting what
we call the ASSA, which I think they just considered an implication of
the anthropic principle.

You might be right that these papers could be read as an argument
against a single-universe model, if in fact we could come up with a
good justification within a multiverse model for decreasing OM measure
in the future.  We'd probably have to have a pretty strong argument
in that regard, though.

Hal Finney




Re: Interested in thoughts on this excerpt from Martin Rees

2006-07-27 Thread Hal Finney

Saibal Mitra writes:
 From: Hal Finney [EMAIL PROTECTED]
  The real problem is not just that it is a philosophical speculation,
  it is that it does not lead to any testable physical predictions.
  The string theory landscape, even if finite, is far too large for
  systematic exploration.  Our ideas, with an infinite number of possible
  universes, are even worse.

 I'm not so sure that our ideas are worse.

I should clarify, I meant that our ideas are even worse in terms of
systematic exploration of all the possibilities, because we generally
consider an infinity of possible universes, while the string theory
landscape predicts (some people say) about 10^500 possible universes.

 If you read some recent articles,
 e.g.:

 http://arxiv.org/abs/astro-ph/0607227

 you see that they haven't really formulated rigorous theories about measure,
 probabilities etc. of the multiverse. It's still very much in the
 handwaving stage.

This is actually a very interesting paper, by Starkman and Trotta.  I had
seen some mention of it but hadn't tracked it down.  Here is the abstract:

We revisit anthropic arguments purporting to explain the measured value
of the cosmological constant. We argue that different ways of assigning
probabilities to candidate universes lead to totally different anthropic
predictions. As an explicit example, we show that weighting different
universes by the total number of possible observations leads to an
extremely small probability for observing a value of Lambda equal to or
greater than what we now measure. We conclude that anthropic reasoning
within the framework of probability as frequency is ill-defined and
that it cannot be used to explain the value of Lambda, nor, likely,
any other physical parameters.

The paper is pretty technical but I thought I could understand the gist
of it.  The cosmological constant (Lambda) is a repulsive force which
drives galaxies apart in the Big Bang model.  Until a few years ago it was
thought to be entirely theoretical, but since then observations indicate
that it is real, and that the universal expansion is accelerating.
The question then becomes what would happen in universes with different
values of the CC.

The paper basically shows that observers (or civilizations) can last
longer in universes with smaller CC's.  The CC eventually puts an end
to the observations that can be made, because the expansion gets too
fast and there is no longer enough energy density.  The higher the CC,
the sooner this happens.  With CC's as high as what we observe, the
theoretical lifetime of civilization is much shorter than in universes
with smaller CC's.

The authors choose to use as their measure, the number of times the
CC can be measured in a given universe.  This makes low-CC universes
have a much higher measure, because the window for CC observations is
longer in those.  Hence they conclude that the highest probability is
for a CC much smaller than we observe, and so our own CC value cannot
be explained anthropically.

This is in contrast to earlier results which used different measures, such
as the number of galaxies, and found that our CC results were consistent
with anthropic considerations.  The authors argue that their measure is
at least as philosophically justifiable as those earlier papers, and their
real point is that no measure can be justified as better than another,
hence all anthropic reasoning is just hand-waving.

In our terms we might put it like this.  The new paper essentially uses a
measure which is the number of possible observer-moments in the universe.
Universes with a high CC go through a big rip process eventually,
accelerating to a super-expansion mode and presumably putting an end
to observers.  Universes with a low or zero CC go through this much
later or not at all, allowing for more observer-moments.  Hence this
measure gives a bonus to universes that last a long time.

Earlier papers apparently looked at a snapshot of time similar to the
present day, and in effect based the measure on the number of observers
(assumed to be proportional to the number of galaxies).  So we have a
distinction between an observer-moment measure and an observer measure.
The two apparently give very different results, the OM measure preferring
long-lasting universes while the observer measure is more interested in
the size of the universe.

I guess I'll stop here and see if there is more interest.  To leave with
a few questions: Is there any fundamental way to decide which measure
is best?  Do the OM measure and the observer measure really give
different results, and is that significant?  Are there other measures
that might be used, and what results would they get?  And finally, will
this apparent failure of anthropic reasoning discredit the concept among
working physicists?  As I mentioned, I've already seen it used in a blog
comment on Woit's blog that I pointed to the other day, in just that way.

Hal Finney

Re: Interested in thoughts on this excerpt from Martin Rees

2006-07-26 Thread Hal Finney
 misleading
perspective.  While he personally may be happy with anthropic ideas,
most physicists are not.  Where I live in Santa Barbara, Nobelist David
Gross, head of the KITP at UCSB, is famous for his active hostility to
the concept.  So opposed are physicists to adopting all-universe models
that they are ready to abandon twenty years of work and strike off in
a new direction rather than face the immensity of the anthropic universe.

Now, I'm sure that some physicists will continue to work on these ideas,
just as a minority has continued to work on rivals to string theory
all these years.  The bottom line is that unless some way is found to
make specific, testable predictions (and not the kind of hand-waving we
sometimes get away with around here, explaining why bunnies can't fly),
the anthropic universe is not physics.  It is philosophy, and physicists
want nothing to do with it.

Hal Finney




RE: Bruno's argument

2006-07-21 Thread Hal Finney

Apologies for being out of touch with the list, I can only dip a toe
in occasionally these days.

Stathis wrote:

 It seems to me trivially obvious that any sufficiently complex physical
 system implements any finite computation, just as any sufficiently
 large block of marble contains every marble statue of a given size.
 The difference between random noise (or a block of marble) on the one
 hand and a well-behaved computer (or the product of a sculptor's work)
 on the other is that the information is in the latter case presented
 in a way that can interact with the world containing the substrate of
 its implementation. But I think that this idea leads to almost the same
 conclusion that you reach: it really seems that if any computation can
 be mapped to any physical substrate, then that substrate is superfluous
 except in that tiny subset of cases involving well-behaved computers that
 can handle counterfactuals and thus interact with their environment,
 and we may as well say that every computation exists by virtue of its
 status as a platonic object. I say "almost" because I can't quite see
 how to prove it, even though I suspect that it is so.

If we take the next step, though, the real question changes somewhat.
Let's imagine that what we see as physical objects are actually the
result of some kind of computational process.  We are living in a virtual
universe.  Even if you don't believe it, try following the logic for
a minute.

In that case, this physical object, this block of marble, is actually
a computational process.  But if we continue to believe that the
complex physical system implements every computation, then what we
are really saying is that a complex computational system implements
every computation - because the complex physical system is actually
(or at least, hypothetically could be) a manifestation of a computation.

So we should really reword Stathis' claim.  Instead of "any sufficiently
complex physical system implements any finite computation," it must be,
"any sufficiently complex computation implements any finite computation."
And IMO that is a rather more interesting claim, and perhaps more amenable
to analysis since it stays within the realm of mathematics and logic,
rather than crossing boundaries between the physical and the ideal.


Indeed, it is not inherently surprising or implausible that a computation
can be said to implement more than one computation.  If we think of
a computation as a sequence of logical steps A, B, C, ... Z, then it
automatically can be said to also implement every subsequence of those
steps: for example, J, K, L is implemented by that sequence, as is
O, P, Q, R, S, T.

It might also be that computation A could be embedded in computation B in
a more subtle way.  We could, for example, interleave the computations
of A and B, doing A's operations on the odd-numbered steps and doing
B's operations on the even-numbered ones.
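
Here is a rough Python sketch of that kind of interleaving (my own
illustration, with the two computations modeled simply as sequences of
steps, which is of course a big simplification):

    import itertools

    def interleave(a_steps, b_steps):
        # A's operations on the odd-numbered steps, B's on the
        # even-numbered ones; either computation may run out first.
        for a, b in itertools.zip_longest(a_steps, b_steps):
            if a is not None:
                yield ("A", a)
            if b is not None:
                yield ("B", b)

    # list(interleave("123", "xyz")) gives
    # [('A','1'), ('B','x'), ('A','2'), ('B','y'), ('A','3'), ('B','z')]
    # i.e. a single trace that embeds both computations.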

With Stathis' sufficiently complex computations, we could imagine that
the computation is so long and so messy and confusing that, with enough
work, we could indeed hope to find virtually any smaller computation
embedded within this complexity.

On the other hand it clearly won't do to say that every computation
implements every other.  The identity computation F(x) = x does not
implement a master-level chess playing program.  So there must be a
threshold of complexity before we could start making this kind of claim.

This raises the question of whether there is an objective fact about
whether computation A implements computation B.  And should it count if
A merely comes close to implementing B?

There is a paradox due to Putnam which argues that even a counting program
(loop: x = x + 1; goto loop;) will implement every program.  Going back
to my first example of some program that goes through 26 steps or states
A through Z, we can identify the initial state of the counting program
x=1 with state A; we identify x=2 with state B, and so on up through
x=26 which is state Z.  So our counting program can be said, in a sense,
to implement our A-Z program, if we interpret it the right way.
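
To make the triviality of the mapping concrete, here is the whole trick
in a few lines of Python (the interpretation table is of course
arbitrary, which is exactly the point):

    STATES = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

    def interpret(x):
        # The "interpretation" maps counter values onto program states:
        # x=1 -> A, x=2 -> B, ..., x=26 -> Z.
        return STATES[x - 1]

    x = 1
    trace = []
    while x <= 26:          # loop: x = x + 1; goto loop
        trace.append(interpret(x))
        x = x + 1
    # trace is now ['A', 'B', ..., 'Z']; under this mapping the bare
    # counter "implements" the 26-step program, with all the real work
    # hidden in the interpretation.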

Then there are various responses to this, and counter-arguments as well,
which I won't get into.  IMO there is a gray area where it is hard to
say whether A truly implements B or if the correspondence is in the mind
of the beholder.


The relevance to our issues is this: when we start talking about measures
over computer programs and computations, and relating them to first-person
experiences, it is necessary to consider whether it is meaningful to
say what a given computation is doing.  If every sufficiently complex
computation implements every other, then that contradicts any reasoning
based on the differences between different computations.  So I think it
is an important issue to get right and to be clear about.

Hal Finney


RE: A calculus of personal identity

2006-06-29 Thread Hal Finney

Stathis Papaioannou writes:
 Hal Finney writes:
  
  What I argued was that it would be easier to find the trace of a person's
  thoughts in a universe where he had a physically continuous record than
  where there were discontinuities (easier in the sense that a smaller
  program would suffice).  In my framework, this means that the universe
  would contribute more measure to people who had continuous lives than
  people who teleported.  Someone whose life ended at the moment of
  teleportation would have a higher measure than someone who survived
  the event.  Therefore, I would view teleportation as reducing measure
  similarly to doing something that had a risk of dying.  I would try to
  avoid it, unless there were compensating benefits (as indeed might be
  the case, just as people willingly accept the risk of dying by driving
  to work, because of the compensating benefits).
  
  You can say that by definition the person survives, but then, you
  can say anything by definition.  I guess the question is, what is the
  reasoning behind the definition.

 OK, this is the old ASSA versus RSSA distinction. But leaving this
 argument aside, I don't see how teleportation could be analogous to a
 risky, measure reducing activity if it seemed to be a reliable process
 from a third person perspective. If someone plays Russian Roulette, we
 both agree that from a third person perspective, we are likely to observe
 a dead body eventually. But with teleportation (destructive, to one place)
 there is a 1:1 ratio between pre-experiment subjects and post-experiment
 subjects from a third person perspective. Are you suggesting that the
 predicted drop in measure will have no third person observable effect?

First, I tend to think that the phrase "third person observable" is
something of a contradiction.  Observation is a first-person activity.
I would prefer to think of third person effects as simply the physical
record of events.

In this case, there will definitely be a third person effect.  Having
someone teleport is a third-person difference from not having them
teleport.  We are talking about two cases here that are third-person
distinguishable, with very different physical histories, hence it is
plausible that there be different subjective first-person effects.

As far as the comparison with Russian Roulette, if someone only plays it
once, there might not be a third person difference.  Yet I would argue
his measure was reduced (in the multiverse).

Really, when we are talking about third person records, all we have
is the actual sequence of events that occurs.  Suppose someone plays
Russian Roulette multiple times.  In this universe, perhaps we see them
pull the trigger five times and survive, and on the sixth pull they die.
That subjective history, of playing RR six times, is instantiated in
this universe.  This universe contributes measure to that history.

Other universes, not observable to us, may have him die after different
trials.  Each of those contributes measure to subjective histories that
end at different points.  The result is that the measure of his lifespan
is reduced at each trigger pull, but that in any single third-person
universe that reduction in measure is unobservable.  Instead we see no
change until the final trigger pull.
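
A minimal sketch of that bookkeeping, assuming a six-chamber revolver so
that each pull is survived in 5/6 of the branches:

    # Total measure of histories still alive after k trigger pulls.
    # In any single universe nothing changes until the fatal pull, but
    # summed across branches the measure falls geometrically.
    p_survive = 5.0 / 6.0
    for k in range(7):
        print(k, p_survive ** k)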

Consider this example: someone commits to killing himself if you die, and
now you play Russian Roulette yourself.  Each time you pull the trigger
you reduce his measure, that of the other person who will kill himself
if you die.  But you will never observe him dying (in the first-person
sense).  This is a case of an unobservable measure decrease which you
might nevertheless believe in.


  As far as Lee's suggestion that people could be dying thousands of
  times a second, my framework does not allow for arbitrary statements
  like that.  Given a physical circumstance, we can calculate what happens.
  It's not just arbitrary what we choose to say about life and death.
  We can calculate the measure of different subjective life experiences,
  based on the physical record.
  
  If we wanted to create a physical record where this framework would
  be compatible with saying that people die often, it would be necessary
  to physically teleport people thousands of times a second.  Or perhaps
  the same thing could be done by freezing people for a substantial time,
  reviving them for a thousandth of a second, then re-freezing them again
  for a while, etc.
  
  If we consider the practical implications of such experiments I don't
  think it is so implausible to view them as being worse than living a
  single, connected, subjective life.  It would be quite difficult to
  interact in a meaningful way with the world under such circumstances.

 Assuming it could be done seamlessly, how would it make any difference? If
 you believe the important aspect of our consciousness resides in the
 activity at neural synapses, this is exactly what is happening. They
 are constantly falling

RE: A calculus of personal identity

2006-06-28 Thread Hal Finney

Lee Corbin writes:
 Stathis writes
  Hal Finney in his recent thread on teleportation thought
  experiments disagrees with the above view. He suggests
  that it is possible for  a subject to apparently undergo
  successful teleportation, in that the individual walking
  out of the receiving station has all the appropriate
  mental and physical attributes in common with the individual
  entering the transmitting station, but in reality not survive
  the procedure. I have difficulty understanding this, as it
  seems to me that the subject has survived by definition.

 Well, if you've characterized his views correctly, then he's
 not in agreement with you, me, and Derek Parfit. What might
 be fun to explore is how desperate some people would have to
 be in order to teleport (or perhaps how lucrative the
 opportunity?).  Also, I suppose that if you confided to them
 that this was happening to them all the time thousands of
 times per second, they'd still have some unfathomable reason
 not to go near a teleporter.

Sorry, I have been reading the list somewhat lightly recently and
have missed some threads.

What I argued was that it would be easier to find the trace of a person's
thoughts in a universe where he had a physically continuous record than
where there were discontinuities (easier in the sense that a smaller
program would suffice).  In my framework, this means that the universe
would contribute more measure to people who had continuous lives than
people who teleported.  Someone whose life ended at the moment of
teleportation would have a higher measure than someone who survived
the event.  Therefore, I would view teleportation as reducing measure
similarly to doing something that had a risk of dying.  I would try to
avoid it, unless there were compensating benefits (as indeed might be
the case, just as people willingly accept the risk of dying by driving
to work, because of the compensating benefits).

You can say that by definition the person survives, but then, you
can say anything by definition.  I guess the question is, what is the
reasoning behind the definition.

As far as Lee's suggestion that people could be dying thousands of
times a second, my framework does not allow for arbitrary statements
like that.  Given a physical circumstance, we can calculate what happens.
It's not just arbitrary what we choose to say about life and death.
We can calculate the measure of different subjective life experiences,
based on the physical record.

If we wanted to create a physical record where this framework would
be compatible with saying that people die often, it would be necessary
to physically teleport people thousands of times a second.  Or perhaps
the same thing could be done by freezing people for a substantial time,
reviving them for a thousandth of a second, then re-freezing them again
for a while, etc.

If we consider the practical implications of such experiments I don't
think it is so implausible to view them as being worse than living a
single, connected, subjective life.  It would be quite difficult to
interact in a meaningful way with the world under such circumstances.

However, if one were so unfortunate as to be put into such a situation,
then it would no longer be particularly bad to teleport.  You're being
broken into pieces all the time anyway, so the event of teleportation
would presumably not make things any worse.  Particularly if you were
somehow being teleported thousands of times a second, then adding a
teleportation would basically be meaningless since you're teleporting
anyway at every instant.  So I don't agree with Lee's conclusion that
in this situation people would still resist teleportation.

Hal Finney




Re: Fermi Paradox and measure

2006-06-27 Thread Hal Finney

Ron Hale-Evans writes:
 My favourite answer to the Fermi Paradox has been that the aliens are
 using nearly-perfect compression or encryption for their radio signals
 (if they're using radio), and that's why all we can detect is noise.

 However, tonight another answer occurred to me. What if we're living
 in a finite simulation?

I don't know that multiverse concepts explain the Fermi paradox, but
they do cast it in a different light.

As Bruno points out, our first-person experiences could be created by
many different kinds of programs, corresponding to different realities.
It could be that everything is pretty much as it seems.  Or perhaps we
are living in a simulation controlled by aliens, or our descendants,
or robots.  Or it's even possible that everything is an illusion and we
are in effect imagining it.  All of these possibilities contribute to the
measure of our experiences.  So in some sense it must be simultaneously
true that we are in a simulation, and that we are not in a simulation.
Both situations exist in the multiverse and both contribute to the
reality of our experiences.

The hard part of the Fermi question still remains.  It might be stated:
why is the universe seemingly so large and so empty?  In multiverse
terms, why is the measure of observers who live in large, empty universes
so large, compared to the measure of observers who live in universes
teeming with life?  For if the measure of the latter observers were much
greater than the measure of the former, we would be highly unlikely to
find ourselves one of that very small set of observers who see sparse
universes.

(Of course, I am skipping past the various conventional explanations that
have been offered which allow for the universe to in fact be full of life
but for it somehow not to be observable.  Those have not been generally
found to be convincing so we should focus on the hard part.  Also,
note that while I write "life" for short I really mean intelligent life.)

A while back I speculated as follows.  Presumably there are laws of
physics which would lead to very densely populated universes.  And we
know that there are laws that lead to very sparse universes, like
the one we live in.  All universes exist; all laws are instantiated.

For various reasons many of us argue that universes with simpler laws
are likely to be more common, to have larger measure.  Now, we know that
if the laws are too simple, life cannot exist.  Trivial universes are
not living ones.  Presumably, as the laws get more complex, we pass a
threshold where life can start to exist.  But perhaps it is reasonable
to assume that we will first find laws where life can barely exist,
before we find laws where life is very common.  If so, then there is a
band of complexity where universes at the simple end of this band have
very sparse intelligent life, and universes at the complex end have very
dense intelligent life.

Then, to be consistent with our observations, we have to conclude that
this band is quite wide - that universes that are just barely complex
enough for life have much simpler laws than universes that are teeming
with life.  That is how we would explain the fact that we find ourselves
in one of the first kind.  Their boost from having simpler laws must
outweigh the increase in numbers of intelligent life forms in the more
complex universes.

I read that the universe is estimated to have about 10^23 stars.
A universe with a high density of intelligent life might therefore be
10^23 times more densely populated than ours.  This is about 2^76 times.
Therefore we would predict that the physical laws necessary to create
such a densely populated universe would be at least 76 bits longer than
the simpler laws of our own universe.
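
The arithmetic behind that figure, for anyone who wants to check it:

    import math

    # bits needed to offset a factor of 10^23 in observer count
    print(math.log2(10 ** 23))   # ~76.4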

This is a prediction of multiverse theory as I interpret it.  If it should
turn out that there are very simple sets of laws that would create very
numerous observers, then that would contradict the theory in this form.

Hal Finney




RE: Re: Teleportation thought experiment and UD+ASSA

2006-06-22 Thread Hal Finney

Stathis Papaioannou [EMAIL PROTECTED] writes:
 OK, I think I'm clear on what you're saying now. But suppose I argue
 that I will not survive the next hour, because the matter making up my
 synapses will have turned over in this time. To an outside observer the
 person taking my place would seem much the same, and if you ask him, he
 will share my memories and he will believe he is me. However, he won't
 be me, because I will by then be dead. Is this a valid analysis? My view
 is that there is a sense in which it *is* valid, but that it doesn't
 matter. What matters to me in survival is that there exist a person in
 an hour from now who by the usual objective and subjective criteria we
 use identifies as being me.

The problem is that there seems to be no basis for judging the validity
of this kind of analysis.  Do we die every instant?  Do we survive sleep
but not being frozen?  Do we live on in our copies?  Does our identity
extend to all conscious entities?  There are so many questions like
this, but they seem unanswerable.  And behind all of them lurks our
evolutionary conditioning forcing us to act as though we have certain
beliefs, and tricking us into coming up with logical rationalizations
for false but survival-promoting beliefs.

I am attracted to the UD+ASSA framework in part because it provides
answers to these questions, answers which are in principle approximately
computable and quantitative.  Of course, it has assumptions of its own.
But modelling a subjective lifespan as a computation, and asking how
much measure the universe adds to that computation, seems to me to be
a reasonable way to approach the problem.


 Even if it were possible to imagine another way of living my life which
 did not entail dying every moment, for example if certain significant
 components in my brain did not turn over, I would not expend any effort
 to bring this state of affairs about, because if it made no subjective
 or objective difference, what would be the point? Moreover, there would
 be no reason for evolution to favour this kind of neurophysiology unless
 it conferred some other advantage, such as greater metabolic efficiency.

Right, so there are two questions here.  One is whether there could be
reasons to prefer a circumstance which seemingly makes no objective or
subjective difference.  I'll say more about this later, but for now I'll
just note that it is often impossible to know whether some change would
make a subjective difference.

The other question is whether we could or should even try to overcome
our evolutionary programming.  If evolution doesn't care if we die
once we have reproduced, should we?  If evolution tells us to sacrifice
ourselves to save two children, eight cousins, or 16 great-great uncles,
should we?  In the long run, we might be forced to obey the instincts
built into us by genes.  But it still is interesting to consider the
deeper philosophical issues, and how we might hypothetically behave if
we were free of evolutionary constraints.

Hal Finney




Re: Teleportation thought experiment and UD+ASSA

2006-06-22 Thread Hal Finney

Bruno raises a lot of good points, but I will just focus on a couple
of them.

The first notion that I am using in this analysis is the assumption that a
first-person stream of consciousness exists as a Platonic object.  My aim
is then to estimate the measure of such objects.  I don't know whether
people find this plausible or not, so I won't try to defend it yet.

The second part, which I know is more controversial, is that it is
possible to represent this object as a bit string, or as some similar,
concrete representation.  I think there are a couple of challenges
here.  The first is how to turn something as amorphous and intangible
as consciousness into a concrete representation.  But I assume that
subsequent development of cognitive sciences will eventually give us a
good handle on this problem and allow us to diagram, graph and represent
streams of consciousness in a meaningful way.  As one direction to pursue,
we know that brain activity creates consciousness, hence a sufficiently
compressed representation of brain activity should be a reasonable
starting point as a representation of first-person experience.

Another issue that many people have objected to is the role of time.
Consciousness, it is said, is a process, not a static structure such as
might be represented by a bit string.  IMO this can be dealt with by
interpreting the bit string as a multidimensional object, and treating
one of the dimensions as time.  See, for example, one of Wolfram's 1-D
cellular automaton outputs:

http://en.wikipedia.org/wiki/Image:CA_rule30s.png

We see something that can alternatively be interpreted as a pure bit
string; as a two-dimensional array of bits; or as a one-dimensional
bit string evolving in time.  In the same way we can capture temporal
evolution of consciousness by interpreting the bit string as having a
time dimension.
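
Here is a small Python sketch of that triple interpretation, using rule
30 itself (with wrap-around edges, purely for simplicity):

    def rule30(row):
        # One step of Wolfram's rule 30: new = left XOR (center OR right)
        n = len(row)
        return [row[(i - 1) % n] ^ (row[i] | row[(i + 1) % n])
                for i in range(n)]

    row = [0] * 31
    row[15] = 1                     # single seed cell
    history = [row]
    for _ in range(15):
        row = rule30(row)
        history.append(row)

    # 'history' is the two-dimensional array of bits; its row index is
    # the time dimension; and flattening it recovers the pure bit string.
    flat = [bit for r in history for bit in r]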

An important point is that although there may be many alternative ways
and notations to represent consciousness, they should all be isomorphic,
and only a relatively short program should be necessary to map from one
to another.  Hence, the measure computed for all of these representations
will be about the same, and therefore it is meaningful to speak of this
as the measure of the experience as a platonic entity.

Bruno also questioned my use of a physical universe in my analysis.
I am not assuming that physical universes exist as the basis of reality.
I only expressed the analysis in that form because we were given a
particular situation to analyze, and that situation was expressed as
events in a single universe.

The Universal Dovetailer does not play a principal role in my analysis,
because it does not play such a role in Kolmogorov complexity.  At most,
the Universal Dovetailer can be used as a heuristic device to explain
what it might mean to run all computations in order to explain
K complexity.

I think one difference between K complexity and Bruno's reasoning with the
Universal Dovetailer is that the former focuses on sizes of programs while
Bruno seems to work more in terms of run time.  In the K complexity view,
the measure of an information object is (roughly) 1/2^L, where L is the
size of the shortest program which outputs that object.  Equivalently,
the measure of an information object is the fraction of all programs
which output that object, where programs are sampled uniformly from
all bit strings (or from whatever the input alphabet is for the UTM).
This does not have anything to do with run time.  Some bit patterns
may have short programs that take a very long run time to output them.
Such bit patterns are considered to have low complexity and high measure,
despite the long run time needed.
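
Schematically, in Python (the program lengths here are made up, just to
illustrate the two equivalent definitions):

    def universal_measure(program_lengths_in_bits):
        # m(x) = sum of 2^-|p| over all (prefix-free) programs p that
        # output x; the shortest one, of length K(x), dominates the sum.
        return sum(2.0 ** -L for L in program_lengths_in_bits)

    # If x is output by programs of 10, 13 and 15 bits, its measure is
    # about 2^-10, and run time never enters into it:
    print(universal_measure([10, 13, 15]))   # ~0.00113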

I think Bruno has sometimes said that the Universal Dovetailer makes some
things have higher measure than others because they get more run time.
I'm not sure how this would work, but it is a difference from the
Kolmogorov complexity (aka Universal Distribution) view that I am using.

Okay, those are some of the foundational questions and assumptions that
I think are raised by Bruno's analysis.  The rest of it goes through as
I have described many times.

Hal




Re: Teleportation thought experiment and UD+ASSA

2006-06-21 Thread Hal Finney

Russell Standish [EMAIL PROTECTED] writes:
 On Tue, Jun 20, 2006 at 09:35:12AM -0700, Hal Finney wrote:
  I think that one of the fundamental principles of your COMP hypothesis
  is the functionalist notion, that it does not matter what kind of system
  instantiates a computation.  However I think this founders on the familiar
  paradoxes over what counts as an instantiation.  In principle we can
  come up with a continuous range of devices which span the alternatives
  from non-instantiation to full instantiation of a given computation.
  Without some way to distinguish these, there is no meaning to the question
  of when a computation is instantiated; hence functionalism fails.
  

 I don't follow your argument here, but it sounds interesting. Could you
 expand on this more fully? My guess is that ultimately it will depend
 on an assumption like the ASSA.

I am mostly referring to the philosophical literature on the problems of
what counts as an instantiation, as well as responses considered here
and elsewhere.  One online paper is Chalmers' Does a Rock Implement
Every Finite-State Automaton?, http://consc.net/papers/rock.html.
Jacques Mallah (who seems to have disappeared from the net) discussed
the issue on this list several years ago.

Now, Chalmers (and Mallah) claimed to have a solution to decide when
a physical system implements a calculation.  But I don't think they
work; at least, they admit gray areas.  In fact, I think Mallah came
up with the same basic idea I am advocating, that there is a degree of
instantiation and it is based on the Kolmogorov complexity of a program
that maps between physical states and corresponding computational states.

For functionalism to work, though, it seems to me that you really need
to be able to give a yes or no answer to whether something implements a
given calculation.  Fuzziness will not do, given that changing the system
may kill a conscious being!  It doesn't make sense to say that someone is
sort of there, at least not in the conventional functionalist view.

A fertile source of problems for functionalism involves the question
of whether playbacks of passive recordings of brain states would be
conscious.  If not (as Chalmers and many others would say, since they
lack the proper counterfactual behavior), this leads to a machine with a
dial which controls the percentage of time its elements behave according
to a passive playback versus behaving according to active computational
rules.  Now we can turn the knob and have the machine gradually move from
unconsciousness to full consciousness, without changing its behavior in
any way as we twiddle the knob.  This invokes Chalmers' fading qualia
paradox and is again fatal for functionalism.

Maudlin's machines, which we have also mentioned on this list from time
to time, further illustrate the problems in trying to draw a bright line
between implementations and clever non-implementations of computations.

In short I view functionalism as being fundamentally broken unless there
is a much better solution to the implementation question than I am aware
of.  Therefore we cannot assume a priori that a brain implementation and a
computational implementation of mental states will be inherently the same.
And I have argued in fact that they could have different properties.

Hal Finney




Re: Teleportation thought experiment and UD+ASSA

2006-06-21 Thread Hal Finney
 for the second part, the second part can
be a relatively simple translation from physical to mental states.
Therefore we can create a short program which outputs the virtual world
and qualia in all its glory, and this short, two-part program will be
the main contribution to the measure of that mental experience.

Keep in mind that in this framework, we do not start with this two-part
structure.  The starting point is much simpler: all we want to know
is, what is the Kolmogorov complexity of a mental experience?  It is
only once we begin analyzing the problem that we note as you did that
building that mental experience as a universe of its own would require
an enormously large program.  And then we realize that we can make a
much smaller program - with exactly the same output! - that has the two
part structure I described here.  We deduce that such a program is what
is actually responsible for the measure of the experience.

And from this we conclude that the contribution of a universe to the
measure of a conscious experience is not the universe's measure itself,
but that measure reduced by the measure of the program which outputs
that conscious experience given the universe data as input.  This then
leads to the principle that a big brain in a small universe gets more of
that universe's measure; that multiple instantiations of a consciousness
within a universe mean more measure; and that fuzziness of the concept
of an instantiation is no problem because it only affects the size of
the numbers being multiplied together to get the measure contribution.
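
In schematic form (Python; the sizes in bits and the names K_universe
and K_extract are just my illustration, not a real calculation):

    def contribution(K_universe, K_extract):
        # Measure a universe adds to an experience: the universe's own
        # measure 2^-K(U), reduced by the measure 2^-K(x|U) of the
        # program that extracts the experience from the universe data.
        return 2.0 ** -(K_universe + K_extract)

    # A big brain (small extraction program) contributes more measure
    # than a small brain needing a fussier extractor:
    print(contribution(100, 5) > contribution(100, 20))   # True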

As for the question above about the Universal Dovetailer universe, it is
easily solved in this framework.  The output of the UD is of essentially
no help in producing the mental state in question, because the output is
so enormous and we would have no idea where to look.  Hence the UD does
not make a dominant contribution to mental state measure and we avoid
the paradox without any need for ad hoc rules.

Hal Finney




RE: Re: Teleportation thought experiment and UD+ASSA

2006-06-21 Thread Hal Finney

Stathis Papaioannou [EMAIL PROTECTED] writes:
 Hal Finney writes:
  I should first mention that I did not anticipate the conclusion that
  I reached when I did that analysis.  I did not expect to conclude that
  teleportation like this would probably not work (speaking figuratively).
  This was not the starting point of the analysis, but the conclusion.

 Yes, but every theoretical scientist hopes ultimately to be vindicated
 by the experimentalists. I'm now not sure what you mean by the second
 sentence in the above quote. What would you expect to find if (classical,
 destructive) teleportation of a subject in Brussels to Moscow and/or
 Washington were attempted?

From the third party perspective, I'd expect that we'd start with a
person in Brussels, and end up with people in Moscow and Washington who
each have the memories and personality of the person who is no longer
in Brussels.  The population of Earth would have increased by one.
I imagine that this is unproblematic and is simply a restatement of the
stipulated conditions of the experiment.

The more interesting question to ask is whether I would submit to
this, and if so, what would I expect?  Note that this is not subject
to experimental verification.  When we have described the third party
situation, we have already said everything that experimentalists could
verify.  When those two people wake up in Moscow and Washington there
is no conceivable experiment by which we can judge whether the person
in Brussels has in some sense survived, or perhaps has done even better
than surviving.  It's not even clear what these questions mean.

It was my attempt to formalize these questions which led to my analysis.
Perhaps it is best if I go back to the more formal statement of the
results, and say that the contribution of this universe to the measure
of a person who experiences surviving the teleportation and wakes up in
W or M is much less than the contribution to the measure of a person who
walks into the machine in Brussels and never experiences anything else.
At a minimum, this would make me hesitant to use the machine.

Now, other philosophical considerations might still convince me to use the
machine; but it would be more like the two copies are my heirs, people
who will live on after I am gone and help to put my plans into action.
People sometimes sacrifice themselves for their children, and the argument
would be even stronger here since these are far more similar to me than
biological relations.  So even if I don't personally expect to survive
the transition I might still decide to use the machine.

Hal Finney




Re: Teleportation thought experiment and UD+ASSA

2006-06-21 Thread Hal Finney

Russell Standish [EMAIL PROTECTED] writes:
 If computationalism is true, then a person is instantiated by all
 equivalent computations. If you change one instantiation to something
 inequivalent, then that instantiation no longer instantiates the
 person. The person continues to exist, as long as there remain valid
 computations somewhere in the universe. And in almost any of the many
 worlds variants we consider in this list, that will be true.

That's true, but even with the MWI, making an instantiation cease to
exist decreases the measure of that person.  Around here we call that
murder.  The moral question still exists.  I don't see the MWI as
rescuing functionalism and computationalism.

What, after all, do these principles mean?  They say that the
implementation substrate doesn't matter.  You can implement a person
using neurons or tinkertoys, it's all the same.  But if there is no way
in principle to tell whether a system implements a person, then this
philosophy is meaningless since its basic assumption has no meaning.
The MWI doesn't change that.

Hal Finney




RE: Re: Only Existence is necessary?

2006-06-21 Thread Hal Finney

Stathis Papaioannou [EMAIL PROTECTED] writes:
 I am reminded of David Chalmers' paper recently mentioned by Hal Finney,
 Does a Rock Implement Every Finite State Automaton?, which looks at
 the idea that any physical state such as the vibration of atoms in a
 rock can be mapped onto any computation, if you look at it the right
 way. Usually when this idea is brought up (Hilary Putnam, John Searle,
 the aforementioned Chalmers paper) it is taken as self-evidently
 wrong. However, I have not seen any argument to convince me that this
 is so; it just seems people think it *ought* to be so, then look around
 for a justification having already made up their minds.

I tend to agree.  People find the conclusion unpalatable and then they
try to come up with some justification for why it is not true.  As I
mentioned, at least some people (Hans Moravec, I think) accept the
basic conclusion.

 Now, if any
 computation is implemented by any physical process, then if one physical
 process exists, then all possible computations are implemented. I'll stop
 at this point, although it is tempting to speculate that if all it takes
 for every computation to be implemented is a single physical process -
 a rock, a single subatomic particle, the idle passage of time in an
 otherwise empty universe - perhaps this is not far from saying that the
 physical process is superfluous, and all computations are implemented
 by virtue of their existence as platonic objects.

Yes, I think this is close to Moravec's view.  He believes in the platonic
existence of all conscious experiences, and sees the role of physical
implementation as just to allow us to interact with those other entities
who are instantiated in our universe.

Hal Finney




Teleportation thought experiment and UD+ASSA

2006-06-20 Thread Hal Finney
 to first-person subjective records which span
the whole range of lifetime from start to end.  Arbitrary subsets of
that subjective lifespan will have less measure than the whole thing.

Now, back to B, W and M.  As just discussed, the measure of B, which
corresponds to a life which ends in Brussels, is likely to be relatively
substantial.  However this hypothetical teleporting or copying machine
works, we stipulate that the guy who started in Brussels is not there a
moment later.  It's likely that a straightforward program which has been
tracking neural events by virtue of their exceedingly slight impact
on the Planck scale is going to be thrown off by this new process.
Therefore a relatively simple program can output subjective record B.

When we consider W and M (revivals in Washington and Moscow respectively)
there is still probably a fairly small program that can produce those
records.  The main additional complication is that the program has to
somehow be designed to be able to pick up the trail in the new city,
to jump from recording neural events in Brussels to recording them in
Washington or Moscow.

This could be done in a couple of ways.  The simplest would be to
hard-code the location of the new version of the person, perhaps
as a vector from the old one.  But this could take quite a bit of
information, as it might have to be accurate to the size of a synapse,
a few nanometers.  Probably a better way would be to track whatever
physical event carries the causal signal from Brussels to the destination.
Presumably something has to travel between the cities, carrying the
information about the person, to allow him to be reconstructed at the
destination(s).  Whatever this effect or principle is, we could write a
program which was able to follow this signal, as the neural activities in
Belgium are being scanned or whatever.  This would allow identifying the
location of the new instance of the person, without having to hard-code
the precise coordinates.  The third-person universe data would tell us
where to look.

This would probably not be too complicated a program, but it is
nevertheless going to be substantially larger than program B.  B only had
to track neural events.  W and M have to be able to track both neural
events and whatever physical principles are utilized by this copying
and transmission process.  W and M are therefore going to have to have
two different analysis methods, compared to one for B.  They should not
have to be twice as big, since they don't have to actually track thought
during the transmission (we assume that thought is suspended during that
time), they just have to figure out where the new brain is being built.
But chances are it is going to make W and M quite a bit larger than B.

Compared to each other, W and M are probably almost identical.  The only
difference is that they make arbitrary different choices for which signal
to follow in tracking the copying information, in order to find where the
new instance is located.  Maybe one has a 1 bit where the other has a 0.
In all other respects the two programs are identical, and their measure
should be the same.

But, as noted above, B is substantially smaller, and as a result, B has
a substantially larger measure.  This means that the contribution of this
third-person record of universe events to a subjective, first-person life
experience that ends in Brussels is much larger than to an experience
which continues in either Washington or Moscow.

If we consider these as three hypothetical people, one who dies in
Brussels, one who continues in Washington, and one who continues in
Moscow, it is the first one who is instantiated to the substantially
greatest degree by the operation of such a copying machine as we are
considering.  Informally, we could say that your most likely experience
is that you will die in Brussels (bearing in mind the formal statement
in the previous sentence).  That is how I would analyze it based on
computational principles.

Hal Finney




Re: Teleportation thought experiment and UD+ASSA

2006-06-20 Thread Hal Finney

Bruno writes:
 Hal,

 It seems to me that you are introducing a notion of physical universe,
 and then use it to reintroduce a notion of first person death, so that
 you can bet you will be the one annihilated in Brussels.

I should first mention that I did not anticipate the conclusion that
I reached when I did that analysis.  I did not expect to conclude that
teleportation like this would probably not work (speaking figuratively).
This was not the starting point of the analysis, but the conclusion.

The starting point was the framework I have described previously, which
can be stated very simply as that the measure of an information pattern
comes from the universal distribution of Kolmogorov.  I then applied this
analysis to specific information patterns which represent subjective,
first person lifetime experiences.  I concluded that the truncated version
which ends when the teleportation occurs would probably have higher
measure than the ones which proceed through and beyond the teleportation.

Although I worked in terms of a specific physical universe, that is
a short-cut for simplicity of exposition.  The general case is to simply
ask for the K measure of each possible first-person subjective life
experience - what is the shortest program that produces each one.  I
assume that the shortest program will in fact have two parts, one which
creates a universe and the second which takes that universe as input
and produces the first-person experience record as output.

This leads to a Schmidhuber-like ensemble where we would consider
all possible universes and estimate the contribution of each one to
the measure of a particular first-person experience.  It is important
though to keep in mind that in practice the only universe which adds
non-negligible measure would be the one we are discussing.  In other
words, consider the first person experience of being born, living your
life, travelling to Brussels and stepping into a teleportation machine.
A random, chaotic universe would add negligibly to the measure of this
first-person life experience.  Likewise for a universe which only evolves
six-legged aliens on some other planet.  So in practice it makes sense
to restrict our attention to the (approximately) one universe which has
third-person objective events that do add significant measure to the
instantiation of these abstract first-person experiences.


 You agree that this is just equivalent to negating the comp hypothesis.
 You would not use (classical) teleportation, nor accept a digital
 artificial brain, all right? Do I miss something?

It is perhaps best to say that I would not do these things
*axiomatically*.  Whether a particular teleportation technology would
be acceptable would depend on considerations such as I described in my
previous message.  It's possible that the theoretical loss of measure for
some teleportation technology would be small enough that I would do it.

As far as using an artificial brain, again I would look to this kind of
analysis.  I have argued previously that a brain which is much smaller
or faster than the biological one should have much smaller measure, so
that would not be an appealing transformation.  OTOH an artificial brain
could be designed to have larger measure, such as by being physically
larger or perhaps by having more accurate and complete memory storage.
Then that would be appealing.

I think that one of the fundamental principles of your COMP hypothesis
is the functionalist notion, that it does not matter what kind of system
instantiates a computation.  However I think this founders on the familiar
paradoxes over what counts as an instantiation.  In principle we can
come up with a continuous range of devices which span the alternatives
from non-instantiation to full instantiation of a given computation.
Without some way to distinguish these, there is no meaning to the question
of when a computation is instantiated; hence functionalism fails.

My approach (not original to me) is to recognize that there is a degree
of instantiation, as I have described via the conditional Kolmogorov
measure (i.e. given a physical system, how much does it help a minimal
computation to produce the desired output).  This then leads very
naturally to the analysis I provided in my previous message, which
attempted to estimate the conditional K measure for the hypothetical
first-person computations that were being potentially instantiated by
the given third-party physical situation.

Hal Finney




Re: Reasons and Persons

2006-05-30 Thread Hal Finney

Jesse Mazer writes:
 I agree that Parfit's simple method would probably create a nonfunctional 
 state in between, or at least the intermediate phase would involve a sort of 
 split personality disorder with two entirely separate minds coexisting in 
 the same brain, without access to each other's thoughts and feelings. But 
 this is probably not a fatal flaw in whatever larger argument he was making, 
 because you could modify the thought experiment to say something like let's 
 assume that in the phase space of all possible arrangements of neurons and
 synapses, there is some continuous path between my brain and Napoleon's 
 brain such that every intermediate state would have a single integrated 
 consciousness. There's no way of knowing whether such a path exists (and of 
 course I don't have a precise definition of 'single integrated 
 consciousness'), but it seems at least somewhat plausible.

One way (perhaps the only way) I could see to do it would be for you
to gradually acquire amnesia, then once you have forgotten your past,
your personality could gradually change to match Napoleon's, then you
could gradually recover memory of Napoleon's past.

Whether such an extreme case would still support whatever conclusions
Parfit seeks to draw, I don't know.  You're never half-yourself and
half-Napoleon.  Rather, you sort of stop being anybody in the middle
of the process.  I don't think it makes any sense to suppose that you
could be half-yourself and half-Napoleon.

Certainly the physical process Russell quoted could never work,
because there is no one-to-one correspondence between the neurons in
your brain and Napoleon's.  And each neuron has a distinctive shape.
If you brought it over unchanged, it would intersect with and overlap
other cells in the brain, and be non-functional.  But if you change its
shape, it won't be the same neuron in terms of its functional behavior.
If you brought neurons over from Napoleon's brain but altered them
in the process to match your own neurons physically and functionally,
then you would never stop being yourself.

Hal Finney




Re: Ascension (was Re: Smullyan Shmullyan, give me a real example)

2006-05-30 Thread Hal Finney

Jesse Mazer writes:
 The dovetailer is only supposed to generate all *computable* functions 
 though, correct? And the diagonalization of the (countable) set of all 
 computable functions would not itself be computable.

The dovetailer I know does not seem relevant to this discussion about
functions.  It generates programs, not functions.  For example, it
generates all 1 bit programs and runs each for one cycle; then generates
all 2 bit programs and runs each for 2 cycles; then generates all 3
bit programs and runs each for 3 cycles; and so on indefinitely.  (This
assumes that the 3 bit programs include all 2- and 1-bit programs, etc.)
In this way all programs get run with an arbitrary number of cycles.
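
A sketch of that dovetailer in Python, where run() is assumed to restart
the given program from scratch and execute at most the stated number of
cycles (the representation of programs as bit strings is schematic):

    def dovetail(run):
        # Stage n: generate every n-bit program and run each for n
        # cycles; n = 1, 2, 3, ...  By design this never halts.
        n = 1
        while True:
            for bits in range(2 ** n):
                program = format(bits, "0{}b".format(n))
                run(program, cycles=n)
            n += 1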

These programs differ from functions in two ways.  First, programs may
never halt and hence may produce no fixed output, while functions must
have a well defined result.  And second, these programs take no inputs,
while functions should have at least one input variable.

What do you understand a dovetailer to be, in the context of computable
functions?

Hal Finney




Re: Smullyan Shmullyan, give me a real example

2006-05-19 Thread Hal Finney

Bruno writes:
 Meanwhile just a few questions to help me. They are hints for the
 problem too. Are you familiar with the following recursive program
 for computing the factorial function?

 fact(0) = 1
 fact(n) = n * fact(n - 1)

 Could you compute fact 5 from that program? Could you find a similar
 recursive definition (program) for multiplication (assuming your
 machine already knows how to add)?
 Could you define exponentiation from multiplication in a similar way?
 Could you find a function which would grow more quickly than
 exponentiation and which would be defined from exponentiation like
 exponentiation is defined from multiplication? Could you generalize all
 this and define a sequence of more and more quickly growing functions?
 Could you then diagonalise effectively (= writing a program which does
 the diagonalization) that sequence of growing functions so as to get a
 function which grows more quickly than any such one in the preceding
 sequence?

Here's what I think you are getting at with the fairy problem.  The point
is not to write down the last natural number, because of course there
is no such number.  Rather, you want to write a program which represents
(i.e. would compute) some specific large number, and you want to come up
with the best program you can for this, i.e. the program that produces
the largest number from among all the programs you can think of.

If we start with factorial, we could define a function func0 as:

func0(n) = fact(n)

Now this gets big pretty fast.  func0(100) is already enormous, a
158-digit number.  However we can stack this function by calling
it on itself.  func0(func0(100)) is beyond comprehension.  And we can
generalize, to call it on itself as many times as we want, like n times:

func1(n) = func0(func0(func0( ... (n))) ... )))

where we have nested calls of func0 on itself n times.  This really gets
bigger fast, much faster than func0.

Then we can nest func1:

func2(n) = func1(func1(func1( ... (n))) ... )))

where again we have nested calls of func1 on itself n times.  We know
that func1(n) gets bigger so fast, func1(func1(n)) will get bigger
amazingly faster, and of course with n of them it is that much faster yet.

This clearly generalizes to func3, func4, ...

Now we can step up a level and define hfunc1(n) = funcn(n), the nth
function along the path from func1, func2, func3, ...  Wow, imagine
how fast that gets bigger.  hfunc is for hyperfunc.

Then we can stack the hfuncs, and go to an ifunc, a jfunc, etc.  Well,
my terminology is getting in the way since I used letters instead of
numbers here.  But if I were more careful I think it would be possible
to continue this process more or less indefinitely.  You'd have program
P1 which continues this process of stacking and generalizing, stacking
and generalizing.  Then you could define program P2 which runs P1 through
n stack-and-generalize sequences.  Then we stack-and-generalize P2, etc.
It never ends.  But it's not clear to me how to describe the process
formally.
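
The first few levels, at least, are easy to write down in Python (of
course nothing past trivially small arguments would ever finish
running):

    def fact(n):
        return 1 if n == 0 else n * fact(n - 1)

    def func(k, n):
        # func(0, n) = fact(n); for k > 0, func(k, n) nests
        # func(k-1, .) on itself n times, matching func1, func2, ...
        if k == 0:
            return fact(n)
        result = n
        for _ in range(n):
            result = func(k - 1, result)
        return result

    def hfunc1(n):
        # the diagonal step: hfunc1(n) = funcn(n)
        return func(n, n)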

So we have this ongoing process where we define a series of functions
that get big faster and faster than the ones before.  I'm not sure how we
use it.  Maybe at some point we just tell the fairy, okay, let me live
P1000(1000) years.  That's a number so big that from our perspective it
seems like it's practically infinite.  But of course from the infinite
perspective it seems like it's practically zero.

Hal Finney




Re: why can't we erase information?

2006-04-11 Thread Hal Finney

A few years ago I posted a speculation about Harry Potter universes,
from the Schmidhuber perspective.  Schmidhuber argues that the reason
we don't see such a universe is that its program would be more complex,
hence its algorithmic-complexity measure would be less.  Such a universe
would basically have natural laws identical to what we see, but in
addition it would have exceptions to the laws.  You wave a wand and say
"Lumino!" and light appears.  (Here I am taking the Harry Potter name
rather literally, but the same thing applies to the more general concept
of universes with magical exceptions to the rules.)

You could also argue, as Wei does, on anthropic grounds that in such a
universe the ease of exploiting magic would reduce selection pressure
towards intelligence.  Indeed in the Harry Potter stories there are
magical animals but it is never explained why their amazing powers did
not allow them to dominate the world and kill off mundane creatures long
before human civilization arose.

I suggested that the Schmidhuber argument has a loophole.  It's true that
the measure of a simple universe is much greater than that of a universe
with the same laws plus one or more exceptions.  But if you consider the set
of all universes built on those laws plus exceptions, considering all
possible variants on exceptions, the collective measure of all these
universes is roughly the same as the simple universe.  So Schmidhuber
gives us no good reason to reject the possibility that our universe may
have exceptions to the natural laws.

If we do live in an exceptional universe, we are more likely to live in
one which is only slightly exceptional, i.e. one whose laws are among
the simplest possible modifications from the base laws.  Unfortunately,
without a better picture of the true laws of physics and an understanding
of the language that expresses them most simply, we can't say much about
what form exceptions might take.  We know that they would be likely
to be simple, in the same language that makes our base laws simple,
but since we don't know that language it is hard to draw conclusions.

Here is where the anthropic argument advanced by Wei Dai sheds some
light; one thing we could say is that these simple exceptions should not
be exploitable by life and make things so easy as to remove selection
pressure.  So this would constrain the kinds of exceptions that could
exist.

Ironically, waving a wand and speaking in Latin would indeed be the
kind of exception that would not likely be exploited by unintelligent
life forms.  So purely on anthropic principles we could not fully rule out
Harry Potter magic.  But the complexity of embedding Latin phrases in the
natural laws would argue strongly against us living in such a universe.

Hal Finney




Re: why can't we erase information?

2006-04-10 Thread Hal Finney

A few random thoughts:

Not only can't you erase information, in the MWI I believe you can't
create it either.  The constancy of information is another way of
expressing the QM principle of unitarity.

I think it's also tied to time symmetry.  Universes with time symmetry
would be unable to create or destroy information.  The MWI is time
symmetric (that is, the Schrodinger equation is time symmetric).

Wolfram investigated a variety of CA systems, some of which happened to
be time symmetric.  Generally I think those were more likely to create
very regular patterns, while it was the time-asymmetric ones that were
more likely to be chaotic and show interesting patterns.

One advantage of being unable to destroy information is that it
automatically makes learning and memory possible.  These capabilities are
probably necessary for the evolution of intelligence.  It's not clear,
though, that a complete inability to destroy information is necessary
for memory to work.

Perhaps if we favor simple universes, there is basically a choice between
complete information preservation and universes where information is not
preserved well at all once you move above the Planck scale (e.g. information
might be 99.9% preserved per Planck time step, which amounts to no
preservation at all for our purposes).
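
To put a rough number on that: one second is about 1.9 x 10^43 Planck
times (taking the usual Planck time of roughly 5.4 x 10^-44 seconds), so
a survival factor of 0.999 per step compounds to 0.999^(1.9 x 10^43),
i.e. e^(-1.9 x 10^40), which is zero for any conceivable purpose.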

The idea of a universe where there are a few obscure loopholes that break
the laws of physics is possible in this model, but somewhat unlikely.
And there is no guarantee that the loopholes would be easy to find.

Hal Finney




Re: Indeterminism

2006-03-26 Thread Hal Finney

Johnathan Corgan wrote:
 Still, there is a certain appeal to shifting the question from "Why are
 we conscious?" to "Consciousness doesn't exist, so why do we so firmly
 believe that it does?"

It is possible to imagine a machine that doubts (or perhaps I should say
"doubts", i.e. we should not assume that it has doubts in the same way we
do) whether it is conscious.  Imagine a simple theorem-proving machine,
one of Bruno's logic machines, complicated enough to have a representation
of itself.  We want to ask it if it is conscious.  So we have to define
consciousness in logical terms.  That seems quite daunting.  If we allow
room for indeterminacy in our definitions, the machine might also have
indeterminacy in its estimation of whether it is conscious.

Or, imagine we meet aliens.  How do we know if they are conscious?  Or,
turning it around, how would they know if they possess what humans call
consciousness?  How would we describe consciousness to them, who have
very different brains and ways of information processing, such that
they can know for sure whether they are conscious in the same way that
humans are?

The question of whether someone is conscious is far more problematic
than is often supposed, given that we cannot even define consciousness!
I tend to think that it is simply a convenient assumption, that everyone
is conscious, to avoid facing up to the overwhelming difficulties that
a true analysis of the question brings.  The mere fact that we cannot
define consciousness ought to be a pretty big red flag that we should
not be making facile assumptions about who has it and who doesn't!

(Or, if you say that we can in fact define consciousness, tell me how
to know which AI programs have it, and which don't?)

Hal Finney




Solipsism (was: Numbers)

2006-03-17 Thread Hal Finney
 that part of the UD makes to our
experiences.  Likewise we could create a model of the solipsist universe,
where only the person is real and all of his external experiences are
provided by the program.

It is very likely that it will turn out that the real-universe program
will be much smaller than the solipsist one.  The solipsist program has
to have an enormous table to supply all of the sensory experiences a
person will receive.  It also needs to have some mechanism to compute
his brain patterns, and probably the simplest will be to have an actual
physical universe model.  Building the brain patterns into an internal
table is another way to do it but will probably be even larger.  Any way
you look at it, the solipsist program is going to be enormous.

Reasoning in this way, we will not only be able to basically reject
the solipsist hypothesis, we can actually do so in a quantitative way!
We will be able to say, the fraction of your experiences due to the
solipsist universe, or equivalently the probability that it is true,
is this extremely tiny number.  And it will very likely be an incredibly
small number, 1/2^(10^20), perhaps.  (I wrote a long posting last year
which effectively estimated this number.)

This, then, is an example of the power of the assumption about the
platonic universe, that physical and mathematical reality are the same.
It not only sheds light on ancient and seemingly insoluble conundrums
such as solipsism, it should also allow us in principle to produce quantitative
estimates of the role of solipsist universes in the larger reality.
If as I wrote yesterday we are able to eventually verify predictions of
this model in terms of physical observations, we would have achieved
a unification of physics and philosophy far deeper than has ever been
accomplished before.

Hal Finney




Multiverse concepts in string theory

2006-02-13 Thread Hal Finney
 principle we then derive the result that we should live in a
universe which is about as simple as possible (i.e. has as high measure
as possible) consistent with the evolution of life.

One could then look at all of these string theory models, where no doubt
some of them are very much simpler than others.  The simpler ones would
be the universes where we are most likely to live, if life is possible
at all in them.  Therefore the (near) infinity of possible models should
not immediately be cause for despair, but rather there is reason to hope
that a model that defines the universe we actually live in may well be
relatively simple, within the reach of a practical research program.

Of course there are no guarantees of this.  It could be that even though
the complexity of our universe is small compared to infinity, it is still
large compared to what human research could hope to achieve.  We really
don't have much basis to decide at this point, other than that the various
attempts to explore relatively simple string theory models have all failed
to describe a universe like ours.  They do reveal universes with particles
and force fields, but not ones that closely resemble our universe.  And
usually these hypothetical universes instantly explode or otherwise behave
in a manner hostile to life.

Of course, this level of analysis reminds us that there is no reason
a priori that a variant of string theory has to be the correct model.
As we know from Tegmark and Schmidhuber, among others, it is easy to
generalize into a much wider set of possible universes.  String theory
has the nice feature that it creates a well-defined particle zoo, i.e.
a set of particles with particular masses, charges and other properties;
and it is perhaps not so easy to pull that out of other models.  But there
are undoubtedly at least some other ways to do it.  And the truth is,
we don't even know that the existence of particles as we observe them
is truly as fundamental as string theory assumes.  Particles could be
an illusion, a manifestation of some deeper underlying phenomena.

Unfortunately I think that in fairness we still have to classify most of
these multiverse concepts as philosophy more than physics.  Although we
can come up with some in-principle predictions - such as that we will
probably never discover a universe model that is much simpler than our
own but which would evolve life - I don't know of any predictions that
one could make based on anthropic reasoning that could be tested today.
Physics is a science, and that means it needs to work with theories that
can be tested and disproven.  We are a long way from being able to come
up with any experiment that a working physicist in his lab could run
to see whether multiverse models are correct.  (And no, quantum suicide
doesn't count!)

I also get the impression that Susskind's attempts to bring "disreputable"
multiverse models into "holy" string theory are more likely to kill
string theory than to rehabilitate multiverses.  Perhaps I am getting a
biased view by only reading this one blog, which opposes string theory,
but it seems that more and more people are saying that the emperor has
no clothes.  If string theory needs a multiverse then it is even less
likely to ever be able to make physical predictions, and its prospects
are even worse than had been thought.  A lot of people seem to be piling
on and saying that it is time for physics to explore alternative ideas.
The hostile NY Times book review is just one example.

Hal Finney



Re: Let There Be Something

2005-11-03 Thread Hal Finney
Russell Standish writes:
 It predicts that either a) there is no conscious life in a GoL
 universe (thus contradicting computationalism) or b) the physics as
 seen by conscious GoL observers will be quantum mechanical in nature.

 If one could establish that a given GoL structure is conscious, and
 then if one could demonstrate that its world view is incompatible with
 QM then we might have a contradiction.

 Even then, there is still a loophole. I suspect that 3D environments
 are far more likely to evolve the complex structures needed for
 consciousness, so that conscious GoL observers are indeed a rare
 thing. I don't know if this is the case or not, but if true it would
 make a GoL example irrelevant. More interesting is to look at some 3D
 CA rules that appear to support universal computation - Andy Wuensche
 had a paper on this in last year's ALife in Boston. No arXiv ref I'm
 afraid, but you could perhaps email him for an eprint...

That's very interesting.  Is it a matter of evolution, or mere existence?
I can see that life would be hard to evolve naturally in Life -
it's too chaotic.  But it might well be possible for us to create
a specially-designed Life robot which was able to move around and
interact with a sufficiently well-defined and restrictive environment.

How much constraint would your theories put on the capabilities of such
a robot?  Is it just that it could never be truly conscious?  Or would
your arguments limit its capabilities more strongly?  Consciousness is
hard to test for; would there be purely functional limitations that you
could predict?

Hal Finney



Re: Let There Be Something

2005-11-01 Thread Hal Finney
Tom Caylor writes:
 I believe that my statement before:

 ...simply bringing in the hypothetical set of all unobservable things
 doesn't explain rationally in any way (deeper than our direct
 experience) the existence of observable things.

 applies to the multiverse as well, since
 the multiverse = observable things + unobservable things
 and equivalently
 the multiverse = this universe + unobservable things

Are you saying that you don't agree that the anthropic principle applied
to an ensemble of instances has greater explanatory power than when
applied to a single instance?

Hal Finney



Re: Let There Be Something

2005-10-28 Thread Hal Finney
Tom Caylor writes:
 I just don't get how it can be rationally justified that you can get 
 something out of nothing.  To me, combining the multiverse with a 
 selection principle does not explain anything.  I see no reason why it 
 is not mathematically equivalent to our universe appearing out of 
 nothing.

I would suggest that the multiverse concept is better thought of in
somewhat different terms.  Its goal is not really to explain where the
universe comes from.  (In fact, that question does not even make sense
to me.)

Rather, what it explains better than many other theories is why the
universe looks the way it does.  Why is the universe like THIS rather
than like THAT?  Why are the physical constants what they are?  Why are
there three dimensions rather than two or four?  These are hard questions
for any physical theory.

Multiverse theories generally sidestep these issues by proposing that
all universes exist.  Then they explain why we see what we do by invoking
anthropic reasoning, that we would only see universes that are conducive
to life.

Does this really not explain anything?  I would say that it explains
that there are things that don't need to be explained.  Or at least,
they should be explained in very different terms.  It is hard to say
why the universe must be three dimensional.  What is it about other
dimensionalities that would make them impossible?  That doesn't make
sense.  But Tegmark shows reasons why even if universes with other
dimensionalities exist, they are unlikely to have life.  The physics
just isn't as conducive to living things as in our universe.

That's a very different kind of argument than you get with a single
universe model.  Anthropic reasoning is only explanatory if you assume the
actual existence of an ensemble of universes, as multiverse models do.
The multiverse therefore elevates anthropic reasoning from something of
a tautology, a form of circular reasoning, up to an actual explanatory
principle that has real value in helping us understand why the world is
as we see it.

In time, I hope we will see complexity theory elevated in a similar way,
as Russell Standish discusses in his "Why Occam's Razor" paper.  Ideally we
will be able to get evidence some day that the physical laws of our own
universe are about as simple as you can have and still expect life to
form and evolve.  In conjunction with acceptance of generalized Occam's
Razor, we will have a very good explanation of the universe we see.

Hal Finney



Re: Quantum theory of measurement

2005-10-12 Thread Hal Finney
Ben Goertzel writes about:
 http://grad.physics.sunysb.edu/~amarch/

 The questions I have regard the replacement of the Coincidence Counter (from
 here on: CC) in the  above experiment with a more complicated apparatus.

 What if we replace the CC with one of the following:

 1) a carefully sealed, exquisitely well insulated box with a printer inside
 it.  The printer is  hooked up so that it prints, on paper, an exact record
 of everything that comes into the CC.  Then,  to erase the printed record,
 the whole box is melted, or annihilated using nuclear explosives, or
 whatever.

The CC is not what is erased.  Rather, the so-called erasure happens
to the photons while they are flying through the apparatus.  Nothing in
the experiment proposes erasing data in the CC.  So I don't really see
what you are getting at.

 What will the outcome be in these experiments?

It won't make any difference, because the CC is not used in the way you
imagine.  It doesn't have to produce a record and it doesn't have to erase
any records.

Let me tell you what really happens in the experiment above.  It is
actually not so mystical as people try to make it sound.

We start off with the s photon going through a 2 slit experiment and
getting interference.  That is standard.

Now we put two different polarization rotations in front of the two slits
and interference goes away.  The web page author professes amazement,
but it is not really that surprising.  After all, the interference is
between the two paths a photon can take, and polarization is part of the
photon's state.  It is not all that surprising that putting polarizers
into the paths could mess up the interference.

But now comes the impressive part.  He puts a polarizer in front of the
other photon, the p photon, and suddenly the interference comes back!
Surely something amazing and non-local has happened now, right?

Not really.  This new polarizer will eliminate some of the p photons.
They won't get through.  The result is that we will throw out some
of the measurements of s photons, because if the p photon got eaten
by its polarizer, the CC doesn't trigger as there is no coincidence.
(This is the real reason for the CC in this experiment.)

So now we are discarding some of the s photon measurements, and keeping
some.  It turns out that the ones we keep do show an interference pattern.
If we had added back in the ones we discarded, it would blur out the
interference fringes and there would be no pattern.

The point is, there is no change to the s photon when we put the polarizer
over by p.  Its results do not visibly change from non-interference
to interference, as the web page might imply.  (If that did happen,
we'd have the basis for a faster than light communicator.)  No, all
that is happening is that we are choosing to throw out half the data,
and the half we keep does show interference.

The only point of the CC, then, is to tell us which half of the data
to throw out of the s photon measurements.  Destroying the CC and all
of the other crazy things you suggest have nothing to do with the
experiment.  The CC is not what is erased and it does not create a
permanent record.  It is only there to tell us whether a p photon got
through its polarizer or not, so that we know whether to throw away
the s photon measurement.

Hal Finney



RE: Quantum theory of measurement

2005-10-12 Thread Hal Finney
Ben Goertzel writes:
 Hal,
  It won't make any difference, because the CC is not used in the way you
  imagine.  It doesn't have to produce a record and it doesn't have to erase
  any records.

 OK, mea culpa, maybe I misunderstood the apparatus and it was not the
 CC that records things, but still the records could be kept somewhere,
 and one can ask what would happen if the records were kept somewhere
 else (e.g. in a macroscopic medium).  No?

I don't think this makes sense, at least I can't understand it.


  The point is, there is no change to the s photon when we put the polarizer
  over by p.  Its results do not visibly change from non-interference
  to interference, as the web page might imply.  (If that did happen,
  we'd have the basis for a faster than light communicator.)  No, all
  that is happening is that we are choosing to throw out half the data,
  and the half we keep does show interference.

 Yes but we are choosing which half to throw out in a very peculiar way --
 i.e. we are throwing it out by un-happening it after it happened,
 by destroying some records that were only gathered after the events
 recorded in the data already happened...

You have to try to stop thinking of this in mystical terms.  IMO people
present a rather prosaic phenomenon in a misleading and confusing way,
and this is giving you an incorrect idea.  Nothing is un-happening.
No records are destroyed after they were gathered.

Forget that anybody told you this was a "quantum eraser" and think
about what really happens.  When all the polarizers are in place, half
of the p photons get eaten and half get through.  This gives us a way
to split up the s measurements into two halves.  It turns out that each
half independently shows interference, but that the two interference
patterns are the opposite of each other.  When you combine the two halves
back together, the peaks of one half fill in the valleys of the other,
and the data set as a whole shows no interference.
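
A toy numerical check of that bookkeeping, in Python, assuming idealized
cos^2 fringes (an illustration of the argument, not a model of the real
apparatus):

    import math

    def fringe(x, phase):
        # Idealized interference pattern for one post-selected subset.
        return 0.5 * (1 + math.cos(x + phase))

    xs = [i * 0.1 for i in range(100)]
    with_p    = [fringe(x, 0.0) for x in xs]      # s counts with a p coincidence
    without_p = [fringe(x, math.pi) for x in xs]  # s counts without one
    total     = [a + b for a, b in zip(with_p, without_p)]
    print(min(total), max(total))  # both 1.0: the combined data set is flat

Each subset shows fringes, the two sets of fringes are out of phase, and
their sum is featureless, exactly as described above.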

Look at it concretely as it might happen in the lab.  We record a bunch
of s measurements and also record whether we get a coincidence with a p
photon getting through, in the CC.  Maybe we write a little check mark
next to the s measurements where there was a p photon coincidence.

We go through afterwards to analyze the data.  If we just plot all
the s measurements we see a smooth curve, no interference.  Now we
go through and cross off the ones where there was no p coincidence.
We cross off s measurement number 1, then numbers 3 and 4, then 5, 7,
10 through 12, and so on.  When we plot the remaining measurements,
now we see an interference pattern.

In other words, the coincidence with the p photon identifies a subset
of the s measurements which shows interference.  The total collection
of s measurements still shows no interference.

There is no real erasing going on.  Whoever coined the term "quantum
eraser" was a master of public relations, but unfortunately he confused
millions of lay people into getting the wrong idea about the physics.

Hal Finney



Re: What Computationalism is and what it is *not*

2005-09-05 Thread Hal Finney
Bruno writes:
 I will think about it, but I do think that CT and AR are just making  
 the YD more precise. Also everybody in cognitive science agree  
 explicitly or implicitly with both CT and AR, so to take them away  
 from YD could be more confusing.

I think that is probably true about the Church Thesis, which I
would paraphrase as saying that there are no physical processes more
computationally powerful than a Turing machine, or in other words that
the universe could in principle be simulated on a TM.  I wouldn't be
surprised if most people who believe that minds can be simulated on
TMs also believe that everything can be simulated on a TM.

(I don't see the two philosophical questions as absolutely linked, though.
I could imagine someone who accepts that minds can be simulated on TMs,
but who believes that naked singularities or some other exotic physical
phenomenon might allow for super-Turing computation.)

But isn't AR the notion that abstract mathematical and computational
objects exist, to the extent that the mere potential existence of a
computation means that we have to consider the possibility that we are
presently experiencing and living within that computation?  I don't
think that is nearly as widely believed.

That simple mathematical objects have a sort of existence is probably
unobjectionable, but most people probably don't give it too much thought.
For most, it's a question analogous to whether a falling tree makes a
noise when there's no one there to hear it.  Whether the number 3 existed
before people thought about it is an abstract philosophical question
without much importance or connection to reality, in most people's minds,
including computationalists and AI researchers.

To then elevate this question of arithmetical realism to the point
where it has actual implications for our own perceptions and our models
of reality would, I think, be a new idea for most computationalists.
Right here on this list I believe we've had people who would accept
the basic doctrines of computationalism, who would believe that it is
possible for a human mind to be uploaded into a computer, but who
would insist that the computer must be physical!  A mere potential or
abstractly existing computer would not be good enough.  I suspect that
such views would not be particularly rare among computationalists.

Hal Finney



Re: What Computationalism is and what it is *not*

2005-09-03 Thread Hal Finney
Bruno writes:
 To sum up: comp is essentially YD, if only to provide a picture of  
 the first person comp indeterminacy. But CT is used to give a range  
 for that indeterminacy (the UD*, the trace of the UD). It is by CT  
 that the UD is really comp-universal, and it is a consequence of CT  
 that this forces it to dovetail, and to dovetail on an incredibly  
 redundant structures (providing non trivial relative measures). AR is  
 used to just accept the notion of UD* and other infinite mathematical  
 structures, and for justifying the use of the excluded middle principle.

Okay, I was mostly trying to clarify the terminology.  The problem is
that sometimes you use comp as if it is the same as computationalism,
and sometimes it seems to include these additional concepts of the Church
Thesis and Arithmetical Realism.  Maybe you should come up with a new
word for the combination of comp (aka Yes Doctor) + CT + AR.  Then you
could make it clear when you are just talking about computationalism,
and when you are including the additional concepts.

Hal Finney



Re: subjective reality

2005-08-30 Thread Hal Finney
I wade into this dispute with trepidation, because I think it is for
the most part incomprehensible.  But I believe I see one place where
there was a miscommunication and I hope to clear it up.

Godfrey Kurtz wrote, to Bruno Marchal:

 You ARE doing something speculative whether you admit it or not! And I
 don't really have to study your argument because
 it is derived from premises that, you already admitted, are
 incompatible with the conclusions you claim.

What is this incompatibility?  I believe he means it to be the following.
Bruno had written:

 This I knew. The collapse is hardly compatible with comp (and thus 
 YD). Even Bohm de Broglie theory, is incompatible with YD.

And yet, Bruno claims that his methods will lead to a derivation of
physics, which as far as we know includes QM.  Godfrey sees the previous
quote from Bruno as indicating that his Yes Doctor starting point is
*incompatible* with QM.  This is the contradiction that he sees.

I'll stop here and invite Godfrey to comment on whether this is the
admission of incompatibility between premises and conclusions that he
was referring to above.

Hal Finney



Re: Book preview: Theory of Nothing

2005-08-29 Thread Hal Finney
I am a little confused about Russell's use of the term "self-aware".
I have only had a chance to read a few pages of his book but I don't
see it defined in there.

As Russell uses the term, is our normal, day to day state of consciousness
self-aware?  When I am reading, or watching TV, or eating, am I
self-aware?

I'm not sure how literally to interpret the phrase.  Does seeing my
foot make me self-aware (since my foot is part of my self), while seeing
my shoe does not?  That's probably not right.

It would be helpful to see how Russell distinguishes (or identifies)
awareness, self-awareness, and consciousness for example.

Hal Finney



Re: Maudlin's Machine and the UDist

2005-08-08 Thread Hal Finney
Russell Standish writes:
 The take home message I get from Maudlin's experiment is that a
 computationalist consciousness is supervenient on a physical process
 _spread_ over the multiverse, ie the counterfactuals must really exist
 as alternate branches of the Multiverse.

So what does that tell you about Olympia?  Is she conscious or not,
by this criterion?  I guess that you would say that if the unused
counterfactual machinery would actually work if tested, then she is
conscious; but if the counterfactual machines were broken or blocked
such that they wouldn't work (even though they are not used) then she
is unconscious.  And perhaps you can say that the machines are in fact
tested in other branches of the multiverse, so the criterion is more
than merely a hypothetical difference between unused working machines
and unused broken machines.  I see some difficulties with this position
but I better first hear whether this is what you have in mind before
trying to extrapolate further.

 As far as your UDist argument goes, the fact that a conscious HLUT, or
 a conscious clock has very low measure simply means it is very
 unlikely for us to be one of these things. They would still be
 conscious. However accepting the Multiverse would eliminate these
 objects from being conscious at all, because of the lack of
 counterfactuals.

From my perspective it doesn't make sense to ask whether a system is
conscious, per se.  Consciousness exists platonically in the multiverse.
Any given consciousness exists, whether a particular system implements
it or not.

What we want to know is whether running a certain program or process
will add to the measure of a given consciousness.  Running a clock
will not add any noticeable measure to any consciousness.  Running a
neural simulation or some AI program may well add significant measure.
We can deduce these facts without considering counterfactuals.  It is
only necessary to see how short a program can compute a representation
of the abstract conscious calculation by starting from the program or
process that we initiate.

My understanding is that the main argument for requiring counterfactuals
in the definition of implementation is to escape the argument that a clock
implements every finite state machine.  I believe that other responses are
better, such as the one by Jacques Mallah.  Unfortunately Mallah's work
seems to have largely disappeared from the web, as has Mallah himself,
but I found an early copy of one of them on archive.org and have put it
here, http://www.finney.org/~hal/mallah1.html.  This version does not
lay out the argument as clearly as the later ones, and merely hints at
the role Kolmogorov complexity can play, but the basic ideas are present.

Hal Finney



Re: Maudlin's Machine and the UDist

2005-08-08 Thread Hal Finney
, then that will not in general test the counterfactuals.
If you expand it to include a wide range of possibilities, then there
is too much going on, too many variations and bizarre outcomes, so
that the criteria you are trying to use for consciousness are met in
some worlds and contradicted in others.  All in all I don't think this
approach will work as a general method for making consciousness supervene
on physicality.

Hal Finney



Re: OMs are events

2005-08-05 Thread Hal Finney
Bruno writes:
 Now an important fact is the following: computations themselves can be 
 seen as proofs.

I have often seen this stated, but I'm not sure I have ever actually seen
the construction.  I could imagine turning a TM computation into a proof
in the following way.

There would be one axiom, which is the initial configuration of the TM:
its initial tape, its head position, and its head state.  Then the state
transition table of the TM would correspond to rules of inference such
that only one would apply at each state.  Then starting with our one
axiom, we go through successive lines of the proof, applying the single
applicable rule of inference at each line, which exactly corresponds to
what the TM would do at each state.
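
Here is a hedged illustration of that construction in Python; the
machine, a toy unary incrementer, is my own example:

    # Transition table: (state, symbol) -> (symbol to write, move, new state).
    delta = {
        ("inc", "1"): ("1", +1, "inc"),
        ("inc", "_"): ("1", +1, "halt"),
    }

    tape, head, state = list("11_"), 0, "inc"
    # The single axiom is the initial configuration.
    proof = ["axiom: %s head=%d state=%s" % ("".join(tape), head, state)]
    while state != "halt":
        # Exactly one rule of inference applies at each line.
        write, move, state = delta[(state, tape[head])]
        tape[head] = write
        head += move
        if head == len(tape):
            tape.append("_")
        proof.append("infer: %s head=%d state=%s" % ("".join(tape), head, state))
    print("\n".join(proof))  # each line follows from the previous by one rule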

 But with Church thesis, the notion of computation is 
 absolute: it does not depend on the formal system chosen (java = C = 
 turing = quantum computer = ...).

So... a computation in one of these frameworks can be transformed into
an equivalent computation in any other.

 But provability is a relative notion
 (like the notion of *total* (everywhere defined) computable functions).

So in the case of the TM computation a theorem that is provable given
the axioms and rules of inference would be a state that is reachable
from the corresponding initial state and transition table.  The question
of whether a given theorem is provable is equivalent to asking whether
a given computational state is ever reached.  And of course that depends
on precisely what computation is being done: on the specific program and
data which are running.

 To say A is proved has no meaning, you should always say: A is proved 
 in this or that theory (or by this or that machine).

Right, if someone asked, "does a program ever reach a state where x = 0?"
then we must respond, "which program?"  It is a meaningless question
to ask unless we specify what program is running.  That would be the
corresponding statement, in computational terms.

 Of course 
 provability can obey universal principles: for example the notion of 
 classical checkable proof in sufficiently rich system is completely 
 captured by the modal logics G and G*.

Well, you lost me on that one!

Hal Finney



Re: OMs are events

2005-08-04 Thread Hal Finney
Brent Meeker writes:
 I'm uncertain whether "instantiated by abstract mathematical patterns"
 means that the patterns are being physically realized by a process in
 time (as in the sci-fi above), or by the physical existence of the
 patterns in some static form (e.g. written pieces of paper), or just by
 the Platonic existence of the patterns within some mathematic/logic
 system.

I'd be curious to know whether you think that Platonic existence could
include a notion of time.  Can you imagine a process, something that
involves the flow of time, existing Platonically?  Or would you restrict
Platonic existence to things like integers and triangles, which seemingly
don't involve anything like time?

How about the case of mathematical proofs?  Could an entire proof
exist Platonically?  A proof has a sort of time-like flow to it, causal
dependency of later steps on earlier ones.  It seems to be an interesting
intermediate case.

My tentative opinion is that it does make sense to ascribe Platonic
existence to such things but I am interested to hear other people's
thoughts.

Hal Finney



Re: OMs are events

2005-08-04 Thread Hal Finney
Brent Meeker wrote (he always forgets to forward to the list):
 Hal Finney wrote:
  I'd be curious to know whether you think that Platonic existence could
  include a notion of time.

 I think timelessness is a defining characteristic of Platonic
 "existence".  I use scare quotes because I'm not sure what definition of
 existence would justify ascribing "existence" to things like
 mathematical objects.  Once we go beyond our model of physical
 existence, we may have to invent various categories of existence, e.g.
 Dr. Watson "exists" in Sherlock Holmes stories.

A Tegmark or Schmidhuber model in effect assumes that abstract,
Platonic objects have enough existence for people to live in them.
And in that case, there is no real difference between physical existence
and Platonic existence.  Physical existence would be a subset of Platonic.
(Or as Bruno says, physical existence is a modal way of viewing Platonic
existence, i.e. which objects are physically real would depend on the
observer.)  I know that not everyone here shares that view, however.

If we consider the concept that everything exists, the title of this
list, then this does seem to lead us towards this merging of physical and
mathematical/logical/Platonic existence.  On the other hand, I would say
that everything exists, but some things exist more than others.  So I am
drawing distinctions between degrees of existence (based on measure),
and it may be that distinguishing physical from abstract existence is
not such a dissimilar strategy.

Hal Finney



Re: OMs are events

2005-08-01 Thread Hal Finney
Quentin Anciaux writes:
 On Monday 01 August 2005 05:32, Hal Finney wrote:
  I am generally of the school that considers that calculations can be
  treated as abstract or formal objects, that they can exist without a
  physical computer existing to run them.

 I completely agree with that... but I have a problem with the word
 "instantiating" in front of an abstract calculation, because if the
 calculation is abstract that means the calculation just is, with no need
 of instantiation.

I agree, and if I used that terminology then it was probably a
mistake.  Looking back at the message you replied to, I did not talk of
instantiating an abstract calculation.  I did mention the question of
whether a given calculation instantiated a given OM.  Maybe instantiate
is not the right word there.  I meant to consider the question of whether
the first calculation added to the measure of the information structure
corresponding to the OM.  If you can find any other place where I used
the word confusingly, let me know.

 On the other hand I still have a problem with abstract calculation...
 take for example a mathematical proof written on a sheet of paper; it
 doesn't mean anything if there is no observer to read it and understand
 it (thereby instantiating the calculation in his own mind).  What do you
 think of that?

I can interpret your question in two ways.  One is, does a mathematical
proof written on paper have an intrinsic meaning, or is the meaning
in the mind of the reader?  And the other is, do mathematical proofs
have abstract, logical/mathematical existence, in the same way that,
say, numbers or geometric figures might be said to abstractly exist?

As far as the first question, I would analyze it by asking whether someone
who did not know the language it was written in, not even recognizing
the symbols, would be able to deduce what the proof was.  I believe the
answer is yes, for reasonably long proofs.  There would be no ambiguity.

As a concrete way to understand this, suppose we want to ask the question,
does this string of symbols represent proof X, where X is some valid
mathematical proof.  We could write a translation program which, given the
symbols, would output proof X.  If the string of symbols is reasonably
long and actually does match proof X, the translation program will be
short, much shorter than the proof itself.  However if the string of
symbols is not a proof of X, then the translation program will have to
be long.  By the same type of argument I have used repeatedly, this gives
us a tool for evaluating whether a string has a given meaning.  If the
translation program is short, then the meaning is in the string.  If the
translation program is long, then the meaning is in the translation.
I believe that this shows it is in fact reasonable to suppose that a
complex proof written on paper does have intrinsic meaning and that it
is not just a matter of how it is interpreted in the mind of the reader.
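
A crude way to play with this idea, using zlib compression as a stand-in
for true program length (Kolmogorov complexity is uncomputable, and the
data here is purely illustrative):

    import random
    import zlib

    def cost(data: bytes) -> int:
        # Compressed length as a rough proxy for description length.
        return len(zlib.compress(data, 9))

    def translation_cost(source: bytes, target: bytes) -> int:
        # Rough proxy for the size of a program that produces `target`
        # given `source`: the extra compressed bits beyond `source` alone.
        return max(0, cost(source + target) - cost(source))

    proof = b"Assume p. From p infer q. From q infer r. Conclude r. " * 50
    random.seed(0)
    noise = bytes(random.randrange(256) for _ in range(len(proof)))

    print(translation_cost(proof, proof))  # small: the meaning is in the string
    print(translation_cost(noise, proof))  # large: the map would carry the meaning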

In terms of the other question, whether proofs have abstract mathematical
existence just as (we suppose) integers and triangles do, again I think
that the answer is yes.  Proofs are merely more complex.  They have
relationships among their parts.  They depend on an axiom system.
The implicit causality and time ordering among steps of the proof
could be represented graphically, by colored arrows leading from one
step to another.  I could imagine a representation where valid proof
steps would be as apparent and obvious as the question of whether a set
of lines in a geometric figure all meet at a common point.

In short, I do think that proofs, and for that matter computations,
can be sensibly thought of as having abstract existence just like other
complex mathematical objects.  Some of the constructions of set theory
are far more complex than any humanly understandable proof, yet it is
reasonable to say that sets exist in the abstract.  The fact that a
proof has many parts and has complex relationships between the parts is
no obstacle to its having abstract mathematical existence.

Hal Finney



RE: What We Can Know About the World

2005-07-31 Thread Hal Finney
 a
line among shades of gray and try to separate white from black.  But on
the other hand, in practice only brains are noticeably conscious (and
probably only big brains; the nematode with its 302 neurons can't have
much consciousness).  Even though our stomachs and earlobes are causal
networks and have their little slivers of consciousness, only our brains
manage to really count.  It just seems strange that if consciousness is,
in the metaphysical sense, so easy that it's omnipresent, then why do
so few systems actually exhibit it?

Hal Finney



Re: OMs are events

2005-07-31 Thread Hal Finney
 is in such and such a position. This
 is associated with the state of the transistors of the computer running the
 program. But that same pattern could arise in a completely different
 calculation. You would have to extract exactly what program is running on
 the machine to be able to define OMs like that. To do that you need to feed
 the program with different kinds of input and study the output, otherwise
 you'll fall prey to the famous "clock paradox" (you can map the time
 evolution of a clock to that of any object, including brains).

I'm not sure I fully understand this, but I'll make two comments.  First,
a simulation of the solar system is vastly simpler than the calculation
needed to create an observer.  Intuitions based on the first case will
fail when applied to the other.  It may be plausible that two different
calculations could create matching representations of Jupiter's orbit.
But it's completely implausible that two calculations could accidentally
both create the same sequence of observer moments.  I estimated in the
message above that the chance of that happening would be one in 2^(10^18).
No human alive can even begin to grasp the impossibility of such an event.
Think of the most absolutely, totally, completely impossible event you
could ever imagine, and you won't be anywhere near as improbable as that.
It is beyond human comprehension.

Second, this clock paradox has been discussed before.  Years ago Jacques
Mallah on this list pointed out that algorithmic complexity disposes of
it neatly.  Sure, you can map any two calculations together, but if the
map becomes bigger than the calculations, then all the correspondence
is in the map and none in the calculations.

In measure terms, it still comes down to how short a program you can
write to produce the output that corresponds to an observer.  Go ahead
and write your clock or counter program, but its output does not match my
canonical representation of an observer moment.  The challenge is to write
a translation program that turns the output of your clock into the OM's
canonical representation (which is 10^18 bits in size!).  Such a program
is going to be as big as the OM data itself.  The clock is of no help.

On the other hand consider a program which (we would agree) really
does output or create the observer-moment, but perhaps not in the nice
canonical representation I might have defined.  Then we can write
a mapping program which will be relatively short, to turn one data
representation into another.  Even though we have 10^18 bits of data,
the mapping program will still be much smaller than this, because its
complexity does not depend on the size of the data to be translated.
This shows that the program really did create the observer-moment, because
there was little extra data in the map program.  The correspondence was
in the calculation, not in the map.
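
In Kolmogorov-complexity notation, the criterion sketched here is (my
paraphrase, not notation from the post):

    K(OM | output of P)  <<  K(OM)   -- P contributes measure to the OM
    K(OM | clock output)  ~  K(OM)   -- the clock contributes essentially none

where K(x | y) is the length of the shortest program producing x from y.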

With such large data sets as observer-moments, the point becomes
very clear.  There is effectively no ambiguity about whether a given
calculation instantiates an OM or not.  Clocks don't do it; neural network
simulations can do it (with proper input); universe simulations can do it
(using a subset of their output).

Hal Finney



Re: OMs are events

2005-07-31 Thread Hal Finney
Quentin Anciaux writes:
 In all of these discussions, it is really this point that annoys me...
 What is the calculation?  Is it a physical process?  Obviously a
 calculation needs time... what is the difference between an abstract
 calculation (i.e. one done on a sheet of paper or just in your head) and
 an effective calculation?  What is the meaning of instantiating in a
 block universe view?

I am generally of the school that considers that calculations can be
treated as abstract or formal objects, that they can exist without a
physical computer existing to run them.

The goal is to model the universe (among other things) as such a
calculation.  If we demand that a calculation exists in a universe, and
a universe is also a calculation, then we have an infinite regression.
One might postulate a God who is infinite himself and is the endpoint
of the regression, but absent such supernatural entities, the model
otherwise doesn't work.

Why model the universe as a calculation?  Well, for one reason, because it
seems to work.  It appears that physical law is essentially mathematical,
implying that it should be feasible in principle to construct a program
which could simulate the entire universe to any degree of accuracy.
It would seem odd, given that the universe can be a calculation, if it
weren't a calculation.

If it seems objectionable to have a calculation without a calculator,
perhaps simpler examples can support the intuition.  You can imagine a
triangle without a triangulator.  You can imagine a number without someone
who counts.  Perhaps you can even imagine a mathematical proof without
a prover.  Mathematical objects may have virtually unlimited complexity
and internal structure, and can be said to exist independently of anyone
who thinks about them or discovers them.  Computations seem to fit very
comfortably into this framework.

If we allow ourselves to imagine calculations as having mathematical
reality, and further to imagine that our universe is such a calculation,
then we have unified mathematical and physical reality.  There is no
longer a difference.  Things which are physically real are merely a
subset of the things which are mathematically real.

If we don't take this step, we have two kinds of reality, mathematical
and physical, which makes for a more awkward (IMO) philosophical position.

However I certainly understand that all these arguments are only
persuasive and indicative and certainly do not amount to a proof.
Nevertheless it is my hope that by pursuing these ideas we can construct
testable propositions which, if verified, will add weight to the
possibility that this is the nature of reality.

Hal Finney



Reality in the multiverse

2005-07-28 Thread Hal Finney
One problem with "reality" in the context of multiverse theories is that
it may mean different things to different people.

If we assume (for analytical purposes) that some form of multiverse
exists, then ultimately the reality is the multiverse.  But it seems that
each person is constrained only to see one universe out of the multiverse.
For him, that universe is all that is real, the rest of the multiverse
is irrelevant.  So already there is confusion over whether we should
include the other worlds of the multiverse in reality.

I have been exploring the concept that the Universal Distribution exists
and is real.  Reality in this model is every computer program execution,
or equivalently (I would claim, but it is not too important here) every
information pattern.

This is a sort of multiverse, in that it includes multiple universes.
Anything that can be created by a computer program exists, and arguably
universes fall into this category.

But it also includes other things.  Chaotic information patterns
that would not seem to possess most of the properties of a universe
exist as well - without time, or causality, or dimensionality perhaps -
just raw noise.

And disembodied consciousnesses exist, too.  We could each have our
information patterns, the processes that make up our minds, be produced by
programs which do not actually create the rest of the universe but simply
contain hard-coded sense impressions which are delivered by clockwork.

The UDist framework allows us to theoretically approximate the measure
of these various information objects, so we can say that some are more
prominent in the multiverse than others.  But all exist, all are real,
in this model.

One of the points Bruno makes is that in these kinds of models,
the reality for a given observer is pretty complicated.  Much of the
multiverse is irrelevant to him, but that doesn't mean he can focus on
just one universe as real.  The observer spans multiple universes and
multiple realities.

In the UDist framework, I would say it in this way:  Many programs
create the information pattern corresponding to a given observer.
Some of those programs create the observer as part of a relatively
straightforward universe that corresponds fairly simply to his sense
impressions.  Some programs create the observer within a universe that
has a far more subtle and complex relationship to what the observer
senses.  In some universes the observer is part of a simulation a la
The Matrix, being run on artificial machines within that universe, so
that what the observer sees has little relation to the true reality
of that universe.  And some programs create the information pattern as
I described above, without a real universe at all, so that the observer
in effect hallucinates the entire universe.

The point is that all of these programs exist, hence all contribute
measure to the observer.  From the observer's perspective, all of these
are in a sense real to him.  However, he can in principle calculate
(at least approximately) the numerical contribution made by each of
these programs, and perhaps it turns out that the vast majority of the
measure comes from just one of them.  He might be justified in that case
in largely ignoring the others and saying that only that one is real
for him.
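
To make "calculate the numerical contribution" concrete, here is a tiny
sketch under the UDist prior; all three program lengths are invented for
illustration:

    # Hypothetical program lengths (bits) for three ways of generating
    # the same observer pattern.
    lengths = {"simple universe": 400, "matrix simulation": 650,
               "hard-coded hallucination": 900}
    weights = {k: 2.0 ** -v for k, v in lengths.items()}
    total = sum(weights.values())
    for k, w in weights.items():
        print(k, w / total)   # relative contribution to the observer's measure

With numbers like these the shortest program utterly dominates, which is
the quantitative sense in which the observer might call only that
universe real for him.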

But for full precision he must still take into consideration all of
the programs that could create instances of his information pattern,
and consider all of them to be real to some extent.  And then, perhaps,
he may choose to accept that the whole multiverse is real, even the parts
which do not affect him.  Otherwise he has to say that all programs exist
which happen to include an information pattern corresponding to him,
the observer who is making this claim.  That's not a very compelling
theoretical model.

Hal Finney



UD, ASSA, QTI and DA

2005-07-28 Thread Hal Finney
UD is the Universal Distribution, which assigns measure to information
objects as the fraction of computer programs which output them on a
given Universal Turing Machine.  (I know I promised to use UDist for
this acronym but I couldn't resist the title I chose.)
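
In the standard formulation (due to Solomonoff and Levin; stated here
for reference, not spelled out in the post), the measure of an
information object x under a universal prefix machine U is

    m(x) = sum over { p : U(p) = x } of 2^(-|p|)

so shorter programs dominate, and objects producible by many short
programs get high measure.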

ASSA is the Absolute Self Selection Assumption, which says that we
should reason as though we are randomly selected observer moments (OMs).
Combining the UD+ASSA means that the OMs should be selected according
to the Universal Distribution.

QTI is the Quantum Theory of Immortality, which says that in a
many-worlds or multiverse model, each of us will experience immortality
with certainty, because our deaths in some worlds will not affect the
fact that we live on in others.

The DA is the Doomsday Argument, which says (among other things) that
the human race will not greatly increase in numbers beyond its present
day size, because otherwise the chances that we would find ourselves
so early in its history are insignificant.

Here is an idea to tie these together.  Some time back [1] I proposed
that the measure of observer-moments would be amplified for those OMs
which were remembered.  Most OMs exist only transiently and are forgotten,
but certain ones are special, make an impression, and are remembered for
hours, days or even years.  The measure of such OMs is arguably greater
than the ones which are forgotten because of this effect.

Let us assume that this is true and that in fact it can be a very
strong effect, such that remembered OMs may acquire far more measure
than forgotten ones.  Then what does that suggest for the DA and QTI?

One way to state the DA, using ASSA concepts, is to start with the
supposition that the human race actually will vastly increase its numbers,
perhaps spreading throughout the entire universe.  Then we have to
ask the question, how could it be that we are so early in its history?
In measure terms, the total measure of late observer moments is vastly
greater than the total measure of OMs as early as ours, so the odds are
overwhelmingly against these OMs being experienced.

Just as we accept the low measure of Flying Rabbit OMs as explaining
the lawfulness of the universe, we should accept this way of expressing
the DA as showing the contradiction in assuming that the human race
will grow enormously.  That is the conclusion of the DA, which some
find paradoxical.

The resolution could be to suppose that as the human race spreads,
somehow it will retain a memory of its youth.  The OMs that we are
experiencing today will echo through time and gain measure in that way,
via the remembering effect I discussed above.  It could happen as literal
memories, if we assume that somehow we will become personally immortal and
create copies of ourselves who will spread throughout the universe and
share in the vastening of the human race.  Or perhaps a similar measure
amplification effect could occur more indirectly; perhaps our mere
influence on future events could leave sufficient traces down through
time that present day OMs have much larger measure than future ones
(on a per-OM basis).  This would allow our present-day existence to be
consistent with even a great vastening of humanity, resolving the DA.

A similar argument might apply to the QTI.  In ASSA terms, the
conventional QTI does not work because the measure of extremely old
versions of people is very small, so they would not be very prominent
in the multiverse, any more than inhabitants of Flying Rabbit worlds.
However, if the memory-amplification effect holds, then there could be
a similar phenomenon to QTI.

If we suppose that somehow certain people are destined for immortality,
then the measure of even their young-age OMs would be greatly increased
relative to their fellows by virtue of memory amplification.  This would
mean that those of us who are fortunate enough to have such destinies
(which is not completely impossible today, given the progress of medical
technology), would have very high-measure OMs.

This implies that the experience of being a person is prima facie evidence
that he may expect a much longer than usual life span, perhaps even an
immortal existence.  Such lucky individuals would have so much higher
measure OMs even in their youth (due to memory amplification) that a
randomly selected OM would very plausibly be that of such an individual.

And the fact that we are young and not super-old is perhaps consistent as
well, since younger OMs would have more memory amplification, both due
to the fact that early experiences have more inherent distinctiveness
and novelty because of youth, and due to the additional years they have
to be remembered and for the echo effect to produce measure amplification.

In short, finding oneself to be young is not necessarily an argument
against this variant of the QTI, and may in fact be considered evidence
in favor of a long or even immortal life span.

Hal Finney

[1] Near the end of http://www.escribe.com/science

Re: what relation do mathematical models have with reality?

2005-07-25 Thread Hal Finney
Stephen Paul King wrote:
 BTW, Scott Aaronson has a nice paper on the P=NP problem that is found here:
 http://www.scottaaronson.com/papers/npcomplete.pdf

That describes different proposals for physical mechanisms for efficiently
solving NP-complete problems: things like quantum computing variants,
relativity, analog computing, and so on.  He actually looked at a claim
that soap bubble films effectively solve NP complete problems and tested
it himself, to find that they don't work.

He also discusses time travel and even what we call quantum suicide,
where you kill yourself if the machine doesn't guess right.

I am skeptical though about something he says in conclusion: "Even many
computer scientists do not seem to appreciate how different the world
would be if we could solve NP-complete problems efficiently.  If such
a procedure existed, then we could quickly find the smallest Boolean
circuits that output (say) a table of historical stock market data,
or the human genome, or the complete works of Shakespeare.  It seems
entirely conceivable that, by analyzing these circuits, we could make
an easy fortune on Wall Street, or retrace evolution, or even generate
Shakespeare's 38th play.  For broadly speaking, that which we can compress
we can understand, and that which we can understand we can predict.
If we could solve the general case - if knowing something was tantamount
to knowing the shortest efficient description of it - then we would be
almost like gods."

This doesn't seem right to me, the notion that an NP solving oracle
would be able to find the shortest efficient description of any data.
That would require a more complex oracle, one that would be able to
solve the halting problem.

I think Aaronson is blurring the lines between finding the smallest
Boolean circuit and finding the smallest efficient description.  Maybe
finding the smallest Boolean circuit is in NP; it's not obvious to me
but it's been a while since I've studied this stuff.  But even if we
could find such a circuit I'm doubtful that all the rest of Aaronson's
scenario follows.
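
To make the point concrete: if the decision question "is there a circuit
of size <= k that reproduces an explicitly given truth table?" really is
in NP (a candidate circuit can be checked against the table in polynomial
time), then an NP oracle would let us binary-search for the minimum.
Here is a minimal sketch in Python; oracle() is a hypothetical stand-in
for the NP solver, not a real API:

    def smallest_circuit_size(oracle, truth_table, max_size):
        # oracle(k, table) is the hypothetical NP query: True iff some
        # circuit with at most k gates reproduces the table.
        lo, hi = 1, max_size
        while lo < hi:
            mid = (lo + hi) // 2
            if oracle(mid, truth_table):
                hi = mid
            else:
                lo = mid + 1
        return lo   # about log2(max_size) oracle queries in all

Finding the shortest efficient *description*, by contrast, means judging
arbitrary candidate programs, and in general that runs into the halting
problem, which is the distinction drawn above.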

Hal Finney



Re: what relation do mathematical models have with reality?

2005-07-25 Thread Hal Finney
Brent Meeker wrote:
 [Hal Finney wrote:]

  When you observe evidence and construct your models, you need some
  basis for choosing one model over another.  In general, you can create
  an infinite number of possible models to match any finite amount of
  evidence.  It's even worse when you consider that the evidence is noisy
  and ambiguous.  This choice requires prior assumptions, independent of the
  evidence, about which models are inherently more likely to be true or not.

 In practice we use coherence with other theories to guide our choice.  With
 that kind of constraint we may have trouble finding even one candidate
 theory.

Well, in principle there still should be an infinite number of theories,
starting with "the data is completely random and just happens to
look lawful by sheer coincidence".  I think the difficulty we have in
finding new ones is that we are implicitly looking for small ones, which
means that we implicitly believe in Occam's Razor, which means that we
implicitly adopt something like the Universal Distribution, a priori.

 We begin with an intuitive physics that is hardwired into us by
 evolution.  And that includes mathematics and logic.  There's an excellent
 little book on this, The Evolution of Reason by Cooper.

No doubt this is true.  But there are still two somewhat-related problems.
One is, you can go back in time to the first replicator on earth, and
think of its evolution over the ages as a learning process.  During this
time it learned this intuitive physics, i.e. mathematics and logic.
But how did it learn it?  Was it a Bayesian-style process?  And if so,
what were the priors?  Can a string of RNA have priors?

And more abstractly, if you wanted to design a perfect learning machine,
one that makes observations and optimally produces theories based on
them, do you have to give it prior beliefs and expectations, including
math and logic?  Or could you somehow expect it to learn those?  But to
learn them, what would be the minimum you would have to give it?

I'm trying to ask the same question in both of these formulations.
On the one hand, we know that life did it, it created a very good (if
perhaps not optimal) learning machine.  On the other hand, it seems like
it ought to be impossible to do that, because there is no foundation.

  Mathematics and logic are more than models of reality.  They are
  pre-existent and guide us in evaluating the many possible models of
  reality which exist.

 I'd say they are *less* than models of reality.  They are just consistency
 conditions on our models of reality.  They are attempts to avoid talking
 nonsense.  But note that not too long ago all the weirdness of quantum
 mechanics and relativity would have been regarded as contrary to logic.

I guess we could agree that they are other than models of reality?
It still strikes me as paradoxical: ultimately we have learned our
intuitions about mathematics and logic from reality, via the mechanisms
of evolution and also our own individual learning experiences.  And yet
it seems that at some level a degree of logic, and certain mathematical
assumptions, are necessary to get learning off the ground in the first
place, and that they should not depend on reality.

I'm pretty confused about this right now.

Hal Finney



Re: what relation do mathematical models have with reality?

2005-07-24 Thread Hal Finney
Brent Meeker writes:
 Here's my $0.02. We can only base our knowledge on our experience
 and we don't experience *reality*, we just have certain
 experiences and we create a model that describes them and
 predicts them.  Using this model to predict or describe usually
 involves some calculations and interpretation of the calculation
 in terms of the model.  The relation of the model to reality, if
 it's a good one, is that it gives us the right answer, i.e. it
 predicts accurately.  There are other criteria for a good model
 too, such as fitting in with other models we have; but prediction
 is the main standard.

This makes sense but you need another element as well.  This shows up
most explicitly in Bayesian reasoning models, but it is implicit in
others as well.  That is the assumption of priors.

When you observe evidence and construct your models, you need some
basis for choosing one model over another.  In general, you can create
an infinite number of possible models to match any finite amount of
evidence.  It's even worse when you consider that the evidence is noisy
and ambiguous.  This choice requires prior assumptions, independent of the
evidence, about which models are inherently more likely to be true or not.

This implies that at some level, mathematics and logic have to come before
reality.  That is the only way we can have prior beliefs about the models.
Whether it is the specific Universal Prior (1/2^n) that I have been
describing or some other one, you can't get away without having one.
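
Here is a minimal sketch of what that prior buys us, with made-up bit
sizes purely for illustration.  Working in log space avoids underflow:

    def log2_prior(model_bits):
        # log2 of the 1/2^n prior weight for an n-bit model description
        # (prefix-coding subtleties ignored)
        return -model_bits

    # A 500-bit lawful theory versus a "sheer coincidence" theory that
    # must encode, say, 10^6 bits of observations verbatim:
    odds_in_bits = log2_prior(500) - log2_prior(10**6)
    print(odds_in_bits)   # 999500: prior odds of 2^999500 for the lawful theory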

 So in my view, mathematics and theorems
 about computer science are just models too, albeit more abstract
 ones.  Persi Diaconis says, "Statistics is just the physics of
 numbers."  I have a similar view of all mathematics, e.g.
 arithmetic is just the physics of counting.

I don't think this works, for the reasons I have just explained.
Mathematics and logic are more than models of reality.  They are
pre-existent and guide us in evaluating the many possible models of
reality which exist.

Hal Finney



Re: what relation do mathematical models have with reality?

2005-07-24 Thread Hal Finney
Forwarded on behalf of Brent Meeker:
 On 24-Jul-05, you wrote:

  Brent Meeker writes:
  Here's my $0.02. We can only base our knowledge on our experience
  and we don't experience *reality*, we just have certain
  experiences and we create a model that describes them and
  predicts them.  Using this model to predict or describe usually
  involves some calculations and interpretation of the calculation
  in terms of the model.  The relation of the model to reality, if
  it's a good one, is that it gives us the right answer, i.e. it
  predicts accurately.  There are other criteria for a good model
  too, such as fitting in with other models we have; but prediction
  is the main standard.
  
  This makes sense but you need another element as well.  This shows up
  most explicitly in Bayesian reasoning models, but it is implicit in
  others as well.  That is the assumption of priors.
  
  When you observe evidence and construct your models, you need some
  basis for choosing one model over another.  In general, you can create
  an infinite number of possible models to match any finite amount of
  evidence.  It's even worse when you consider that the evidence is noisy
  and ambiguous.  This choice requires prior assumptions, independent of the
  evidence, about which models are inherently more likely to be true or not.

 In practice we use coherence with other theories to guide our choice.  With
 that kind of constraint we may have trouble finding even one candidate
 theory. We begin with an intuitive physics that is hardwired into us by
 evolution.  And that includes mathematics and logic.  There's an excellent
 little book on this, The Evolution of Reason by Cooper.


  
  This implies that at some level, mathematics and logic have to come before
  reality.  That is the only way we can have prior beliefs about the models.
  Whether it is the specific Universal Prior (1/2^n) that I have been
  describing or some other one, you can't get away without having one.
  
  So in my view, mathematics and theorems
  about computer science are just models too, albeit more abstract
  ones.  Persi Diaconis says, "Statistics is just the physics of
  numbers."  I have a similar view of all mathematics, e.g.
  arithmetic is just the physics of counting.
  
  I don't think this works, for the reasons I have just explained.
  Mathematics and logic are more than models of reality.  They are
  pre-existent and guide us in evaluating the many possible models of
  reality which exist.

 I'd say they are *less* than models of reality.  They are just consistency
 conditions on our models of reality.  They are attempts to avoid talking
 nonsense.  But note that not too long ago all the weirdness of quantum
 mechanics and relativity would have been regarded as contrary to logic.


 Brent Meeker



Re: what relation do mathematical models have with reality?

2005-07-23 Thread Hal Finney
Colin Hales writes:
 The idea brings with it one unique aspect: none of the calculii we
 hold so dear, that are so wonderful to play with, so poweful in their
 predictive nature in certain contexts, are ever reified. None of them
 actually truly capture reality in any way. They only appear to in
 certain contexts. The only actual mathematics that captures the true
 nature of the universe is the universe itself as a calculus. It doesn't
 invalidate the maths we love. It just makes it merely a depiction in a
 certain context. Very useful but thats all.

You might like this quote from John Wheeler, in his textbook Gravitation written
with Charles Misner and Kip Thorne, which perhaps expresses a similar idea:

: Paper in white the floor of the room, and rule it off in one-foot
: squares. Down on one's hands and knees, write in the first square
: a set of equations conceived as able to govern the physics of the
: universe. Think more overnight. Next day put a better set of equations
: into square two. Invite one's most respected colleagues to contribute
: to other squares. At the end of these labors, one has worked oneself
: out into the doorway. Stand up, look back on all those equations,
: some perhaps more hopeful than others, raise one's finger commandingly,
: and give the order `Fly!' Not one of those equations will put on wings,
: take off, or fly. Yet the universe 'flies'.

My current view is a little different, which is that all of the equations
fly.  Each one does come to life but each is in its own universe,
so we can't see the result.  But they are all just as real as our own.
In fact one of the equations might even be our own universe but we can't
easily tell just by looking at it.

Hal Finney



Re: is induction unformalizable?

2005-07-22 Thread Hal Finney
Wei Dai writes:
 1. P=?NP is a purely mathematical problem, whereas the existence of an HPO
 box is an empirical matter. If we had access to a purported HPO box while
 P=?NP is still unsolved, we can use the box to exhaustively search for
 proofs of either P=NP or P != NP.

I've seen many speculations that P=?NP may be undecidable under our
current axioms.  I guess this is because people are tired of looking
for proofs and PhD students don't want to get assigned this problem.

I'm not sure whether both of the following possibilities would be
consistent with the issue being undecidable:

A: There actually exists a polynomial-time algorithm to solve all
NP problems, but we can't prove that it always works, even though it
always does.

B: There is no polynomial time algorithm that solves all NP problems,
but we can't prove that no such algorithm exists.

I wonder if we could ask the HPO (halting problem oracle) box any harder
questions, that might help resolve the issue if it turned out that P=?NP
is undecidable.  Could we use it to directly ask whether the algorithm
in case A above exists, and perhaps even to find it?
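
Provability questions, at least, reduce cleanly to halting questions,
which is exactly the kind of query the box answers.  A minimal sketch,
where halts() stands in for the oracle and nth_candidate() and checks()
are hypothetical helpers (enumerating proof strings and mechanically
verifying them are both ordinary computable operations):

    def is_provable(statement, axioms, halts):
        def search():
            n = 0
            while True:
                if checks(nth_candidate(axioms, n), statement):
                    return
                n += 1
        return halts(search)   # halts iff some proof of the statement exists

So asking for a proof of P=NP or of P != NP is straightforward; asking
whether the unprovable-but-true algorithm of case A exists is not
obviously a halting question, which is what makes it interesting.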

 2. I think it's very unlikely that P=NP, but in case it is, we can still
 test an HPO box by generating random instances of hard problems with known
 solutions. (That is, you generate a random solution first, then generate a
 random problem with that solution in mind.) For example here's a page about
 generating random instances of the Traveling Salesman Problem with known
 optimal solutions.

 http://www.ing.unlp.edu.ar/cetad/mos/FRACTAL_TSP_home.html

That's a good idea, but is it known that this subset of problems is
still NP-hard?  I would worry that problems like these, where a fractal
or space-filling curve type of path is the right solution, might turn
out to be easier to solve than the general case.
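
For reference, here is the planted-solution trick in its simplest form,
using random 3-SAT rather than TSP (my substitution, chosen only because
it is compact).  The same worry applies: instances built this way may be
easier than the general case.

    import random

    def planted_3sat(num_vars, num_clauses, rng=random):
        # Pick the satisfying assignment first, then keep only clauses
        # that it satisfies, so the instance has a known solution.
        solution = [rng.choice([True, False]) for _ in range(num_vars)]
        clauses = []
        while len(clauses) < num_clauses:
            picked = rng.sample(range(num_vars), 3)
            clause = [(v, rng.choice([True, False])) for v in picked]
            if any(solution[v] == wanted for v, wanted in clause):
                clauses.append(clause)
        return clauses, solution

    clauses, solution = planted_3sat(50, 210)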

Hal Finney



UDist and measure of observers

2005-07-22 Thread Hal Finney
all the neurons, record their interconnection patterns, and then their
firing rates and activity levels.  All of these have relatively simple
physical correlates given that you can analyze matter at an arbitrarily
fine scale.  Locating the neurons can be done by tracing their outer
membranes.  The interconnection patterns should be determined by the
amount of area they have in common, the number and distribution of
vessicles and receptors in the area, and basic chemistry as to whether
the connection is inhibitory or excitatory.  This is a matter of simple
geometry and counting.  Likewise, the activity level is a function of
the concentration of various chemicals inside vs outside the neural
membrane and can be calculated very simply.

This level of software is adequate to create the data structure defined
above for completely specifying the neural activity which corresponds
to a given set of observer moments.  It amounts to simple counting,
area calculation, and averaging.  My guess again is that 10^4 to 10^5
bits is fully adequate to perform these tasks.  Adding the ~10^3 bits
needed to localize the observer still keeps it within this range.

Combining the software to create the universe, perhaps 10^4 bits, and the
software to output the observer description, about 10^4 to 10^5 bits,
we get the size proposed above, 10^4 to 10^5 bits for a self-contained
program which will output the observer description in question.  On this
basis we can use a number like 1/2^(10^4) as an estimate for the measure
of such a set of observer moments.
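
The bookkeeping here is just addition in log space, since measure is
1/2^K for a K-bit program.  Schematically, using the estimates above:

    K_universe = 10**4    # program that generates the universe
    K_locate   = 10**3    # localizing the observer within it
    K_readout  = 10**5    # neural read-out software, upper estimate
    K_total = K_universe + K_locate + K_readout
    # 2**-K_total underflows any float, so observer moments are compared
    # by their K_total directly: smaller K_total means larger measure.
    print(K_total)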

Hopefully this explanation will clarify how we can apply the UDist
model to calculate measure of observer moments as well as other
information structures.  It also illustrates how far we are from the
scientific knowledge necessary to come up with more precise estimates
for the information content of conscious entities.

Nevertheless, even with the crude level of knowledge available today,
we can make many powerful predictions from this kind of model.  One case
described above is the paradox of whether conscious entities exist all
around us due to vibrations in air molecules, which this analysis lets us
reject in a quantitative sense.  Hans Moravec in particular has argued
that such entities have a reality equal to our own, which is clearly
wrong.  A similar analysis disposes of the long-standing philosophical
debate over whether a clock implements every finite state machine (and
hence every conscious entity).  Other puzzles, such as the impact on
measure of replays and duplicates can also be addressed and solved in
this framework.  I have described other predictions and solutions in my
earlier messages on this topic.

Again, I hope that by laying out my calculations in this much detail it
will help people to see somewhat concretely how the Universal Distribution
works and how you can analyze measure using actual software engineering
concepts.  It makes the UDist much more real as a useful tool for
understanding measure and making predictions.

Hal Finney



Re: The Time Deniers and the idea of time as a dimension

2005-07-21 Thread Hal Finney
George Levy writes:
 Hal Finney wrote:
 http://space.mit.edu/home/tegmark/dimensions.html , specifically
 http://space.mit.edu/home/tegmark/dimensions.pdf .  

 Wouldn't it be true that in the many-worlds, every quantum branching that 
 is decoupled from other quantum branchings would in effect define its 
 own time dimension? The number of decoupled branchings contained by the 
 observable universe is very large. Linear time is only an illusion due 
 to our limited perspective of the branching/merging network that our 
 consciousness traverses. While our consciousness may spread over 
 (experience) several OMs or nodes in that network, it can only perceive 
 a single path through the network.

Tegmark's idea of multiple time dimensions was more general than this.
As with multiple space dimensions, you could travel about in the
time dimensions.

In relativity theory, there is a light cone that restricts which
direction is forward in time.  You can change your direction but are
constrained to always be going forward relative to your light cone.
This keeps you from turning around and going backwards in time, because
you can't exceed the speed of light.  However with 2-dimensional time the
geometry is different and you can actually go backwards in time.  Your own
personal clock goes forward but you can end up back before you started.

I'll give you a mental visualization you might find useful and
interesting.  There is a conventional way to think of a light cone which
is what gives it its name.  Imagine a 2+1 dimensional universe, 2 spatial
dimensions and 1 of time.  To think of it, start with an x-y plane with
the x and y axes.  We'll call the y axis time, positive being upward.
This is a 1+1 dimensional universe. Now imagine the lines x=y and x=-y,
in other words the two lines running at 45 degrees and crossing at the
origin.  These can be thought of as the paths of light rays emitted by or
received at the origin.  Now imagine spinning the whole thing around the y
axis, where the new z axis will be another spatial dimension.  The crossed
lines become a pair of cones that represent possible light beams being
emitted from or received at the origin.  These are called light cones.
At each point in space we could imagine a pair of such cones existing,
future and past.  Objects are constrained in their movements to only be
going upward, they have to stay within their light cones.

Now for the variant, with a 1+2 dimensional universe: 1 spatial dimension
and 2 time dimensions.  Again we will start with the x-y plane, y is time,
and we draw the crossed 45 degree lines.  This time we spin around the
x axis, to again produce two cones, but they are pointed right and left
rather than up and down.  In this model z is a time dimension like y,
so we have 2 time dimensions.  Now, objects again are constrained in
their movements not to cross the cones, but the cones are pointed to
the side rather than upward.  This means that objects are not stuck
inside the cones but are in effect outside of them and are able to move
much more freely.  You can see perhaps how an object could start at
the origin, move in a loop in the y-z plane and return to the origin,
all without ever passing through the cones.
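
The cone pictures boil down to one inequality.  With c = 1, a small
displacement is timelike when the squared lengths of its time components
exceed those of its space components.  A minimal sketch (my shorthand,
not anything from Tegmark's paper) of why the 1+2 case permits loops:

    import math

    def is_timelike(dts, dxs):
        # dts: displacements along the time axes; dxs: along space axes
        return sum(t * t for t in dts) > sum(x * x for x in dxs)

    # 2+1 (two space, one time): motion must advance along the single
    # time axis, so a path can never return to its starting event.
    print(is_timelike(dts=(1.0,), dxs=(0.3, 0.3)))    # True: inside the cone

    # 1+2 (one space, two time): each step around a circle in the y-z
    # time plane is timelike, yet the path closes on itself.
    pos = [(0.5 * math.cos(a), 0.5 * math.sin(a))
           for a in [i * 2 * math.pi / 100 for i in range(101)]]
    steps = [(q[0] - p[0], q[1] - p[1]) for p, q in zip(pos, pos[1:])]
    print(all(is_timelike(dts=s, dxs=(0.0,)) for s in steps))   # True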

This is the nature of the 2-dimensional time explored by Tegmark.
It is pretty different from the MWI.  I would not say that the MWI
has multidimensional time any more than it has 3 dimensional space.
Technically the MWI all happens in one spacetime area, it is all
superimposed and squashed together.  There is merely a mathematical
separation which occurs when states become decoherent, such that their
future histories are causally independent.  But technically they are still
using the same space and time, they are just invisible to each other.

Hal Finney



RE: The Time Deniers and the idea of time as a dimension

2005-07-19 Thread Hal Finney
Physicist Max Tegmark has an interesting discussion on the
physics of a universe with more than one time dimension at
http://space.mit.edu/home/tegmark/dimensions.html , specifically
http://space.mit.edu/home/tegmark/dimensions.pdf .  In the excerpts
below, n is the number of space dimensions and m the number of time
dimensions, so when he writes m > 1 he means more than one time dimension.
Quoting Tegmark:

: What would reality appear like to an observer in a manifold with more
: than one time-like dimension? Even when m  1, there is no obvious
: reason why an observer could not, none the less, perceive time as being
: one-dimensional, thereby maintaining the pattern of having thoughts
: in a one-dimensional succession that characterizes our own reality
: perception. If the observer is a localized object, it will travel along
: an essentially one-dimensional (time-like) world line through the (n +
: m)-dimensional spacetime manifold. The standard general relativity notion
: of its proper time is perfectly well defined, and we would expect this
: to be the time that it would measure if it had a clock and that it would
: subjectively experience.
:
: Needless to say, many aspects of the world would none the less appear
: quite different.  For instance, a re-derivation of relativistic mechanics
: for this more general case shows that energy now becomes an m-dimensional
: vector rather than a constant, whose direction determines in which
: of the many time directions the world line will continue, and in the
: non-relativistic limit, this direction is a constant of motion. In
: other words, if two non-relativistic observers that are moving in
: different time directions happen to meet at a point in spacetime, they
: will inevitably drift apart in separate time directions again, unable
: to stay together.
:
: Another interesting difference, which can be shown by an elegant
: geometrical argument [10], is that particles become less stable when
: m > 1 ...
:
: In addition to these two differences, one can concoct seemingly strange
: occurrences involving backward causation when m > 1. None the less,
: although such unfamiliar behaviour may appear disturbing, it would seem
: unwarranted to assume that it would prevent any form of observer from
: existing. After all, we must avoid the fallacy of assuming that the
: design of our human bodies is the only one that allows self-awareness
:
: There is, however, an additional problem for observers when m > 1,
: which has not been previously emphasized even though the mathematical
: results on which it is based are well known. If an observer is to be
: able to make any use of its self-awareness and information-processing
: abilities, the laws of physics must be such that it can make at least
: some predictions. Specifically, within the framework of a field theory,
: it should, by measuring various nearby field values, be able to compute
: field values at some more distant spacetime points (ones lying along its
: future world line being particularly useful) with non-infinite error
: bars. If this type of well-posed causality were absent, then not only
: would there be no reason for observers to be self-aware, but it would
: appear highly unlikely that information processing systems (such as
: computers and brains) could exist at all.

Tegmark then goes into quite a technical discussion about solving the
equations of physics given various ways of specifying initial values,
the upshot of which is that if m > 1 (i.e. more than one time dimension)
observers would not be able to predict the state in the rest of the
universe from their observations, which would seem to preclude the
existence of observers.  I'm not sure I fully understood this argument.

However the earlier part is quite instructive in giving us a picture of
how a universe could look that had multiple time dimensions.  Any one
entity would still have a single time line, but different ones might
disagree about which direction the future was, and time loops would
be possible.  Personally I think this is a more serious problem than
Tegmark's idea about prediction difficulties, although he seems to gloss
over it as mere unfamiliar behavior.

Nevertheless I think it is instructive to realize that multiple time
dimension universes are a conceptual possibility even if they are unlikely
to contain observers like us.  Tegmark is implicitly writing within the
block universe perspective which is generally adopted by physicists.
Translating this into a flow of time view seems quite challenging
and suggests that that viewpoint may not be as flexible in terms of
deep understanding of the notion of time.

Hal Finney



Re: is induction unformalizable?

2005-07-15 Thread Hal Finney
One question I am uncertain about is this: how well could we test a
supposed halting-problem oracle (HPO) box?

In particular I wonder, suppose it turns out that P=NP and that further
there is an efficient algorithm to solve any NP problem.  For those
unfamiliar with this terminology P means polynomial time, and we say
that problems in P can be solved efficiently.  NP means nondeterministic
polynomial time, and essentially this means that for problems in NP, we
can efficiently test a purported solution for correctness.  Whether P
is equal to NP or merely a subset of it is one of the major unsolved
problems of computer science.

But what if the aliens have solved it, and (somewhat to our surprise)
the answer is that every NP problem can be efficiently solved.  And they
have embedded this NP solving algorithm (along with some other ones)
in the HPO box.

My concern is that to test the HPO box we could for example give it
a problem we have solved and see if it gets the answer.  But success
might just imply that the HPO had substantially (but not astronomically)
greater computing power than the human race can bring to bear.  Or we
could give it a problem we can't solve and then check the answer the
HPO gives, but if the answer is testable that would mean it is in NP,
and so even success in this area could be explained if P=NP as above.

It is much less philosophically challenging to imagine that P=NP than
to imagine that a true HPO could exist.  Things would be different if
we ever get a proof that P != NP but we aren't in that situation now.

Are there other tests we could give, harder ones, that could give us
evidence that it was a true HPO, that could not be fooled by an NP solver?

My knowledge of these areas is pretty spotty.  The only non-NP problem I
know of offhand is the travelling salesman problem, finding the shortest
path visiting every one of a set of cities with specified distances
between each pair.  Proposed solutions cannot be tested efficiently,
as far as I know.  If the box solved travelling salesman problems for
us, it might be a boon to salesmen but we would not necessarily know if
we were getting truly optimal paths.

So in Wei's story, when the scientists go to test the HPO box, how strong
is the evidence that they can reasonably expect to get for it being a
real HPO?  And I suppose a practical point arises as well; even if it
is not a true HPO, if it is nevertheless able to solve every problem we
give it, it's probably worth the money!

Hal Finney



RE: is induction unformalizable?

2005-07-14 Thread Hal Finney
 that we intuitively think
 is very unlikely, but not completely impossible. An example would be a
 device that can decide the truth value of any set theoretic statement. A
 universe that contains such a device would exist in the set theoretic
 hierarchy, but would have no finite description in formal set theory,
 and would be assigned a measure of 0 by STUM.

 I'm not sure where this line of thought leads. Is induction
 unformalizable? Have we just not found the right formalism yet? Or is
 our intuition on the subject flawed?

The mainstream view, I gather, is that induction is indeed unformalizable.
The contrary claim, that induction can be formalized, would be considered
controversial.

Another way to express the problem is to think of trying to build an
optimal induction machine.  It could use Bayes' theorem to update its
beliefs, but what about the priors?  Same problem.  We could use the
Universal Prior but it gives probability 0 to HPOs.  Then there are all
those other priors that implicitly assume infinite computation, so where
does it stop?  There are no end to infinities, and as Wei's example shows,
there is apparently no place to stand once you start down that road.

It would be absurd to suggest, say, that everything up to Aleph-23
has Platonic existence, while infinities from Aleph-24 on up are mere
mysticism.  Likewise, building a universe out of a UTM+HPO doesn't make
sense because as Wei says, there is a 2nd-order HPO, an HPO2, which is
beyond the scope of UTM+HPO, so what if the aliens show up with one
of those?  For a multiverse model to make sense it has to be simple,
distinctive and (ideally) unique.  We don't quite achieve uniqueness
with the UDist (due to the arbitrary choice of a UTM which creates a
multiplicative constant difference on measure), which is a major flaw.
But adding oracles makes the problem infinitely worse.

Here's what I conclude.  If we really believe in the Universal
Distribution, then we ascribe probability zero to HPOs.  That means
that in Wei's story, indeed the aliens are tricking Earth.  If we try to
imagine a universe where the aliens are legitimate and have real HPOs,
that is impossible.  We are just confusing ourselves if we think such a
universe could be real.  There is no point in even considering thought
experiments based on it, any more than imagining what would happen if
aliens showed up with a logical formula which was obviously simultaneously
true and false.  So given that we stand upon the UDist, there is no need
to pay much attention to these kinds of thought experiments.

I would suggest that evidence for or against the UDist should come
more from the fields of mathematics and logic than from any empirical
experience.  My hope is that further study will lead to a computational
model which is distinguished by its uniqueness and lack of ambiguity.
That seems necessary for this kind of explanation of our existence to
be successful.

Hal Finney



Re: The Time Deniers

2005-07-14 Thread Hal Finney
Russell Standish writes:
 On Wed, Jul 13, 2005 at 04:20:27PM -0700, Hal Finney wrote:
  Right, that is one of the big selling points of the Tegmark and
  Schmidhuber concept, that the Big Bang apparently can be described in
  very low-information terms.  Tegmark even has a paper arguing that it
  took zero information to describe it (but frankly I am getting pretty
  turned off on the zero information concept since several people here
  use it to describe completely different things, and if it really took
  zero information then there couldn't be more than one thing described,
  could it?).

 Tegmark does not say his model has zero information (at least not in
 the classic 1998 paper). His words were (pg 25 of my copy):

 In this sense, our ultimate ensemble of all mathematical structures
 has virtually no algorithmic complexity at all.

 Note, this is not zero, but simply small (at least compared with the
 observed complexity of our frog perspective).

Thanks for the correction.  I was actually thinking of a different Tegmark
paper, http://space.mit.edu/home/tegmark/nihilo.html, but I see on closer
reading that he also says there that the algorithmic information content
of our universe is close to zero but does not actually say it is zero.

 There is only one zero information object, and that is the set of all
 descriptions (all infinite length bitstrings).

Do you really think there is such a thing as a zero information object?
If so, why do you have to say what it is?  :-)

Is this just an informal concept or is there some formalization of it?

Surely Chaitin's algorithmic information theory would not work; inputting
a zero length program into a typical UTM would not produce the set of
all infinite length bitstrings; in fact, I don't see how a TM could even
create such an output from any program.

Hal Finney



RE: The Time Deniers

2005-07-13 Thread Hal Finney

 True, it isn't always necessary to compute things in the same order--if 
 you're simulating a system that obeys time-symmetric laws you can always 
 reverse all the time-dependent quantities (like the momentum of each 
 particle) in the final state and use that as an initial state for a new 
 simulation, and the new simulation will behave like a backwards movie of the 
 original simulation.

One problem with this in practice is that it seems that the information
needed to specify the final state is far greater than the information
needed to specify the original state, at least with physics like ours.
In our universe, you could take a snapshot at some time that recorded all
the particle motions in a brain.  Then you could evolve it forward and
produce the successive subjective experiences.  However, I don't think the
snapshot has to be completely detailed.  Some sloppiness is acceptable.
The brain is robust and you could change the details of thermal motions
very considerably and the brain would still work fine.

If you took a snapshot at the end and evolved it backward it would
also work, in theory, but in practice it would not work unless every
detail of every motion was precise to an incredible degree.  (This is
ignoring issues of QM state reduction and such, I'm basically considering
a Newtonian clockwork here.)  It's like, it's easy to come up with
motions to scramble an egg; but to come up with motions to unscramble
one will require absolute precision in every respect.  The result is
that the information requirements for specifying a final-state based
simulation that includes an arrow of time are exponentially greater than
the information needed to create a plausible initial-state simulation.

If we then add the concept of measure based inversely on the size of
the information description, we find that almost all measure of such
simulations comes from initial-state based ones rather than final-state
based.

 But since I don't have a well-defined mathematical 
 theory of what it means for two computations to have the same causal 
 structure, I'm not sure whether the causal structure would actually be any 
 different if you computed a universe in reverse order. When I think of 
 causal structure, I'm not really presupposing any asymmetry between 
 cause and effect, I'm just imagining a collection of events which are 
 linked to each other in some way like in a graph, but the links need not 
 have any built-in direction--if two events are linked, that doesn't mean one 
 event is the cause and the other is the effect, so the pattern of links 
 could still be the same even if you did compute things in reverse order. 
 From what I've read about loop quantum gravity, it's a theory in which space 
 and time emerge from a more primitive notion of linked events, but I'm 
 pretty sure it's not a time-asymmetric theory.

My feeling is that causality, like time, is in the eye of the beholder.
It's not an inherent or fundamental property.  Rather, it is a way that
we can interpret events in some kinds of universes.  Completely chaotic
universes (where every moment is random and uncorrelated with the next)
would not have causality in any meaningful sense.  Likewise for static
universes.

In fact I would suggest that causality only exists in our universe in
areas where there is an arrow of time; that is, in areas which are far
from equlibrium and where entropy is unusually low.  The problem in
equilibrium regions is that you can always look at things two ways.
Suppose particle A collides with B and changes its course so that B
collides with C.  We can express this as that A causes B to hit C.
But all the physics works just as well in the reverse direction, in
equilibrium, so we could just as easily say that C caused B to hit A.

Scerir has also posted some interesting paradoxes along these lines
relating to QM.  Suppose we have a photon that passes through a
polarizer oriented at 20 degrees from vertical, then through one
oriented at 40 degrees, and makes it through both.  At the end we would
say its polarization was 40 degrees.  But what was it between the two
polarizers?  Conventionally we would say that the first polarizer made its
polarization become 20 degrees and the second polarizer then changed the
polarization to 40 degrees.  But actually you can just as easily argue
that the photon polarization was 40 degrees between the two polarizers.
That interpretation works just as well, a sort of retroactive causality.
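
For concreteness, the single-photon numbers follow Malus's law: the
probability of passing a polarizer is cos^2 of the angle between the
photon's polarization and the polarizer axis.  Assuming the photon is
initially vertical (the text leaves its preparation unspecified):

    import math

    def malus(angle_deg):
        # pass probability for a polarization mismatch of angle_deg
        return math.cos(math.radians(angle_deg)) ** 2

    p_both = malus(20) * malus(40 - 20)   # vertical -> 20 deg -> 40 deg
    print(p_both)   # ~0.78, and the photon emerges polarized at 40 degrees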

As with time, my guess is that if we restrict our attention to observers
like us, of a type we can comprehend, then automatically we are going to
pick out information systems that have a notion of time, an arrow of time,
and hence a sense of causality.  Not all systems have these properties,
but some do, and all the ones that we would identify as observers fall
into that category.

Hal Finney



RE: The Time Deniers

2005-07-13 Thread Hal Finney
 will be 
 forthcoming...the reason I'd guess it is is that such a notion would seem 
 essential for defining what it means to instantiate a particular observer 
 in such a way that you don't count things like lookup tables, and also so 
 that you don't end up concluding that random thermal vibrations in a rock 
 actually instantiate all possible algorithms (the problem discussed in 
 Chalmers' paper "Does a Rock Implement Every Finite-State Automaton?" at 
 http://cogprints.org/226/00/199708001.html ).

The model I suggested the other day, basically just the Universal
Distribution, IMO fully solves these two riddles.  First, the question
is not are they conscious but how much measure do they add to the
particular observer-moments which they are putatively experiencing.
And the UDist shows that this can be answered in a straightforward,
quantitative way, by asking what is the shortest program that takes these
data structures as input (the lookup table or the rock) and outputs
something that matches our canonicalized representation of the OMs in
question (perhaps a schematic representation of a neural network with
specified firing patterns).  I can tell you that the rock isn't going
to add any measure.  I don't know about the lookup table, maybe there
is an algorithm to use it to deduce the neural network that would have
created it.  Hans Moravec argued that there was such an algorithm, at
least for a big enough lookup table.  But in principle it is an empirical
question in the framework based on the UDist.  There's nothing fuzzy or
philosophical about it.

 Well, that's what I was talking about in my last post when I said that my 
 intuition of causal structure is not a time-asymmetric one, that it would 
 only be about saying two events are causally related without specifying one 
 as the cause and the other as the effect. And as I said, my 
 understanding of loop quantum gravity is that it does involve some notion of 
 building spacetime out of relationships between events without any 
 time-asymmetry being involved.

Maybe so, I don't know much about LQG.


 Scerir has also posted some interesting paradoxes along these lines
 relating to QM.  Suppose we have a photon that passes through a
 polarizer oriented at 20 degrees from vertical, then through one
 oriented at 40 degrees, and makes it through both.  At the end we would
 say its polarization was 40 degrees.  But what was it between the two
 polarizers?  Conventionally we would say that the first polarizer made its
 polarization become 20 degrees and the second polarizer then changed the
 polarization to 40 degrees.  But actually you can just as easily argue
 that the photon polarization was 40 degrees between the two polarizers.
 That interpretation works just as well, a sort of retroactive causality.

 What would the MWI say about this? Whatever it would say, I'm pretty sure it 
 wouldn't say that there was a single photon in a definite state between the 
 two polarizers.

No, I think it does, but I might be wrong.  I think it says the universe
splits into two when the photon hits the first polarizer; in one the
photon is absorbed and in the other the photon continues in the 20
degree polarization state.  Or you can run time backwards and get the
photon to be in the 40 degree state.  I don't think the MWI helps much
with this.

Hal Finney



RE: The Time Deniers

2005-07-12 Thread Hal Finney
Lee Corbin writes:
 Perhaps you could address the biggest stumbling block that perhaps
 I still have: continuity.

 I'll even go out on a limb and suggest that *continuity* is really
 what bothers a lot of people. A lot of us (e.g. Jesse Mazer) are
 quite okay with, say, a program that uses the rules of Life to
 give rise to a conscious entity.  But we get really squeamish when
 someone talks about just using the static, instant descriptions---
 the generations of Life as depicted on, say, 2D grids. Even if you
 have big a pile of such descriptions---trillions and trillions of
 them---we point out that these snapshots are only frozen instants,
 where the real meat was the continuous process (that so happened
 to use the Rules).

One point of my example was that if you think of the Life universe as
existing in and of itself, as a Platonic entity, pure information, there
is really no difference between these views.

This thread talks about time deniers and I might be one, but from my
perspective it seems that many people are time mystics.  They see a
special role for time that goes beyond its mere presence as part of the
laws of physics of a universe.

I imagine that multiple universes could exist, a la Schmidhuber's ensemble
or Tegmark's level 4 multiverse.  Time does not play a special role in
the descriptions of these universes.  Some universes will have properties
that are similar to what we think of as the passage of time; others will
have nothing that would be recognizably like time; and yet others will
have some aspects that are similar to time passing but not quite the same.

Does a pure Life universe have a time coordinate?  In a way, it does.
Or you can just as easily see it as a stack of grids.  Is there really
a difference, if the laws of physics are the same in both cases?

Alternative sets of Life rules will cause every grid in that stack to
be the same, or equivalently, will cause each successive instant to have
the same state.  Does that universe have time?

Even in the case of Life, there are other ways to create the stack of
grids (or equivalently, succession of states) than to start with some
initial conditions and evolve forwards.  You could start with some final
conditions and work your way backwards.  Or you could start in the middle
and work outwards.  Wolfram considers computational systems (in my view,
simple universes) which get defined via successive approximations in much
this way.  Do such universes have time?  There is no unambiguous answer.

Tegmark in one of his papers considers universes with two or more
time dimensions.  Can you wrap your mind around that?  Doesn't the
potential existence of such universes suggest that the notion of process
vs static-state is too simple?  What would a 2D-time process be like, vs
a 1D-time process like what we are used to?  Could we imagine universes
with fractal time dimensions, like the fractal space dimensions which
are sometimes explored?

These considerations lead me to the view that there is nothing special
about time, that it is merely a useful way of looking at some universes.
Probably the fraction of universes (or more generally, information
objects) that have a notion of time that is very similar to our own
is small.

Now, certainly it seems that consciousnesses like ours, anything that
we would recognize as a conscious entity, will involve a notion of
time similar to what we use.  We are bound up with the idea of time
and so if we see a consciousness in a Life universe, whether we think
of it as a stack of cells or as a succession of states, it will seem
to that consciousness that time is passing.  But this is largely
a selection effect of our own anthropomorphic biases.  We only see
consciousnesses that perceive time passing because those are the only
kinds of informational entities that we can think of as conscious.

 P.S. I thought UD was Universal Dovetailer, but now you mean
 Universal Description. We've got to get cautious using the
 acronyms, or be sure, as you did here, to say what you mean in
 a post.

Actually it is Universal Distribution but I didn't want to write that
out in detail every time I used it.  Maybe I will write UDist in the
future to help remind people that no doves were harmed in creating
this concept.

 P.P.S. Stephen Paul King was one of those who kept bringing up
 the distinction between a *description* of something and the
 thing itself. With what I have written above, I see a connection
 now.

For an informational object, a sufficiently precise description is
equivalent to the object itself, in my view.  And I am considering an
ontology where everything is an informational object.

Hal Finney



RE: The Time Deniers

2005-07-12 Thread Hal Finney
Jesse Mazer writes:
 Hal Finney wrote:
 I imagine that multiple universes could exist, a la Schmidhuber's ensemble
 or Tegmark's level 4 multiverse.  Time does not play a special role in
 the descriptions of these universes.

 Doesn't Schmidhuber consider only universes that are the results of 
 computations? Can't we consider any computation as having an intrinsic 
 causal structure? How would it be possible to write an algorithm that 
 describes a Life universe where there's no time, where the t-axis is 
 replaced by a z-axis, for example?

Well, you could just replace the letter t with the letter z, but of course
that wouldn't change the underlying nature of things.  You might well
say that there was still a time axis, just that it had a different name.

But the bigger question is whether the order in which a universe is
computed must match the concept of time within that universe.  It is
true that for universes like ours, it seems difficult to compute them
in any way other than starting at the past and working our way into
the future.  In that case, the order of computation is the same as the
within-universe time axis.

However it might be dangerous to generalize and to assume that this is
always the case, or that one can go so far as to define the concept of
time within a universe to be the order in which things were computed.

It is not difficult to come up with universes that can be computed in
a different order than the natural within-universe direction of time.
Even our own universe appears to be time-symmetric at the micro-scale.
The only reason we have an arrow of time is apparently because the
universe was created in a special low-entropy state.  A universe without
such a special state at one end could be computed in either direction.
Or we could start in the middle and compute forward and backward from
that point.  Or maybe we could even compute it sideways, taking a
particular timelike line as the initial conditions.

Further, in our own universe there appears to be quite a bit of ambiguity
about time ordering, and many different computational strategies will
work equally well.  Relativity theory shows that events either have a
timelike separation, in which case it is clear which one is in the past,
or a spacelike separation, which makes it ambiguous which one is farther
in the past.

It was suggested here a while back that a Life universe could be
computed using an algorithm which ran around somewhat randomly and made
localized changes to cells in order to make them match the Life rules.
Eventually this would converge to a stable and consistent Life universe.
Any observers living in that universe would have a perceived direction
of time that was very different from the actual order in which it was
computed.
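
That suggestion is simple to sketch with a 1-D CA standing in for Life
(my simplification; the idea is the same on Life's 2-D grids).  Row 0 is
fixed initial data, and the repair loop visits cells in a random order
that has nothing to do with the universe's internal time axis, yet it
converges to a lawful history:

    import random

    def rule(l, c, r):
        # toy local law: a cell flips if either neighbor is on
        return c ^ (l | r)

    def repair(grid, rng=random.Random(0)):
        T, N = len(grid), len(grid[0])
        def bad():
            return [(t, x) for t in range(1, T) for x in range(N)
                    if grid[t][x] != rule(grid[t-1][(x-1) % N],
                                          grid[t-1][x],
                                          grid[t-1][(x+1) % N])]
        cells = bad()
        while cells:
            t, x = rng.choice(cells)
            grid[t][x] = rule(grid[t-1][(x-1) % N], grid[t-1][x],
                              grid[t-1][(x+1) % N])
            cells = bad()
        return grid

    g = [[random.randint(0, 1) for _ in range(16)] for _ in range(8)]
    repair(g)   # g is now a consistent history, grown in a random order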

However although this is possible, I think it is likely that any
high-measure universe containing observers like us will pretty much
have to be computed in the past-to-future direction of time.  That seems
to be the best way to specify a universe like ours with simple initial
conditions, using a simple algorithm.  So I imagine that in practice,
for most universes that we are interested in, it will be correct to
identify subjective (within-universe) time with computational ordering.
But this is not true in general.

 Tegmark in one of his papers considers universes with two or more
 time dimensions.

 If this universe is computable, it can be simulated by an algorithm that can 
 run in a universe with only one time dimension. Perhaps the algorithm would 
 go back and forth between simulating time increments in different 
 directions, like how a regular computer can simulate a parallel computer.

Yes, but there is still a difference between two time dimensions and one,
just as there is a difference between two spatial dimensions and one.
An interesting question is whether there would be any algorithms possible
in a universe with two dimensional time that would run fundamentally
faster than in a universe with one dimensional time.  I don't understand
the concept well enough to address that.  But if so, a being who evolved
in such a universe might deny that one dimensional time observers could
exist, that such a limited notion of time would be rich enough to support
the computational complexity necessary for life and intelligence to exist.

Hal Finney



RE: The Time Deniers

2005-07-11 Thread Hal Finney
Stathis Papaioannou writes:
 (c) A random string of binary code is run on a computer. There exists a 
 programming language which, when a program is written in this language so 
 that it is the same program as in (a) and (b), then compiled, the binary 
 code so produced is the same as this random string.

I don't know what you mean by random in this context.  If you mean
a string selected at random from among all strings of a certain length,
the chance that it will turn out to be the same program functionally is
so low as to be not worth considering.

But ignoring that, here is how I approach the more general problem of
whether a given string creates or instantiates a given observer.  I made
a long posting on this a few weeks ago.  In my opinion it follows simply
from assuming the Universal Distribution (UD).

In this model, all information objects are governed by this probability
distribution, the UD.  One way to think of it is to imagine all possible
programs being run; then the fraction of programs which instantiate a
given information object is that object's measure.

So to solve the problem of whether your program instantiates an
observer is a two step process.  First write down a description of the
information pattern that equals that observer.  More specifically, write
the description of the information pattern that defines that observer
experiencing the particular moments of consciousness that you want to
know if your program is instantiating.  Doing this will require a much
stronger and more detailed theory of consciousness than we now possess,
but I don't think there is any inherent obstacle that will keep us from
gaining this ability.

The second step is to consider your program's output and see if it
is reasonably similar to the information pattern you just defined.
The simplest case is where the output is identical.  Then you can
say that the program does instantiate that consciousness.  However it
could be that the program basically creates the same pattern but it is
represented somewhat differently.  How can we consider all possible
alternate ways of representing an information pattern and still let
them count, without opening the door so wide that all patterns count?

The solution follows rigorously from the definition of the UD.  We append
a second interpretation program to the first one, the one which ran the
putative conscious program.  This second program turns the output from
the first one into the canonical form we used to define the conscious
information pattern.  The concatenation of the two programs then outputs
the pattern in canonical form and we can recognize it.

The key point now is that the contribution to the measure of the
observer moments being simulated is, by the definition of the Universal
Distribution, based on the size of the program which outputs the
information pattern in question.  And the size of that program will be the
size of its two component parts: the first one, that you were wondering
about, which may have generated a consciousness; and the second one,
which took the output of the first one and turned it into the canonical
form which matched the OM pattern in question.

In other words, the contribution which this program makes to the measure
of a given observer's experience will be based on the size of the program
(smaller is better) and on the size of the interpretation program which
turns the output of the first program into canonical form (again, smaller
is better).  Obviously a sufficiently large interpretation program can
turn any output into what we want.  The question is whether a small
one can do the trick.  That is what tells us that the pattern is really
there and not something which we are forcing by our interpretation.
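
The size bookkeeping in the last two paragraphs can be written out
directly.  Everything here is schematic; the bit counts are inputs we
do not yet know how to compute in practice:

    def log2_contribution(program_bits, interpreter_bits):
        # one (program, interpreter) pair contributes 2^-(total size)
        # to the observer moments' measure, so report the log2
        return -(program_bits + interpreter_bits)

    print(log2_contribution(10**4, 10**3))   # small interpreter: pattern really there
    print(log2_contribution(10**4, 10**9))   # forced interpretation: negligible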

Standard considerations of the UD imply that the exact nature of the
canonical form used is immaterial, however it does matter how precisely
you need to specify the information pattern that truly does represent a
set of conscious observer moments.  That second question is a matter of
psychology and as we improve our understanding of consciousness we will
have a better handle on it.  Once we do, this approach will provide an
in-principle technique to calculate how much contribution to measure
any given program string makes to any given conscious experience.

Most importantly, this follows entirely from the assumption of the
Universal Distribution.  No other assumptions are needed.  It is a simple
assumption yet it provides a very specific process and rule to answer
this kind of question.

Hal Finney



RE: The Time Deniers

2005-07-10 Thread Hal Finney
Again travel has forced me to take an absence from this list for a while,
but I think I will be home for several weeks so hopefully I will be able
to catch up at last.

One question I would ask with regard to the role of time is, is there
something about time (and perhaps causality) that goes over and above
the equations or natural laws that control and define a given universe?

Let us imagine a Cellular Automaton based universe; for simplicity, let it
be a 1-dimensional CA such as those studied in detail in Wolfram's book.
We have an x dimension and a t dimension, and some rules which are the
natural laws of that universe.  A sample rule might be
s[x,t+1] = s[x,t] XOR (s[x-1,t] OR s[x+1,t]).  This means that the
state at position x and time t+1 is the exclusive-or of the state at the
previous time (s[x,t]) and the OR of the left and right neighbor states.
In other words, a cell reverses its state if either of its neighbors is
on.
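
The rule runs directly as written.  A minimal sketch, with wraparound
edges as my assumption since a finite grid needs some boundary:

    def step(s):
        # s[x,t+1] = s[x,t] XOR (s[x-1,t] OR s[x+1,t]), periodic boundary
        n = len(s)
        return [s[x] ^ (s[(x - 1) % n] | s[(x + 1) % n]) for x in range(n)]

    s = [0] * 31
    s[15] = 1                    # single live cell in the middle
    for t in range(8):
        print(''.join('#' if c else '.' for c in s))
        s = step(s)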

Wolfram investigates all 256 possible rules which determine a cell's
next state from the previous state of the cell and its two neighbors.
Some lead to surprisingly complex patterns and it is conceivable that such
universes might even be complex enough to allow life and consciousness
to evolve.

So we have a notion of time, t, and space, x.  The question is this.
If we don't *call* it time, does that change things?  Suppose we have
a universe with 2 spatial dimensions, x and y.  But it is governed by
the same rule: s[x,y+1] = s[x,y] XOR (s[x-1,y] OR s[x+1,y]).  Here
I have replaced t in the rule above by y.

Does this make a difference?  I think we will agree that it does not.
Changing the letter t to the letter y does not change the fundamental
nature of this universe.  It only changes how we describe it.

Then we can ask, is this rather abstract description of the universe,
in terms of its natural laws, enough for us to know whether the
consciousnesses that exist in it are really conscious?  Or do we need
to know more?  Do we need to know details about how the universe was
created (whatever that means!)?  Do we need to know if there is a flow
of causality in this universe?

My answer is that the natural laws ought to be enough.  If we can find
a reasonable interpretation (defined rigorously as a mapping whose
information content is significantly smaller than the pattern itself) of
a pattern in the universe as something that we would consider a conscious
observer in our own universe, then we would be right to say that this
CA universe has consciousness.  (More precisely, that this CA universe
contributes measure to these instances and kinds of conscious observers.)

I don't think it makes sense to demand more information than the natural
laws (like, what kind of universal-computer is running to interpret
these laws, what algorithm it uses, how sequential is it, is it allowed
to backtrack and change things, etc.).  The laws themselves define
the universe.  The two are, in a sense, equivalent.  That is all the
information there is.  The laws should be, in fact they must be, enough
to answer the question about whether the consciousness which appears in
such a universe is real.  That's how it appears to me.

In our own universe, we too have natural laws that relate to space and
time.  One such law is from Newton: d^2x/dt^2 = F/m (i.e. F = ma).
Relativity and QM have their own laws that refer to x, y, z, and t.
Generally, t is treated differently than the other coordinates, which
are all treated the same.  But obviously we could substitute some other
letter, say q, for t and it would not make a difference.  A universe
with quime instead of time would be the same.

So again, is it enough to look at the natural laws of our universe in
order to decide whether the consciousnesses within it are real?  Or do we
need more?  Can we imagine a universe like ours, which follows exactly the
same natural laws, but where time doesn't really exist (in some sense),
where there is no actual causality?  I have trouble with this idea, but
I'd be interested to hear from those who think that such a distinction
exists.

Hal Finney



UD + ASSA

2005-07-10 Thread Hal Finney
, and for that to be meaningful, objects with higher measure
have to be considered more prominent.  We should expect the universe
we observe to have relatively high measure.  We should expect ourselves
as observers, and as observer moments, to have relatively high measure.
If we face alternatives of either a low measure or a high measure future,
we should expect to experience the high measure one.

As far as the problem of being unconscious objects, I don't necessarily
see that as contradictory.  We all know what it is like to be unconscious.
We become unconscious every day when we sleep.  We also know through
experience that there are many degrees and kinds of consciousness.

In practice, being a table or the number 3 is so different from what we
think of as consciousness that we cannot relate to it as human beings.
We need to restrict our attention to information objects that have a
similar nature and complexity to our own.  Among those objects, we can
distinguish between ones with low and high measure.  The theory predicts
that we should find ourselves as entities with a relatively high measure,
and explains those aspects of our existence which have a high measure.

The ASSA is well suited for this interpretation, because it relates
measure of observer moments to subjective probability.  The older SSA,
which is observer based where the ASSA is observer-moment based, also
can work reasonably well in this model for the same reason.

But the details of ASSA vs SSA vs other interpretations are not of
fundamental importance in my view.  The most important part is the UD.
We then connect its definition of measure to subjective experience using
the concept that higher measure states are more likely to be experienced.
This is the basic principle from which we attempt to make our predictions
and explanations.
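
Read operationally, the principle says that an experience is sampled in
proportion to its measure.  A toy sketch in Python, with invented
observer-moments and invented measure values purely for illustration:

    import random

    oms     = ["ordinary moment", "odd moment", "white-rabbit moment"]
    measure = [0.90,              0.0999,       0.0001]

    # Higher-measure states are more likely to be experienced, so
    # subjective sampling is just measure-weighted choice:
    print(random.choices(oms, weights=measure, k=10))

On almost every draw you get the ordinary moment, which is the shape of
the explanation for why we do not find ourselves in white-rabbit worlds.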

Hal Finney



Re: Duplicates Are Selves

2005-07-03 Thread Hal Finney
I have been on vacation so I have a large backlog of messages to read!
But they are very interesting and full of challenging ideas.  I find this
list to be one of the best I have ever been on in terms both of fearlessly
exploring difficult areas and also remaining cordial and polite.

I am trying to understand Lee Corbin's idea about duplicates as selves
better.  I can understand seeing exact, synchronized duplicates as
selves (such as two computers running the same simulated individual
in lock-step).  But when they begin to diverge I understand that Lee
still sees them as (in some sense) himself and one copy would in fact
sacrifice to benefit a diverged copy just as much(?) as to benefit its
own body.  Is this right?

What I would ask is, is there a limit to this?  Is this common-self-ness
a matter of degree, or is it all-or-none?  Is there some degree of
divergence after which a copy might be somewhat reluctant to continue
to view its brother copy as being exactly equivalent to itself?

For example, what if someone were an identical twin?  In some sense they
are duplicates at the moment of conception who then begin to diverge.
This seems to be different from the copies we discuss merely in degree
of divergence, not in kind.  Would it be reasonable to argue that an
identical twin should view his brother as himself?

And what about the possibility of creating non-identical copies?
Perhaps our copying machine is imperfect and the products are not quite
the same as the original.  They are very close, perhaps so close that
only extremely detailed inspection can detect the differences.  Or perhaps
they are not really so close at all and the copies in fact bear little
resemblance to their originals.  How does the potential existence of such
imperfect copying machines affect the notion that one should view copies
as selves?

If imperfect or diverged copies are to be considered as lesser-degree
selves, is there an absolute rule which applies, an objective reality
which governs the extent to which two different individuals are the
same self, or is it ultimately a matter of taste and opinion for the
individuals involved to make the determination?  Is this something that
reasonable people can disagree on, or is there an objective truth about
it that they should ultimately come to agreement on if they work at it
long enough?

Hal Finney



RE: Torture yet again

2005-06-22 Thread Hal Finney
Jesse Mazer wrote:

Suppose there had already been a copy made, and the two of you 
were sitting side-by-side, with the torturer giving you the 
following options:

A. He will flip a coin, and one of you two will get tortured.
B. He points to you and says, "I'm definitely going to torture
the guy sitting there, but while I'm sharpening my knives he
can press a button that makes additional copies of him as many
times as he can."

Would this change your decision in any way? What if you are 
the copy in this scenario, with a clear memory of having been 
the original earlier but then pressing a button and finding 
yourself suddenly standing in the copying chamber--would that 
make you more likely to choose B?

I think this variation points to the major flaw in this thought
experiment, which is the implicit assumption that copying is possible yet
is not used.  In fact, if copying is possible as the thought experiment
stipulates, it would tend to be widely used.  The world would be full of
people who are copies.  You would be likely to be an nth-generation copy.
There would be no novelty as Jesse's variation suggests in allowing you
to experience (presumably for the first time!) being copied.

I keep harping on this because copying increases measure.  It is different
from flipping a coin, which does not increase measure.  Your expectations
going into a copy are different.  To the extent that this language makes
sense, I would say that you have a 100% chance of becoming the copy and
a 100% chance of remaining the original.  This is different from flipping
a coin.

You may think that it would feel the same way, but you've never tried it.
Fundamentally, our perception of the world, our phenomenology, our sense
of identity and our concept of future and past selves are not intrinsic,
but are useful tools which have *evolved* to allow our minds to achieve
the goals of survival and reproduction.  In a world where copying
is possible, we would evolve different ways of perceiving the world.
I believe that in such a world, we would perceive the aftermath of copying
very differently than the aftermath of flipping a coin.  The effects
are different, the evolutionary and survival implications are different.

In the world of this thought experiment, if the additional copies are
(via special dispensation) going to be treated well and given a good
chance to survive and thrive, then yes, most people would press the
button like crazy.  It's just like today: if a bachelor were given
the opportunity to have sex with a dozen beautiful women, he'd jump
at the chance.  It's not because of any intrinsic value in the act,
it's because evolution has programmed him to take this opportunity to
increase the measure of his genes.  In the same way, pressing the button
would increase the measure of your mind, and it would be equally as
rewarding.

In the spirit of this list, let me offer my own variation.  It is like
the original, except instead of torture you are offered a 50-50 chance
to experience a delicious meal prepared by an expert chef.  Or you can
press the button to make some copies, in which case you get a 100% chance
of having the meal.  For me, pressing the button is a win-win situation,
assuming the copies will be OK.  I certainly don't think that pressing
the button reduces the measure of my enjoyment of the food.

Hal Finney



Re: another puzzzle

2005-06-22 Thread Hal Finney
Stathis Papaioannou writes:
 That is the basic idea behind these thought experiments with copies: as a 
 more easily understood analogy for what happens in the multiverse/plenitude.

I don't agree, and in fact I think the use of copies as an analog for
what happens in the multiverse is fundamentally misleading.  If it were
not, you could create the same thought experiments just by talking about
flipping coins and such.

What is the analog, in the multiverse, of pushing a button to make a copy?
When faced with the chance of torture, you are going to push a button
to make a copy.  What does that correspond to in the multiverse?

The closest I can suggest is flipping a coin such that you don't get
tortured if it comes up heads.  Well, that destroys the whole point of
the thought experiment, doesn't it?  Of course you'll flip the coin.
Anyone would.

Pushing a button to make a copy is completely different.  That's why we
have so much disagreement about what to do in that case, while there
would be no disagreement about what to do if you could flip a coin to
avoid being tortured.  That in itself should be a give-away that the
situations are not as analogous as some are suggesting.

I would suggest going back over these thought experiments, substituting
flipping coins for making copies, and seeing if the paradoxes don't go away.

I believe that many of the paradoxes in the copy experiments are because
people do not grasp the full meaning of what copying implies.  They are
thinking very much in the lines Stathis suggests, that it is a variant on
flipping a coin.  But it's not.  Copying is fundamentally different from
flipping a coin, because copying increases measure while coin flipping
does not.

Measure is crucially important in multiverse models because it is the only
foundation for whatever predictive or explanatory ability they possess.
Choosing to overlook measure differences in analyzing thought experiments
inevitably leads to error.  Treating copying like coin flipping is just
such an error.  If you would instead think through the full implications
of copying you would see that it is completely different from flipping
a coin.  The increase of measure that occurs in copying manifests in the
world in tangible and obvious ways.  Its phenomenological consequences are
no less important.  These considerations must be included when analyzing
thought experiments involving copies, otherwise you are led into paradox
and confusion.

Hal Finney



RE: Copies Count

2005-06-22 Thread Hal Finney
Stathis Papaioannou writes:
 Hal Finney writes:
 Suppose you will again be simultaneously teleported to Washington
 and Moscow.  This time you will have just one copy waking up in each.
 Then you will expect 50-50 odds.  But suppose that after one hour,
 the copy in Moscow gets switched to the parallel computer so it is
 running with 10 times the measure; 10 copies.  And suppose that you know
 beforehand that during that high-measure time period (after one hour)
 in Moscow you will experience some event E.

 Again, it's a two step process, each time considering the next moment. 
 First, 50% chance of waking up in either Moscow or Washington. Second, 100% 
 chance of experiencing E in Moscow or 0% chance of experiencing E in 
 Washington. The timing is crucial, or the probabilities are completely 
 different.

Doesn't this approach run into problems if we start reducing the time
interval before the extra copying in Moscow?  From one hour, to one
second, to one millisecond?  At what point does your phenomenological
expectation switch over from 50-50 to 90% Moscow?  And does
it do so discontinuously, or is there a point at which you are just
barely conscious enough in Moscow before the secondary duplication,
that perhaps the two probabilities balance?
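
The two accounting rules give concretely different numbers.  A small
worked sketch in Python (the copy counts are the ones from the thought
experiment; the framing of the two rules is my own paraphrase):

    washington, moscow = 1, 10    # copies during the high-measure period

    # Counting copies (measure): E happens only in Moscow, so
    p_measure = moscow / (washington + moscow)     # 10/11, ~0.91

    # Step-by-step: 50% to wake in Moscow, then 100% for E there, so
    p_stepwise = 0.5 * 1.0                         # 0.50

    print(p_measure, p_stepwise)

Shrinking the time interval changes neither calculation, which is what
makes the discontinuity question awkward for the step-by-step rule.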

I am doubtful that this approach works.

Jesse Mazer suggested backwards causation, that the secondary copying in
Moscow would influence the perceptual expectation of waking up in Moscow
even before it happens.  So he would say 90% Moscow from the beginning.
However I think that has problems if we allow amnesia to occur in Moscow
before the amplification.

I have been enjoying these discussions but unfortunately I will have to
take leave, I am going on vacation with the family for a week so I will
have little chance to participate during that time.  I'll look forward
to catching up when I return -

Hal Finney



Re: death

2005-06-21 Thread Hal Finney
Bruno Marchal writes:
 On 20 June 2005, at 18:16, Hal Finney wrote:
  That's true, from the pure OM perspective death doesn't make sense
  because OMs are timeless.  I was trying to phrase things in terms of
  the observer model in my reply to Stathis.  An OM wants to preserve
  the measure of the observer that it is part of, due to the effects of
  evolution.  Decreases in that measure would be the meaning of death,
  in the context of the multiverse.

 I will keep reading your posts hoping to make sense of it. Still I was
 about to ask you if you were assuming the multiverse context or if
 you were hoping to extract (like me) the multiverse itself from the
 OMs. In which case, the current answer seems still rather hard to
 follow.

I was trying to use Stathis' terminology when I wrote about the
probability of dying.  Actually I am now trying to use the ASSA and I
don't have a very good idea about what it means to specify a subjective
next moment.  I think ultimately it is up to each OM as to what it views
as its predecessor moments, and perhaps which ones it might like to
consider its successor moments.

Among the problems: substantial, short-term mental changes might be
so great that the past OM would not consider the future OM to be the
same person.  This sometimes even happens with our biological bodies.
I can easily create thought experiments that bend the connections beyond
the breaking point.  There appears to be no bright line in the
degree to which a past and future OM can be said to be the same person,
even if we could query the OMs in question.

Another problem: increases in measure from a past OM to a future OM.
We can deal with decreases in measure by the traditional method of
expected probability.  But increases in measure appear to require
probability greater than 1.  That doesn't make sense, again causing me to question
the whole idea of a subjective probability distribution over possible
next moments.


 Then in another post you just say:

  It's a bit hard for me to come up with a satisfactory answer to this
  problem, because I don't start from the assumption of a physical
  universe at all--like Bruno, I'm trying to start from a measure on
  observer-moments and hope that somehow the appearance of a physical
  universe can be recovered from the subjective probabilities
  experienced by observers

Actually I didn't write this, Jesse Mazer did.  But I do largely agree
with this approach, and I wrote the reply:

I have a similar perspective.  However I think it will turn out that the
simplest mathematical description of an observer-moment will involve a Big
Bang.  That is, describe a universe, describe natural laws, and let the
OM evolve.  This is the foundation for saying that the universe is real.


 And this answers the question. I am glad of your interest in the
 possibility to explain the universe from OMs, but then, as I said, I
 don't understand how an OM could change its measure. What is clear for
 me is that an OM (or preferably a 1-person, an OM being some piece of
 the 1-person) can change its *relative* measure (by decision, choice,
 will, etc.) of its possible next OMs.

The OM can change the universe, and this will include changing the measure
of many people's future OMs.  Wei Dai, in whose footsteps I largely
travel, finally decided that *any* philosophy for an OM was acceptable,
and its only task was to optimize the multiverse to suit its preferences.
This does not require that we introduce a subjective probability for
measure of next OM, but it can allow OMs to think that way.  If the
current OM has an interest in certain OMs, the ones it chooses to call its
next OMs, and it wants to adjust the relative measure of those OMs to
suit its tastes, that can be accommodated in this very general model.

Hal Finney



Re: Torture yet again

2005-06-21 Thread Hal Finney
Jonathan Colvin writes:
 You are sitting in a room, with a not very nice man.

 He gives you two options.

 1) He'll toss a coin. Heads he tortures you, tails he doesn't.

 2) He's going to start torturing you a minute from now. In the meantime, he
 shows you a button. If you press it, you will get scanned, and a copy of you
 will be created in a distant town. You've got a minute to press that button
 as often as you can, and then you are getting tortured.

I understand that you are trying to challenge this notion of subjective
probability with copies.  I agree that it is problematic.  IMO it is
different to make a copy than to flip a coin -  different operationally,
and different philosophically.

What you need to do is to back down from subjective probabilities and
just ask it like this: which do you like better, a universe where there
is one of you who has a 50-50 chance of being tortured; or a universe
where there are a whole lot of you and one of them will be tortured?
Try not to think about which one you will be.  You will be all of them.
Think instead about the longer term: which universe will best serve your
needs and desires?

There is an inherent inconsistency in this kind of thought experiment
if it implicitly assumes that copying technology is cheap, easy and
widely available, and that copies have good lives.  If that were the
case, everyone would use it until there were so many copies that these
properties would no longer be true.

It is important in such experiments to set up the social background in
which the copies will exist.  What will their lives be like, good or
bad?  If copies have good lives, then copying must normally be unavailable.
In that case, the chance to make copies in this experiment may be a
once-in-a-lifetime opportunity.  That might well make you be willing to
accept torture of a person you view as a future self, in exchange for
the opportunity to so greatly increase your measure.

OTOH if copying is common and most people don't do it because the future
copies will be penniless and starve to death, then making copies in this
experiment is of little value and you would not accept the greater chance
of torture.

This analysis is all based on the assumption that copies increase measure,
and that in such a world, observers will be trained that increasing
measure is good, just as our genes quickly learned that lesson in a
world where they can be copied.

Hal Finney



Re: Measure, Doomsday argument

2005-06-21 Thread Hal Finney
Quentin Anciaux writes:
 Why aren't we blind ? :-)

 If the measure of an OM comes from its information complexity, it seems 
 that an OM of a blind person needs less information content, because there is 
 no complex description of the outside world available to the blind observer. 
 So, as they are less complex, they must have a higher measure ... but I'm 
 not blind, and neither are a lot of people on earth... 

There may be something of a puzzle there...

Although I think, specifically, that blind people don't necessarily have
a lower information content in their mental states.  It is said that
blind people have their other senses become more acute to take over the
unused brain capacity (at least in people blind from birth).  So their mental
states may take just as much information as sighted people's.

Beyond that, the puzzle remains as to why we are as complex as we are,
why we are not simpler beings.  It would seem that one could imagine
conscious beings who would count as observers, as people we might
have been, but who would have simpler minds and senses than ours.
Certainly the higher animals show signs of consciousness, and their
brains are generally smaller than humans', especially the cortex, hence
probably with lower information content.

Of course there are a lot more people than other reasonably large-brained
animals, so perhaps our sheer numbers cancel any penalty due to our
larger and more-complex brains.

Hal Finney



RE: Copies Count

2005-06-20 Thread Hal Finney
Stathis Papaioannou writes:
 Here is another way of explaining this situation. When there are multiple 
 parallel copies of you, you have no way of knowing which copy you are, 
 although you definitely are one of the copies during any given moment, with 
 no telepathic links with the others or anything like that. If a proportion 
 of the copies are painlessly killed, you notice nothing, because your 
 successor OM will be provided by one of the copies still going (after all, 
 this is what happens in the case of teleportation). Similarly, if the number 
 of copies increases, you notice nothing, because during any given moment you 
 are definitely only one of the copies, even if you don't know which one. 

 However, if your quantum coin flip causes 90% of the copies to have bad 
 experiences, you *will* notice something: given that it is impossible to 
 know which particular copy you are at any moment, or which you will be the 
 next moment, then there is a 90% chance that you will be one of those who 
 has the bad experience. Similarly, if you multiply the number of copies 
 tenfold, and give all the new copies bad experiences, then even though the 
 old copies are left alone, you will still have a 90% chance of a bad 
 experience, because it is impossible to know which copy will provide your 
 next OM.

I'm not sure I fully understand what you are saying, but it sounds like
you agree at least to some extent that copies count.  The number of
copies, even running in perfect synchrony, will affect the measure of
what that observer experiences, or as you would say, his subjective
probability.

So let me go back to Bruno's thought experiment and see if I understand
you.  You will walk into a Star Trek transporter and be vaporized and
beamed to two places, Washington and Moscow, where you will have two
(independent) copies wake up.  Actually they are uploads and running on
computers, but that doesn't matter (we'll assume).  Bruno suggests that
you would have a 50-50 expectation of waking up in Washington or Moscow,
and I think you agree.

But suppose it turns out that the Moscow computer is a parallel
processor which, for safety, runs two copies of your program in perfect
synchrony, in case one crashes.  Two synchronized copies in Moscow,
one in Washington.

Would you say in this case that you have a 2/3 expectation of waking up
in Moscow?

And to put it more sharply, suppose instead that in Washington you will
have 10 copies waking up, all independent and going on and living their
lives (to the extent that uploads can do so), sharing only the memory
of the moment you walked into the transporter.  And in Moscow you will
have only one instance, but it will be run on a super-parallel computer
with 100 computing elements, all running that one copy in parallel and
synchronized.

So you have 10 independent copies in Washington, and 100 copies that
are all kept in synchrony in Moscow.  What do you expect then?  A 90%
chance of waking up in Washington, because 9/10 of the versions of you
will be there?  Or a 90% chance of waking up in Moscow, because 9/10 of
the copies of you will be there?

I think, based on what you wrote above, you will expect Moscow, and that
copies count in this case.
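
For concreteness, here is the arithmetic of the two counting rules for
that last variant, sketched in Python (the numbers come from the
example; the framing of the two options is my own):

    washington_instances, moscow_instances = 10, 1
    moscow_parallelism = 100

    # Counting independent versions:
    p_w = washington_instances / (washington_instances + moscow_instances)
    print(p_w)          # 10/11, ~0.91 in favor of Washington

    # Counting running copies (measure):
    p_m = moscow_parallelism / (washington_instances + moscow_parallelism)
    print(p_m)          # 100/110, ~0.91 in favor of Moscow

The same 90% figure attaches to opposite cities depending on whether
synchronized copies count separately.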

If you agree that copies count when it comes to spatial location, I
wonder if you might reconsider whether they could count when it comes
to temporal location.  I still don't have a good understanding of this
situation either, it is counter-intuitive, but if you accept that the
number of copies, or as I would say, measure, does make a difference,
then it seems like it should apply to changes in time as well as space.

Hal Finney



Re: death

2005-06-20 Thread Hal Finney
 is
like... well, I can't analogize it to anything, because it has never been
possible and we have no experience of it.  Only if technology advances
to allow mental copies will it be possible for us to increase measure.
What will it be like?  It will be like gaining greater influence over
the world, greater ability to put our plans into fruition (just as death
represents a loss of those abilities).

 What about if you 
 were a piece of sentient software: surely having multiple instantiations of 
 the ones and zeroes could not make any difference; if it did, wouldn't that 
 be a bit like expecting that your money would have greater purchasing power 
 if your bank backed up their data multiple times? Or like saying that 
 2+2=4 would be more vividly true (or whatever it is that increasing 
 measure causes to happen) if lots and lots of people held hands and did the 
 calculation simultaneously?

I don't think those are accurate analogies.  Your money would not have
multiple purchasing power if it were backed up multiple times, but
that information would have greater measure.  Measure is a property of
information.  When the information about your money has greater measure,
this will tend to give it greater robustness and more opportunity to
interact with and influence the world.  Specifically, it can't be deleted
and lost as easily.  Multiple backups have real value and any legitimate
bank will make sure to use them.

 I can't be completely sure that increasing your measure would have no 
 effect. Maybe there would be some sort of telepathic communication between 
 the various copies, such as is said to occur between identical twins, or 
 some as yet undiscovered physical phenomenon. However, there is absolutely 
 no evidence at present for such a thing, and I think that until such 
 evidence is found, we should only go on what we know to be true and what can 
 logically be deduced from it.

No, I have absolutely no expectation of any such nonphysical effects from
increasing measure.  I am confident that we agree about the third-party
effects of making copies.  No copy will demonstrate that he is able to
read the minds of his fellows.  I can only say that increasing measure is
the opposite of decreasing measure, that our measure decreases every day,
and that we fight as hard as we can to keep it from decreasing faster.
I believe that you view this fight as a philosophical mistake, but your
genes don't agree!

Your genes are not content to have just one copy.  A gene doesn't say,
"it doesn't matter how many of me there are; as long as at least one
exists I am still alive."  No, each gene fights its hardest to increase
its measure.  It wants to occupy as much of the universe as it can.
It wants to increase its influence, its redundancy, its robustness.
This is what increasing measure means to your genes.  If people lived
in a regime where increasing measure were possible, I believe they would
come to adopt similar views, and for the same reason our genes did.

Hal Finney



RE: Dualism and the DA

2005-06-20 Thread Hal Finney
Jonathan Colvin writes:
 This is, I think, the crux of the reference class issue with the DA. My (and
 your) reference class can not be merely conscious observers or all
 humans, but must be something much closer to someone (or thing) discussing
 or aware of the DA). I note that this reference class is certainly
 appropriate for you and me, and likely for anyone else reading this. This
 reference class certainly also invalidates the DA (although immaterial souls
 would rescue it).

But we don't use such a specific reference class in other areas of
reasoning.  We don't ask, "why do things fall to the ground?", and answer,
"because we are in a reference class of people who have observed things
fall to the ground."

If we explain an observed phenomenon merely by saying that we are
in the reference class of people who have observed it, we haven't
explained anything.  We need to be a little more ambitious.

Hal Finney



RE: Copies Count

2005-06-20 Thread Hal Finney
Stathis Papaioannou writes:
 I agree that you will have a 90% chance of waking up in Moscow, given that 
 that is the *relative* measure of your successor OM when you walk into the 
 teleporter. This is the only thing that really matters with the copies, from 
 a selfish viewpoint: the relative measure of the next moment:

So let me try an interesting variant on the experiment.  I think someone
else proposed this recently, the idea of retroactive causation.
I won't put that exact spin on it though.

Suppose you will again be simultaneously teleported to Washington
and Moscow.  This time you will have just one copy waking up in each.
Then you will expect 50-50 odds.  But suppose that after one hour,
the copy in Moscow gets switched to the parallel computer so it is
running with 10 times the measure; 10 copies.  And suppose that you know
beforehand that during that high-measure time period (after one hour)
in Moscow you will experience some event E.

What is your subjective probability beforehand for experiencing E?
I think you agreed that if you had been woken up in Moscow on
the super-parallel computer that you would expect a 90% chance of
experiencing E.  But now we have interposed a time delay, in which your
measure starts off at 1 in Moscow and then increases to 10.  Does that
make a difference in how likely you are to experience E?

I am wondering if you think it makes sense that you would expect a 50%
probability of experiencing events which take place in Moscow while
your measure is 1, but a 90% probability of experiencing events like
E, which take place while your measure is 10?  I'm not sure about this
myself, because I am skeptical about this continuity-of-identity idea.
But perhaps, in your framework, this would offer a solution to the
problem you keep asking, of some way to notice or detect when your
measure increases.

In that case we would say that you could notice when your measure
increases because it would increase your subjective probability of
experiencing events.

Perhaps we could even go back to the thought experiment where you have
alternating days of high measure and low measure.  Think of multiple
lockstep copies being created on high measure days and destroyed on low
measure days.  Suppose before beginning this procedure you flip a quantum
coin (in the MWI) and will only undergo it if the coin comes up heads.
Now, could you have a subjective anticipation of 50% of experiencing the
events you know will happen on low-measure days, but an anticipation of
90% of experiencing the events you know will happen on high-measure days?
Then that would be a tangible difference, and you would be justified in
pre-arranging your affairs so that pleasant events happen on the high
measure days and unpleasant ones happen on the low measure days.

It's an interesting concept in any case.  I need to think about it more,
but I'd be interested to hear your views.

Hal Finney



Re: death

2005-06-20 Thread Hal Finney
Bruno Marchal writes:
 On 19 June 2005, at 15:52, Hal Finney wrote:

  I guess I would say, I would survive death via anything that does not
  reduce my measure.

 But if the measure is absolute and is bearing on the OMs, and if that
 is only determined by their (absolute) Kolmogorov complexity (modulo a
 constant) associated to the OM (how is still a mystery for me(*)),
 how could anything change the measure of an OM?

That's true, from the pure OM perspective death doesn't make sense
because OMs are timeless.  I was trying to phrase things in terms of
the observer model in my reply to Stathis.  An OM wants to preserve
the measure of the observer that it is part of, due to the effects of
evolution.  Decreases in that measure would be the meaning of death,
in the context of the multiverse.

Hal Finney



Re: Dualism and the DA

2005-06-20 Thread Hal Finney
Pete Carlton writes:
 I think the second question, where will I be in the next  
 duplication, is also meaningless.  I think that if you know all the  
 3rd-person facts before you step into the duplicator - that there  
 will be two doubles made of you in two different places, and both  
doubles will be psychologically identical at the time of their  
 creation such that each will say they are you - then you know  
 everything there is to know.  There is no further question of which  
 one will I be?  This is simply a situation which pushes the folk  
 concept of I past its breaking point; we don't need to posit any  
 kind of dualism to paper over it, we just have to revise our concept  
 of I.

I agree that this view makes sense.  We come up with all these mind
bending and paradoxical thought experiments, and even though everyone
agrees about every fact of the third-person experience, no one can agree
on what it means from the first person perspective.  Maybe, then, there
is no fact of the matter to agree on, with regard to the first person.

On the other hand, in a world where Star Trek transporters were common,
it seems likely that most people would carry over their conventional views
about continuity of identity to the use of this technology.  Once they
have gone through it a few times, and have memories of having done so,
it won't seem much different from other forms of transportation.

Copies seem a little more problematic.  We're pretty cavalier about
creating and destroying them in our thought experiments, but the social
implications of copies are enormous and I suspect that people's views
about the nature of copying would not be as simple as we sometimes assume.

I doubt that many people would be indifferent between the choice of
having a 50-50 chance of being teleported to Moscow or Washington, vs
having copies made which wake up in both cities.  The practical effects
would be enormously different.  And as I wrote before, I suspect that
these practical differences are not to be swept under the rug, but point
to fundamental metaphysical differences between the two situations.

Hal Finney



Re: Measure, Doomsday argument

2005-06-20 Thread Hal Finney
Quentin Anciaux writes:
 It has been said on this list, to justify that we are living in this reality 
 and not in a Harry Potter-like world, that somehow our reality is simpler, has 
 higher measure than a White Rabbit universe. But if I correlate this 
 assumption with the DA, I should also assume that it is more probable to be 
 in a universe with billions of billions of observers instead of this one.
 How are these two cases different?

I would answer this by predicting that any universe which allows for a
substantial chance of billions of billions of observers would have to
be much more complex.  It would have a larger description, either in
terms of its natural laws or of the initial conditions.

Aside from the DA, we have another argument against the fact that
our universe is well suited for advanced civilizations, namely the
Fermi paradox: that we have not been visited by aliens.  These two are
somewhat similar arguments, the DA limiting civilization in time, and
Fermi limiting it in space.  In both cases it appears that our universe
is not particularly friendly to advanced forms of life.

The empirical question presents itself like this.  Very simple universes
(such as empty universes, or ones made up of simple repeating patterns)
would have no life at all.  Perhaps sufficiently complex ones would be
full of life.  So as we move up the scale from simple to complex, at
some point we reach universes that just barely allow for advanced life
to evolve, and even then it doesn't last very long.  The question is,
as we move through this transition region from nonliving universes,
to just-barely-living ones, to highly-living ones, how long is the
transition region?

That is, how much more complex is a universe that will be full of life,
compared to one which just barely allows for life?  We don't know the
answer to that, but in principle it can be learned, through study and
perhaps experimental simulations.  If it takes only a bit more complexity
to go from a just-barely-living universe to a highly-living one, then
we have a puzzle.  Why aren't we in one of the super-living universes,
when their complexity penalty is so low?

OTOH if it turns out that the transition region is wide, and that
you need a much more complex universe to be super-living than to be
just-barely-living, then that is consistent with what we see.  We are in
one of the universes in the transition region, and in fact so are most
advanced life forms.  The relative complexity of super-living universes
means that their measures are low, so even though they are full of life,
it is more likely for a random advanced life form to be in one of the
marginal universes like our own.
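
The trade-off can be put in one line of arithmetic.  A universe of
complexity K bits gets prior weight 2^-K; if it hosts N observers, a
random observer finds itself among such universes with weight
proportional to 2^-K * N.  A toy comparison in Python, with entirely
invented numbers:

    import math

    def log2_weight(k_bits, n_observers):
        # log2 of (2^-K * N): prior measure times observer count.
        return -k_bits + math.log2(n_observers)

    # Suppose super-living universes need 50 more bits of description
    # but host only 10^11 times as many observers.  Since 2^50 is about
    # 10^15, the marginal universes still dominate:
    print(log2_weight(100, 1e9))     # just-barely-living: ~ -70.1
    print(log2_weight(150, 1e20))    # super-living:       ~ -83.6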

In this way the DA is consistent with the fact that we don't live in
a magical universe, but it implies some mathematical properties of the
nature of computation which we are not yet in a position to verify.

Hal Finney



Re: death

2005-06-19 Thread Hal Finney
Stathis Papaioannou writes:
 Yes; hence, everyone is immortal. But leaving that much-debated issue aside 
 for now, I'm not sure that I understand what, if anything, you would accept 
 as a method of surviving the death of your physical body. Would you consider 
 that scanning your brain at the moment of death and uploading your mind to a 
 computer constitutes survival? What about the Star Trek teleporter: is that 
 a method of transportation or of execution? If you can accept the 
 possibility that you can survive the death of your physical body at all, 
 then I think you have to accept that the people in my thought experiment are 
 *not* killed, despite the death of their physical bodies, just as in the 
 case of mind uploading or teleportation.

I guess I would say, I would survive death via anything that does not
reduce my measure.  If I am stopped here, I should be started over there,
or back then, or when such-and-such happens.  If my measure is conserved
then I can be happy.  If it can be increased, I will be that much happier.

Both uploading and transporting conserve measure, so they are not death.
Being killed and having only one in 10^100 of me continue does reduce
my measure, so that is death, death on a scale that has never been seen
before in the universe.  (Compensated by birth on a scale that has never
been seen before... So morally maybe it's not that bad.  Still it's
jerking people around to an amazing degree.)

Hal Finney



Re: Time travel in multiple universes

2005-06-19 Thread Hal Finney
 the crooked photon paths.

What of the version of Marty's mother who fell in love with him?  What is
it like to be her?  I think QM would predict that she is of measure zero.
It's not clear what that means, but under the ASSA it would seem to mean
that there is essentially zero chance that anyone will find themself
in such a situation.  Yet, she had to live, she had to breathe, her
heart had to beat and fall in love, for the paradox to be recognized.
Was she a zombie?  Did she act as an automaton, without consciousness?
How could she fall in love if so?

These are tough questions and I don't really have an answer for them.
On the one hand, it seems that we will never have to face the prospect
of living in a paradox.  On the other, it seems that someone must live
there or else there would be no way of recognizing that a paradox would
arise out of certain actions.

A related manifestation of this paradox in time travel stories is the
creation of information out of nothing.  The classical example is the man
who is given a gift by his older self of a time machine from the future.
He presents it to the world and is acclaimed as the inventor of time
travel.  All future time machines are based on his model.  There's no
paradox here, but who invented the time machine?  Well, the universe did.
How did the universe get that intelligence?  From the shadow worlds.

Imagine, for example, if the time machine were actually invented by
someone, and they wanted to go back in time and give it to their younger
self.  That would be a paradox and would be prevented.  But maybe the
simplest way to prevent it is to just skip the invention part and let the
younger man receive the time machine as a gift.  This solves the problem,
avoids a paradox, achieves local consistency, and perhaps requires less
work on nature's part.

Deutsch, or maybe it was Moravec, wrote an article years ago about
how you could coerce nature into doing computation for you, if you had
a time machine.  Basically you would set things up so that a paradox
would arise unless a randomness circuit got lucky and came up with a
solution to a hard-to-solve problem.  Where does the solution come from?
Again you are getting nature to act as a quantum computer for you,
in the shadow branches.

Isaac Asimov had a story about a chemical which would dissolve *before*
it was placed in water.  He wrote about how scientists learned to extend
the time span, and then the military found a way to use this effect
for a bomb.  When a sample of the chemical was observed to dissolve, it
was sealed in a dry, water-tight container and dropped in an unfriendly
country.  Now, in so many hours, it was certain that the interior of the
bomb would get wet.  But how could that happen?  The only force strong
enough was a disastrous flood, of such magnitude that the container
would be carried off and torn open, so that the chemical could dissolve.
And of course this would do tremendous damage to the enemy's facilities.

Once again, nature is forced to act intelligently, even strategically,
in order to prevent paradox.  The substance had to get wet, it had, in
a sense, already gotten wet, and nature had to find the best way to do it.

One final point, which is that there is a related phenomenon in some of
our multiverse models.  It seems that to avoid a paradox, nature must
in some sense try different things to see which ones work.  But when
does it do this?  How much time does it take?  Well, it can't happen in
the regular time coordinate.  We already know what happens there, the
non-paradoxical time line occurs and no paradoxes exist.  I suggested
that there are parallel universes where the paradoxes occur, and this
provides a mechanism and place and time for the extra work, the trial
and error, to happen.

In some multiverse models, we allow for a certain amount of trial and
error to create a universe.  We don't always stick to just a simple case
where we set up the initial conditions and natural laws, and evolve them
forward, time-step by time-step.  We have discussed more complex rules
which would work in part by trial and error, setting things first one
way and then another, until various consistency criteria are satisfied.
Such an optimization or satisficing machine might be a better model
for our own universe than the simple Cartesian clockwork program we may
naively imagine as our first conception.

The point is that there is a time coordinate within the universe, but it
is not necessarily the same as the time coordinate of the computer that
is creating the universe.  That computer may be going back and forth,
tweaking here, changing there, taking a long time just to set up a small
patch of space-time in its output tape.  This is another way to think of
where and when the alternatives for paradox free time travel could
be considered and rejected.

Hal Finney



Re: death

2005-06-18 Thread Hal Finney
Stathis Papaioannou writes:
 Hal Finney writes:
 God creates someone with memories of a past life, lets him live for a
 day, then instantly and painlessly kills him.
 
 What would you say that he experiences?  Would he notice his birth and
 death?  I would generally apply the same answers to the 10^100 people
 who undergo your thought experiment.

Keep in mind that I was just trying to answer your question very
directly and literally, about the person would experience in your
thought experiment.  I wasn't trying to get all moralistic about it.
Maybe he minds about being killed, maybe he doesn't.  I think most people
would mind, in which case I think God is being pretty cruel.  But all
that morality is pretty much irrelevant to the simple question of what
he would experience.  I have tried to answer that as straightforwardly
as I can, above.

 Before continuing, it is worth looking at the definition of death. The 
 standard medical definition will not do for our purposes, because it doesn't 
 allow for future developments such as reviving the cryogenically preserved, 
 mind uploads, teleportation etc. A simple, general purpose definition which 
 has been proposed before on this list is that a person can be said to die at 
 a particular moment when there is no chance that he will experience a next 
 moment, however that experience might come about. Equivalently, death 
 occurs when there is no successor observer moment, anywhere or ever.

That definition doesn't make any sense in the context of "everything
exists," because by definition every possible observer moment exists.

Hal Finney



Re: copy method important?

2005-06-18 Thread Hal Finney
 I'm no physicist, but doesn't Heisenberg's Uncertainty Principle forbid
 making exact quantum-level measurements, hence exact copies?  If so, then
 all this talk of making exact copies is fantasy.
 Norman Samish

You can't *specifically* copy a quantum state, but you can create
systems in *every possible* quantum state (of a finite size), hence you
can make an ensemble which contains a copy of a given quantum system.
You can't say which specific item in the ensemble is the copy, but you
can make a copy.  That may or may not be sufficient for a particular
thought experiment to go forward.

In practice most people believe that consciousness does not depend
critically on quantum states, so making a copy of a person's mind would
not be affected by these considerations.

Hal Finney



Re: copy method important?

2005-06-18 Thread Hal Finney
Norman Samish writes:
 Isn't it possible that decision processes of the brain, hence 
 consciousness, DOES depend critically on quantum states?

Yes, it's possible.  There is a school of thought which advances this
position.  Penrose and Hameroff are a couple of the names, off the top
of my head.  There is an extensive literature on the subject to which you
could find some entries via Google.  I just did so and found archives
of a mailing list called QUANTUM-MIND which is all about this subject.

Nevertheless I think it is safe to say that the opposite opinion
is more widespread, that the mind does not depend critically on any
quantum property.  One of the main reasons is that quantum coherence
is very difficult to maintain outside of carefully prepared laboratory
conditions.  Another point is that our models of neurons do not require
quantum behavior, yet computer simulations suggest that they can learn
patterns and respond in meaningful ways similar to actual neural tissue.
Of course we are far from being able to simulate anything at the level
of consciousness, but so far there is nothing observable about neural
behavior that suggests nonclassical effects.

 My understanding of the workings of the brain is that my action, whether 
 thought or deed, is determined by whether or not certain neurons fire.  This 
 depends on many other neurons.  So the brain can be in a state of delicate 
 balance, where it could be impossible to predict whether or not the neuron 
 fires.
 We all have to make decisions where the pluses apparently equal the 
 minuses.  It would take very little to tip the balance one way or the other. 
 Perhaps, at the deepest level, the route we take depends on whether an 
 electron has left or right polarization, or some other quantum property - 
 which we agree can't be measured.

I think it is doubtful that neurons often get into a condition where
they are so delicately balanced that a single electron could make a
difference.  There are a lot of electrons in a neuron!  But even if it
did happen, it wouldn't mean that the neuron *depends* on this effect.
A simulation of a brain that was non-quantum might not behave 100% the
same as the real brain being modelled, but it would probably work ok.
By their nature, brains need to be robust and immune to disturbances.
Neurons are constantly dying, their internals assaulted by changes in
blood chemistry, but the brain keeps chugging away.  It's not exactly
a delicate flower.  Again, this is exactly the opposite of the quantum
behavior we observe in the lab, which is extremely sensitive and gets
messed up if you look at it funny.

 If this is true, then perhaps Free Will (or at least behavior that is, 
 in principle, unpredictable) does exist.

Right, well, for many people, being at the mercy of unpredictable and
uncontrollable randomness may be "free" but it's hardly "willful."

Hal Finney



RE: Dualism and the DA

2005-06-17 Thread Hal Finney
Jonathan Colvin writes:
 In the process of writing this email, I did some googling, and it seems my
objection has been independently discovered (some time ago). See
 http://hanson.gmu.edu/nodoom.html

 In particular, I note the following section, which seems to mirror my
 argument rather precisely:

 It seems hard to rationalize this state space and prior outside a religious
 image where souls wait for God to choose their bodies. 
 This last objection may sound trite, but I think it may be the key. The
 universe doesn't know or care whether we are intelligent or conscious, and I
 think we risk a hopeless conceptual muddle if we try to describe the state
 of the universe directly in terms of abstract features humans now care
about. If we are going to extend our state descriptions to say where we sit
 in the universe (and it's not clear to me that we should) it seems best to
 construct a state space based on the relevant physical states involved, to
 use priors based on natural physical distributions over such states, and
 only then to notice features of interest to humans. 

 I've looked for rebuttals of Hanson, and haven't found any. Nick references
him, but comments only that Hanson also seems to be committed to the SIA (not
 sure why he thinks this).

There was an extensive debate between Robin Hanson and Nick Bostrom
on the Extropians list in mid 1998.  You can pick it up from the point
where Robin came up with the rock/monkey/human/posthuman model which
he describes in the web page you cite above, at this link:
http://forum.javien.com/conv.php?new=true&convdata=id::vae825qL-Gceu-2ueS-wFbo-Kwj0fIHLv6dh

You can also try looking this earlier thread,
http://forum.javien.com/conv.php?new=true&convdata=id::U9mLfRBF-z8ET-BDyq-8Sz1-5UotvKx2iIS2
and focus on the postings by Nick and Robin, which led Robin to produce
his formal model.

I think if you look at the details however you will find it is Robin
Hanson who advocates the you could have been a rock position and Nick
Bostrom who insists that you could only have been other people.  This
seemed to be one of the foundations of their disagreement.

As far as the Self Indication Axiom, it might be due to such lines as
this, from Robin's essay you linked to:

And even if everyone had the same random chance of developing amnesia,
the mere fact that you exist suggests a larger population. After all,
if doom had happened before you were born, you wouldn't be around to
consider these questions.

I think this is similar to the reasoning in the SIA.

Hal Finney



Re: another puzzzle

2005-06-16 Thread Hal Finney
Stathis Papaioannou writes:
 You find yourself in a locked room with no windows, and no memory of how you 
 got there. The room is sparsely furnished: a chair, a desk, pen and paper, 
 and in one corner a light. The light is currently red, but in the time you 
 have been in the room you have observed that it alternates between red and 
 green every 10 minutes. Other than the coloured light, nothing in the room 
 seems to change. Opening one of the desk drawers, you find a piece of paper 
 with incredibly neat handwriting. It turns out to be a letter from God, 
 revealing that you have been placed in the room as part of a philosophical 
 experiment. Every 10 minutes, the system alternates between two states. One 
 state consists of you alone in your room. The other state consists of 10^100 
 exact copies of you, their minds perfectly synchronised with your mind, each 
 copy isolated from all the others in a room just like yours. Whenever the 
 light changes colour, it means that God is either instantaneously creating 
 (10^100 - 1) copies, or instantaneously destroying all but one randomly 
 chosen copy.

 Your task is to guess which colour of the light corresponds with which state 
 and write it down. Then God will send you home.

Let me make a few comments about this experiment.  I would find it quite
alarming to be experiencing these conditions.  When the light changes
and I go from the high to the low measure state, I would expect to die.
When it goes from the low to the high measure state, I would expect that
my next moment is in a brand new consciousness (that shares memories
with the old).  Although the near-certainty of death is balanced by the
near-certainty of birth, it is to such an extreme degree that it seems
utterly bizarre.  Conscious observers should not be created and destroyed
so cavalierly, not if they know about it.

Suppose you stepped out of a duplicating booth, and a guy walked up with
a gun, aimed it at you, pulled the trigger and killed you.  Would you
say, "oh, well, I'm only losing two seconds of memories, my counterpart
will go on anyway"?  I don't think so; I think you would be extremely
alarmed and upset at the prospect of your death.  The existence of
your counterpart would be small comfort.  I am speaking specifically
of your views, Stathis, because I think you have already expressed your
disinterest in your copies.

God is basically putting you in this situation, but to an enormously,
unimaginably vaster degree.  He is literally playing God with your
consciousness.  I would say it's a very bad thing to do.

And what happens at the end?  Suppose I guess right, all 10^100 of me?
How do we all go home?  Does God create 10^100 copies of entire
universes for all my copies to go home to as a reward?  I doubt it!
Somehow I think the old guy is going to kill me off again, all but one
infinitesimal fraction of me, and let this tiny little piece go home.

Well, so what?  What good is that?  Why do I care, given that I am
going to die, what happens to the one in 10^100 part of me?  That's an
inconceivably small fraction.

In fact, I might actually prefer to have that tiny fraction stay in the
room so I can be reborn.  Having 10^100 copies 50% of the time gives me
a lot higher measure than just being one person.  I know I just finished
complaining about the ethical problems of putting a conscious entity in
this situation, but maybe there are reasons to think it's good.

So I don't necessarily see that I am motivated to follow God's
instructions and try to guess.  I might just want to sit there.
And in any case, the reward from guessing right seems pretty slim
and unmotivating.  Congratulations, you get to die.  Whoopie.

Hal Finney


