Re: Teleportation thought experiment and UD+ASSA

2006-06-27 Thread Saibal Mitra

Hal, thanks for explaining!

I think that your approach makes a lot of sense. Applying this to copying
experiments, the probability of finding yourself to be the digital copy is:

m1/[m1 + m2]

where m1 is the measure of the mental experience corresponding to knowing
that you are the digital copy and m2 the measure of the mental experience
corresponding to knowing that you are still in biological form. I think that
for practical implementations m1 = m2 because the digital implementation
will just simulate the brain, so the complexity of the translation program
would be practically the same.
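Numerically the formula above is a one-liner; a minimal sketch (the
function name and the example measures are hypothetical, purely
illustrative):

```python
def p_digital(m1: float, m2: float) -> float:
    """Probability of finding yourself to be the digital copy,
    where m1 is the measure of the 'digital' experience and m2
    that of the 'biological' one."""
    return m1 / (m1 + m2)

# With m1 = m2, as argued above for practical implementations,
# the probability comes out to exactly 1/2.
print(p_digital(1.0, 1.0))  # 0.5
```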

Saibal



- Original Message - 
From: "Hal Finney" <[EMAIL PROTECTED]>
To: 
Sent: Wednesday, June 21, 2006 08:49 AM
Subject: Re: Teleportation thought experiment and UD+ASSA


>
> "Saibal Mitra" <[EMAIL PROTECTED]> writes:
> > I don't understand why you consider the measures of the programs that
> > do the simulations. The ''real'' measure should be derived from the
> > algorithmic complexity of the laws of physics that describe how the
> > computers/brains work. If you know for certain that a computation will
> > be performed in this universe, then it doesn't matter how it is
> > performed.
>
> I think what you're saying here is that if a mental state is instantiated
> by a given universe, the contribution to its measure should just be
> the measure of the universe that instantiates it.  And that universe's
> measure is based on the complexity of its laws of physics.
>
> I used to hold this view, but I eventually abandoned it because of a
> number of problems.  I need to go back and collect the old messages
> and discussions that we have had and put them into some kind of order.
> But I can mention a couple of issues.
>
> One problem is the one I just wrote about in my reply to Russell, the
> fuzziness of the concept of implementation.  In at least some universes
> we may face a gray area in deciding whether a particular computation,
> or more specifically a particular mental state, is being instantiated.
> Philosophers like Hans Moravec apparently really believe that every
> system instantiates virtually every mental state!  If you look at the
> right subset of the atomic vibrations inside a chunk of rock, you can
> come up with a pattern that is identical to the pattern of firing of
> neurons in your own brain.  Now, most philosophers reject this, they come
> up with various technical criteria that implementations have to satisfy,
> but as I wrote to Russell I don't think any of these work.
>
> The other problem arises from fuzziness in what counts as a "universe".
> The problem is that you can write very simple programs which will
> create your mental state.  For example, the Universal Dovetailer does
> just that.  But the UD program is much smaller than what our universe's
> physical laws probably would be.  Does the measure of the UD "count" as a
> contribution to every mind it creates?  If so, then it will dominate over
> the contributions from more conventional universes; and further, since
> the UD generates all minds, it means that all minds have equal measure.
> To reject the UD as a cheating non-universe means that we will need a
> bunch of ad hoc rules about what counts as a universe and what does not,
> which are fundamentally arbitrary and unconvincing.
>
> Then there are all those bothersome disputes which arise in this model,
> such as whether multiple instantiations should add more measure than
> just one; or whether a given brain in a small universe should get more
> measure than the same brain in a big universe (since it uses a higher
> proportion of the universe's resources in the first case).  All these
> issues, as well as the ones above, are addressed and answered in my
> current framework, which is far simpler (the measure of a mental state
> is just its Kolmogorov measure - end of story).
>
>
> > The algorithmic complexity of the program needed to simulate a brain
> > refers to a ''personal universe''. You can think of the brain as a
> > machine that is simulating a virtual world in which the qualia we
> > experience exist. That world also exists independent of our brain in a
> > universe of its own. This world has a very small measure defined by
> > the very large algorithmic complexity of the program needed to specify
> > the brain.
>
> I agree with this, I think.  The program needed to specify a mental state
> a priori would be far larger than the program needed to specify the laws
> of physics which could cause that mental state to evolve "naturally".
> Both programs make a contribution to the measure of the mental state

RE: Re: Teleportation thought experiment and UD+ASSA

2006-06-23 Thread Stathis Papaioannou

Hal Finney writes:

> The problem is that there seems to be no basis for judging the validity
> of this kind of analysis.  Do we die every instant?  Do we survive sleep
> but not being frozen?  Do we live on in our copies?  Does our identity
> extend to all conscious entities?  There are so many questions like
> this, but they seem unanswerable.  And behind all of them lurks our
> evolutionary conditioning forcing us to act as though we have certain
> beliefs, and tricking us into coming up with logical rationalizations
> for false but survival-promoting beliefs.

There is a way around some of these questions. When I ask whether my copy 
produced as a result of teleportation, or whatever, will "really be me", what I 
want to know is whether that copy will subjectively stand in the same 
relationship to me as I stand now in relationship to my self of a moment ago. 
It doesn't really matter to me what process I go through (provided that the 
process does not have other unpleasant side-effects, of course) as long as this 
is the end result. In fact, I am quite happy with the notion that I live only 
transiently, because it does away with all the paradoxes of personal identity. 
"Normal life" then consists in the relationship between these transient selves: 
that they have certain thoughts in a certain temporal sequence, memories of 
previous selves, a sense of identity persisting through time, and so on. While 
it is true that in the world with which we are all familiar this sequence of 
selves is implemented in a single organism living its life from birth to death, 
there is no logical reason why this has to be so; and if you consider that 
there may not be a single atom in your body today that was there a year ago, 
physical continuity over time is just an illusion anyway.  
 
> > Even if it were possible to imagine another way of living my life which
> > did not entail dying every moment, for example if certain significant
> > components in my brain did not turn over, I would not expend any effort
> > to bring this state of affairs about, because if it made no subjective
> > or objective difference, what would be the point? Moreover, there would
> > be no reason for evolution to favour this kind of neurophysiology unless
> > it conferred some other advantage, such as greater metabolic efficiency.
> 
> Right, so there are two questions here.  One is whether there could be
> reasons to prefer a circumstance which seemingly makes no objective or
> subjective difference.  I'll say more about this later, but for now I'll
> just note that it is often impossible to know whether some change would
> make a subjective difference.

Yes, it does present difficulties, not least because we haven't managed to 
duplicate a person yet and there is no experimental data! But it wouldn't be 
fair to insist, if we did do the experiment, that we can't really know the 
subject's first person experience, because the same criticism could be made of 
any situation with a human subject. For example, we could say that we can't be 
sure someone who has had transient loss of consciousness is the same person 
afterwards, despite their insistence that they feel the same, because it is 
impossible to know what they are actually feeling, and even if we relied on 
their verbal account, we could not be sure that they have an accurate memory of 
what they were like before the incident.
 
> The other question is whether we could or should even try to overcome
> our evolutionary programming.  If evolution doesn't care if we die
> once we have reproduced, should we?  If evolution tells us to sacrifice
> ourselves to save two children, eight cousins, or 16 great-great uncles,
> should we?  In the long run, we might be forced to obey the instincts
> built into us by genes.  But it still is interesting to consider the
> deeper philosophical issues, and how we might hypothetically behave if
> we were free of evolutionary constraints.

I was actually trying to make a different point. If the subject undergoing 
teleportation does not have his identity preserved, as opposed to what would 
have happened if he had continued living life normally, then this means that - 
despite his behaviour being the same and despite his insistence that he feels 
the same - some subtle change or error was introduced as a result of the 
teleportation. Since the matter in a person's body is turning over all the 
time, there is a constant risk of introducing copying errors as the various 
cellular components (including neuronal components) are replaced. Those errors 
that would have a negative impact on the organism's survival chances are weeded 
out by evolution, while those that make no difference at all remain. The 
putative errors introduced by teleportation are of a type which, at the very 
least, do not change the subject's external behaviour, even though they result 
in the non-preservation of personal identity (whatever that might mean). Now, 
even though it might b

Re: Teleportation thought experiment and UD+ASSA

2006-06-22 Thread "Hal Finney"

Bruno raises a lot of good points, but I will just focus on a couple
of them.

The first notion that I am using in this analysis is the assumption that a
first-person stream of consciousness exists as a Platonic object.  My aim
is then to estimate the measure of such objects.  I don't know whether
people find this plausible or not, so I won't try to defend it yet.

The second part, which I know is more controversial, is that it is
possible to represent this object as a bit string, or as some similar,
concrete representation.  I think there are a couple of challenges
here.  The first is how to turn something as amorphous and intangible
as consciousness into a concrete representation.  But I assume that
subsequent development of cognitive sciences will eventually give us a
good handle on this problem and allow us to diagram, graph and represent
streams of consciousness in a meaningful way.  As one direction to pursue,
we know that brain activity creates consciousness, hence a sufficiently
compressed representation of brain activity should be a reasonable
starting point as a representation of first-person experience.

Another issue that many people have objected to is the role of time.
Consciousness, it is said, is a process, not a static structure such as
might be represented by a bit string.  IMO this can be dealt with by
interpreting the bit string as a multidimensional object, and treating
one of the dimensions as time.  See, for example, one of Wolfram's 1-D
cellular automaton outputs:

http://en.wikipedia.org/wiki/Image:CA_rule30s.png

We see something that can alternatively be interpreted as a pure bit
string; as a two-dimensional array of bits; or as a one-dimensional
bit string evolving in time.  In the same way we can capture temporal
evolution of consciousness by interpreting the bit string as having a
time dimension.
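Hal's rule 30 example is easy to reproduce; the sketch below (width,
step count and the wrap-around boundary are arbitrary choices, not
anything from the original posts) builds the bits once and then reads
them in the three ways described:

```python
def rule30_rows(width=16, steps=8):
    """Evolve Wolfram's rule 30 from a single live centre cell;
    each successive row is one time step of the 1-D automaton."""
    row = [0] * width
    row[width // 2] = 1
    rows = [row[:]]
    for _ in range(steps - 1):
        # Rule 30: new cell = left XOR (centre OR right)
        row = [row[(i - 1) % width] ^ (row[i] | row[(i + 1) % width])
               for i in range(width)]
        rows.append(row[:])
    return rows

rows = rule30_rows()
flat = ''.join(str(b) for r in rows for b in r)
# Three readings of the same bits:
#   1. a pure bit string:             flat
#   2. a 2-D array of bits:           rows
#   3. a 1-D string evolving in time: rows[t] at time step t
```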

An important point is that although there may be many alternative ways
and notations to represent consciousness, they should all be isomorphic,
and only a relatively short program should be necessary to map from one
to another.  Hence, the measure computed for all of these representations
will be about the same, and therefore it is meaningful to speak of this
as the measure of the experience as a platonic entity.

Bruno also questioned my use of a physical universe in my analysis.
I am not assuming that physical universes exist as the basis of reality.
I only expressed the analysis in that form because we were given a
particular situation to analyze, and that situation was expressed as
events in a single universe.

The Universal Dovetailer does not play a principal role in my analysis,
because it does not play such a role in Kolmogorov complexity.  At most,
the Universal Dovetailer can be used as a heuristic device to explain
what it might mean to "run all computations" in order to explain
K complexity.

I think one difference between K complexity and Bruno's reasoning with the
Universal Dovetailer is that the former focuses on sizes of programs while
Bruno seems to work more in terms of run time.  In the K complexity view,
the measure of an information object is (roughly) 1/2^L, where L is the
size of the shortest program which outputs that object.  Equivalently,
the measure of an information object is the fraction of all programs
which output that object, where programs are sampled uniformly from
all bit strings (or from whatever the input alphabet is for the UTM).
This does not have anything to do with run time.  Some bit patterns
may have short programs that take a very long run time to output them.
Such bit patterns are considered to have low complexity and high measure,
despite the long run time needed.
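The two equivalent readings in this paragraph (1/2^L for the shortest
program, and the fraction of uniformly sampled programs producing the
object) can be made concrete with a deliberately toy "machine", invented
here only so that both formulations are computable and visibly agree; a
real universal machine would of course behave very differently:

```python
from fractions import Fraction

def measure_from_length(L: int) -> Fraction:
    """Measure 1/2^L of an object whose shortest program has length L."""
    return Fraction(1, 2 ** L)

# Toy "machine": a program is a bit string that outputs its own bits
# twice. Then x is produced exactly by the single program consisting of
# its first half, so the fraction of uniformly sampled length-n programs
# producing x is 1/2^n with n = len(x)/2 -- matching measure_from_length.
def toy_measure(x: str) -> Fraction:
    half = len(x) // 2
    if len(x) % 2 or x[:half] != x[half:]:
        return Fraction(0)   # no program of this toy machine outputs x
    return Fraction(1, 2 ** half)

assert toy_measure('0101') == measure_from_length(2)  # both equal 1/4
```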

I think Bruno has sometimes said that the Universal Dovetailer makes some
things have higher measure than others because they get more run time.
I'm not sure how this would work, but it is a difference from the
Kolmogorov complexity (aka Universal Distribution) view that I am using.

Okay, those are some of the foundational questions and assumptions that
I think are raised by Bruno's analysis.  The rest of it goes through as
I have described many times.

Hal

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list
-~--~~~~--~~--~--~---



RE: Re: Teleportation thought experiment and UD+ASSA

2006-06-22 Thread "Hal Finney"

"Stathis Papaioannou" <[EMAIL PROTECTED]> writes:
> OK, I think I'm clear on what you're saying now. But suppose I argue
> that I will not survive the next hour, because the matter making up my
> synapses will have turned over in this time. To an outside observer the
> person taking my place would seem much the same, and if you ask him, he
> will share my memories and he will believe he is me. However, he won't
> be me, because I will by then be dead. Is this a valid analysis? My view
> is that there is a sense in which it *is* valid, but that it doesn't
> matter. What matters to me in survival is that there exist a person in
> an hour from now who by the usual objective and subjective criteria we
> use identifies as being me.

The problem is that there seems to be no basis for judging the validity
of this kind of analysis.  Do we die every instant?  Do we survive sleep
but not being frozen?  Do we live on in our copies?  Does our identity
extend to all conscious entities?  There are so many questions like
this, but they seem unanswerable.  And behind all of them lurks our
evolutionary conditioning forcing us to act as though we have certain
beliefs, and tricking us into coming up with logical rationalizations
for false but survival-promoting beliefs.

I am attracted to the UD+ASSA framework in part because it provides
answers to these questions, answers which are in principle approximately
computable and quantitative.  Of course, it has assumptions of its own.
But modelling a subjective lifespan as a computation, and asking how
much measure the universe adds to that computation, seems to me to be
a reasonable way to approach the problem.


> Even if it were possible to imagine another way of living my life which
> did not entail dying every moment, for example if certain significant
> components in my brain did not turn over, I would not expend any effort
> to bring this state of affairs about, because if it made no subjective
> or objective difference, what would be the point? Moreover, there would
> be no reason for evolution to favour this kind of neurophysiology unless
> it conferred some other advantage, such as greater metabolic efficiency.

Right, so there are two questions here.  One is whether there could be
reasons to prefer a circumstance which seemingly makes no objective or
subjective difference.  I'll say more about this later, but for now I'll
just note that it is often impossible to know whether some change would
make a subjective difference.

The other question is whether we could or should even try to overcome
our evolutionary programming.  If evolution doesn't care if we die
once we have reproduced, should we?  If evolution tells us to sacrifice
ourselves to save two children, eight cousins, or 16 great-great uncles,
should we?  In the long run, we might be forced to obey the instincts
built into us by genes.  But it still is interesting to consider the
deeper philosophical issues, and how we might hypothetically behave if
we were free of evolutionary constraints.

Hal Finney




Re: Teleportation thought experiment and UD+ASSA

2006-06-22 Thread Bruno Marchal

Hal,

Here I agree with everything you say. Functionalism presupposes 
computationalism, but computationalism makes functionalism false: 
exit functionalism. Even Maudlin makes this confusion. I repeat that 
both thought experiments and Gödel's incompleteness show that if we are 
machines then we cannot know which machine we are, nor can we know for 
sure our substitution level. We can only bet, empirically (and 
religiously!). We can also deduce that our experiences supervene on the 
continuum of computational histories appearing below our substitution 
level (those comp histories that we cannot distinguish). This 
qualitatively explains quantum facts, i.e. why matter will behave as if 
it were emerging from an infinity of "parallel" computations.

Now I cannot take seriously the Mallah-Chalmers problem of a rock 
instantiating finite state automata, given that with comp, consciousness 
arises from the possible behaviour of an infinity of non-finite-state 
automata (universal machines). Mallah and Chalmers are adding a naïve 
"solution of the mind-body problem" (where minds are attached to 
concrete computations) on top of a naïve, Aristotelian view of matter 
which already contradicts comp (by the UDA).

Bruno


On 21 June 2006, at 08:11, Hal Finney wrote:

>
> Russell Standish <[EMAIL PROTECTED]> writes:
>> On Tue, Jun 20, 2006 at 09:35:12AM -0700, "Hal Finney" wrote:
>>> I think that one of the fundamental principles of your COMP 
>>> hypothesis
>>> is the functionalist notion, that it does not matter what kind of 
>>> system
>>> instantiates a computation.  However I think this founders on the 
>>> familiar
>>> paradoxes over what counts as an instantiation.  In principle we can
>>> come up with a continuous range of devices which span the 
>>> alternatives
>>> from non-instantiation to full instantiation of a given computation.
>>> Without some way to distinguish these, there is no meaning to the 
>>> question
>>> of when a computation is instantiated; hence functionalism fails.
>>>
>>
>> I don't follow your argument here, but it sounds interesting. Could 
>> you
>> expand on this more fully? My guess is that ultimately it will depend
>> on an assumption like the ASSA.
>
> I am mostly referring to the philosophical literature on the problems 
> of
> what counts as an instantiation, as well as responses considered here
> and elsewhere.  One online paper is Chalmers' "Does a Rock Implement
> Every Finite-State Automaton?", http://consc.net/papers/rock.html.
> Jacques Mallah (who seems to have disappeared from the net) discussed
> the issue on this list several years ago.
>
> Now, Chalmers (and Mallah) claimed to have a solution to decide when
> a physical system implements a calculation.  But I don't think they
> work; at least, they admit gray areas.  In fact, I think Mallah came
> up with the same basic idea I am advocating, that there is a degree of
> instantiation and it is based on the Kolmogorov complexity of a program
> that maps between physical states and corresponding computational 
> states.
>
> For functionalism to work, though, it seems to me that you really need
> to be able to give a yes or no answer to whether something implements a
> given calculation.  Fuzziness will not do, given that changing the 
> system
> may kill a conscious being!  It doesn't make sense to say that someone 
> is
> "sort of" there, at least not in the conventional functionalist view.
>
> A fertile source of problems for functionalism involves the question
> of whether playbacks of passive recordings of brain states would be
> conscious.  If not (as Chalmers and many others would say, since they
> lack the proper counterfactual behavior), this leads to a machine with 
> a
> dial which controls the percentage of time its elements behave 
> according
> to a passive playback versus behaving according to active computational
> rules.  Now we can turn the knob and have the machine gradually move 
> from
> unconsciousness to full consciousness, without changing its behavior in
> any way as we twiddle the knob.  This invokes Chalmers' "fading qualia"
> paradox and is again fatal for functionalism.
>
> Maudlin's machines, which we have also mentioned on this list from time
> to time, further illustrate the problems in trying to draw a bright 
> line
> between implementations and clever non-implementations of computations.
>
> In short I view functionalism as being fundamentally broken unless 
> there
> is a much better solution to the implementation question than I am 
> aware
> of.  Therefore we cannot assume a priori that a brain implementation 
> and a
> computational implementation of mental states will be inherently the 
> same.
> And I have argued in fact that they could have different properties.
>
> Hal Finney
>
> >
>
http://iridia.ulb.ac.be/~marchal/



Re: Teleportation thought experiment and UD+ASSA

2006-06-22 Thread Bruno Marchal


On 21 June 2006, at 08:49, Hal Finney wrote (to Saibal Mitra):

snip

> and further, since
> the UD generates all minds, it means that all minds have equal measure.

Never underestimate the "basic fundamental stupidity" of the UD. The UD 
execution is very redundant and the measure will be relative. Unless 
you bet on the ASSA in this context, but this does not make sense (cf. 
our older discussion).



> And from this we conclude that the contribution of a universe to the
> measure of a conscious experience is not the universe's measure itself,
> but that measure reduced by the measure of the program which outputs
> that conscious experience given the universe data as input.

As far as I can make sense of "the universe as data", is it not a 
Relative SSA?



> As for the question above about the Universal Dovetailer universe, it 
> is
> easily solved in this framework.  The output of the UD is of 
> essentially
> no help in producing the mental state in question, because the output is
> so enormous

I think it is misleading to talk of any output of the UD. I guess you 
mean the computations or executions (material or not) of the UD.



> and we would have no idea where to look.

But from the first person point of view there is no need to know where 
to look.


> Hence the UD does
> not make a dominant contribution to mental state measure and we avoid
> the paradox without any need for ad hoc rules.


I would say simply that the UD is just a frame from which internal 
relative measure occurs. So, indeed the UD itself has no genuine 
measure attributed to it. This could be otherwise for some internal 
(perhaps quantum) universal dovetailer.


Bruno


http://iridia.ulb.ac.be/~marchal/





Re: Teleportation thought experiment and UD+ASSA

2006-06-22 Thread Bruno Marchal


On 20 June 2006, at 08:47, Hal Finney wrote:

>
> I'll offer my thoughts on first-person indeterminacy.  This is based
> on Wei Dai's framework which I have called UD+ASSA.


I guess you mean your UDist here.




> I am working on
> some web pages to summarize the various conclusions I have drawn from
> this framework.  (Actually, here I am going to in effect use the SSA
> rather than the ASSA, i.e. I will work not with observer-moments but
> with entire observer lifetimes.  But the same principles apply.)


Why? How? This is not obvious, and in some contexts not even consistent.




>
> Let us consider Bruno's example where you are annihilated in Brussels
> and then copies of your final state are materialized in Washington and
> Moscow, and allowed to continue to run.  What can we say about your
> subjective first-person expectations in this experiment?
>
> Here is how I would approach the problem.  It is a very straightforward
> computational procedure (in principle).


I don't think so. See below.




> Consider any hypothetical
> subjective, first person stream of consciousness.  This would basically
> be a record of the thoughts and experiences of a hypothetical observer.
> Let us assume that this can be written and recorded in some form.


OK. I would add that such a record is annihilated in the personal use 
of teleportation. If not, you can no longer distinguish it from a 
third-person record.




>
> Perhaps it is a record of neural firing patterns over the course of the
> observer's lifetime,

This hides some ambiguity. A "neural firing pattern" does not typically 
belong to first-person experience (unlike pain, pleasure, ...).


> or perhaps a more compressed description based on
> such information.

? (such a compression has nothing to do with the experience, so I don't 
understand).


>
> The question I would aim to answer is this: for any proposed, 
> hypothetical
> first-person lifetime stream of consciousness, how much measure does
> this hypothetical subjective lifetime acquire from the third-person
> events in the universe?

Which universe? And how do you link first-person experience and 
third-person description? You are talking as if the mind-body problem 
were solved.


>
> The answer is very simple: it is the conditional Kolmogorov measure of
> the subjective lifetime record, given the universe as input.


(Conditional) Kolmogorov measure is not computable (even in principle) 
unless you have an oracle for the halting problem. It is well defined, 
and does not depend on the chosen universal machine (except for a 
constant), but it is not computable. Still, it is computable "in the 
limit", but to use that feature you need to take into account the 
invariance for delays in "reconstitutions", and this changes the whole 
frame of reasoning.
I am not sure I can figure out what you mean by "universe" here, nor 
how a universe could be an input (nor an output, actually).



>  In other
> words, consider the shortest program which, given the universe as 
> input,
> produces that precise subjective lifetime record as output;


There is no algorithm for finding that shortest program (part of the 
price of the Church thesis, as in the diagonalization posts).
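The shortest program and its length are indeed uncomputable. A standard
practical stand-in, mentioned here only as an illustration (it is not
anything Hal or Bruno propose), is compressed length, which gives a
computable upper bound on Kolmogorov complexity up to a constant:

```python
import os
import zlib

def compressed_length(data: bytes) -> int:
    """Length of a zlib-compressed encoding: a computable *upper bound*
    on the (uncomputable) Kolmogorov complexity, up to a constant."""
    return len(zlib.compress(data, 9))

regular = b'01' * 500          # highly patterned: compresses well
random_ish = os.urandom(1000)  # incompressible with high probability
assert compressed_length(regular) < compressed_length(random_ish)
```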



> if the length
> of that program is L, then this universe contributes 1/2^L to the 
> measure
> of that subjective lifetime.
>
> Note that I am not trying to start from the universe and decide what 
> the
> first-person stream of consciousness is; rather, I compute the 
> numerical
> degree to which the universe instantiates any first-person stream of
> consciousness.  However, this does in effect answer the first question,
> since we can consider all possible streams of consciousness, and
> determine which one(s) the universe mostly adds measure to.


Only with an oracle for the halting problem. Still, I don't understand 
what you mean (in this context) by "this universe".
Also, minimal complexity is a prior akin to ... classical physics. Even 
quantum mechanicians do not really need it, because they can derive 
such minimization from quantum phase randomization, as Feynman 
discovered (see a concise explanation of this in my 2001 paper 
"Computation, Consciousness and the Quantum" (cf. url below)).



>  These would
> be the ones that we would informally say that the universe 
> instantiates.
>
> Now, let me illustrate how this would be applied to the situation in
> question, and some other thought experiments.  Specifically, let us
> imagine three hypothetical streams of consciousness: B goes through 
> life
> until the moment the subject is annihilated in Brussels, then stops.
> W goes through life as does B but continues with the life experiences
> from Washington.  And M is like W, going through life until the event
> in Brussels but then continuing with the events in Moscow.
>
> Normally we only consider first-person experiences like M and W when
> we discuss this experiment, where the consciousness "jumps" to Moscow
> or Wash

RE: Re: Teleportation thought experiment and UD+ASSA

2006-06-21 Thread Stathis Papaioannou


Hal Finney writes:
 
> > Yes, but every theoretical scientist hopes ultimately to be vindicated
> > by the experimentalists. I'm now not sure what you mean by the second
> > sentence in the above quote. What would you expect to find if (classical,
> > destructive) teleportation of a subject in Brussels to Moscow and/or
> > Washington were attempted?
> 
> From the third party perspective, I'd expect that we'd start with a
> person in Brussels, and end up with people in Moscow and Washington who
> each have the memories and personality of the person who is no longer
> in Brussels.  The population of Earth would have increased by one.
> I imagine that this is unproblematic and is simply a restatement of the
> stipulated conditions of the experiment.
> 
> The more interesting question to ask is whether I would submit to
> this, and if so, what would I expect?  Note that this is not subject
> to experimental verification.  When we have described the third party
> situation, we have already said everything that experimentalists could
> verify.  When those two people wake up in Moscow and Washington there
> is no conceivable experiment by which we can judge whether the person
> in Brussels has in some sense survived, or perhaps has done even better
> than surviving.  It's not even clear what these questions mean.
> 
> It was my attempt to formalize these questions which led to my analysis.
> Perhaps it is best if I go back to the more formal statement of the
> results, and say that the contribution of this universe to the measure
> of a person who experiences surviving the teleportation and wakes up in
> W or M is much less than the contribution to the measure of a person who
> walks into the machine in Brussels and never experiences anything else.
> At a minimum, this would make me hesitant to use the machine.
> 
> Now, other philosophical considerations might still convince me to use the
> machine; but it would be more like the two copies are my heirs, people
> who will live on after I am gone and help to put my plans into action.
> People sometimes sacrifice themselves for their children, and the argument
> would be even stronger here since these are far more similar to me than
> biological relations.  So even if I don't personally expect to survive
> the transition I might still decide to use the machine.
OK, I think I'm clear on what you're saying now. But suppose I argue that I will not survive the next hour, because the matter making up my synapses will have turned over in that time. To an outside observer the person taking my place would seem much the same, and if you asked him, he would share my memories and believe he is me. However, he won't be me, because I will by then be dead. Is this a valid analysis? My view is that there is a sense in which it *is* valid, but that it doesn't matter. What matters to me in survival is that there exists a person an hour from now who, by the usual objective and subjective criteria, is identified as being me.
 
Even if it were possible to imagine another way of living my life which did not entail dying every moment, for example if certain significant components in my brain did not turn over, I would not expend any effort to bring this state of affairs about, because if it made no subjective or objective difference, what would be the point? Moreover, there would be no reason for evolution to favour this kind of neurophysiology unless it conferred some other advantage, such as greater metabolic efficiency.
 
Stathis Papaioannou
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list
-~--~~~~--~~--~--~---




Re: Teleportation thought experiment and UD+ASSA

2006-06-21 Thread Russell Standish

On Wed, Jun 21, 2006 at 10:31:16AM -0700, "Hal Finney" wrote:
> 
> Russell Standish <[EMAIL PROTECTED]> writes:
> > If computationalism is true, then a person is instantiated by all
> > equivalent computations. If you change one instantiation to something
> > inequivalent, then that instantiation no longer "instantiates" the
> > person. The person continues to exist, as long as there remain valid
> > computations somewhere in the universe. And in almost any of the many
> > worlds variants we consider in this list, that will be true.
> 
> That's true, but even with the MWI, making an instantiation cease to
> exist decreases the measure of that person.  Around here we call that
> "murder".  The moral question still exists.  I don't see the MWI as
> rescuing functionalism and computationalism.
> 
> What, after all, do these principles mean?  They say that the
> implementation substrate doesn't matter.  You can implement a person
> using neurons or tinkertoys, it's all the same.  But if there is no way
> in principle to tell whether a system implements a person, then this
> philosophy is meaningless since its basic assumption has no meaning.
> The MWI doesn't change that.
> 
> Hal Finney
> 

There is no way to tell if a given system implements a person
REGARDLESS of whether functionalism is true or not.

The best we have is some kind of Turing test: if it walks and quacks
like the only person we know (ourselves), it is a person; if it doesn't,
there is little benefit in assuming that it is a person.

Cheers

-- 
*PS: A number of people ask me about the attachment to my email, which
is of type "application/pgp-signature". Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.


A/Prof Russell Standish                  Phone 8308 3119 (mobile)
Mathematics                              0425 253119 (")
UNSW SYDNEY 2052                         [EMAIL PROTECTED]
Australia                                http://parallel.hpc.unsw.edu.au/rks
International prefix  +612, Interstate prefix 02






Re: Teleportation thought experiment and UD+ASSA

2006-06-21 Thread Quentin Anciaux

Hi Hal,

On Wednesday 21 June 2006 at 19:31, Hal Finney wrote:
> What, after all, do these principles mean?  They say that the
> implementation substrate doesn't matter.  You can implement a person
> using neurons or tinkertoys, it's all the same.  But if there is no way
> in principle to tell whether a system implements a person, then this
> philosophy is meaningless since its basic assumption has no meaning.
> The MWI doesn't change that.

That's exactly Bruno's point, I think... What you've shown is that 
physicalism is not compatible with computationalism. In the UD view, there 
is no real "instantiation"; even the UD itself does not need to be 
instantiated, since only the existence of the algorithm itself is necessary.

Quentin




Re: Teleportation thought experiment and UD+ASSA

2006-06-21 Thread "Hal Finney"

Russell Standish <[EMAIL PROTECTED]> writes:
> If computationalism is true, then a person is instantiated by all
> equivalent computations. If you change one instantiation to something
> inequivalent, then that instantiation no longer "instantiates" the
> person. The person continues to exist, as long as there remain valid
> computations somewhere in the universe. And in almost any of the many
> worlds variants we consider in this list, that will be true.

That's true, but even with the MWI, making an instantiation cease to
exist decreases the measure of that person.  Around here we call that
"murder".  The moral question still exists.  I don't see the MWI as
rescuing functionalism and computationalism.

What, after all, do these principles mean?  They say that the
implementation substrate doesn't matter.  You can implement a person
using neurons or tinkertoys, it's all the same.  But if there is no way
in principle to tell whether a system implements a person, then this
philosophy is meaningless since its basic assumption has no meaning.
The MWI doesn't change that.

Hal Finney




RE: Re: Teleportation thought experiment and UD+ASSA

2006-06-21 Thread "Hal Finney"

"Stathis Papaioannou" <[EMAIL PROTECTED]> writes:
> Hal Finney writes:
> > I should first mention that I did not anticipate the conclusion that
> > I reached when I did that analysis.  I did not expect to conclude that
> > teleportation like this would probably not work (speaking figuratively).
> > This was not the starting point of the analysis, but the conclusion.
>
> Yes, but every theoretical scientist hopes ultimately to be vindicated
> by the experimentalists. I'm now not sure what you mean by the second
> sentence in the above quote. What would you expect to find if (classical,
> destructive) teleportation of a subject in Brussels to Moscow and/or
> Washington were attempted?

From the third party perspective, I'd expect that we'd start with a
person in Brussels, and end up with people in Moscow and Washington who
each have the memories and personality of the person who is no longer
in Brussels.  The population of Earth would have increased by one.
I imagine that this is unproblematic and is simply a restatement of the
stipulated conditions of the experiment.

The more interesting question to ask is whether I would submit to
this, and if so, what would I expect?  Note that this is not subject
to experimental verification.  When we have described the third party
situation, we have already said everything that experimentalists could
verify.  When those two people wake up in Moscow and Washington there
is no conceivable experiment by which we can judge whether the person
in Brussels has in some sense survived, or perhaps has done even better
than surviving.  It's not even clear what these questions mean.

It was my attempt to formalize these questions which led to my analysis.
Perhaps it is best if I go back to the more formal statement of the
results, and say that the contribution of this universe to the measure
of a person who experiences surviving the teleportation and wakes up in
W or M is much less than the contribution to the measure of a person who
walks into the machine in Brussels and never experiences anything else.
At a minimum, this would make me hesitant to use the machine.

Now, other philosophical considerations might still convince me to use the
machine; but it would be more like the two copies are my heirs, people
who will live on after I am gone and help to put my plans into action.
People sometimes sacrifice themselves for their children, and the argument
would be even stronger here since these are far more similar to me than
biological relations.  So even if I don't personally expect to survive
the transition I might still decide to use the machine.

Hal Finney




RE: Re: Teleportation thought experiment and UD+ASSA

2006-06-21 Thread Stathis Papaioannou


Hal Finney writes:
 
> I should first mention that I did not anticipate the conclusion that
> I reached when I did that analysis.  I did not expect to conclude that
> teleportation like this would probably not work (speaking figuratively).
> This was not the starting point of the analysis, but the conclusion.
Yes, but every theoretical scientist hopes ultimately to be vindicated by the experimentalists. I'm now not sure what you mean by the second sentence in the above quote. What would you expect to find if (classical, destructive) teleportation of a subject in Brussels to Moscow and/or Washington were attempted?
 
Stathis Papaioannou




Re: Teleportation thought experiment and UD+ASSA

2006-06-21 Thread "Hal Finney"

"Saibal Mitra" <[EMAIL PROTECTED]> writes:
> I don't understand why you consider the measures of the programs that do the
> simulations. The ''real'' measure should be derived from the algorithmic
> complexity of the laws of physics that describe how the computers/brains
> work. If you know for certain that a computation will be performed in this
> universe, then it doesn't matter how it is performed.

I think what you're saying here is that if a mental state is instantiated
by a given universe, the contribution to its measure should just be
the measure of the universe that instantiates it.  And that universe's
measure is based on the complexity of its laws of physics.

I used to hold this view, but I eventually abandoned it because of a
number of problems.  I need to go back and collect the old messages
and discussions that we have had and put them into some kind of order.
But I can mention a couple of issues.

One problem is the one I just wrote about in my reply to Russell, the
fuzziness of the concept of implementation.  In at least some universes
we may face a gray area in deciding whether a particular computation,
or more specifically a particular mental state, is being instantiated.
Philosophers like Hans Moravec apparently really believe that every
system instantiates virtually every mental state!  If you look at the
right subset of the atomic vibrations inside a chunk of rock, you can
come up with a pattern that is identical to the pattern of firing of
neurons in your own brain.  Now, most philosophers reject this; they come
up with various technical criteria that implementations have to satisfy,
but as I wrote to Russell I don't think any of these work.

The other problem arises from fuzziness in what counts as a "universe".
The problem is that you can write very simple programs which will
create your mental state.  For example, the Universal Dovetailer does
just that.  But the UD program is much smaller than what our universe's
physical laws probably would be.  Does the measure of the UD "count" as a
contribution to every mind it creates?  If so, then it will dominate over
the contributions from more conventional universes; and further, since
the UD generates all minds, it means that all minds have equal measure.
To reject the UD as a cheating non-universe means that we will need a
bunch of ad hoc rules about what counts as a universe and what does not,
which are fundamentally arbitrary and unconvincing.

Then there are all those bothersome disputes which arise in this model,
such as whether multiple instantiations should add more measure than
just one; or whether a given brain in a small universe should get more
measure than the same brain in a big universe (since it uses a higher
proportion of the universe's resources in the first case).  All these
issues, as well as the ones above, are addressed and answered in my
current framework, which is far simpler (the measure of a mental state
is just its Kolmogorov measure - end of story).
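Hal's working rule (the measure of a mental state is just its Kolmogorov measure) can be illustrated with a toy sketch. Kolmogorov complexity K is uncomputable, so the code below substitutes zlib-compressed size as a crude upper bound; everything in it, names and data alike, is my illustration of the idea, not part of the framework itself:

```python
import random
import zlib

def k_upper_bound(s: str) -> int:
    """Crude, computable upper bound on description length:
    zlib-compressed size in bits (a stand-in for the uncomputable K)."""
    return 8 * len(zlib.compress(s.encode()))

def toy_measure(s: str) -> float:
    """Universal-prior-style weight 2^-K(s), using the bound above."""
    return 2.0 ** -k_upper_bound(s)

# A highly regular "state" has a short description, so it receives far
# more measure than an incompressible one of the same length.
random.seed(0)
regular = "ab" * 500
irregular = "".join(chr(random.randint(33, 122)) for _ in range(1000))

assert k_upper_bound(regular) < k_upper_bound(irregular)
assert toy_measure(regular) > toy_measure(irregular)
```

On this picture "end of story" means exactly that: no further facts about implementations or universes are needed, only the description length of the state itself.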


> The algorithmic complexity of the program needed to simulate a brain refers
> to a ''personal universe''. You can think of the brain as a machine that is
> simulating a virtual world in which the qualia we experience exist. That
> world also exists independent of our brain in a universe of its own. This
> world has a very small measure defined by the very large algorithmic
> complexity of the program needed to specify the brain.

I agree with this, I think.  The program needed to specify a mental state
a priori would be far larger than the program needed to specify the laws
of physics which could cause that mental state to evolve "naturally".
Both programs make a contribution to the measure of the mental state,
but the second one's contribution is enormously greater.

The key point, due to Wei Dai, is that you can mathematically treat the
two on an equal footing.  As you have described it, we have a virtual
world with qualia being created by a brain; and you have that same
world existing independently as a universe of its own.  Those are pretty
different in a Schmidhuber type model.  The second case is the output of
one of the universe programs (a very very complex one).  The first case
is a rather intangible property of a universe program much like our own.

To unify them we ask the question of how we can output a representation
of that virtual world with qualia, using the shortest possible program.
Assuming that there actually is a universe which naturally evolves
a brain experiencing this mental state, we can do it with a two-part
program: the first which creates and evolves the universe, and the second
which analyzes the output of that universe to output the virtual world
representation we seek.  This second part basically translates the brain
activity, part of the universe created by the first part, into whatever
representation we have chosen for the virtual world and qualia.
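The two-part construction can be made concrete with a toy calculation. The bit counts below are invented purely for illustration; the point is only that the (physics + translator) route is enormously shorter than specifying the mental state directly, and so dominates the measure:

```python
from fractions import Fraction

def contribution(bits: int) -> Fraction:
    """Measure contributed by a description of the given length:
    2^-bits, kept as an exact rational to avoid float underflow."""
    return Fraction(1, 2 ** bits)

# Invented description lengths, in bits (illustrative assumptions only):
direct_spec = 5000   # spell out the mental state a priori, bit by bit
physics     = 300    # compact laws that evolve a brain "naturally"
translator  = 800    # map the brain activity to the chosen representation

# The two-part route (universe program + translation program) is far
# shorter, so its contribution to the state's measure dominates.
two_part = physics + translator
assert two_part < direct_spec
assert contribution(two_part) > contribution(direct_spec)
```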

Given that the universe created by the first part does evolve the
brain states ne

Re: Teleportation thought experiment and UD+ASSA

2006-06-21 Thread Russell Standish

On Tue, Jun 20, 2006 at 11:11:15PM -0700, "Hal Finney" wrote:
> 
> I am mostly referring to the philosophical literature on the problems of
> what counts as an instantiation, as well as responses considered here
> and elsewhere.  One online paper is Chalmers' "Does a Rock Implement
> Every Finite-State Automaton?", http://consc.net/papers/rock.html.
> Jacques Mallah (who seems to have disappeared from the net) discussed
> the issue on this list several years ago.
> 
> Now, Chalmers (and Mallah) claimed to have a solution to decide when
> a physical system implements a calculation.  But I don't think they
> work; at least, they admit gray areas.  In fact, I think Mallah came
> up with the same basic idea I am advocating, that there is a degree of
> instantiation and it is based on the Kolmogorov complexity of a program
> that maps between physical states and corresponding computational states.
> 
> For functionalism to work, though, it seems to me that you really need
> to be able to give a yes or no answer to whether something implements a
> given calculation.  Fuzziness will not do, given that changing the system
> may kill a conscious being!  It doesn't make sense to say that someone is
> "sort of" there, at least not in the conventional functionalist view.
> 

If computationalism is true, then a person is instantiated by all
equivalent computations. If you change one instantiation to something
inequivalent, then that instantiation no longer "instantiates" the
person. The person continues to exist, as long as there remain valid
computations somewhere in the universe. And in almost any of the many
worlds variants we consider in this list, that will be true.

> A fertile source of problems for functionalism involves the question
> of whether playbacks of passive recordings of brain states would be
> conscious.  If not (as Chalmers and many others would say, since they
> lack the proper counterfactual behavior), this leads to a machine with a
> dial which controls the percentage of time its elements behave according
> to a passive playback versus behaving according to active computational
> rules.  Now we can turn the knob and have the machine gradually move from
> unconsciousness to full consciousness, without changing its behavior in
> any way as we twiddle the knob.  This invokes Chalmers' "fading qualia"
> paradox and is again fatal for functionalism.
> 

It is only a problem in a single world ontology - in many worlds
ontologies this is not a problem.

> Maudlin's machines, which we have also mentioned on this list from time
> to time, further illustrate the problems in trying to draw a bright line
> between implementations and clever non-implementations of computations.
> 

Indeed  - it points to many worlds ontologies as necessary for
functionalism to be true. This is, in fact, the point of Bruno's main argument.

> In short I view functionalism as being fundamentally broken unless there
> is a much better solution to the implementation question than I am aware
> of.  Therefore we cannot assume a priori that a brain implementation and a
> computational implementation of mental states will be inherently the same.
> And I have argued in fact that they could have different properties.
> 
> Hal Finney
> 
> 






Re: Teleportation thought experiment and UD+ASSA

2006-06-21 Thread "Hal Finney"

Russell Standish <[EMAIL PROTECTED]> writes:
> On Tue, Jun 20, 2006 at 09:35:12AM -0700, "Hal Finney" wrote:
> > I think that one of the fundamental principles of your COMP hypothesis
> > is the functionalist notion, that it does not matter what kind of system
> > instantiates a computation.  However I think this founders on the familiar
> > paradoxes over what counts as an instantiation.  In principle we can
> > come up with a continuous range of devices which span the alternatives
> > from non-instantiation to full instantiation of a given computation.
> > Without some way to distinguish these, there is no meaning to the question
> > of when a computation is instantiated; hence functionalism fails.
> > 
>
> I don't follow your argument here, but it sounds interesting. Could you
> expand on this more fully? My guess is that ultimately it will depend
> on an assumption like the ASSA.

I am mostly referring to the philosophical literature on the problems of
what counts as an instantiation, as well as responses considered here
and elsewhere.  One online paper is Chalmers' "Does a Rock Implement
Every Finite-State Automaton?", http://consc.net/papers/rock.html.
Jacques Mallah (who seems to have disappeared from the net) discussed
the issue on this list several years ago.

Now, Chalmers (and Mallah) claimed to have a solution to decide when
a physical system implements a calculation.  But I don't think they
work; at least, they admit gray areas.  In fact, I think Mallah came
up with the same basic idea I am advocating, that there is a degree of
instantiation and it is based on the Kolmogorov complexity of a program
that maps between physical states and corresponding computational states.

For functionalism to work, though, it seems to me that you really need
to be able to give a yes or no answer to whether something implements a
given calculation.  Fuzziness will not do, given that changing the system
may kill a conscious being!  It doesn't make sense to say that someone is
"sort of" there, at least not in the conventional functionalist view.

A fertile source of problems for functionalism involves the question
of whether playbacks of passive recordings of brain states would be
conscious.  If not (as Chalmers and many others would say, since they
lack the proper counterfactual behavior), this leads to a machine with a
dial which controls the percentage of time its elements behave according
to a passive playback versus behaving according to active computational
rules.  Now we can turn the knob and have the machine gradually move from
unconsciousness to full consciousness, without changing its behavior in
any way as we twiddle the knob.  This invokes Chalmers' "fading qualia"
paradox and is again fatal for functionalism.

Maudlin's machines, which we have also mentioned on this list from time
to time, further illustrate the problems in trying to draw a bright line
between implementations and clever non-implementations of computations.

In short I view functionalism as being fundamentally broken unless there
is a much better solution to the implementation question than I am aware
of.  Therefore we cannot assume a priori that a brain implementation and a
computational implementation of mental states will be inherently the same.
And I have argued in fact that they could have different properties.

Hal Finney




Re: Teleportation thought experiment and UD+ASSA

2006-06-20 Thread Brent Meeker

Hal Finney wrote:
> I'll offer my thoughts on first-person indeterminacy.  This is based
> on Wei Dai's framework which I have called UD+ASSA.  I am working on
> some web pages to summarize the various conclusions I have drawn from
> this framework.  (Actually, here I am going to in effect use the SSA
> rather than the ASSA, i.e. I will work not with observer-moments but
> with entire observer lifetimes.  But the same principles apply.)
> 
> Let us consider Bruno's example where you are annihilated in Brussels
> and then copies of your final state are materialized in Washington and
> Moscow, and allowed to continue to run.  What can we say about your
> subjective first-person expectations in this experiment?
> 
> Here is how I would approach the problem.  It is a very straightforward
> computational procedure (in principle).  Consider any hypothetical
> subjective, first person stream of consciousness.  This would basically
> be a record of the thoughts and experiences of a hypothetical observer.
> Let us assume that this can be written and recorded in some form.
> Perhaps it is a record of neural firing patterns over the course of the
> observer's lifetime, or perhaps a more compressed description based on
> such information.
> 
> The question I would aim to answer is this: for any proposed, hypothetical
> first-person lifetime stream of consciousness, how much measure does
> this hypothetical subjective lifetime acquire from the third-person
> events in the universe?

How do you draw the line in a 3rd person view?  From the 3rd person view,
all the events in my body are events in the universe.

Brent Meeker




Re: Teleportation thought experiment and UD+ASSA

2006-06-20 Thread Saibal Mitra

I don't understand why you consider the measures of the programs that do the
simulations. The ''real'' measure should be derived from the algorithmic
complexity of the laws of physics that describe how the computers/brains
work. If you know for certain that a computation will be performed in this
universe, then it doesn't matter how it is performed.

The algorithmic complexity of the program needed to simulate a brain refers
to a ''personal universe''. You can think of the brain as a machine that is
simulating a virtual world in which the qualia we experience exist. That
world also exists independent of our brain in a universe of its own. This
world has a very small measure defined by the very large algorithmic
complexity of the program needed to specify the brain.


Saibal



From: "Hal Finney" <[EMAIL PROTECTED]>
To: 
Sent: Tuesday, June 20, 2006 06:35 PM
Subject: Re: Teleportation thought experiment and UD+ASSA


>
> Bruno writes:
> > Hal,
> >
> > It seems to me that you are introducing a notion of physical universe,
> > and then use it to reintroduce a notion of first person death, so that
> > you can bet you will be the one "annihilated" in Brussels.
>
> I should first mention that I did not anticipate the conclusion that
> I reached when I did that analysis.  I did not expect to conclude that
> teleportation like this would probably not work (speaking figuratively).
> This was not the starting point of the analysis, but the conclusion.
>
> The starting point was the framework I have described previously, which
> can be stated very simply as that the measure of an information pattern
> comes from the universal distribution of Kolmogorov.  I then applied this
> analysis to specific information patterns which represent subjective,
> first person lifetime experiences.  I concluded that the truncated version
> which ends when the teleportation occurs would probably have higher
> measure than the ones which proceed through and beyond the teleportation.
>
> Although I worked in terms of a specific physical universe, that is
> a short-cut for simplicity of exposition.  The general case is to simply
> ask for the K measure of each possible first-person subjective life
> experience - what is the shortest program that produces each one.  I
> assume that the shortest program will in fact have two parts, one which
> creates a universe and the second which takes that universe as input
> and produces the first-person experience record as output.
>
> This leads to a Schmidhuber-like ensemble where we would consider
> all possible universes and estimate the contribution of each one to
> the measure of a particular first-person experience.  It is important
> though to keep in mind that in practice the only universe which adds
> non-negligible measure would be the one we are discussing.  In other
> words, consider the first person experience of being born, living your
> life, travelling to Brussels and stepping into a teleportation machine.
> A random, chaotic universe would add negligibly to the measure of this
> first-person life experience.  Likewise for a universe which only evolves
> six-legged aliens on some other planet.  So in practice it makes sense
> to restrict our attention to the (approximately) one universe which has
> third-person objective events that do add significant measure to the
> instantiation of these abstract first-person experiences.
>
>
> > You agree that this is just equivalent to negating the comp hypothesis.
> > You would not use (classical) teleportation, nor accept a digital
> > artificial brain, all right? Do I miss something?
>
> It is perhaps best to say that I would not do these things
> *axiomatically*.  Whether a particular teleportation technology would
> be acceptable would depend on considerations such as I described in my
> previous message.  It's possible that the theoretical loss of measure for
> some teleportation technology would be small enough that I would do it.
>
> As far as using an artificial brain, again I would look to this kind of
> analysis.  I have argued previously that a brain which is much smaller
> or faster than the biological one should have much smaller measure, so
> that would not be an appealing transformation.  OTOH an artificial brain
> could be designed to have larger measure, such as by being physically
> larger or perhaps by having more accurate and complete memory storage.
> Then that would be appealing.
>
> I think that one of the fundamental principles of your COMP hypothesis
> is the functionalist notion, that it does not matter what kind of system
> instantiates a computation.  However I think this founders on the familiar
> paradoxes over what counts as an instantiation.

Re: Teleportation thought experiment and UD+ASSA

2006-06-20 Thread Russell Standish

On Tue, Jun 20, 2006 at 09:35:12AM -0700, "Hal Finney" wrote:
> 
> The starting point was the framework I have described previously, which
> can be stated very simply as that the measure of an information pattern
> comes from the universal distribution of Kolmogorov.  I then applied this
> analysis to specific information patterns which represent subjective,
> first person lifetime experiences.  I concluded that the truncated version
> which ends when the teleportation occurs would probably have higher
> measure than the ones which proceed through and beyond the teleportation.

Comment to Bruno - Hal starts with the ASSA. I'm pretty sure this
negates functionalism and hence COMP. 

I just checked my book - I noted Hal as a staunch member of the ASSA
camp :).

> 
> This leads to a Schmidhuber-like ensemble where we would consider
> all possible universes and estimate the contribution of each one to
> the measure of a particular first-person experience.  It is important

Echoes of my previous correspondence to Bruno - it would seem
Schmidhuber is an ASSA supporter too...

> 
> I think that one of the fundamental principles of your COMP hypothesis
> is the functionalist notion, that it does not matter what kind of system
> instantiates a computation.  However I think this founders on the familiar
> paradoxes over what counts as an instantiation.  In principle we can
> come up with a continuous range of devices which span the alternatives
> from non-instantiation to full instantiation of a given computation.
> Without some way to distinguish these, there is no meaning to the question
> of when a computation is instantiated; hence functionalism fails.
> 

I don't follow your argument here, but it sounds interesting. Could you
expand on this more fully? My guess is that ultimately it will depend
on an assumption like the ASSA.


-- 
*PS: A number of people ask me about the attachment to my email, which
is of type "application/pgp-signature". Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.


A/Prof Russell Standish                  Phone 8308 3119 (mobile)
Mathematics                              0425 253119 (")
UNSW SYDNEY 2052                         [EMAIL PROTECTED]
Australia                                http://parallel.hpc.unsw.edu.au/rks
International prefix  +612, Interstate prefix 02



--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list
-~--~~~~--~~--~--~---



Re: Teleportation thought experiment and UD+ASSA

2006-06-20 Thread "Hal Finney"

Bruno writes:
> Hal,
>
It seems to me that you are introducing a notion of physical universe,
and then use it to reintroduce a notion of first person death, so that
you can bet you will be the one "annihilated" in Brussels.

I should first mention that I did not anticipate the conclusion that
I reached when I did that analysis.  I did not expect to conclude that
teleportation like this would probably not work (speaking figuratively).
This was not the starting point of the analysis, but the conclusion.

The starting point was the framework I have described previously, which
can be stated very simply: that the measure of an information pattern
comes from the universal distribution of Kolmogorov.  I then applied this
analysis to specific information patterns which represent subjective,
first person lifetime experiences.  I concluded that the truncated version
which ends when the teleportation occurs would probably have higher
measure than the ones which proceed through and beyond the teleportation.

Although I worked in terms of a specific physical universe, that is
a short-cut for simplicity of exposition.  The general case is to simply
ask for the K measure of each possible first-person subjective life
experience - what is the shortest program that produces each one.  I
assume that the shortest program will in fact have two parts, one which
creates a universe and the second which takes that universe as input
and produces the first-person experience record as output.
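This two-part structure can be made concrete with a toy calculation (a sketch only: the bit-lengths below are invented for illustration, since real Kolmogorov complexities are uncomputable). One universe-generator/extractor pair of total length L1 + L2 bits contributes 2^-(L1 + L2) to the experience's measure:

```python
# Sketch of the two-part program, with invented lengths (illustration only).
# Part 1 (L1 bits) generates a universe; part 2 (L2 bits) takes that universe
# as input and outputs the first-person experience record.  Under the
# universal distribution the pair contributes 2^-(L1 + L2).

def contribution(l_universe: int, l_extract: int) -> float:
    """Measure contributed by one (universe-generator, extractor) program pair."""
    return 2.0 ** -(l_universe + l_extract)

# A universe with simpler laws (shorter part 1) dominates, all else equal:
simple_laws = contribution(50, 200)     # hypothetical bit counts
complex_laws = contribution(120, 200)
assert simple_laws > complex_laws
```

The hypothetical numbers only illustrate the exponential penalty: every extra bit of lawful structure halves the contribution.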

This leads to a Schmidhuber-like ensemble where we would consider
all possible universes and estimate the contribution of each one to
the measure of a particular first-person experience.  It is important
though to keep in mind that in practice the only universe which adds
non-negligible measure would be the one we are discussing.  In other
words, consider the first person experience of being born, living your
life, travelling to Brussels and stepping into a teleportation machine.
A random, chaotic universe would add negligibly to the measure of this
first-person life experience.  Likewise for a universe which only evolves
six-legged aliens on some other planet.  So in practice it makes sense
to restrict our attention to the (approximately) one universe which has
third-person objective events that do add significant measure to the
instantiation of these abstract first-person experiences.
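A minimal numeric sketch of this restriction (all complexities invented for illustration): summing 2^-(K(U) + K(experience | U)) over candidate universes, the one lawful universe containing the observer supplies essentially all of the measure.

```python
# Schmidhuber-like ensemble sketch (invented bit counts, illustration only):
# total measure of a first-person experience is the sum over universes U of
# 2^-(K(U) + K(experience | U)).

universes = {
    # name:       (K(U), K(experience | U)) in hypothetical bits
    "ours":       (400, 300),   # lawful universe actually containing the observer
    "chaotic":    (200, 800),   # cheap laws, but the record must be coded by hand
    "six-legged": (450, 600),   # evolves someone else entirely
}

total = sum(2.0 ** -(ku + kx) for ku, kx in universes.values())
ours = 2.0 ** -(400 + 300)

# The one relevant universe contributes virtually all of the measure:
assert ours / total > 0.999
```

The exponential weighting is what licenses restricting attention to "(approximately) one universe": rival universes are suppressed by factors like 2^-300 or worse.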


> You agree that this is just equivalent to negating the comp hypothesis.
> You would not use (classical) teleportation, nor accept a digital
> artificial brain, all right? Am I missing something?

It is perhaps best to say that I would not do these things
*axiomatically*.  Whether a particular teleportation technology would
be acceptable would depend on considerations such as I described in my
previous message.  It's possible that the theoretical loss of measure for
some teleportation technology would be small enough that I would do it.

As far as using an artificial brain, again I would look to this kind of
analysis.  I have argued previously that a brain which is much smaller
or faster than the biological one should have much smaller measure, so
that would not be an appealing transformation.  OTOH an artificial brain
could be designed to have larger measure, such as by being physically
larger or perhaps by having more accurate and complete memory storage.
Then that would be appealing.

I think that one of the fundamental principles of your COMP hypothesis
is the functionalist notion, that it does not matter what kind of system
instantiates a computation.  However I think this founders on the familiar
paradoxes over what counts as an instantiation.  In principle we can
come up with a continuous range of devices which span the alternatives
from non-instantiation to full instantiation of a given computation.
Without some way to distinguish these, there is no meaning to the question
of when a computation is instantiated; hence functionalism fails.

My approach (not original to me) is to recognize that there is a degree
of instantiation, as I have described via the conditional Kolmogorov
measure (i.e. given a physical system, how much does it help a minimal
computation to produce the desired output).  This then leads very
naturally to the analysis I provided in my previous message, which
attempted to estimate the conditional K measure for the hypothetical
first-person computations that were being potentially instantiated by
the given third-party physical situation.
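True conditional Kolmogorov complexity is uncomputable, but a standard computable stand-in (my own illustrative assumption, not Hal's proposal) approximates K(record | universe) by how much less a compressor needs once the "universe" is available as context:

```python
import zlib

# Compression-based upper-bound proxy for conditional Kolmogorov complexity:
#   K(x | y)  ~  len(compress(y + x)) - len(compress(y))
# zlib is a crude stand-in for a minimal program; the data below are toy
# placeholders for third-person and first-person records.

def cond_complexity(record: bytes, universe: bytes) -> int:
    """Approximate K(record | universe) in bytes (never negative)."""
    baseline = len(zlib.compress(universe, 9))
    joint = len(zlib.compress(universe + record, 9))
    return max(joint - baseline, 0)

universe = b"spike train 0110100101 " * 300   # stand-in third-person record
embedded = b"spike train 0110100101 " * 60    # experience the universe "contains"
foreign = bytes(range(256)) * 6               # experience it does not help with

# The universe adds far more measure to the record it actually instantiates:
assert cond_complexity(embedded, universe) < cond_complexity(foreign, universe)
```

Each resulting length L (in bits) would contribute 2^-L of measure, so the record the universe cheaply "explains" is the one we would informally say it instantiates.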

Hal Finney




Re: Teleportation thought experiment and UD+ASSA

2006-06-20 Thread Bruno Marchal

Hal,

It seems to me that you are introducing a notion of physical universe, 
and then use it to reintroduce a notion of first person death, so that 
you can bet you will be the one "annihilated" in Brussels.

You agree that this is just equivalent to negating the comp hypothesis. 
You would not use (classical) teleportation, nor accept a digital 
artificial brain, all right? Am I missing something?

Bruno


On 20 June 2006, at 08:47, Hal Finney wrote:

>
> I'll offer my thoughts on first-person indeterminacy.  This is based
> on Wei Dai's framework which I have called UD+ASSA.  I am working on
> some web pages to summarize the various conclusions I have drawn from
> this framework.  (Actually, here I am going to in effect use the SSA
> rather than the ASSA, i.e. I will work not with observer-moments but
> with entire observer lifetimes.  But the same principles apply.)
>
> Let us consider Bruno's example where you are annihilated in Brussels
> and then copies of your final state are materialized in Washington and
> Moscow, and allowed to continue to run.  What can we say about your
> subjective first-person expectations in this experiment?
>
> Here is how I would approach the problem.  It is a very straightforward
> computational procedure (in principle).  Consider any hypothetical
> subjective, first person stream of consciousness.  This would basically
> be a record of the thoughts and experiences of a hypothetical observer.
> Let us assume that this can be written and recorded in some form.
> Perhaps it is a record of neural firing patterns over the course of the
> observer's lifetime, or perhaps a more compressed description based on
> such information.
>
> The question I would aim to answer is this: for any proposed,
> hypothetical first-person lifetime stream of consciousness, how much
> measure does this hypothetical subjective lifetime acquire from the
> third-person events in the universe?
>
> The answer is very simple: it is the conditional Kolmogorov measure of
> the subjective lifetime record, given the universe as input.  In other
> words, consider the shortest program which, given the universe as input,
> produces that precise subjective lifetime record as output; if the length
> of that program is L, then this universe contributes 1/2^L to the measure
> of that subjective lifetime.
>
> Note that I am not trying to start from the universe and decide what the
> first-person stream of consciousness is; rather, I compute the numerical
> degree to which the universe instantiates any first-person stream of
> consciousness.  However, this does in effect answer the first question,
> since we can consider all possible streams of consciousness, and
> determine which one(s) the universe mostly adds measure to.  These would
> be the ones that we would informally say that the universe instantiates.
>
> Now, let me illustrate how this would be applied to the situation in
> question, and some other thought experiments.  Specifically, let us
> imagine three hypothetical streams of consciousness: B goes through life
> until the moment the subject is annihilated in Brussels, then stops.
> W goes through life as does B but continues with the life experiences
> from Washington.  And M is like W, going through life until the event
> in Brussels but then continuing with the events in Moscow.
>
> Normally we only consider first-person experiences like M and W when
> we discuss this experiment, where the consciousness "jumps" to Moscow
> or Washington respectively, but it is also useful to consider B, which
> corresponds to dying in Brussels.
>
> Let me first deal with a trivial case to illustrate one of the issues
> that arise when we compare first-person experiences that stop at different
> times.  Imagine a conventional lifetime where a person lives to a ripe
> old age of 90.  Now imagine the truncated version of that which we
> cut off arbitrarily at age 50.  Obviously the universe will contribute
> significant measure to both of these first-person experience streams.
> Which one will get more?
>
> I would suggest that it is actually the 90 year old lifespan which
> will have more measure.  The reason is because any program to turn the
> third-person record of all events into a meaningful, compact record of
> the lifetime experience is going to have to deal with the enormous gap
> between the fundamental events of physics, which happen at the Planck
> scale, and the fundamental events of consciousness, which although
> small to us are at an enormously larger scale compared to physics.
> This means that the program to do this conversion is going to have to
> be intensively data driven; it will have to identify tenuous and rather
> amorphous patterns of physical events, in order to translate them into
> the neurophysiological events that we would want to record.
>
> Given this structure, the (approximate) moment of physical death will
> be easily recognized, as it is that moment when 

RE: Teleportation thought experiment and UD+ASSA

2006-06-20 Thread Stathis Papaioannou


Hal,
 
Are you suggesting that the teleportation B->W,M could actually take place from a third person perspective, but that the subject entering the teleporter at B might, from his own point of view, actually die - not come out at either W or M? I know there are many people who would say that the subject *definitely* would die, and that teleportation would be a form of suicide masquerading as transportation (maybe the people materialising at W and M are zombies or something), but I don't see how it is even logically possible that the teleportation may sometimes work and sometimes not from a first person perspective.
 
In this and other posts it seems that you have a very different view from my own on what it means to die. I would say that if a person (or for that matter any object) moves from spacetime coordinates (x1,t1) to (x2,t2), that is equivalent to saying that he is annihilated at (x1,t1) and rematerializes at (x2,t2), albeit with a discontinuity. Ordinary movement through spacetime is the limiting case where the discontinuity approaches zero. In these examples I have assumed constant measure, but I don't see how increasing or decreasing measure could possibly make a difference. If the person at (x2,t2) has the same memories and other mental attributes as the person at (x1,t1), then ipso facto he has survived the move. If there are two instantiations at (x1a,t1a) and (x1b,t1b) in perfect lockstep, but only one at (x2,t2), on what basis could you choose one of a or b to die and the other to survive? It is a mistake to think that a particular individual stretches like a piece of string across spacetime, dying if the connection between the different instantiations is cut. Each instantiation stands on its own, and the "connection" between different instantiations is determined by their information content, like the relationship between different elements in a set.
 
Stathis Papaioannou



> To: everything-list@googlegroups.com
> Subject: Teleportation thought experiment and UD+ASSA
> Date: Mon, 19 Jun 2006 23:47:26 -0700
> From: [EMAIL PROTECTED]
>
> [Hal Finney's original message, quoted in full above, elided.]