Re: problem of size '10

2010-03-10 Thread Jack Mallah
--- On Mon, 3/8/10, Stathis Papaioannou stath...@gmail.com wrote:
 In the original fading qualia thought experiment the artificial neurons could 
 be considered black boxes, the consciousness status of which is unknown. The 
 conclusion is that if the artificial neurons lack consciousness, then the 
 brain would be partly zombified, which is absurd.

That's not the argument Chalmers made, and indeed he couldn't have, since he 
believes zombies are possible; he instead talks about fading qualia.

If you start out believing that computer zombies are NOT possible, the original 
thought experiment is moot; you already believe the conclusion.   His argument 
is aimed at dualists, who are NOT computationalists to start out.

Since partial consciousness is possible, which he didn't take into account, his 
argument _fails_; a dualist who does believe zombies are possible should have 
no problem believing that partial zombies are possible too.  So dualists don't 
have to be computationalists after all.

 I think this holds *whatever* is in the black boxes: computers, biological 
 tissue, a demon pulling strings or nothing.

Partial consciousness is possible and again ruins any such argument.  If you 
don't start out believing that consciousness can be based on anything whatsoever 
(e.g. nothing), you don't have any reason to accept the conclusion.

 whatever is going on inside the putative zombie's head, if it reproduces the 
 I/O behaviour of a human, it will have the mind of a human.

That is behaviorism, not computationalism, and I certainly don't believe it.  I 
wouldn't say that a computer that uses a huge lookup table algorithm would be 
conscious.
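
To make the distinction concrete, here is a toy sketch (my own illustration; the 
functions and names are made up, in Python).  Both responders below have exactly 
the same I/O behaviour on the covered domain, but only one of them computes 
anything at query time:

def compute_responder(n: int) -> int:
    """Actually computes the answer (here, the parity of n)."""
    return bin(n).count("1") % 2

# A huge lookup table built in advance by recording the computed answers.
LOOKUP = {n: compute_responder(n) for n in range(1024)}

def lookup_responder(n: int) -> int:
    """Reproduces the same I/O by table lookup, with no computation at query time."""
    return LOOKUP[n]

# Behaviourally indistinguishable on the covered domain...
assert all(compute_responder(n) == lookup_responder(n) for n in range(1024))
# ...so behaviourism treats them alike, while computationalism need not.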

 The requirement that a computer be able to handle the counterfactuals in 
 order to be conscious seems to have been brought in to make computationalists 
 feel better about computationalism.

Not at all.  It was always part of the notion of computation.  Would you buy a 
PC that only plays a movie?  It must handle all possible inputs in a reliable 
manner.

 Brains are all probabilistic in that disaster could at any point befall them 
 causing them to deviate widely from normal behaviour

It is not a problem; it just seems like one at first glance.  Such cases can be 
treated as input to the formal system; for some inputs, the device halts or acts 
differently.  Hence my talk of derailable computations in my MCI paper.
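
As a minimal sketch of what I mean (my own toy example, not taken from the MCI 
paper itself): the possible disaster is simply folded in as an input to the 
formal system, so the device still implements a definite, if larger, computation.

def step(state: int, disaster: bool) -> int:
    """One step of a toy counter that derails (sticks at -1) if disaster strikes."""
    if disaster or state < 0:
        return -1          # derailed branch
    return state + 1       # normal branch

# A normal history and a derailed history are just two input sequences
# of the same (derailable) computation.
normal, derailed = [0], [0]
for t in range(5):
    normal.append(step(normal[-1], disaster=False))
    derailed.append(step(derailed[-1], disaster=(t == 2)))
print(normal)    # [0, 1, 2, 3, 4, 5]
print(derailed)  # [0, 1, 2, -1, -1, -1]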

 or else prevent them from deviating at all from a rigidly determined pathway

If that were done, that would change what computation is being implemented.  
Depending on how it was done, it might or might not affect consciousness.  We 
can't do such an experiment.

--- On Tue, 3/9/10, Stathis Papaioannou stath...@gmail.com wrote:
 Suppose box A contains a probabilistic mechanism that displays the right I/O 
 behaviour 99% of the time. Would the consciousness of the system be perfectly 
 normal until the box misbehaved ... ?

I'd expect it to be.  As above, I'd treat it as a box with input.

Now, as far as we know, there really is no such thing as true randomness.  It's 
all down to initial conditions (which are certainly to be treated as input) or 
to quantum splitting (which is again deterministic).  I don't believe in true 
randomness.

However, if true randomness is possible, then you'd have the same problem with 
Platonia.  In addition to having all of the deterministic Turing machines, 
you'd have all of the probabilistic Turing machines.  It is not an issue that 
bears on physicalism.




  




Re: problem of size '10

2010-03-07 Thread Jack Mallah
--- On Tue, 3/2/10, David Nyman david.ny...@gmail.com wrote:
 computationalist theory of mind would amount to the claim that consciousness 
 supervenes only on realisations capable of instantiating this complete range 
 of underlying physical activity (i.e. factual + counterfactual) in virtue of 
 relevant physical laws.

Right (assuming physicalism).  Of course, implementing only part of the range 
of a computation that leads to consciousness might lead to the same 
consciousness, if it is the right part.

 In the case of a mechanism with the appropriate arrangements for 
 counterfactuals - i.e. one that in principle at least could be re-run in 
 such a way as to elicit the counterfactual activity - the question of whether 
 the relevant physical law is causal, or merely inferred, would appear to be 
 incidental.

Causality is needed to define implementation of a computation because otherwise 
we only have correlations.  Correlations could be coincidental or due to a 
common cause (such as the running of a movie).

--- On Fri, 3/5/10, Stathis Papaioannou stath...@gmail.com wrote:
 If the inputs to the remaining brain tissue are the same as they would have 
 been normally then effectively you have replaced the missing parts with a 
 magical processor, and I would say that the thought experiment shows that the 
 consciousness must be replicated in this magical processor. 

No, that's wrong. Having the right inputs could be due to luck (which is 
conceptually the cleanest way), or it could be due to pre-recording data from a 
previous simulation.  The only consciousness present is the partial one in the 
remaining brain.

 computationalism is only a subset of functionalism.

I used to think so but the terms don't quite mean what they sound like they 
should.  It's a common misconception that functionalism means computationalism 
generalized to include analog and noncomputable systems.

Functionalism as philosophers use it focuses on input and output.  It holds 
that any system which behaves the same in terms of i/o and which acts the same 
in terms of memory effects has the same consciousness.  There are different 
ways to make this more precise, and I believe that computationalism is one way, 
but it is not the only way.  For example, some functionalists would claim that 
a 'swampman' who spontaneously formed in a swamp due to random thermal motion 
of atoms, but who is physically identical to a human and coincidentally speaks 
perfect English, would not be conscious because he didn't have the right 
inputs.  I obviously reject that; 'swampman' would be a normal human.

Computationalism doesn't necessarily mean only digital computations, and it 
can include super-Turing machines that perform infinitely many steps in finite time.  
The main characteristic of computationalism is its identification of 
consciousness with systems that causally solve initial-value math problems 
given the right mapping from system to formal states.
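
As a toy illustration of that last sentence (my own sketch; the 'voltage' 
dynamics and the threshold mapping are made up for the example), here is a 
physical system implementing a formal computation under a mapping from its 
physical states to formal states:

def physical_step(v: float) -> float:
    """Toy physical dynamics: a voltage that halves on every tick."""
    return 0.5 * v

def formal_state(v: float) -> int:
    """Mapping from physical state to formal state: above threshold counts as 1."""
    return 1 if v > 1.0 else 0

v = 5.0
formal_run = []
for _ in range(6):
    formal_run.append(formal_state(v))
    v = physical_step(v)
print(formal_run)   # [1, 1, 1, 0, 0, 0] -- the formal states the physics instantiates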

--- On Fri, 3/5/10, Charles charlesrobertgood...@gmail.com wrote:
 The only fundamental difficulty I can see with this is if the brain actually 
 uses quantum computation, as suggested by some evidence that photosynthesis 
 does (quoted by Bruno in another thread) - in which case it might be 
 impossible, even in principle, to reproduce the activity of the rest of the 
 brain (I'm not sure whether it would, but it seems a lot more likely).

It seems very unlikely that the brain uses QC for neural processes, which are 
based on electrical and chemical signals which decohere rapidly.  Also, I 
wouldn't make too much of the hype about photosynthesis using it - that seems 
an exaggeration; you can't make a general purpose quantum computer just by 
having some waves interfere.  Protein folding might use it in a sense but again 
nothing that could be used for a real QC.

But, that aside, even a quantum computer could be made partial.  I think that 
due to the no-signalling condition, the partial QC's interaction with the other 
part amounts to some combination of unitary operations which can be performed on 
the partial QC, and entanglement-induced decoherence.  You would still have to 
have something entangled with the partial QC but it wouldn't have to perform 
the computations associated with the missing parts if you perform the right 
operations on the remaining parts and know when to entangle or recohere things, 
I think.

In any case, a normal classical computer could simulate a QC - which should be 
good enough for a computationalist - and you could make the simulation partial 
in the normal way.
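
For what it's worth, here is a minimal sketch of what such a classical 
simulation looks like (my own illustration in Python/NumPy; the function name is 
made up).  The full amplitude vector is stored explicitly, so the memory cost is 
2^n amplitudes, but in principle every gate of the QC is reproduced exactly:

import numpy as np

def apply_1q_gate(state, gate, target, n):
    """Apply a 2x2 unitary to qubit `target` of an n-qubit state vector (length 2**n)."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [target]))  # contract the gate with that axis
    psi = np.moveaxis(psi, 0, target)                    # put the qubit axis back in place
    return psi.reshape(-1)

n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
psi = np.zeros(2 ** n, dtype=complex)
psi[0] = 1.0                                   # start in |000>
for q in range(n):                             # put each qubit into superposition
    psi = apply_1q_gate(psi, H, q, n)
print(np.round(np.abs(psi) ** 2, 3))           # uniform over all 8 outcomes, as expected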

I should also note that if you _can't_ make a partial quantum brain, you 
probably don't have to worry about the things my argument is designed to 
attack, either, such as substituting _part_ of the brain with a movie (with no 
change in the rest) and invoking the 'fading qualia' argument.




  


Re: problem of size '10

2010-03-04 Thread Jack Mallah
--- On Wed, 3/3/10, Stathis Papaioannou stath...@gmail.com wrote:
 Jack Mallah jackmal...@yahoo.com wrote:
  For partial replacement scenarios, where part of a brain has 
  counterfactuals and the rest doesn't, see my partial brain paper: 
  http://cogprints.org/6321/
 
 I've finally come around to reading this paper. You may or may not be aware 
 that there is a condition called Anton's syndrome in which some patients who 
 are blind as a result of a lesion to their occipital cortex are unaware that 
 they are blind. It is not a matter of denial: the patients honestly believe 
 they have normal vision, and confabulate when asked to describe things placed 
 in front of them. They are deluded about their qualia, in other words.

Interesting, Stathis. I hadn't heard of that before. Despite the superficial 
similarity, though, it's very different from the partial brains I consider in 
the paper.

 similarly in your paper where you consider a gradual removal of brain tissue. 
 It would have to be very specific surgery to produce the sort of delusional 
 state you describe.

I'm not sure if you overlooked it but the key condition in my paper is that the 
inputs to the remaining brain are identical to what they would have been if the 
whole brain were present.  Thus, the neural activity in the partial brain is by 
definition identical to what would have occurred in the corresponding part of a 
whole brain.  It is of course grossly implausible that this could be done in 
practice for a real biological brain (for one thing, you'd pretty much have to 
know in advance the microscopic details of everything that would have gone on 
in the removed part of the brain, or else guess and get incredibly lucky), but 
it presents no difficulties in principle for a digital simulation, and in any 
case is a thought experiment.
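
For a digital toy 'brain' the construction is straightforward.  The sketch below 
(my own illustration; the network is an arbitrary random threshold net, and all 
names are hypothetical) records, during a full run, the signals the to-be-removed 
half sends across the cut, then re-runs only the kept half on those recorded 
inputs and checks that its activity is identical by construction:

import numpy as np

rng = np.random.default_rng(0)
N, T = 8, 20
W = rng.normal(size=(N, N))            # fixed "synaptic" weights of a toy threshold network
keep, drop = slice(0, 4), slice(4, 8)  # which units survive, which are removed

x0 = rng.integers(0, 2, N)

# Full-brain run: record what the to-be-removed part feeds into the kept part.
x = x0.copy()
kept_trace, boundary = [], []
for _ in range(T):
    boundary.append(W[keep, drop] @ x[drop])   # signal crossing the cut at this step
    x = (W @ x > 0).astype(int)                # whole-brain update
    kept_trace.append(x[keep].copy())

# Partial-brain run: only the kept units, driven by the pre-recorded boundary inputs.
y = x0[keep].copy()
for t in range(T):
    y = (W[keep, keep] @ y + boundary[t] > 0).astype(int)
    assert np.array_equal(y, kept_trace[t])    # identical activity, by construction
print("The partial run reproduces the kept part's activity exactly.")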





  




Re: problem of size '10

2010-03-04 Thread Jack Mallah
Bruno, I hope you feel better.  My quarrel with you is nothing personal.

--- Bruno Marchal marc...@ulb.ac.be wrote:
 Jack Mallah wrote:
  Bruno, you don't have to assume any 'prescience'; you just have to assume 
  that counterfactuals count.  No one but you considers that 'prescience' or 
  any kind of problem.
 
 This would lead to fading qualia in the case of progressive substitution from 
 the Boolean Graph to the movie graph.

I thought you said you don't use the 'fading qualia' argument (see below), 
which in any case is invalid as my partial brain paper shows.  So, you are 
wrong.

  gradually replace the components of the computer (which have the standard 
  counterfactual (if-then) functioning) with components that only play out 
  a pre-recorded script or which behave correctly by luck.
  
  You could then invoke the 'fading qualia' argument (qualia could 
  plausibly not vanish either suddenly or by gradually fading as the 
  replacement proceeds) to argue that this makes no difference to the 
  consciousness.  My partial brain paper shows that the 'fading qualia' 
  argument is invalid.
  
  I am not using the 'fading qualia' argument.
  
  Then someone else on the list must have brought it up at some point.  In 
  any case, it was the only interesting argument in favor of your position, 
  which was not trivially obviously invalid.  My PB paper shows that it is 
  invalid though.
 
 ?

What do you mean by "?"?

  I guess by 'physical supervenience' you mean supervenience on physical 
  activity only.
 
 Not at all. In the comp theory, it means supervenience on the physical 
 realization of a computation.

So, it includes supervenience on the counterfactuals?  If so, the movie 
obviously doesn't have the right counterfactuals, so your MGA fails.  I see 
nothing nontrivial in your arguments.

   Computationalism assumes supervenience on both physical activity and 
 physical laws (aka counterfactuals).
 
 ? You evacuate the computation?

I have no idea what you mean by that.  Computations are implemented based on 
both activity and counterfactuals, which is the same as saying they supervene 
on both.

 Consciousness does not arise from the movie, because the movie has the wrong 
 physical laws.  There is nothing about that that has anything to do with 
 'prescience'.
 
 This is not computationalism.

Of course it is.  Any mainstream computationalist agrees that the right 
counterfactuals (aka the right 'physical' laws) are needed.  Certainly Chalmers 
would agree.  What else would you call this position?

(I should note that when I say 'physical laws' it might instead be Platonic 
laws, if Platonic stuff exists in the right way.  I say 'physical' for short.  
I am agnostic on whether Platonic stuff exists in a strong enough sense.  In 
any case I maintain that it *could* be physical, as far as we know.)

  Bruno, try to read what I write instead of putting in your own meanings to 
  my words.
 
 I try politely to make sense to what you say by interpreting favorably your 
 term.

There is no polite way to say this: C'est merde.  You tried to twist my words 
towards your position.  Don't.

 Show the error, then.

I have already done so (for MGA): You claim that taking counterfactuals into 
account amounts to assuming 'prescience' and is thus implausible, but that's 
NOT true. Using counterfactuals/laws is how computation is defined.

Your repeated claims that the error has not been pointed out are a standard 
crackpot behavior.

 It helps to be agnostic on primitive matter before trying to understand the 
 reasoning.

In that case I should be the perfect candidate, being that I am agnostic on 
Platonism.  Your arguments don't sway me because they don't make any sense.

Remember, I came to this list because like many others here I thought up the 
'everything that exists mathematically exists in the same way we do' idea by 
myself, and only found out online that others had thought of it too.  So I'm 
not prejudiced against it.  I just don't know if it's true, and I think it's 
important not to jump to conclusions.  Your 'work' has had no effect on my 
views on that.




  




Re: problem of size '10

2010-03-02 Thread Jack Mallah
I finally figured out what was happening to my emails: the spam filter got 
overly aggressive and it was sending some of the list posts to the spam folder, 
but letting others into the inbox.  The post I'm replying to now was one that 
was hidden that way.

--- On Sun, 2/14/10, Bruno Marchal marc...@ulb.ac.be wrote:
  Jack Mallah wrote:
  What is false is your statement that The only way to escape the conclusion 
  would be to attribute consciousness to a movie of a computation.  So your 
  argument is not valid.
 
 OK. I was talking in a context which is missing. You can also conclude in the 
 prescience of the neurons for example. The point is that if you assume the 
 physical supervenience thesis, you have to abandon comp and/or to introduce 
 magical (non Turing emulable) property in matter.

That is false. Bruno, you don't have to assume any 'prescience'; you just have 
to assume that counterfactuals count.  No one but you considers that 
'prescience' or any kind of problem.

  gradually replace the components of the computer (which have the standard 
  counterfactual (if-then) functioning) with components that only play out a 
  pre-recorded script or which behave correctly by luck.
 
  You could then invoke the 'fading qualia' argument (qualia could plausibly 
  not vanish either suddenly or by gradually fading as the replacement 
  proceeds) to argue that this makes no difference to the consciousness.  My 
  partial brain paper shows that the 'fading qualia' argument is invalid.
 
 I am not using the 'fading qualia' argument.

Then someone else on the list must have brought it up at some point.  In any 
case, it was the only interesting argument in favor of your position, which was 
not trivially obviously invalid.  My PB paper shows that it is invalid though.

  I think there was also a claim that counterfactual sensitivity amounts to 
  'prescience' but that makes no sense and I'm pretty sure that no one (even 
  those who accept the rest of your arguments) agrees with you on that.
 
 It is a reasoning by reductio ad absurdum. If you agree (with any 
 computationalist) that we cannot attribute prescience to the neurons, then 
 the physical activity of the graph is the same as the physical activity of 
 the movie, so that physical supervenience + comp entails that the 
 consciousness supervenes on the movie (and this is absurd, mainly because the 
 movie does not compute anything).

I guess by 'physical supervenience' you mean supervenience on physical activity 
only.  That is not what computationalism assumes. Computationalism assumes 
supervenience on both physical activity and physical laws (aka 
counterfactuals).  There is no secret about that.  Consciousness does not arise 
from the movie, because the movie has the wrong physical laws.  There is 
nothing about that that has anything to do with 'prescience'.

Now, there is a school of thought that says that physical laws don't exist per 
se, and are merely descriptions of what is already in the physical activity.  A 
computationalist physicalist obviously rejects that view.

  Counterfactual behaviors are properties of the overall system and are 
  mathematically defined.
 
 But that is the point: the counterfactuals are in the math.
 Not in the physical activity.

Bruno, try to read what I write instead of putting in your own meanings to my 
words.

A physical system has mathematically describable properties.  Among these are 
the physical activity and also the counterfactuals.  There is no distinction to 
make on that basis.  That is what I was saying.  That has nothing whatsoever to 
do with Platonism.

 machine ... its next personal state has to be recovered from the statistics 
 on the possible relative continuations.

No, nyet, non, and hell no.  That is merely your view, which I obviously reject 
and which has nothing to recommend it - especially NOT computationalism, your 
erroneous claims to the contrary.




  




Re: problem of size '10

2010-02-24 Thread Jack Mallah
Last post didn't show up in email.  Seems random.

--- On Tue, 2/23/10, Jesse Mazer laserma...@gmail.com wrote:
 -even if there was a one-to-one relationship between distinct computations 
 and distinct observer-moments with distinct qualia, very similar computations 
 could produce very similar qualia,

Sure. So you want to know if there are different (though similar in certain 
ways) computations that would produce _identical_ consciousness?  I'd say yes, 
and see below.  Some cases are obvious - e.g. simulating a brain + other 
stuff and varying the other stuff, which does change the computation.

I think though that you are trying to get at something a little more subtle, so 
I'll go further.  In my MCI paper (arxiv.org/abs/0709.0544), I note that 

One computation may simulate some other computation and give rise to conscious 
experience only because it does so. In this case it would be unjustified double 
counting to allow the implementations of both computations to contribute to the 
measure. This problem is easily avoided by only considering computations which 
give rise to consciousness in a way that is not due merely to simulation of 
some other conscious computation.
Such a computation is a fundamental conscious computation (FCC).

So what you really want to know is whether different FCCs could give rise to 
the same consciousness.  Again I would say yes.

 you're not really saying that the Earth computation *taken as a whole* is 
 associated with multiple qualia. It's as if we associated distinct qualia 
 with distinct sets-

Again I think you are trying to get at FCCs.  So now you want to know if a 
single FCC can give rise to multiple observers.  That one is a bit harder but I 
suspect it could.

 Well, the idea is that to determine what causal structures are contained in a 
 given universe (whether a physical universe or a computation), we adopt the 
 self-imposed rule that we *only* look at a set of propositions concerning 
 events that actually occurred

 Aside from that though, the counterfactuals you mention are of a very limited 
 kind, just involving negations of propositions about events that actually 
 occurred. Perhaps I'm misunderstanding, but I thought that the way you (and 
 Chalmers) wanted to define implementations of computations using 
 counterfactuals involved a far richer set of counterfactuals about detailed 
 alternate histories of what could have occurred if the inputs were different.

Yes - computations are defined using a full spectrum of counterfactual 
behaviors.  I would certainly not change that definition as it is the simplest 
way to describe the dynamics of the system.

However, I think there could be some common ground between what you want to do 
and my approach.  As I wrote in the MCI paper (p. 21), 

... if a computer is built that ‘derails’ for the wrong input, that does not 
mean the computer does not implement any computations. It is true that it will 
not implement the same CSSA as it would if it did not suffer from the 
derailment issue, but it will still implement some CSSA which is related to the 
normal one. This new CSSA may be sufficient to give rise to consciousness.

Now, I think your approach is equivalent to the following conjecture:

Factual Implications Conjecture (FIC): If different computations have the same 
logical implication relationships among states (and conjunctions of states) that 
actually occur in the actual run, then they give rise to the same type of 
consciousness regardless of their dynamics for other (counterfactual) 
situations.

I'm not sure the FIC holds in all cases but it does seem plausible at least for 
many cases.





  




RE: problem of size '10

2010-02-23 Thread Jack Mallah
My last post worked (I got it in my email).  I'll repost one later and then 
post on the measure thread - though it's still a very busy time for me so maybe 
not today.

--- On Mon, 2/22/10, Jesse Mazer laserma...@hotmail.com wrote:
 OK, so you're suggesting there may not be a one-to-one relationship between 
 distinct observer-moments in the sense of distinct qualia, and distinct 
 computations defined in terms of counterfactuals? Distinct computations might 
 be associated with identical qualia, in other words?

Sure.  Otherwise, there'd be little point in trying to simulate someone, if any 
detail could change everything.

 What about the reverse--might a single computation be associated with 
 multiple distinct observer-moments with different qualia?

Certainly. For example, a sufficiently detailed simulation of the Earth would 
be associated with an entire population of observers.

 You say Suppose that a(t),b(t),and c(t) are all true, but that's not enough 
 information--the notion of causal structure I was describing involved not 
 just the truth or falsity of propositions, but also the logical relationships 
 between these propositions given the axioms of the system.

OK, I see what you're saying, Jesse.  I don't think it's a good solution though.

First, you are implicitly including a lot of counterfactual information 
already, which is the reason it works at all.  "B implies A" is logically 
equivalent to "Not A implies Not B".  I'll use ~ for Not, -- for 
implies, and the axiom context is assumed.  A, B are Boolean variables / bits. 
 So if you say

A -- B
B -- A

that's the same as saying

A -- B
~A -- ~B

which is the same as saying B = A.  Your way is just a clumsy way to provide 
some of the counterfactual information, which is often most concisely expressed 
as equations.  So if you think you have escaped counterfactuals, I disagree.
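
The two equivalences used above are easy to verify exhaustively (a quick check 
of my own, assuming A and B are Boolean as stated):

from itertools import product

def implies(p, q):          # material implication
    return (not p) or q

for A, B in product([False, True], repeat=2):
    # contraposition: (B implies A) is equivalent to (~A implies ~B)
    assert implies(B, A) == implies(not A, not B)
    # (A implies B) together with (B implies A) says exactly that B = A
    assert (implies(A, B) and implies(B, A)) == (A == B)
print("Both equivalences hold for every truth assignment.")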

The next problem is that for a larger number of bits, you won't express the 
full dynamics of the system.  For example with 10 bits, there are more possible 
combinations than your system will have statements.  I guess you see that as a 
feature rather than a bug - after all, it's what allows you to ignore inert 
machinery.  I don't like it but perhaps that's a matter of taste.

Now, that may work OK for bits, but it really seems to lose a lot for more 
general systems.  For example, suppose A,B,C are trits, or perhaps qubits, or 
real numbers such as positions.  Your logical implications remain limited to 
Boolean statements.  Do you really want to disregard so much of the system's 
dynamics?  I see no reason to do so when using counterfactuals in the usual way 
works just fine.  I consider any initial value problem to be a computation, 
including those that use differential equations.




  




RE: problem of size '10

2010-02-22 Thread Jack Mallah
Jesse, how do you access the everything list?  I ask because I have not 
received my own posts in my inbox, nor have others such as Bruno replied.  I 
use yahoo email.  I may need to use a different method to prevent my posts from 
getting lost.  They do seem to show up on Google groups though.  There was 
never a problem until recently, so I'll see if this one works.

--- On Mon, 2/22/10, Jesse Mazer laserma...@hotmail.com wrote:
 Hi Jack, to me the idea that counterfactuals would be essential to defining 
 what counts as an implementation has always seemed counterintuitive for 
 reasons separate from the Olympia or movie-graph argument. The 
 thought-experiment I'd like to consider is one where some device is implanted 
 in my brain that passively monitors the activity of a large group of neurons, 
 and only if it finds them firing in some precise prespecified sequence does 
 it activate and stimulate my brain in some way, causing a change in brain 
 activity; otherwise it remains causally inert
 According to the counterfactual definition of implementations, would the mere 
 presence of this device change my qualia from what they'd be if it wasn't 
 present, even if the neurons required to activate it never actually fire in 
 the correct sequence and the device remains completely inert? That would seem 
 to divorce qualia from behavior in a pretty significant way...

The link between qualia and computations is, of course, hard to know anything 
about.  But it seems to me quite likely that qualia would be insensitive to the 
sort of changes in computations that you are talking about.  Such modified 
computations could give rise to the same (or nearly the same) set of qualia for 
the 'inert device' runs as unmodified ones would have.  I am not saying that 
this must always be the case, since if you take it too far you could run into 
Maudlin-type problems, but in many cases it would make sense.

 If you have time, perhaps you could take a look at my post
 http://www.mail-archive.com/everything-list@googlegroups.com/msg16244.html
 where I discussed a vague idea for how one might define isomorphic causal 
 structures that could be used to address the implementation problem, in a 
 way that wouldn't depend on counterfactuals at all

You do need counterfactuals to define implementations.

Consider the computation c(t+1) = a(t) AND b(t), where a,b,c, are bits.  
Suppose that a(t),b(t),and c(t) are all true.  Without counterfactuals, how 
would you distinguish the above from another computation such as c(t+1) = a(t)?

Even worse, suppose that c(t+1) is true no matter what.  a(t) and b(t) happen 
to be true.  Is the above computation implemented?

This gets even worse when you allow time-dependent mappings, which make a lot 
of intuitive sense in many practical cases.  Now c=1 can mean c is true at 
time t+1, but so can c=0 under a different mapping.

All of these problems go away when you require correct counterfactual behavior.

You might wonder about time dependent mappings.  If a(t)=1, b(t)=1, and c(t+1) 
= 0, can that implement the computation, considering a,b as true and c=0 as c 
is true?  Only if c(t+1) _would have been 1_ (thus, c is false) if a(t) or 
b(t) had been zero.

Clearly, due to the various and time-dependent mappings, there are a lot of 
computations that end up equivalent.  But the point is that real distinctions 
remain.  No matter what mappings you choose, as long as counterfactual 
behaviors are required, there is NO mapping that would make a AND b 
equivalent to a XOR b.  If you drop the counterfactual requirement, that is 
no longer the case.
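
Here is a brute-force version of the point (my own sketch; only the simple 
output relabelling is checked, whereas the claim above covers mappings in 
general).  On the single actual run with a=1 and b=1, the AND computation and 
the rival reading c(t+1) = a(t) fit the facts equally well; the full 
counterfactual tables are what tell them apart, and no output relabelling turns 
AND into XOR:

from itertools import product

AND    = lambda a, b: a & b
XOR    = lambda a, b: a ^ b
PASS_A = lambda a, b: a          # the rival reading: c(t+1) = a(t)

# The actual run: a=1, b=1, and c comes out 1.  Both readings fit these facts.
assert AND(1, 1) == PASS_A(1, 1) == 1

# The counterfactual behaviour (the full input-output table) separates them.
table = lambda f: tuple(f(a, b) for a, b in product((0, 1), repeat=2))
assert table(AND) != table(PASS_A)

# And neither reading of the output bit ('c=1 is true' or 'c=0 is true')
# makes the AND table match the XOR table.
assert table(AND) != table(XOR)
assert tuple(1 - c for c in table(AND)) != table(XOR)
print("Same facts, different counterfactuals.")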

--- On Mon, 2/22/10, Brent Meeker meeke...@dslextreme.com wrote:
 It seems that these thought experiments inevitably lead to considering a 
 digital simulation of the brain in a virtual environment.  This is usually 
 brushed over as an inessential aspect, but I'm coming to the opinion that it 
 is essential.

It's not essential, just convenient for thought experiments.

 Once you have encapsulated the whole thought experiment in a closed virtual 
 environment in a digital computer you have the paradox of the rock that 
 computes everything.

No. Input/output is not the solution for that; restrictions on mappings is.  
See my MCI paper:  http://arxiv.org/abs/0709.0544




  




RE: problem of size '10

2010-02-17 Thread Jack Mallah
--- On Mon, 2/15/10, Stephen P. King stephe...@charter.net wrote:
 On reading the first page of your paper a thought occurred to me. What 
 actually happens in the case of progressive Alzheimer’s disease is a bit 
 different from the idea that I get from the discussion.

Hi Stephen.  Certainly, Alzheimer's disease is not the same as the kind of 
partial brains that I talk about in my paper, which maintain the same inputs as 
they would have within a full normal brain.

 Are you really considering “something” that I can realistically map to my own 
 1st person experience or could it be merely some abstract idea.

That brings in the 'hard problem' discussion, which has been brought up on this 
list recently and which I have also been thinking about recently.  I won't 
attempt to answer it right now.  I will say that ALL approaches (eliminativism, 
reductionism, epiphenomenal dualism, interactionist dualism, and idealism) seem 
to have severe problems.  'None of the above' is no better as the list seems 
exhaustive.  In any case, if my work sheds light on only some of the approaches 
that is still progress.

BTW, I replied to Bruno and the reply appeared on Google groups but I don't 
think I got a copy in my email so I am putting a copy of what I posted here:

--- On Fri, 2/12/10, Bruno Marchal marc...@ulb.ac.be wrote:
 Jack Mallah wrote:
 --- On Thu, 2/11/10, Bruno Marchal marc...@ulb.ac.be
   MGA is more general (and older).
   The only way to escape the conclusion would be to attribute consciousness 
   to a movie of a computation
 
  That's not true.  For partial replacement scenarios, where part of a brain 
  has counterfactuals and the rest doesn't, see my partial brain paper: 
  http://cogprints.org/6321/

 It is not a question of true or false, but of presenting a valid or non valid 
 deduction.

What is false is your statement that The only way to escape the conclusion 
would be to attribute consciousness to a movie of a computation.  So your 
argument is not valid.

 I don't see anything in your comment or links which prevents the conclusions 
 of being reached from the assumptions. If you think so, tell me at which 
 step, and provide a justification.

Bruno, I don't intend to be drawn into a detailed discussion of your arguments 
at this time.  The key idea though is that a movie could replace a computer 
brain.  The strongest argument for that is that you could gradually replace the 
components of the computer (which have the standard counterfactual (if-then) 
functioning) with components that only play out a pre-recorded script or which 
behave correctly by luck.  You could then invoke the 'fading qualia' argument 
(qualia could plausibly not vanish either suddenly or by gradually fading as 
the replacement proceeds) to argue that this makes no difference to the 
consciousness.  My partial brain paper shows that the 'fading qualia' argument 
is invalid.

I think there was also a claim that counterfactual sensitivity amounts to 
'prescience' but that makes no sense and I'm pretty sure that no one (even 
those who accept the rest of your arguments) agrees with you on that.  
Counterfactual behaviors are properties of the overall system and are 
mathematically defined.

 Jack Mallah wrote:
  It could be physicalist or platonist - mathematical systems can implement 
  computations if they exist in a strong enough (Platonic) sense.  I am 
  agnostic on Platonism.
 
 This contradicts your definition of computationalism given in your papers.
 I quote your glossary: Computationalism:  The philosophical belief that 
 consciousness arises as a result of implementation of computations by 
 physical systems. 

It's true that I didn't mention Platonism in that glossary entry (in the MCI 
paper), which was an oversight, but not a big deal given that the paper was 
aimed at physicists.  The paper has plenty of jobs to do already, and 
championing the possibility of the Everything Hypothesis was not the focus.

On p. 14 of the MCI paper I wrote A computation can be implemented by a 
physical system which shares appropriate features with it, or (in an analogous 
way) by another computation.  If a computation exists in a Platonic sense, 
then it could implement other computations.

On p. 46 of the paper I briefly discussed the All-Universes Hypothesis.  That 
should leave no doubt as to my position.




  




Re: problem of size '10

2010-02-13 Thread Jack Mallah
--- On Fri, 2/12/10, Bruno Marchal marc...@ulb.ac.be wrote:
 Jack Mallah wrote:
 --- On Thu, 2/11/10, Bruno Marchal marc...@ulb.ac.be
   MGA is more general (and older).
   The only way to escape the conclusion would be to attribute consciousness 
   to a movie of a computation
 
  That's not true.  For partial replacement scenarios, where part of a brain 
  has counterfactuals and the rest doesn't, see my partial brain paper: 
  http://cogprints.org/6321/

 It is not a question of true or false, but of presenting a valid or non valid 
 deduction.

What is false is your statement that The only way to escape the conclusion 
would be to attribute consciousness to a movie of a computation.  So your 
argument is not valid.

 I don't see anything in your comment or links which prevents the conclusions 
 of being reached from the assumptions. If you think so, tell me at which 
 step, and provide a justification.

Bruno, I don't intend to be drawn into a detailed discussion of your arguments 
at this time.  The key idea though is that a movie could replace a computer 
brain.  The strongest argument for that is that you could gradually replace the 
components of the computer (which have the standard counterfactual (if-then) 
functioning) with components that only play out a pre-recorded script or which 
behave correctly by luck.  You could then invoke the 'fading qualia' argument 
(qualia could plausibly not vanish either suddenly or by gradually fading as 
the replacement proceeds) to argue that this makes no difference to the 
consciousness.  My partial brain paper shows that the 'fading qualia' argument 
is invalid.

I think there was also a claim that counterfactual sensitivity amounts to 
'prescience' but that makes no sense and I'm pretty sure that no one (even 
those who accept the rest of your arguments) agrees with you on that.  
Counterfactual behaviors are properties of the overall system and are 
mathematically defined.

 Jack Mallah wrote:
  It could be physicalist or platonist - mathematical systems can implement 
  computations if they exist in a strong enough (Platonic) sense.  I am 
  agnostic on Platonism.
 
 This contradicts your definition of computationalism given in your papers.
 I quote your glossary: Computationalism:  The philosophical belief that 
 consciousness arises as a result of implementation of computations by 
 physical systems. 

It's true that I didn't mention Platonism in that glossary entry (in the MCI 
paper), which was an oversight, but not a big deal given that the paper was 
aimed at physicists.  The paper has plenty of jobs to do already, and 
championing the possibility of the Everything Hypothesis was not the focus.

On p. 14 of the MCI paper I wrote A computation can be implemented by a 
physical system which shares appropriate features with it, or (in an analogous 
way) by another computation.  If a computation exists in a Platonic sense, 
then it could implement other computations.

On p. 46 of the paper I briefly discussed the All-Universes Hypothesis.  That 
should leave no doubt as to my position.




  




Re: problem of size '10

2010-02-11 Thread Jack Mallah
--- On Thu, 2/11/10, Bruno Marchal marc...@ulb.ac.be wrote:
 A little thin brain would produce a zombie?

Even if size affects measure, a zombie is not a brain with low measure; it's a 
brain with zero measure.  So the answer is obviously no - it would not be a 
zombie.  Stop abusing the language.

We know that small terms in the wavefunction have low measure.  I would not 
call these terms 'zombies'.  Many small terms together can equal or exceed the 
measure of big terms.
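
A bit of illustrative arithmetic (numbers made up), using the Born-rule relation 
mentioned elsewhere in this thread, measure proportional to the squared amplitude:

small_terms = [0.05] * 100                  # a hundred tiny branches
big_term = 0.4                              # one large branch

measure_small = sum(a ** 2 for a in small_terms)   # 100 * 0.0025 = 0.25
measure_big = big_term ** 2                        # 0.16
print(measure_small > measure_big)                 # True: the small terms add up to more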

 MGA is more general (and older). The only way to escape the conclusion would 
 be to attribute consciousness to a movie of a computation

That's not true.  For partial replacement scenarios, where part of a brain has 
counterfactuals and the rest doesn't, see my partial brain paper: 
http://cogprints.org/6321/

 What you call computationalism is a form of physicalist computationalism.

Not true.  It could be physicalist or platonist - mathematical systems can 
implement computations if they exist in a strong enough (Platonic) sense.  I am 
agnostic on Platonism.

 The measure is determined relatively by the universal machine by the set of 
 the maximal consistent extensions of its beliefs.

Also not true.  That's just your idea for how it should be done, which stems 
from your false beliefs in QTI.




  




Re: problem of size '10

2010-02-10 Thread Jack Mallah
It's been a very busy week. I will reply to the measure thread (which is 
actually more important) but that could be in a few days.

--- On Thu, 1/28/10, Jason Resch jasonre...@gmail.com wrote:
 What about if half of your neurons were 1/2 their normal size, and the other 
 half were twice their normal size?  How would this be predicted to affect 
 your measure?

If it had any effect - and as I said, I don't think it would in a QM universe - 
I guess it would decrease the measure of part of your brain and increase that 
of the other part.  That may sound weird but it's certainly possible for one 
part of a parallel computation to have more measure than the rest which can be 
done by duplicating only that part of the brain.  See my paper on partial 
brains:

http://cogprints.org/6321/

--- On Thu, 1/28/10, Stathis Papaioannou stath...@gmail.com wrote:
 Do you think that simply doubling up the size of electronic components (much 
 easier to do than making brains bigger) would double measure?

The effect should be the same for brains or electronics.

 You could then flick the switch and alternate between two separate but 
 parallel circuits or one circuit. Would flicking the switch cause a 
 doubling/halving of measure? 

If the circuits don't interact, then it is two separate implementations, and 
measure would double.  If they do interact, we are back to 'big components' 
which as I said could go either way.

 Would it be tantamount to killing one of the consciousnesses every time you 
 did it?

Basically.  Killing usually implies an irreversible process; otherwise, someone 
is liable to come along and flick the switch back, so it's more like knocking 
someone out.  If the measure is halved and then you break the switch so it 
can't go back, that would be, yes.

--- On Thu, 1/28/10, Bruno Marchal marc...@ulb.ac.be wrote:
 Does the size of the components affects the computation?

Other than measure, the implemented computation would be the same, at least for 
the cases that matter.

 I don't assume the quantum stuff. It is what I want to understand. I gave an 
 argument showing that if we assume computationalism, then we have to derive 
 physics from (classical) computer science

Of course I know about your argument. It's false.

 You wrote convincing posts on the implementation problem. I thought, and 
 still think, that you understood that there is no obvious way to attribute a 
 computation to a physical process. With strict criteria we get nothing, with 
 weak criteria even a rock thinks.

The implementation problem is: Given a physical or mathematical system, does it 
implement a given computation?  As you say, if the answer is always yes - as 
it is on a naive definition of implementation - then computationalism can not 
work.

This was an important problem - which I presented a solution for in my '07 MCI 
paper:

http://arxiv.org/abs/0709.0544

So I now consider it a solved problem, using my CSSA framework.  The solution 
presented there does need a bit of refinement and I plan to write up a separate 
paper to present it more clearly and hopefully get some attention for it, but 
the main ideas are there.

But that's only half the story.  There is still the measure problem: Given that 
a system does implement some set of computations, what is the measure for each? 
 Without the answer to that, you can't predict what a typical observer would 
see.  This problem remains unsolved (though I do have proposals in the paper) 
and relates to the problem of size.




  




Re: measure again '10

2010-02-01 Thread Jack Mallah
--- On Wed, 1/27/10, Brent Meeker meeke...@dslextreme.com wrote:
 Jack is talking about copies in the common sense of initially physically 
 identical beings who however occupy different places in the same spacetime 
 and hence have different viewpoints and experiences.

No, that's incorrect.  I don't know where you got that idea but I'd best put 
that misconception to rest first.

When I talk about copies I mean the same thing as the others on this list - 
beings who not only start out as the same type but also receive the same type 
of inputs and follow the same type of sequence of events.  Note: They follow 
the same sequence because they use the same algorithm but they must operate 
independently and in parallel - there are no causal links to enforce it.  If 
there are causal links forcing them to be in lockstep I might say they are 
shadows, not copies.

Such copies each have their own, separate consciousness - it just happens to be 
of the same type as that of the others.  It is not redundancy in the sense of 
needless redundancy.  Killing one would end that consciousness, yes.  In 
philosophy jargon, they are of the same type but are different tokens of it.

--- On Thu, 1/28/10, Jason Resch jasonre...@gmail.com wrote:
 Total utilitarianism advocates measuring the utility of a population based on 
 the total utility of its members.
 Average utilitarianism, on the other hand, advocates measuring the utility of 
 a population based on the average utility of that population.

I basically endorse total utilitarianism.  (I'm actually a bit more 
conservative but that isn't relevant here.)  I would say that average 
utilitarianism is completely insane and evil.  Ending the existence of a 
suffering person can be positive, but only if the quality of life of that 
person is negative.  Such a person would probably want to die.  OTOH not 
everyone who wants to die has negative utility, even if they think they do.

--- On Wed, 1/27/10, Stathis Papaioannou stath...@gmail.com wrote:
 if there were a million copies of me in lockstep and all but one were 
 destroyed, then each of the million copies would feel that they had 
 continuity of consciousness with the remaining one, so they are OK with what 
 is about to happen.

Suppose someone killed all copies but lied to them first, saying that they 
would survive.  They would not feel worried.  Would that be OK?  It seems like 
the same idea to me.

 Your measure-preserving criterion for determining when it's OK to kill a 
 person is just something you have made up because you think it sounds 
 reasonable, and has nothing to do with the wishes and feelings of the person 
 getting killed.

First, I should reiterate something I have already said: It is not generally OK 
to kill someone without their permission even if you replace them.  The reason 
it's not OK is just that it's like enslaving someone - you are forcing things 
for them.  This has nothing particularly to do with killing; the same would 
apply, for example, to cutting off someone's arm and replacing it with a new 
one.  Even if the new one works fine, the guy has a right to be mad if his 
permission was not asked for this.  That is an ethical issue.  I would make an 
exception for a criminal or bad guy who I would want to imprison or kill 
without his permission.

That said, as my example of lying to the person shows, Stathis, your criterion 
of caring about whether the person to be killed 'feels worried' is irrelevant 
to the topic at hand.

Measure preservation means that you are leaving behind the same number of 
people you started with.  There is nothing arbitrary about that.  If, even 
having obtained Bob's permission, you kill Bob, I'd say you deserve to be 
punished if I think Bob had value.  But if you also replace him with Charlie, 
then if I judge that Bob and Charlie are of equal value, I'd say you deserve to 
be punished and rewarded by the same amount.  The same goes if you kill Bob and 
Dave and replace them with Bob' and Dave', or if you kill 2 Bobs and replace 
them with 2 other Bobs.  That is measure preservation.  If you kill 2 Bobs and 
replace them with only one then you deserve a net punishment.

  Suppose there is a guy who is kind of a crazy oriental monk.  He meditates 
  and subjectively believes that he is now the reincarnation of ALL other 
  people.  Is it OK to now kill all other people and just leave alive this 
  one monk?
 
 No, because the people who are killed won't feel that they have continuity of 
 consciousness with the monk, unless the monk really did run emulations of all 
 of them in his mind. 

They don't know what's in his mind either way, so what they believe before 
being killed is utterly irrelevant here.  We can suppose for argument's sake 
that they are all good peasants, they never miss giving their rice offerings, 
and so they believe anything the monk tells them.  And he believes what he says.

Perhaps what you were trying to get at is that _after_they 

Re: problem of size '10

2010-01-27 Thread Jack Mallah
I'm replying to this bit separately since Bruno touched on a different issue 
than the others have.  My reply to the main measure again '10 thread will 
follow under the original title.

--- On Wed, 1/27/10, Bruno Marchal marc...@ulb.ac.be wrote:
 I would also not say yes to a computationalist doctor, because my 
 consciousness will be related to the diameter of the simulated neurons, or to 
 the redundancy of the gates, etc.  (and this despite the behavior remains 
 unaffected). This entails also the existence of zombie. If the neurons are 
 very thin, my absolute measure can be made quasi null, despite my behavior 
 remains again non affected.

This relates to what I call the 'problem of size', namely: Does the size of the 
components affect the measure?  The answer is not obvious.

My belief is that, given that it is all made of quantum stuff, the size will 
not matter - because the set of quantum variables involved actually doesn't 
change if you leave some of them out of the computer - they are still 
parameters of the overall system.

But there is an important and obvious way in which size does matter - the size 
of the amplitude of the wavefunction, the square of which is proportional to 
measure according to the Born Rule.

I would say that if we really had a classical world and made a computer out of 
classical water waves, the measure might be proportional to the square of the 
amplitude of those waves.  I don't know - I have different proposals for how 
the actual Born Rule comes about, and depending on how it works, it could come 
out either way.

I don't think there is any experimental evidence that size matters.  But some 
might disagree.  If they do, there are a few points they could make:

- Maybe big brains have more measure.  This could help explain why we are men 
and not mice.

- Maybe in the future, people will upload their brains into micro-electronic 
systems.  If those have small measure, it could explain the Doomsday argument 
- if the future people have low measure, it makes sense that we are not in that 
era.

- Maybe neural pathways that receive more reinforcement get bigger and give 
rise to more measure.  This could result in an increased effective probability of 
observing more coincidences in your life than would be expected by chance.  Now, 
coincidences often are noticed by us and we tend to think there are many.  I 
think this has more to do with psychology than physics - but who knows?




  




Re: Jack Mallah's paper on QS.

2010-01-26 Thread Jack Mallah
--- On Mon, 1/25/10, Stephen Paul King stephe...@charter.net wrote:
 Does not the mutual interference between the copies have something to do 
 with a QM system's ability to compute exponentially more than a classical 
 system? If so, then reducing the number or density of copies would lead to an 
 attenuation in the computational power of the associated system. That is 
 clearly not a good thing!

Hi Stephen.

In answer to your question, the first thing I must point out is that there is 
no evidence that the human brain can perform any quantum computing, and good 
reasons to think it can't - it's hard to isolate qubits from the environment.

Also, even for a quantum computer, by 'copy' I don't think we just mean other 
parts of the wavefunction; we mean systems that perform the same computation.  
So in any case, if it's really 'copies' that we are reducing, and not limiting 
it in some other way, by definition there would be no change in the type or 
output of computation.  However, the number of implementations would be reduced.

An interesting question is whether conscious quantum computers would tend to 
observe the Born Rule as we do.  I think they would, but it's not something 
that can be tested experimentally by us, because the only thing we would test 
is that our own Born Rule predicts what replies from them we tend to receive.  
In most of our worlds they would agree that they see the Born Rule, but we'd 
have no way to test if that's true in most of their own worlds.

 PS: I still would like to understand how the notion of measure or density is 
 considered.

Perhaps you could ask a more specific question. Measure I thought I explained 
in the paper.  By 'density' I'm not sure what you mean here.



  




Re: measure again '10

2010-01-26 Thread Jack Mallah
--- On Tue, 1/26/10, Bruno Marchal marc...@ulb.ac.be wrote:
 On 25 Jan 2010, at 23:16, Jack Mallah wrote:
  Killing one man is not OK just because he has a brother.
  
 In our context, the 'brother' has the same consciousness.

The brother most certainly does not have the same consciousness. If he did, 
then killing him would not change the total _amount_ of consciousness; measure 
would be conserved. What the brother does have is his own, different in terms 
of who experiences it, but qualitatively identical consciousness.

 From this I conclude you would say no to the doctor. All right? The doctor 
 certainly kills a 'brother'.

As you should know by now Bruno, if you are now talking about a teleportation 
experiment, in that case you kill one guy (bad) but create another, 
qualitatively identical guy (good).  So the net effect is OK.  Of course the 
doctor should get the guy's permission before doing anything, if he can.

BTW, it may seem that I advocate increased population - that is, if we had a 
cloning device, we should use it.  In general, yes, but a planet has a limited 
capacity to support a high population over a long term, which we may have 
already exceeded.  Too much at once will result in a lower total population 
over time due to a ruined environment as well as lower quality of life.  So in 
practice, it would cause problems.  But if we had a second planet available and 
the question is should we populate it, I'd say yes.

--- On Mon, 1/25/10, Stathis Papaioannou stath...@gmail.com wrote:
 Killing a man is bad because he doesn't want to be killed,

Actually that's not why - but let that pass for now.

 and he doesn't want to be killed because he believes that act would cause his 
 stream of consciousness to end. However, if he were killed and his stream of 
 consciousness continued, that would not be a problem provided that the manner 
 of death was not painful. Backing up his mind, killing him and then making an 
 exact copy of the man at the moment before death is an example of this 
 process.

See above. That would be a measure-conserving process, so it would be OK.

It is just a matter of definition whether it is the same guy or a different 
guy. Because now we have one guy at a time, it is convenient to call them the 
same guy.  If we had two at once, we could call them the same if we like, but 
the fact would remain that they would have different (even if qualitatively the 
same) consciousnesses, so it is better to call them different guys.

 Making two copies running in lockstep and killing one of them is equivalent 
 to this: the one that is killed feels that his stream of consciousness 
 continues in the one that is not killed. It is true that in the second case 
 the number of living copies of the person has halved, but from the point of 
 view of each copy it is exactly the same as the first case, where there is 
 only ever one copy extant.

The one that is killed doesn't feel anything after he is killed.  The one that 
lives experiences whatever he would have experienced anyway.  There is NO 
TRANSFER of consciousness.  Killing a guy (assuming he is not an evil guy or in 
great pain) and not creating a new guy to replace him is always a net loss.

 The general point is that what matters to the person is not the objective 
 physical events, but the subjective effect that the objective physical events 
 will have.

What matters is the objective reality that includes all subjective experiences.

Suppose there is a guy who is kind of a crazy oriental monk.  He meditates and 
subjectively believes that he is now the reincarnation of ALL other people.  Is 
it OK to now kill all other people and just leave alive this one monk?




  




Re: Changing the past by forgetting

2009-03-11 Thread Jack Mallah


--- On Tue, 3/10/09, Saibal Mitra smi...@zeelandnet.nl wrote:
 http://arxiv.org/abs/0902.3825
 
 I've written up a small article about the idea that you could end up in a 
 different sector of the multiverse by selective memory erasure. I had written 
 about that possibility a long time ago on this list, but now I've made the 
 argument more rigorous.

Saibal, I have to say that I disagree.  As you acknowledge, erasing memory 
doesn't recohere the branches.  There is no meaningful sense in which you could 
end up in a different branch due to memory erasure.

You admit the 'effect' has no observable consequences.  But it has no 
unobservable meaning either.

In fact, other than what I call 'causal differentiation', which clearly will 
track the already-decohered branches (so you don't get to reshuffle the deck), 
there is no meaningful sense in which you will end up in one particular 
future branch at all.  Other than causal differentiation tracking, either 'you' 
are all of your future branches, or 'you' are just here for the moment and are 
none of them.




  





Re: language, cloning and thought experiments

2009-03-07 Thread Jack Mallah


--- On Fri, 3/6/09, Wei Dai wei...@weidai.com wrote:
  No.  First, I don't agree that the real question is what the utility 
  function is or should be.  The real question is whether the measure, M, is 
  conserved or whether it decreases.  It's just that a lot of people don't 
  understand what that means.
 
 I agree that a lot of people don't understand what that means, and I 
 certainly appreciate your effort to educate them. But it seems to me that 
 once someone does understand that issue, it's not assured that they'll fall 
 into the U=M*Q camp automatically.

They might not, but I'm sure most would; maybe not exactly that U, but a lot 
closer to it.

 U=Q would be generalized to (Sum_i M_i Q_i) / (Sum_i M_i).
 This seems just  as well defined as Sum_i M_i Q_i. You objected that 
 personal identity is not well-defined but don't you need to define personal 
 identity to compute Sum_i M_i Q_i as well, in order to determine which i to 
 sum over?

No.  In U = Sum_i M_i Q_i, you sum over all the i's, not just the ones that are 
similar to you.  Of course your Q_i (which is _your_ utility per unit measure 
for the observer i) might be highly peaked around those that are similar to 
you, but there's no need for a precise cutoff in similarity.  It may well even 
have higher peaks around people who are not very much like you at all (these 
are the people you would sacrifice yourself for).

By contrast, in your proposal for U, you do need a precise cutoff, for which 
there is no justification.

-- On Fri, 3/6/09, Stathis Papaioannou stath...@gmail.com wrote:
  It's not the addition of the other copy that's the problem; it's the loss 
  of it.  Losing people is bad.
 
 How would the addition then loss of the extra copy be bad for the original, 
 or for that matter for the disappearing extra copy, given that neither copy 
 has any greater claim to being resurrected in the morning as B?

It's not the addition then loss that's bad (since you end up with the same 
measure you started with); it's the loss.

In the culling teleportation, both people are lost, which is doubly bad.  
Elsewhere, one new person appears, which is good, but not as good as there 
being two people.  So it's not a wash; it's a loss.

 I don't agree with the way you calculate utility at all.

It's easy to say you don't agree but you haven't given an alternative.  
Precisely how would you calculate it?  U = ...




  





language, cloning and thought experiments

2009-02-24 Thread Jack Mallah

--- On Wed, 2/11/09, Stathis Papaioannou stath...@gmail.com wrote:
 Well, this seems to be the real point of disagreement between you and the 
 pro-QI people. If I am one of the extra versions and die overnight, but the 
 original survives, then I have survived. This is why there can be a many to 
 one relationship between earlier and later copies. If you don't agree with 
 this then you should make explicit your theory of personal identity.

It is close to the point, but there is room for a misunderstanding so I have to 
be careful.  Here I am consolidating replies to some of the branched post 
threads and will present some thought experiments.

On personal identity:

As I explained, there are several possible definitions of personal identity, 
and the most useful ones are 1) All branched/fused people are the same person, 
2) Causal chains determine identity, and 3) Observer - moments.

This can become confusing because it is not always clear which definition 
someone is using, especially if quickly typing out a reply to a tangentially 
related post.  This can lead to a kind of Hydra-whacking effect: one point is 
dealt with, but another confusion is simultaneously created because (for 
example) I did not spell out something that was not the main point at issue in 
the post I was responding to.

That was the case recently when some people misconstrued my use of the causal 
chain (in terms of you might die, only the original will survive) as some 
kind of crucial point.  Causal differentiation applied to the question at hand, 
so I used that one.  If you have read my QI paper, you will know that I accept 
that teleportation is OK and that measure is what matters, not so much the 
original vs. copy issue.  I will explain more below on these important 
points.

If I had to pick one definition and stick with it, I would go with the least 
misleading one, which is an observer-moment.

The important thing to realize is that _definitions don't matter_!  
Predictions, decisions, appropriate emotions to a situation - these are 
completely independent of definitions of personal identity.  Personal identity 
is a useful concept in practice but not a fundamental thing, and therefore can 
have no fundamental relevance, unlike its misuse in QS thinking where it could 
supposedly affect a measure distribution.

On probability:

Bruno Marchal wrote:
 You say: no randomness involved but you seem to accept probabilities. Do I 
 just miss something here?

Yes, Bruno, you did, though my quickness contributed.  In my QI paper I defined 
effective probability and carefully spelled out the roles it can play.  But 
again, in posting on tangentially related topics, it is much easier to just say 
probability and hope that people remember what I am really talking about.

Classically there are two kinds of probability: true randomness, and subjective 
uncertainty due to ignorance.  I do not believe that the former exists.  When 
I talk about probability it either involves some ignorance on the part of the 
subject (as in the Reflection argument), or the use of effective probability 
in theory confirmation.

I may get sloppy sometimes (and say probability) when talking about a situation 
after an experiment that is yet to be performed, but in thinking about such 
cases it is absolutely necessary to remember that in the MWI there is neither 
randomness nor subjective ignorance, and that one must use Caring Measure.

On the first person slogan:

Any observation is made by the person observing it.  In that sense, they are 
all first person.

Truths do not depend on point of view.  We do not know the measure 
distribution, but we can guess about it, and can study a model for it.  
Assuming the model is accurate, it is the distribution of these first person 
observations.

Calling it a third person view is a false charge; an accurate model is not a 
view, it is simply the truth.  Invoking first person measure distributions as 
an alternative is an empty slogan.

The real key point at which the QS fallacy appears seems to be that some people 
find it inconceivable that they will not have a future.  Thus, they assume that 
they will survive and only need to take into account effective probabilities 
that are conditional on survival.  This move is undefined (the condition 
requires a notion of personal identity) and is false by the definition of the 
measure distribution.

This can be seen using either causal chains (if a person is defined as a causal 
chain, then when the chain ends, so will he) or more generally just in terms of 
decreasing measure of observer-moments with age.  In the latter case increasing 
age is no different than, for example, increasing brightness of your visual 
field.  There is a sequence of observer-moments in which what you see is more 
and more bright, and after some point the measure distribution will decline as 
a function of increasing brightness.  You can define 

Re: Born rule

2009-02-14 Thread Jack Mallah

--- On Wed, 2/11/09, Brent Meeker meeke...@dslextreme.com wrote:
  Two copies don't increase the measure of a computation and reducing its 
  vector in Hilbert space doesn't diminish it.
  
  If that is so then how do you explain the Born rule?
 
 The Born rule assumes you start with a normalized vector (i.e. ray), so it 
 calculates predicted probabilities conditional on the state preparation.  
 After each measurement, the vector is renormalized because the prediction is 
 always conditional on the present state.

How do you explain why it works?  I say it is because people in higher 
amplitude branches have more measure.

 This is quite different from applying a probability measure to the evolution 
 of a multiverse in which decoherence defines many different orthogonal 
 subspaces, each of which gets a small projection of the state vector of the 
 multiverse.

Then it is not the standard MWI in the Everett tradition.




  





Re: ASSA vs. RSSA and the no cul-de-sac conjecture

2009-02-14 Thread Jack Mallah

Hi Johnathan.  I see that there are some new people like yourself here.  I like 
to see new people and younger people take an interest in the philosophical 
issues, though at the same time it saddens me to see so many continue to fall 
victim to the the QS fallacy.

I have made an important discovery: the save as draft feature of email. 
Rather than shoot off quick piecemeal replies to the various threads on the 
topic, I will be posting a consolidated reply and several thought experiments, 
which I hope will explain everything.  (No, not yet the Platonic Everything.)

Jack




  





Re: Measure Increases or Decreases? - entropy

2009-02-12 Thread Jack Mallah

--- On Thu, 2/12/09, George Levy gl...@quantics.net wrote:
 I have also been overwhelmed by the volume on this list.
 The idea is not to take more than you can chew.

Indeed.

  --- On Wed, 2/11/09, George Levy
  If that were the case, the Born Rule would fail. 
 Perhaps the probability rule would be more like proportionality to norm^2 
 exp(entropy) instead of just norm^2.  If that was it, then for example 
 unstable nuclei would be observed to decay a lot faster than the Born Rule 
 predicts.

 I do not understand why you say that the Born rule would fail.

High entropy branches would have more probability than low entropy ones 
compared to the standard Born rule.
   
 Yes I am linking the entropy to MW branching.

But you should realize that the Born rule is self-consistent in the face of 
branching.  If there is branching to N states, then on average the squared norm 
of each will be 1/N of the original.  That much is proven by the math.  Linking 
squared norm to measure is of course a tougher issue.
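A toy numerical check of that bookkeeping (my own construction - a random 
unitary stands in for the branch-producing dynamics, and the numbers have no 
physical significance):

```python
# Toy check: a normalized state is carried by a random unitary into N
# orthogonal components whose squared norms sum to 1 and average 1/N.
import numpy as np

rng = np.random.default_rng(0)
N = 5

psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)          # original state, squared norm 1

U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
psi_out = U @ psi                   # unitary (Schrodinger) evolution

branch_sq_norms = np.abs(psi_out) ** 2
print(branch_sq_norms.sum())        # ~1.0: the total is conserved
print(branch_sq_norms.mean())       # exactly 1/N of that total on average
```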

 You say that the Born Rule would fail if measure *increases*.

Actually, all I said was that it would fail if measure is linked to entropy.  
Any significant modification to it would make it fail.

 Using your own argument I could say that the Born rule would fail if measure 
 *decreases *according to function f(t). For example it could be norm^2 f(t) .

That would make it fail but if the modification is only a function of time it 
would be hard to detect.  Making it a function of a branch-dependent observable 
like entropy leads to a much easier-to-detect deviation.

 So using your own argument since the Born rule is only norm^2 therefore 
 measure stays constant?

In ordinary experimental situations, total measure stays constant.

In life or death situations there is a correction factor but it is well known: 
the measure in a given world is proportional to the number of people alive in 
it as well as to the squared norm.  This is taken into account under the 
Anthropic principle, and explains why our universe seems fine-tuned for life 
even though worlds like that presumably have a relatively small total squared 
norm compared to the sum of the others.
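A toy version of that correction factor (all numbers invented; only the 
ordering matters):

```python
# A life-friendly world with a tiny squared norm can still carry most of the
# observer measure if it holds vastly more observers (invented numbers).
worlds = {
    "life-friendly": {"sq_norm": 1e-6, "observers": 1e10},
    "sterile":       {"sq_norm": 1.0 - 1e-6, "observers": 1.0},  # the odd fluctuation observer
}
measure = {k: w["sq_norm"] * w["observers"] for k, w in worlds.items()}
total = sum(measure.values())
print({k: m / total for k, m in measure.items()})   # life-friendly dominates
```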




  





Re: children and measure

2009-02-11 Thread Jack Mallah

--- On Wed, 2/11/09, Quentin Anciaux allco...@gmail.com wrote:
 I don't get it. Why should the measure suddenly decrease at 80 (or 100) 
 years old ? Why not 30 ? Why not 4 ?

Heart disease.  Cancer.  Stroke.  Degradation of various organs leading to 
death.  Such ailments are known to strike older people more than young people.  
Are such things unheard of in your country?

I wouldn't call it sudden, but certainly by 100 the measure has dropped off a 
lot.  By 200, survival is theoretically possible, so the measure isn't zero, 
but such cases are obviously quite rare.

 Also this is still assuming ASSA and does not take into account that my next 
 moment is not a random moment (with high measure) against all moments, but a 
 random moment against all moments that have my current moment as 
 memories/previous.

There is no randomness whatsoever involved.  See my replies to Stathis.




  





Re: adult vs. child AB causation

2009-02-11 Thread Jack Mallah

--- On Tue, 2/10/09, Brent Meeker meeke...@dslextreme.com wrote:
  2)  If the data saved to the disk is only based on A1 (e.g. discarding 
  any errors that A2 might have made) then one could say that A1 is the 
  same person as B, while A2 is not.  This is causal differentiation.
 
  Yes, but I'm assuming A1 and A2 have identical content.
  
  That actually doesn't matter - causation is
 defined in terms of counterfactuals.  If - then, considering
 what happens at that moment of saving the data.  If x=1 and
 y=1, and I copy the contents of x to z, that is not the same
 causal relationship as if I had copied y to z.
 
 Isn't that making the causal chain essential to the experience; contrary to 
 the idea that the stream of consciousness is just the computation?  The 
 causal chain is not part of the computation, A1 and A2 could be implemented 
 by different physics and hence different causation.

--- On Tue, 2/10/09, russell standish li...@hpcoders.com.au wrote:
 But surely the counterfactuals are the same in each case too? In which case 
 it is the same causal relationship. We're talking computations here, each 
 computation will respond identically to the same counterfactual input.

I believe you both are taking what I wrote out of context.  Sorry if I was not 
clear.

In the above I was talking about the moment at which the data is saved, from 
either A1 or A2, when making the transition to B in the thought experiment.

BTW, causation (sensitivity to counterfactuals) is part of the criteria for an 
implementation of a computation.  So in that sense causation is essential to 
the experience.




  





Re: AB continuity

2009-02-11 Thread Jack Mallah

--- On Wed, 2/11/09, Stathis Papaioannou stath...@gmail.com wrote:
 I don't think it makes a difference if life is continuous or discrete: it is 
 still possible to assert that future versions of myself are different people 
 who merely experience the illusion of being me.
 However, this just becomes a semantic exercise. Saying that I will wake up in 
 my bed tomorrow is equivalent to saying that someone sufficiently similar to 
 me will wake up in my bed tomorrow.

Exactly.

And if your measure were to drop off dramatically overnight, that is equivalent 
to saying that many _more people_ woke up in your bed today as compared to the 
number of people who will wake up in your bed tomorrow.

Which is equivalent to saying that, for all practical purposes, you will 
probably die overnight.  And that is the point.




  





Re: Dreams and measure

2009-02-11 Thread Jack Mallah

Hello again, Saibal!

It is good to see that I am not alone here in taking a stand against QS/QI.  
What do you think of my paper?  Is it unclear, convincing, unconvincing?

Are there others like us who still post here?

Regards,
Jack




  





Re: children and measure

2009-02-11 Thread Jack Mallah

--- On Wed, 2/11/09, Brent Meeker meeke...@dslextreme.com wrote:
 Indeed there seems to be a conflict between MWI of QM and the feeling of 
 consciousness.  QM  evolves unitarily to preserve total probability, which 
 implies that the splitting into different quasi-classical subspaces reduces 
 the measure of each subspace.  But there's no perceptible diminishment of 
 consciousness.  I think this is consistent with the idea that consciousness 
 is a computation, since in that case the computation either exists or it 
 doesn't. 
  Two copies don't increase the measure of a computation and reducing its 
 vector in Hilbert space doesn't diminish it.

If that is so then how do you explain the Born rule?




  





Re: AB continuity

2009-02-11 Thread Jack Mallah

--- On Wed, 2/11/09, Quentin Anciaux allco...@gmail.com wrote:
 2009/2/11 Jack Mallah jackmal...@yahoo.com
  And if your measure were to drop off dramatically overnight, it is 
  equivalent to saying that many _more people_ woke up in your bed today as 
   compared to the number of people who will wake up in your bed tomorrow.
 
  Which is equivalent to saying that, for all practical purposes, you will 
  probably die overnight.  And that is the point.
 
 I don't think so, the point is that there is still someone who will wake up 
 in the bed tomorrow... as long as the measure is not null this is true, and 
 that's what counts for the argument to be valid.

There are some people who will, but relatively few.  That is what counts for QS 
to be invalid.

 So what you are saying is that at some point the measure fall to be strictly 
 null... and that needs an argument from your part.

No, I never suggested it is zero.  It doesn't have to be.

 Also you did not answer the question about the realness feeling of observer 
 B... he has twice less measure according to you, does it feel less 
 alive/real/conscious ?

I answered that previously.  Measure affects the commonness of an observation, 
not what it feels like.




  





Re: AB continuity

2009-02-11 Thread Jack Mallah

--- On Wed, 2/11/09, Quentin Anciaux allco...@gmail.com wrote:
 From a 1st perspective commonness is useless in the argument. The important 
 is what it feels like for the experimenter.

You seem to be saying that commonness of an experience has no effect on 
whether, for practical purposes, people should expect to experience it.  That 
is a contradiction in terms.  It is false by definition.  If an uncommon 
experience gets experienced just as often as a common experience, then by 
definition they are equally common and have equal measure.





  





Re: Measure Increases or Decreases? - Was adult vs. child

2009-02-11 Thread Jack Mallah

Hi George.  The everything list feels just like old times, no?  Which is nice 
in a way but has a big drawback - I can only take so much of arguing the same 
old things, and being outnumbered.  And that limit is approaching fast again.  
At least I think your point here is new to the list.

--- On Wed, 2/11/09, George Levy gl...@quantics.net wrote:
 One could argue that measure actually increases continuously and corresponds 
 to the increase in entropy occurring in everyday life. So even if you are 90 
 or 100 years old you could still experience an increase in measure.

I guess you are basing that on some kind of branch-counting idea.

If that were the case, the Born Rule would fail.  Perhaps the probability rule 
would be more like proportionality to norm^2 exp(entropy) instead of just 
norm^2.  If that was it, then for example unstable nuclei would be observed to 
decay a lot faster than the Born Rule predicts.
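A quick sketch of why such a rule would be hard to miss (the entropy numbers 
here are invented purely to show the direction of the effect):

```python
# How a norm^2 * exp(entropy) rule would skew a 50/50 decay experiment.
# The decayed branch dumps energy into the environment, so it is assigned a
# (made-up) larger entropy increase than the undecayed branch.
import math

born = {"decayed": 0.5, "undecayed": 0.5}    # standard Born weights
dS   = {"decayed": 10.0, "undecayed": 0.0}   # hypothetical entropy increases

w = {k: born[k] * math.exp(dS[k]) for k in born}
Z = sum(w.values())
print({k: w[k] / Z for k in w})              # decay prob ~0.99995, not 0.5
```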

Conventional half-life calculations are accurate.  So either entropy would not 
be a factor, or the MWI is experimentally disproven already.  Well, if it is a 
weak enough function of entropy then maybe it hasn't been disproven, but 
inclusion of free parameters like that which can always be made small enough 
goes against Occam's Razor.  Otherwise there'd be no end of possible correction 
factors.

At least your idea was testable, with none of the meaningless first person 
sloganeering.  Ideas like that, keep 'em coming!

 In any case, measure is measured over a continuum and its value is infinite 
 to begin with. So whether it increases or decreases may be a moot point.

It's not moot.  Just take density ratios.  The size of the universe may be 
infinite, but that didn't stop Hubble from saying it's getting bigger.

 As I said, the increase or decrease in measure is at the crux of this 
 problem.  Your paper really did not illuminate the issue in a satisfactory 
 manner.

It could no doubt use some tweaking, which is why I'm on the list now.  I know 
I'm not always a good communicator.  What should be clarified or added to it?




  





Re: continuity - cloning

2009-02-11 Thread Jack Mallah

--- On Wed, 2/11/09, Stathis Papaioannou stath...@gmail.com wrote:
 You agree that if one version of me goes to bed tonight and one version of me 
 wakes up tomorrow, then I should expect to wake up tomorrow. But if extra 
 versions of me are manufactured and run today, then switched off when I go to 
 sleep, then you are saying that I might not wake up tomorrow. 

You won't know this evening if you are one of the extra versions or the 
original.  So yes, in that situation, you will probably not be around tomorrow. 
 Only the original will.

 The extra copies of me have somehow sapped my life strength.

Not at all.  I guess that is a joke?

Creating more copies, then getting rid of the same number, does not result in a 
net decrease in measure.  That is why the movie The Prestige bears no 
resemblance whatsoever to QS despite rumors to the contrary.

If you create extra copies and leave them alive, there is a net increase in 
measure.  That is equivalent to new people being born even if they have your 
memories.  This once happened to Will Riker on Star Trek: TNG.




  





Re: AB continuity

2009-02-11 Thread Jack Mallah

--- On Wed, 2/11/09, Quentin Anciaux allco...@gmail.com wrote:
   From a 1st perspective commonness is useless in
 the argument. The important is what it feels like for the experimenter.
 
  You seem to be saying that commonness of an experience has no effect on, 
  what for practical purposes, is whether people should expect to experience 
  it.  That is a contradiction in terms.  It is false by definition.  If an 
  uncommon experience gets experienced just as often as a common 
  experience, then by definition they are equally common and have equal 
  measure.
 
 That's not what I said. I said however uncommon an experience is, if it 
 exists... it exists by definition, if mwi is true, and measure is never 
 strictly null for any particular moment to have a successor then any moment 
 has a successor hence there exists a me moment of 1000 years old and it is 
 guaranteed to be lived by definition.

It will be experienced - but not by most of you.  For all practical purposes 
it might as well not exist.

 What you're saying is uncommon moment are *never* experienced (means their 
 measure is strictly null), for the QI argument to hold it is sufficient to 
 have at least *one* next moment for every moment.

No and no.




  





Re: adult vs. child

2009-02-10 Thread Jack Mallah

--- On Tue, 2/10/09, Stathis Papaioannou stath...@gmail.com wrote:
 It seems that the disagreement may be one about personal identity. It is not 
 clear to me from your paper whether you accept what Derek Parfit calls the 
 reductionist theory of personal identity. Consider the following experiment:
 
 There are two consecutive periods of consciousness, A and B, in which you are 
 an observer in a virtual reality program. A is your experiences between 5:00 
 PM and 5:01 PM while B is your experiences between 5:01 PM and 5:02 PM, 
 subjective time. A is being implemented in parallel on two computers MA1 and 
 MA2, so that there are actually two qualitatively identical streams of 
 consciousness which we can call A1 and A2. At the end of the subjective 
 minute, data is saved to disk and both MA1 and MA2 are switched off. An 
 external operator picks up a copy of the saved data, walks over to a third 
 computer MB, loads the data and starts up the program. After another 
 subjective minute MB is switched off and the experiment ends.
 
 As the observer you know all this information, and you look at the clock and 
 see that it is 5:00 PM. What can you conclude from this and what should you 
 expect? To me, it seems that you must conclude that you are currently either 
 A1 or A2, and that in one minute you will be B, with 100% certainty. Would 
 you say something else?

I'd say it's a matter of definition, and there are three basic ones:

1)  If I am A1 and will become B, then A2 has an equal right to say that he 
will become B.  Thus, one could say that I am the same person as A2.  This is 
personal fusion.

2)  If the data saved to the disk is only based on A1 (e.g. discarding any 
errors that A2 might have made) then one could say that A1 is the same person 
as B, while A2 is not.  This is causal differentiation.

3)  If I am defined as an observer-moment, then I am part of either A1 or A2, 
not even the whole thing - just my current experience.  This is the most 
conservative definition and thus may be the least misleading.

Regardless of definitions, what will be true is that the measure of A will be 
twice that of B.  For example, if I have not yet looked at the clock, and I want 
to place a bet on what it currently reads, and my internal time sense tells me 
only that about a minute has passed (so it is near 5:01, but I don't know which 
side of it), then I should bet that it is before 5:01 with effective 
probability 2/3.  This Reflection Argument is equivalent to the famous 
Sleeping Beauty thought experiment.
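For concreteness, the effective-probability arithmetic behind that bet, as a 
minimal sketch assuming one unit of measure per running copy:

```python
# Minimal sketch of the bet (one unit of measure per running copy).
measure_A = 2.0   # A1 and A2 both run during the 5:00-5:01 minute
measure_B = 1.0   # only B runs during the 5:01-5:02 minute
print(measure_A / (measure_A + measure_B))   # 2/3: bet on "before 5:01"
```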




  





re: children and measure

2009-02-10 Thread Jack Mallah

--- On Mon, 2/9/09, Quentin Anciaux allco...@gmail.com wrote:
 Also I still don't understand how I could be 30 years old and not 4, there 
 are a lot more OM of 4 than 30... it is the argument you use for 1000 years 
 old, I don't see why it can hold for 30 ?

Quentin, why would the measure of 4 year olds be a lot more than the measure 
of 30 year olds?  I have already explained that the effect of differentiation 
(eg by learning) is exactly balanced by the increased number of versions to sum 
over (the N/N explanation) and the effect of child mortality is small.

Is there some third factor that you think comes into play?  Can you estimate 
quantitatively what you think the measure ratio would be?

 Also even if absolute measure had sense, do you mean that the measure of a 
 1000 years old OM is strictly zero (not infinitesimal, simply and strictly 
 null)?

No, it is not zero, but it is extremely small.  I have never suggested that 
there is no long time tail in the measure distribution that extends to infinite 
time.  Of course there is.  Any MWIer knows that.  But it is negligible.  You 
will never experience it, or depending on definitions, at least not in any 
significant measure.  The general argument against immortality proves that.  It 
is no more significant than any other very-small-measure set of observations, 
such as the ones in which you are king of the demons.  You might as well forget 
about it.




  





Re: adult vs. child AB

2009-02-10 Thread Jack Mallah

--- On Tue, 2/10/09, Stathis Papaioannou stath...@gmail.com wrote:
 2009/2/11 Jack Mallah jackmal...@yahoo.com:
  2)  If the data saved to the disk is only based on A1
 (e.g. discarding any errors that A2 might have made) then
 one could say that A1 is the same person as B, while A2 is
 not.  This is causal differentiation.
 
 Yes, but I'm assuming A1 and A2 have identical content.

That actually doesn't matter - causation is defined in terms of 
counterfactuals (if-then relations), considering what happens at that moment of 
saving the data.  If x=1 and y=1, and I copy the contents of x to z, that is not the 
same causal relationship as if I had copied y to z.
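A minimal toy illustration of that point (hypothetical code, of course - 
nothing here is meant as a model of a brain):

```python
# With x = y = 1, "copy x to z" and "copy y to z" produce the same content,
# but they are different causal relationships - vary x and only one tracks it.
def copy_x_to_z(x, y):
    return x

def copy_y_to_z(x, y):
    return y

print(copy_x_to_z(1, 1), copy_y_to_z(1, 1))   # 1 1  - identical content
print(copy_x_to_z(2, 1), copy_y_to_z(2, 1))   # 2 1  - the counterfactuals differ
```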

  3)  If I am defined as an observer-moment, then I am
 part of either A1 or A2, not even the whole thing - just my
 current experience.  This is the most conservative
 definition and thus may be the least misleading.
 
 This is the way I think of it, at least provisionally.

OK.

 But the point is, I do look at the clock and I do know that I am A, with 
 probability 1, and therefore that I will soon be B with probability 1.

That contradicts what you said above about being an observer-moment.  If you 
are, then some _other_ observer-moments will be in B, not you.




  





Re: briefly wading back into the fray re: UDA

2009-02-09 Thread Jack Mallah

--- On Mon, 2/9/09, Bruno Marchal marc...@ulb.ac.be wrote:
 good idea to resume UDA again

Bruno, I will post on the subject - but not yet.  I do not want to get 
sidetracked from improving my paper.

 I see you have make some progress on the subject (but not yet on  
 diplomacy, unless your crackpot wording is just an affectionate  
 mark: I could be OK with that. Well we will see).

I will admit that diplomacy is not always my strong suit when dealing with 
controversial subjects.

My characterization of it is sincere, not affectionate, though mainly what made 
me say that is that you call it a proof.  It's an argument, not a proof, and 
the argument fails to be convincing.  Now many people make arguments that I 
don't buy and I don't necessarily call those arguments crackpot, but I will if 
they make too-strong claims.

 Welcome back to the list Jacques,

Thanks :)




  





Re: briefly wading back re: BB's and measure

2009-02-09 Thread Jack Mallah

--- On Sat, 2/7/09, Quentin Anciaux allco...@gmail.com wrote:
 2009/2/7 Jack Mallah jackmal...@yahoo.com
  1. Initially, before evolution occurred, a typical Boltzmann brain (BB) had 
  about the same measure as a brain which was like what we consider a normal 
  person's (an atypical BB).
  2. The typical BB's all together vastly outnumbered the atypical ones, so 
  they had much more total measure.
  3. We are assuming here that a person's measure can't change as a function 
  of time.
  4. Therefore the initial measure advantage of the typical BB's would hold 
  for all time.

 You are here explicitly assuming ASSA, meaning that there exists an absolute 
 measure over all OM... which seems to me dubious. Your argument here is not 
 valid for relative continuation (RSSA).

Hi.  In the above, I was describing the consequences of #3, the assumption that 
a person's measure can't change over time.  That assumption is certainly not 
what people have been calling the ASSA - obviously, I believe that measure 
does change as a function of time.  Rather, #3 is my attempt to put what you 
call the RSSA in well-defined terms so that its consequences can be explored.

  Instead I covered the Bayesian issues in my sections on the Reflection 
  Argument and Theory Confirmation.
 
 What measure then are you talking about ? Bayesian probabilities are 
 relative, it is non-sense to talk about absolute measure.

I don't understand your comment.  The sections of my paper that I mentioned 
explain how to use what I call effective probabilities in certain situations. 
 If there is a problem with those procedures that you would like to point out, 
one that would make it impossible to use them, you'd have to be a lot more 
specific.

   He goes on to mention rather briefly in passing his doomsday style
   argument against QI, but not in detail.
 
  I think the argument is presented in full.  What part is missing?
 
 What happen to your you ?

Do you mean why don't you reach the super-old ages?  The number of super-old 
copies of you is much less than for normal ages.  This is equivalent to most 
copies of you die off first.  Which is equivalent to most people die off 
first.  It is irrelevant whether the people are different, or similar enough 
to be called copies.

The You you know (no quotes around it this time) is just one copy among the 
you ones that are similar to you.

In other words, perhaps too compactly said for people to appreciate, your 
measure is reduced.




  





RE: briefly wading back into the fray - re: dualism

2009-02-08 Thread Jack Mallah

So far the responses here have not been as hostile as I feared :)

--- On Sat, 2/7/09, Jesse Mazer laserma...@hotmail.com wrote:
 are you open to the idea
 that there might be truths about subjectivity (such as
 truths about what philosophers call 'qualia') which
 cannot be reduced to purely physical statements? Are you
 familiar with the ideas of philosopher David Chalmers, who
 takes the latter position? He doesn't advocate
 interactive dualism, where there's some kind of
 soul-stuff that can influence matter--he assumes that the
 physical world is causally closed, so all
 physical events have purely physical causes, including all

I am very familiar with David Chalmers' position.  My view is that he's wrong: 
If I have qualia, I don't find it plausible that they can have no influence 
over my spelled-out thoughts and words or actions, which is what 
epiphenomenalism would imply.  If true qualia must be in addition to whatever 
is making me think and say I have qualia, then I have no reason to think I have 
the true ones.  I am a reductive computationalist.

 If one buys into
 the possibility of objective truths about mental
 states/qualia and psychophysical laws, it wouldn't be
 such a stretch to imagine that there may be objective truths
 about the first-person probabilities of experiencing
 different branches in either the MWI or duplication
 experiments in a single universe (so that you don't have
 to rely on decision theory, which depends on non-objective
 choices about which future possibilities you 'care'
 about, to discuss quantum immortality), and that these
 probabilities could be determined by some combination of an
 objective physical measure on different brainstates and some
 set of psychophysical laws. If so, the question
 of quantum immortality would boil down to whether a given
 mind always has a 100% chance of experiencing a
 next observer-moment as long as a
 next brainstate exists somewhere, or whether
 there is some nonzero chance of one's flow of experience
 just ending.  Jesse

In the QI paper, in some of the arguments I explicitly appeal to functionalism. 
 Most MWIers are functionalists, so those arguments should apply for them.

If dualism is assumed, there are few limits on what can happen, but if Occam's 
razor is applied to it you can assume things won't end up much different than 
without it.  Chalmers himself is a computationalist (just not a reductive one).

The concept of measure, and the empirical arguments such as the Boltzmann 
Brains one and the general argument against immortality, should apply 
regardless of the physicalism/platonism/dualism debate.




  





Re: Born rule

2009-02-08 Thread Jack Mallah

--- On Sun, 2/8/09, russell standish li...@hpcoders.com.au wrote:
 He must have some model in mind which tells us how
 the amplitude of the branches relates to the amplitude of the
 original state.

The Schrodinger equation is linear and unitary.  As long as it applies (in 
other words, assuming the MWI, so no collapse) the norm of a branch remains 
equal to the norm of whatever term in the original wavefunction evolved to form 
that branch.

In other words, in the MWI,

|psi> = a|A> + b|B> evolves to |psi'> = a'|A'> + b'|B'>

where |a| = |a'| and |b| = |b'|,

and conventionally we assume <psi|psi> = 1.

If <A|B> = 0, as for a measurement, then |a|^2 = <psi| (|A><A|) |psi>.

Now if |A'> and |B'> are decoherent branches, the Born rule states that the 
probability for branch |A'> is |a|^2 = |a'|^2.  |a'|^2 is the squared norm of 
the branch, and is more instructive for the MWIer to talk about than |a|^2.

The norm of each branch world is no longer 1, as the collapse interpretation 
would have set it to.  Conceptually, in the MWI only the wavefunction of the 
entire multiverse should really be normalized to 1 (or to whatever).  But for 
convenience, whenever we start an experiment, we renormalize what we started 
with to 1 and throw out the rest of the branches from consideration.
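A concrete numeric instance of this bookkeeping, with amplitudes chosen purely 
for illustration:

```python
# Toy amplitudes: |psi> = 0.6|A> + 0.8|B> with <A|B> = 0.
a, b = 0.6, 0.8
print(a**2 + b**2)           # <psi|psi> = 1
# Unitary evolution preserves each branch's norm, so the decoherent branches
# |A'> and |B'> carry squared norms 0.36 and 0.64 - the Born-rule weights.
print(a**2, b**2)
# Starting a follow-up experiment "inside" the A' branch, we renormalize it;
# if that experiment splits 30/70, the unconditional squared norms of the
# grandchild branches still add up consistently:
c, d = 0.3 ** 0.5, 0.7 ** 0.5
print((a * c)**2, (a * d)**2)   # 0.108 and 0.252, summing back to 0.36
```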

Incidentally, I think that could be the reason QM is linear: Maybe the real 
physics is not linear, but since the amplitude of each branch is so small (the 
average of the squared norms is decreasing with time as the number of branches 
increases), the higher order terms quickly became negligible.




  





Re: adult vs. child

2009-02-08 Thread Jack Mallah

--- On Sun, 2/8/09, Stathis Papaioannou stath...@gmail.com wrote:
  Suppose you differentiate into N states, then on
 average each has 1/N of your original measure.  I guess
 that's why you think the measure decreases.  But the sum
 of the measures is N/N of the original.
 
 I still find this confusing. Your argument seems to be that you won't live to 
 1000 because the measure of 1000 year old versions of you in the multiverse 
 is very small - the total consciousness across the multiverse is much less 
 for 1000 year olds than 30 year olds. But by an analogous argument, the 
 measure of 4 year old OM's is higher than that of 30 year old OM's, since you 
 might die between age 4 and 30.
 But here you are, an adult rather than a child.

You might die between 4 and 30, but the chance is fairly small, let's say 10% 
for the sake of argument.  So, if we just consider these two ages, the 
effective probability of being 30 would be a little less than that of being 4 - 
not enough less to draw any conclusions from.
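As a rough illustration, using the 10% mortality figure assumed above and 
considering only these two ages:

```python
# 10% of the measure is lost to deaths between ages 4 and 30 (toy figure).
measure_4, measure_30 = 1.0, 0.9
total = measure_4 + measure_30
print(measure_4 / total, measure_30 / total)   # ~0.53 vs ~0.47
```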

The period of adulthood is longer than that of childhood so actually you are 
more likely to be an adult.  How likely?  Just look at a cross section of the 
population.  Some children, more adults, basically no super-old folks.

 Should you feel your consciousness more thinly spread or something?

No, measure affects how common an observation is, not what it feels like.




  





briefly wading back into the fray

2009-02-07 Thread Jack Mallah

--- On Fri, 2/6/09, russell standish li...@hpcoders.com.au wrote:
 So sorry Jacques - you need to do better. I'm sure you can!

Russell, I expected there might be some discussion of my latest eprint on this 
list.  That's why I'm here now - to see if there are any clarifications I 
should make in it.  I intend to make it better - and perhaps I'll have you guys 
to thank!

Don't expect me to stick around.  I see the list hasn't changed much - Bruno is 
still pushing his crackpot UDA.  I could tell you what's wrong with his MGA, 
but I'm here to deal with the QS paper first.

 http://arxiv.org/ftp/arxiv/papers/0902/0902.0187.pdf

 I mentioned the interesting comment on how we should expect to find ourselves 
 a Boltzmann brain shortly after the big bang, but there was no follow up to 
 this. I have no idea how he came up with that notion.

I wrote
If one denies that the amount of “a person’s” consciousness can change as a 
function of time after it begins to exist and as long as there is at least some 
of it left, then in the quantum MWI, since there deterministically is some 
slight amplitude that any given particle configuration (such as that of a 
person’s brain) exists even shortly after the Big Bang, there would again be no 
reason to expect that a typical person would be the result of normal 
evolutionary processes – you would have been ‘born’ way back then.

Seems pretty straightforward to me:
1. Initially, before evolution occurred, a typical Boltzmann brain (BB) had 
about the same measure as a brain which was like what we consider a normal 
person's (an atypical BB).
2. The typical BB's all together vastly outnumbered the atypical ones, so they 
had much more total measure.
3. We are assuming here that a person's measure can't change as a function of 
time.
4. Therefore the initial measure advantage of the typical BB's would hold for 
all time.

Perhaps I should spell out the steps like that in the paper, but I thought it 
was self-explanatory already.

 His discussion of the Born rule is incorrect. The probability given by
 the Born rule is not the square of the state vector

Russell, Jesse Mazer has already pointed out that it is your discussion of my 
discussion of it that is incorrect.  It's true that people use various 
terminology (maybe I should have said squared norm instead of squared 
amplitude) and I was trying to keep technicalities to a minimum.  See
http://en.wikipedia.org/wiki/Probability_amplitude

 After observation, the state vector describing the new will be
 proportional to the eigenvector corresponding the measured eigenvalue,
 but nothing in QM says anything about its amplitude. Indeed it is
 conventional to normalise the resulting state vector

That only makes sense in a collapse interpretation (or for practical 
convenience).  My guess is you looked up the Born Rule in some textbook and 
naturally it did not have an MWI perspective.

 What I think he is trying to discuss, somewhat clumsily, in the
 section on measure, is the ASSA notion of a unique well-defined
 measure for all observer moments.

The charge of 'clumsiness' is too vague for me to do anything about, so perhaps 
you could be more specific.  As for self-sampling, I didn't want to use that 
term because it can create the confusion that something random is really going 
on.  Instead I covered the Bayesian issues in my sections on the Reflection 
Argument and Theory Confirmation.

 He goes on to mention rather briefly in passing his doomsday style
 argument against QI, but not in detail.

I think the argument is presented in full.  What part is missing?

 Which is just as well, as that argument predicts that we should be neonatal 
 infants! 

I remembered that odd confusion of yours has been discussed on the list before, 
so I Googled it.  I found a 2003 post by Saibal Mitra that covers it.  I think 
I must have posted about it too, in the old days.

http://www.mail-archive.com/everything-list@eskimo.com/msg04697.html

 ... once you take into account the possibility of dying then you will see a 
 decrease. But ignoring that, the measure should be conserved. The measure for 
 being in a particular state at age 30 should be much smaller than the measure 
 for being in a particular state at age 4, but after summation over all 
 possible states you can be in, you should find that the total measure is 
 conserved.

Suppose you differentiate into N states, then on average each has 1/N of your 
original measure.  I guess that's why you think the measure decreases.  But the 
sum of the measures is N/N of the original.

This is trivially obvious so I saw no reason to mention it explicitly in the 
paper.  If there are people other than Russell with the same confusion, then I 
may add it in.

 He also mentions Tegmark's amoeba croaks argument, which is not
 actually an argument against QI, but rather a discussion of
 what QI might actually mean.

I quoted Tegmark verbatim.  He says my brain cells will gradually give out