conservation laws
http://www.arxiv.org/abs/quant-ph/0207071 --- Hal Finney [EMAIL PROTECTED] wrote: This then raises the question of whether a particular physical system (like a computer) is implementing a particular computation. Hopefully I will be able to write my new paper on this soon, which I think basically solves it. It means that we cannot view causality as just an artifact of our perceptions; it must be a true element of reality. Sure. (Or more precisely, laws must exist, not just functions.) - - - - - - - Jacques Mallah ([EMAIL PROTECTED]) Physicist / Many Worlder / Devil's Advocate I know what no one else knows - 'Runaway Train', Soul Asylum My URL: http://hammer.prohosting.com/~mathmind/
Re: Who is the enemy?
From: Marchal [EMAIL PROTECTED] Jacques Mallah wrote: (I'm currently in North Dakota, but have lived in NYC most of my life. I did not know anyone who was in the WTC.) I told you my relief, but I begin to doubt ! What do you mean by that? Recently, of course, I have been more concerned with the destruction caused in NYC by the advocates of suicide and believers in immortality. I understand your concern with NYC. I share with you the concern of those terrible and cruel Sept. 11 events. Now a peculiarity of this war consists in figuring out who is the enemy, exactly. It looks like you have solved that problem too. The enemy are the believers in immortality, the religious people !?! I did mention that they were also advocates of suicide. I am fearing amalgamations, like the amalgamations between Muslims and terrorists (to name one which has been done by some). But you are the champion: the enemy are all religious people. The war between atheism and religion !?! Perhaps I should tell you what are, according to G*, the canonical enemies of the sound universal machine. Bruno, you have an amazing ability to misunderstand what I say. I used to think that the problem was that many of the posts on this list concern arcane philosophical and technical points, so that misunderstanding was understandable. By now I know better. I could say Coke is better than Pepsi, and you would interpret that to mean that I don't know they are both colas. Further, you would believe that the only way to illustrate the relationship between the two drinks is by analogy to G and G*. The sound machine is maximally humble, she is agnostic on both her own consistency and her own inconsistency. Does that imply it is agnostic on any question? One of these days, I'll have to check out what kind of analogy you are making between Godel's theorem and belief systems. Right now I doubt there's much to it. A couple more points. 
When I say X is true you can assume I mean I believe X is very likely to be true, so my Bayesian probability of (not X) is so low it is best to neglect it. If I say, for example, "This is a chair", that is what I mean. Also, if humans have properties that are not shared by your consistent machine model, then it is not the humans' fault. It is not their job to describe your model. It means your model is faulty. The universal sound machine is forever undecided about any of its possible ultimate worldviews and, by doubting, never imposes its religion or worldview on different machines. Why not? Even if I'm not sure of X, I might still want you to believe X. Or, equivalently, I might want your Bayesian probability for X to be high even if mine is not. (Not that I'm that sort though, but some people are.) Today I guess we have still the choice between a war between moderates and fanatics and a war between fanatics and fanatics. It's going to be a war between ordinary people vs. evil. Personally I am not too concerned right now with the philosophical differences among the ordinary people, be they religious or intelligent. This will be serious, I fear. In the second case we lose the war at the start, isn't it? Do you agree with this last statement? Or are you really, Mister the Devil's Advocate, a fanatical atheist? I'm an atheist, and have no doubts of any significance about it. I do believe that other people should be atheists too, and that on the whole religion is an evil. Of the major religions I would say that Islam and Christianity are the worst, but the main factor is how seriously the believers take it and how radical they are in interpretation. But again, for now I am putting disagreements among non-evil people on the back burner. 
Re: doomsday argument
I will eventually get around to answering some of the other recent posts on this list, none of which contain any really new or interesting ideas BTW. Recently, of course, I have been more concerned with the destruction caused in NYC by the advocates of suicide and believers in immortality. (I'm currently in North Dakota, but have lived in NYC most of my life. I did not know anyone who was in the WTC.) From: Saibal Mitra [EMAIL PROTECTED] Bruno wrote: Charles wrote: (BTW, would I be right in thinking that, applying the SSA to a person who finds himself to be 1 year old, the chances that he'll live to be 80 is 1/80?) That's basically right in a special case: If a genius baby is abandoned on a deserted island, and figures out how to feed himself, use Bayesian reasoning, find shelter, and other such basics, then indeed (since he has no other information) his best estimate of what his lifespan will be is about his current age. There is no way he could do better at that point, especially not by refusing to make any guess! Of course, if he found additional information, like seeing a bunch of natives of various sizes and noting that the small ones slowly got bigger, then he would be able to revise his prior and would come up with a better estimate for his expected lifespan. This argument (against Leslie's Bayesian Doomsday argument) has been It's by no means an argument against the Doomsday argument! It's complete baloney to call it that. I guess it's like saying the Chinese room is an argument against computationalism - in both cases, people just take a particular example - a perfectly normal example in which the idea under attack works fine, say "ain't that cwazy", and think they have said something interesting. They haven't. I think it is a good point against too quick use of Bayes in infinite or continuous contexts. For shame, Bruno. Not only is it a total non-point, it has nothing to do with whether the case is infinite or continuous. 
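The island estimate above can be sketched as a toy Monte Carlo. This is my own illustration, not from the original post: the lifespan range (0, 100) and the uniform-sampling assumption are both made up for the sketch.

```python
import random

# Toy model: total lifespans drawn uniformly from (0, 100), and the
# moment of self-observation uniform within each lifespan (the SSA).
def remaining_time_samples(n=100_000, seed=0):
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        lifespan = rng.uniform(0.0, 100.0)  # unknown total lifespan
        now = rng.uniform(0.0, lifespan)    # uniformly sampled moment
        samples.append((now, lifespan - now))
    return samples

# Among observers who find themselves aged about 20, the typical
# (median) remaining time is of the same order as the current age:
near_20 = sorted(rem for now, rem in remaining_time_samples() if 19 < now < 21)
print(near_20[len(near_20) // 2])
```

Analytically, with these made-up priors the median remaining lifespan for an observer aged 20 comes out near 20*sqrt(5) - 20, about 24.7, i.e. comparable to the current age, which is the point of the island example.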
An interesting article by Ken Olum can be obtained from: http://xxx.lanl.gov/abs/gr-qc/0009081 I read the abstract. It's just the same stupid stuff we've seen before. (Yawn.) And he's obviously no friend of the MWI. It says "Treating possible and actual observers alike also allows sensible anthropic predictions from quantum cosmology, which would otherwise depend on one's interpretation of quantum mechanics." In other words, he's trying to prevent the fact that the MWI is observationally supported by anthropic considerations from counting as evidence for the MWI. Possible observers can't use Bayesian reasoning unless they also happen to be actual. Hence the paper is 100% wrong. QED.
Re: Conventional QTI = False
From: Russell Standish [EMAIL PROTECTED] I suspect you are trying to find ways of making QTI compatible with Jacques' ASSA-based argument, when it is clear his argument fails completely. Not that the argument is unimportant, as the reasons for the failure are also interesting. What the hell are you babbling about?
Re: Conditional probability continuity of consciousness (was: Re: FIN Again)
From: Jesse Mazer [EMAIL PROTECTED] From: Jacques Mallah [EMAIL PROTECTED] "You" is just a matter of definition. As for the conditional effective probability of an observation with characteristics A given that it includes characteristics B, p(A|B), that is automatically defined as p(A|B) = M(A and B) / M(B). There is no room to have a rival relative conditional probability. (E.g. A = I think I'm in the USA at 12:00 today, B = I think I'm Bob.) Well, I hope you'd agree that which observer-moment I am right now is not a matter of definition, but a matter of fact. Depends what you mean by that ... My opinion is that the global measure on all observer-moments is not telling us something like the number of physical instantiations of each one, but rather the probability of *being* one particular observer-moment vs. some other one. No, if taken at face value that really doesn't make any sense at all. There is no randomness in the multiverse. On the other hand, it is proportional to the *effective* probability of being one. In this case, "effective" refers to the role it plays in Bayesian reasoning. The reason it plays that role is to maximize the fraction of people who, using Bayesian reasoning, guess well. By "people" here I mean what you would call instantiations of OM's. I would be interested to hear what you think the measure means, though, since my version seems to require first-person facts which are separate from third-person facts (i.e., which observer-moment *I* am). The measure is just the number of observer-moments (where I mean different people count as different people) that see that type of observation. It is really a measure on the characteristics of OM's, rather than on OM's, since each O-M is counted equally. # of O-M = # of observers * moments. In any case, I'm pretty sure there's room in a TOE for a conditional probability which would not be directly deducible from the global probability distribution. 
Suppose I have a large population of individuals, and I survey them on various personal characteristics, like height, IQ, age, etc. Using the survey results I can create a global probability function which tells me, for example, what the likelihood is that a random individual is more than 5 feet tall. But if I then want to find out the conditional probability that a given individual over 5 feet tall weighs more than 150 pounds, there is no way to deduce this directly given only the global probability distribution. Sure there is, as you go on to say ... In this example it may be that p(A|B) = M(A and B) / M(B), but the point is that M(A and B) cannot be found simply by knowing M(A) and M(B). Of course it can't, unless you know that A and B are independent. Why the heck would you even think of trying? The global measure is on the whole set of OM characteristics: M(...,a,b,c,d,...). To find M(A), you have to set a = A and sum over all possible values of b, c, d, etc. The global measure has all the information, so to actually use it you have to ignore most of that stuff by summing over irrelevant details. And a TOE could conceivably work other ways too. Suppose we have a large number of interconnected bodies of water, each flowing into one another at a constant rate so that the total amount of water in any part stays constant over time. In that case you could have something like a global measure which would tell you the probability that a randomly selected water molecule will be found in a given body of water at a given time, but also a kind of conditional probability that a water molecule currently in river A will later be found in any one of the various other rivers that river A branches into. This would approximate the idea that my consciousness is in some sense flowing between different experiences, splitting and merging as it goes. 
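A minimal sketch of the marginalization just described, using the survey example. The joint measure values below are made up for illustration; the point is that p(A|B) = M(A and B) / M(B) falls out of the global measure alone.

```python
# Hypothetical joint measure M over two binary characteristics:
# chars[0] = "over 5 feet tall", chars[1] = "over 150 pounds".
M = {
    (True,  True):  30.0,
    (True,  False): 20.0,
    (False, True):  10.0,
    (False, False): 40.0,
}

def marginal(pred):
    """M(A): sum the global measure over every characteristic tuple
    satisfying pred, i.e. sum over all the irrelevant details."""
    return sum(m for chars, m in M.items() if pred(chars))

def conditional(pred_a, pred_b):
    """p(A|B) = M(A and B) / M(B), the effective conditional
    probability the global measure automatically defines."""
    return marginal(lambda c: pred_a(c) and pred_b(c)) / marginal(pred_b)

# Probability of weighing over 150 lb, given over 5 ft tall:
print(conditional(lambda c: c[1], lambda c: c[0]))  # 30 / (30 + 20) = 0.6
```

Note that M(A) = 40 and M(B) = 50 alone do not determine M(A and B) = 30; the joint measure is needed, which is exactly the point made in the exchange.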
Just as the path of a given molecule is determined by the geographical relationships between the various bodies of water, so the path of my conscious experience might be determined by some measure of the continuity between different observer-moments...even though an observer-moment corresponding to my brain 5 seconds from now and another one corresponding to your own brain at this very moment might have equal *global* measure, I would presumably be much more likely to flow into a future observer-moment which is more similar to my current one. The appeal of that kind of model is based on the illusion that we can remember past experiences. We can't remember past experiences at all, actually. We only experience memory because of the _current_ way our brains are structured. It's possible to remember things that never happened, not just a la Total Recall but even in simple cases like swearing that you just parked in one place, but your car is on the other side of the parking lot. Eyewitness evidence is the least reliable form. Well, with actual mind-like
Bayes
just that you have to be careful not to jump the gun, not to incorporate information into your prior that should enter later in the form of observational evidence. In the FIN case, the prior becomes how likely you would think it is for the FIN to be true, a priori, as if you have not yet considered your age at all. Suppose you pick 0.5, which I think is ridiculously high, but just for the sake of argument. Then you can look at the conditional probability of seeing your observational evidence given each hypothesis. In this case, that's the conditional probability of being younger than some natural reference point - which (surprise!) you are. For the FIN that's almost zero, so the posterior probability for the FIN to be true is almost zero regardless of the prior you started with. Now, if I were really older than the natural reference points (such as being too old to calculate), then Bayes' theorem would no longer argue against the FIN. It's simply not true that Bayes is biased against the FIN. It's just that the actual, observed evidence is overwhelmingly unlikely if the FIN were true. Or more likely a beetle... Or even more likely a microbe (assuming microbes have observer moments). Now the reference class issue is actually an interesting question that I'll probably address a bit in reply to another person, but actually I don't think the solution should be mysterious. The procedure is likely to work because most people using it will get the right answer. So, creatures that can't use it don't count. You should assume, a priori, that you are of a type of creature that is able to use Bayesian reasoning. If everyone who can use it makes that assumption, then it will be reliable for the people that can use it. 
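The update described above can be made concrete with Bayes' theorem. Only the prior of 0.5 comes from the text; the likelihood value 1e-9 below is an arbitrary stand-in for "almost zero".

```python
def posterior_fin(prior_fin, p_evidence_given_fin, p_evidence_given_not_fin=1.0):
    """Posterior probability of the FIN after observing that you are
    younger than the natural reference point (plain Bayes' theorem)."""
    num = p_evidence_given_fin * prior_fin
    den = num + p_evidence_given_not_fin * (1.0 - prior_fin)
    return num / den

# Generous prior of 0.5; under the FIN, being younger than the
# reference point is nearly impossible (1e-9 is illustrative):
print(posterior_fin(0.5, 1e-9))  # ~1e-9: near zero regardless of the prior
```

Raising the prior barely helps: even posterior_fin(0.99, 1e-9) stays below 1e-7, which is the "regardless of the prior you started with" point.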
Re: plato
From: Brent Meeker [EMAIL PROTECTED] On 01-Sep-01, Jacques Mallah wrote: There is more than that in mathematics. Structures, for example. Anything that could be described mathematically, such as geometries, computations, and anything that could be a model of a (hypothetical) world. There's plenty of room for implementations there. Even structures in mathematics, e.g. an isosceles triangle, or a set of partial differential equations with boundary conditions are either different or they are identical (and hence indistinguishable). I don't see how they can be regarded as different implementations of the same structures or theorems. We only speak of different implementations because we can run the same program on two different computers, we can write down the same theorems on two different pieces of paper. In the Platonic realm two things that are mathematically identical are absolutely identical and hence are the same thing. There is plenty of room in Plato's kitchen, though. For example the structure that physicists think best describes our universe is in there. Different parts of that structure are not identical, but they can implement the same (type of) computation as other parts.
RE: FIN too
From: Charles Goodwin [EMAIL PROTECTED] Um, OK, I don't want to get into an infinite argument here. I guess we both understand the other's viewpoint. (For the record: I don't see any reason to accept QTI as correct, but think that *if* it is, it would fit in with the available (subjective) observational evidence - that being the point on which we differ. Um, no, I still don't understand your view. I think the point that Bayesian reasoning would work with 100% reliability, even though the FIN is technically compatible with the evidence, is perfectly clear. Any reason for disagreeing, I have no understanding of. It may help you to think of different moments of your life as being different observers (observer-moments). That's really just a matter of definition.
RE: FIN too
From: Charles Goodwin [EMAIL PROTECTED] [Jacques Mallah wrote] But there's one exception: your brain can only hold a limited amount of information. So it's possible to be too old to remember how old you are. *Only if you are that old, do you have a right to not reject FIN on these grounds.* Are you that old? Yeah, that's one of my objections to QTI. Although perhaps add-on memory chips will become available one day :-) OK. (And even if the chips become available, you'd probably only be able to add a finite # before collapsing into a black hole.) Right. Do you think you are in an infinitesimal fraction, or in a typical fraction? Infinitesimal, if QTI is correct, otherwise fairly typical. Assuming QTI is correct and ignoring any other objections to it, it's *possible* for me to be in an infinitesimal fraction - in fact it's necessary. Right - which is why Bayesian reasoning falsifies FIN, but only with 100% reliability as opposed to complete reliability. but according to QTI I *must* pass through a phase when I see the unlikely bits, no matter how unlikely it is that a typical moment will fall into that phase. Even if I later spend 99.999% of my observer moments seeing the stars going out one by one, there still has to be that starting point! Right, again, that's why the reliability is just 100%. My (ahem) point is, though, that none of us ARE at a typical point (again, assuming QTI). In fact we're in a very atypical point, just as the era of stars might be a very atypical point in the history of the universe - but it's a point we (or the universe) HAVE TO PASS THROUGH to reach more typical points (e.g. very old, no stars left...). Hence it's consistent with QTI that we find ourselves passing through this point... Right, consistent with it but only 0% of the time, hence the Bayesian argument is to put 0 credence in the FIN rather than strictly no credence. 
I'm not arguing for QTI here, but I do think that you can't argue from finding yourself at a particular point on your world-line to that world-line having finite length, because you are guaranteed to find yourself at that particular point at some (ah) point. Right, which is why I'm (now) careful not to make *that* argument by arbitrarily using one's current age to base a reference point on. (e.g. in my reply to Bruno.) Rather, I argue that from being at a point prior to some _natural reference point_ such as the "can calculate my age" criterion, one can conclude that one's world-line is finite. So I'm rejecting, not Bayesian logic per se, but the application of it to what (according to QTI) would be a very special (but still allowable) case. There are no grounds to reject it in this case, since it would be reliable almost all of the time. There's no difference between using a method because it works for most people vs. using a method because it works for me most of the time. At any given time, it works for most people, too. The basic problem is that we experience observer moments as a sequence. Hence we *must* experience the earlier moments before the later ones, and if we happen to come across QTI before we reach QTI-like observer moments then we might reject it for lack of (subjective) evidence. But that doesn't contradict QTI, which predicts that we have to pass through these earlier moments, and that we will observe everyone else doing so as well. I wish I could put that more clearly, or think of a decent analogy, but do you see what I mean? Our observations aren't actually *incompatible* with QTI, even if they do only cover an infinitesimal chunk of our total observer moments. Indeed so, I know only too well what you mean. This has come up more than once on the list. I hope you understand why I say it's irrelevant. _Just like_ in the A/B case, it would be wrong to not use Bayesian reasoning just because seeing A is, yes, compatible with both #1 and #2. 
Seeing A could even have been a way to confirm theory #2, if the rival theory #1 hadn't existed. The bottom line is that Bayesian reasoning usually works for most people.
Re: FIN insanity
From: Saibal Mitra [EMAIL PROTECTED] There are different versions of QTI (let's not call it FIN). I'm certainly not going to call it a theory. Doing so lends it an a priori aura of legitimacy. Words mean things, as Newt Gingrich once said in one of his smarter moments. The most reasonable one (my version, of course) takes into account the possibility that you find yourself alive somewhere else in the universe, without any memory of the atomic bomb that exploded. I totally ignore the possibility that one could survive an atomic bomb exploding above one's head. My version doesn't imply that your a priori expected lifetime should be infinite. Your version may not imply immortality, but I don't really see how it's different from other versions (and thus why it doesn't). I say: 1) If you are hurt in a car accident and the surgeon performs brain surgery and you recover fully, then you are the same person. OK, that's merely a matter of definition though. 2) You would also be the same person if the surgeon made a new brain identical to yours. I'm not sure what you mean here. The new brain would be the same as the old you, the old one would remain the same, the old one was destroyed, or what? 3) From 2) it follows that if your brain was first copied and then destroyed, you would become the copy. A matter of definition again, but let me point out something important. If your brain is copied, then there is a causal link between the old brain and any copies. Thus it's quite possible for an extended implementation of a computation to start out in the old brain and end up in the copy, without violating the requirement that implementations obey the proper direct causal laws. 4) From 3) you can thus conclude that you will always experience yourself being alive, because copies of you always exist. I don't see how 4 is supposed to follow from 3. In any case, it's certainly not true that copies of you always exist. 
Rather, people who are structurally identical do exist, but they are not copies as they are not causally linked. Even if they were linked in the past, they have diverged on the level of causal relationships between your brain parts vs. their brain parts. 5) It doesn't follow that you will experience surviving terrible accidents. If 4 were true, I don't see how 5 could be true.
Re: plato
From: Brent Meeker [EMAIL PROTECTED] [Jacques Mallah wrote] The basic computational explanation is not of the world - it's of conscious observations. As for the arena where things get implemented - that could either be a physical world, or it could be Plato's realm of math. That an implementation might be in another physical world I can understand. I don't see how an implementation can be in Plato's realm of mathematics. In mathematics there are axioms and theorems and proofs - none of these imply any occurrence in time. There is more than that in mathematics. Structures, for example. Anything that could be described mathematically, such as geometries, computations, and anything that could be a model of a (hypothetical) world. There's plenty of room for implementations there. You might be able to impose an order on theorems (a la Godel) and it might be possible to identify this with time (although I doubt this can work), I doubt it too, although there was some kind of paper suggesting that the strings of string theory are made of theorems. But I strongly doubt that one's on the right track. but even so it is just a single order that is implicit - there is no way to distinguish two different implementations of this order. I don't know about that, though. Probably there could be ways. But I haven't really studied the idea of building things out of theorems, since I think things are just structures, and structures are in Plato's kitchen already.
RE: FIN too
From: Charles Goodwin [EMAIL PROTECTED] you can't apply any sort of statistical argument to your own experience unless you assume that you're a typical observer. But if you do that you're just assuming the result you want. Not so. You don't assume you're typical exactly, just that you are more likely to be typical. You have no choice but to believe that, or else you reject basic Bayesian logic. My objections to the QTI are more along the lines of how the mechanism is supposed to work - why can't you experience your own death, or just stop having experiences altogether, in 99.9(etc)% of the universes that contain you? It's nice that you reject FIN! Of course, those who support it can give (and have given) no reason, since it's a nonsensical belief. From: Jacques Mallah [mailto:[EMAIL PROTECTED]] The problem is that the probability isn't 0% that you'd find yourself at your current age (according to the QTI - assume I put that after every sentence!). Because you HAVE to pass through your current age to reach QTI-type ages, the probability of finding yourself at your current age at some point is 100%. At some point, yes. At a typical point? 0%. Using your argument (assuming QTI...) then your chances of finding yourself at ANY age would be 0%. This implies to me that the SSA can't be used in this case, rather than that QTI *must* be wrong. Nope! It's just that with FIN, your expected age diverges. If you want to say that's impossible, fine with me. FIN is logically impossible for a sane person to believe! But there's one exception: your brain can only hold a limited amount of information. So it's possible to be too old to remember how old you are. *Only if you are that old, do you have a right to not reject FIN on these grounds.* Are you that old? (Of course, you must still reject it on other grounds!) 
After all whether QTI is correct or not, you can imagine that it is and see what the results would be; and one result is that you will find yourself (at some point) having any age from 0 to infinity, which is consistent with your current observations. Consistent with them, but not nearly as likely in the FIN case. Remember Bayes' theorem: the posterior favored hypothesis is the one that would be more likely to predict your observations. That's OK so far. And it turns out correctly for most cases (i.e. 99.(etc)% of observers WILL turn out to have ages of infinity (if QTI etc)). But an infinitesimal fraction won't - including everyone you observe around you (the multiverse is very very very (keep typing very til doomsday) big! (assuming MWI)). Right. Do you think you are in an infinitesimal fraction, or in a typical fraction? In the same way, the SSA helps you guess things. It's just a procedure to follow which usually helps the people that use it to make correct guesses. It doesn't seem to help in this case though. I don't need to guess my age, it's a given. Maybe the following example will help. Suppose there are two possibilities: 1. 90% of people see A, 10% see B 2. 10% of people see A, 90% see B You see A. But you want to know whether #1 or #2 is true. A priori, you feel that they are equally likely to be true. Should you throw up your hands simply because both #1 and #2 are both consistent with your observation? No. So use Bayes' theorem as follows: p(1|A) = [p(A|1) p_0(1)] / [p(A|1) p_0(1) + p(A|2) p_0(2)] = [ (.9) (.5) ] / [ (.9) (.5) + (.1) (.5) ] = .9 So you now think #1 is 90% likely to be true, if you use this procedure. So you will guess #1. OK, let's try and check to see if this procedure is good. If #1 is true then 90% of people who use the procedure guess #1 (right). If #2 is true then 10% of people who use the procedure guess #1 (wrong). Well I'd say that's pretty good, and also the best you can do. I gotta go. 
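The arithmetic in the A/B example can be checked in a few lines. This is a direct transcription of the formula above; nothing is assumed beyond it.

```python
def p_1_given_A(p_A_1=0.9, p_A_2=0.1, prior_1=0.5, prior_2=0.5):
    """p(1|A) = p(A|1) p_0(1) / [p(A|1) p_0(1) + p(A|2) p_0(2)]."""
    num = p_A_1 * prior_1
    return num / (num + p_A_2 * prior_2)

print(p_1_given_A())  # 0.45 / (0.45 + 0.05) = 0.9, as in the text

# Reliability check, as in the text: if #1 is true, 90% of people see A
# and guess #1 (right); if #2 is true, only 10% see A and guess #1 (wrong).
```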
Re: FIN Again (was: Re: James Higgo)
From: Jesse Mazer [EMAIL PROTECTED] I don't understand your objection. It seems to me that it is perfectly coherent to imagine a TOE which includes both a universal objective measure on the set of all observer-moments and also a relative conditional probability which tells me what the probability is I'll have experience B in the future if I'm having experience A right now. "You" is just a matter of definition. As for the conditional effective probability of an observation with characteristics A given that it includes characteristics B, p(A|B), that is automatically defined as p(A|B) = M(A and B) / M(B). There is no room to have a rival relative conditional probability. (E.g. A = I think I'm in the USA at 12:00 today, B = I think I'm Bob.) In statistics we have both absolute and conditional probability, so what's wrong with having the same thing in a TOE? In fact there is no choice but to have conditional probability - as long as it's the one that the absolute measure distribution automatically defines. I suppose one objection might be that once we have an objective measure, we understand everything we need to know about why I find myself having the types of experiences I do Indeed so. and that defining an additional conditional probability measure on the set of all observer-moments would be purely epiphenomenal and inelegant. Is that what your problem with the idea is? It's not just inelegant. It's impossible, if by "additional" you mean one that's not the automatic one. self-sampling assumption--what does it mean to say that I should reason as if I had an equal probability of being any one of all possible observer-moments? It means - and I admit it does take a little thought here - _I want to follow a guessing procedure that, in general, maximizes the fraction of those people (who use that procedure) who get the right guess_. (Why would I want a more error-prone method?) 
So I use Bayesian reasoning with the best prior available, the uniform one on observer-moments, which maximizes the fraction of observer-moments who guess right. No soul-hopping in that reasoning, I assure you. If I am about to step into a machine that will replicate one copy of me in heaven and one copy in hell, then as I step into the imaging chamber I will be in suspense about where I will find myself a moment from now, and if the conditional probability of each possible future observer-moment is 50% given my current observer-moment, then I will interpret that as a 50/50 chance that I'm about to experience torture or bliss. That depends on the definition of "you". In any case, one copy will be happy (the one partying with the succubi in hell) and the other will be sad (the one stuck hanging out with Christians). So your utility function should be about even. I assume you'd care about both future copies at that point. Surely you agree that there is nothing *mathematically* incoherent about defining both absolute and conditional probability measures on the set of all observer-moments. So what's your basis for calling the idea crazy? I've explained that in other posts, but as you see, the idea is indeed mathematically incoherent - unless you just mean the conditional effective probability which a measure distribution defines by definition. And _that_ one, of course, leads to a finite expectation value for one's observed age (that is, no immortality).
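The automatic conditional probability p(A|B) = M(A and B) / M(B) described above can be made concrete with a toy measure distribution over observer-moments. This sketch is not from the original posts; the particular moments and measure values are invented purely for illustration:

```python
# Toy (unnormalized) measure distribution over observer-moments. Each
# observer-moment is modeled as a set of characteristics; the weights
# are invented for illustration.
measure = {
    frozenset({"thinks he's Bob", "thinks he's in the USA"}): 3.0,
    frozenset({"thinks he's Bob", "thinks he's in France"}): 1.0,
    frozenset({"thinks she's Alice", "thinks she's in the USA"}): 2.0,
}

def M(*traits):
    """Total measure of observer-moments having all the given traits."""
    return sum(m for om, m in measure.items() if set(traits) <= om)

def p(a, b):
    """Conditional effective probability p(A|B) = M(A and B) / M(B)."""
    return M(a, b) / M(b)

# Given that the moment includes "thinks he's Bob" (B), the effective
# probability that it also includes "thinks he's in the USA" (A):
print(p("thinks he's in the USA", "thinks he's Bob"))  # 3.0 / 4.0 = 0.75
```

The point of the sketch is that once the measure M is fixed, p(A|B) is fully determined; there is no freedom left to define a rival conditional probability.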
Re: FIN
From: Saibal Mitra [EMAIL PROTECTED] You wrote earlier that consciousness can't be transferred to a copy. But consciousness isn't transferred, the copies had the same consciousness already because they were identical. No, they weren't _identical_. They were different people, who happened to have the same type of experiences and the same brain design. I would say: I exist because somewhere I am computed. You appear to say that (forgive me if I am wrong) I must identify myself with one computation. Even an identical computation performed somewhere else will have a different identity. OK, although I would say to be more precise that you should identify yourself with an implementation of a computation. A computation must be performed (implemented) for it to give rise to consciousness. At this point I would like to reiterate something I have stated in the past. We all agree, I think, that not all computations have the same measure associated with them. But what you don't seem to realize is the implication of that fact: the mere existence of the abstract computation is not what is associated with the measure of consciousness, so the number of implementations must be what determines the measure. That's why leaping is a necessary part of the Fallacious Immortality Nonsense (FIN). The mind must be associated with an implementation, and if it terminates, that measure is then said to (in effect) leap to the remaining implementations. (Although, as I have also said, in that case the remaining implementations would really be of a different computation.) This also means that knowing the current situation would not be enough, for one who believes the FIN, to in principle determine the measure distribution either at that time or any time in the future. In other words, the FIN requires mind-like hidden variables. The brain is constantly changing due to various processes. The typical timescales of these processes are about a millisecond. True.
FIN thus predicts that I shouldn't find myself alive after a few milliseconds. I'm guessing here that you misunderstood what I meant by FIN. By FIN I mean that belief which some have called QTI. So I guess you are attacking my position, but I don't see on what grounds. Suppose that your current implementation is indeed localized in time, and that at other moments you are considered to be a different person. (It's really just a matter of definition, especially if input is allowed.) So what? All that means is that the old you sees only that moment. Now there is a new you seeing this moment. So if you want to just define yourself to be a one-moment guy, then indeed you are no longer with the living. By the same token, there would be a new guy in your body and (hypothetically, not that you would) he'd be the one typing nonsense like "I'm still here".
RE: FIN too
From: Charles Goodwin [EMAIL PROTECTED] Hi, I have just joined this list after seeing it mentioned on the Fabric of Reality list Hi. BTW, what's up on the FOR list? Ever see anything interesting there? I thought the book sucked except for chapter 2 (I think; the one explaining the MWI), but at least there are some MWIers on that list I would think. Would someone mind briefly explaining what FIN is (or at least what the letters stand for)? Is it some version of QTI (Quantum Theory of Immortality)? Yes, any version of QTI is FIN. Why should a typical observer find himself to be older than the apparent lifetime of his species? I guess you mean "assuming FIN, why ...". So *very* few observers are going to notice the TU versions of anyone else. So the only way to actually experience this phenomenon is to live to be that old yourself. Right ... I must ask, though, what makes you think that a typical observer ISN'T much older than the lifetime of his species would allow? I'm not so old, but if FIN were true, the effective chance of me being old would be 100%. So by Bayesian reasoning, it must be false. Given that you can't observe anyone but yourself in this state (or it's TU that you ever will) (and I'm assuming you haven't reached 120 yet), you can't really use a self-sampling argument on this, surely? On the contrary, you do use an SSA. After all, you will never (for any question) have more than the one data point for use in the SSA. But with a probability of 0% or 100%, that's plenty! It means - and I admit it does take a little thought here - _I want to follow a guessing procedure that, in general, maximizes the fraction of those people (who use that procedure) who get the right guess_. (Why would I want a more error-prone method?) So I use Bayesian reasoning with the best prior available, the uniform one on observer-moments, which maximizes the fraction of observer-moments who guess right. No soul-hopping in that reasoning, I assure you.
I'm sorry, I still don't see how that applies to me. If I know which observer moments I'm in (e.g. I know how old I am) why should I reason as though I don't? Because you want to know things, don't you? It's no different from any Bayesian reasoning, in that regard. Suppose you know that you just flipped a coin 10 times in a row, and it landed on heads all ten times. Now you can apply Bayesian reasoning to guess whether it is a 2-headed coin, or a regular coin. How to do it?
p(2-headed|got 10 heads) = [p(got 10 heads|2-headed) p_0(2-headed)] / N
p(regular|got 10 heads) = [p(got 10 heads|regular) p_0(regular)] / N
where N = p(got 10 heads) is the normalization factor so that these two conditional probabilities sum to 1 (they are the only possibilities). That's a standard use of Bayes' theorem. But - whoa there - what's the p(got 10 heads) and the like? You already _know_ you got 10 heads, so why not just set p(got 10 heads) to 1? Obviously, you consider the counterfactual case of (didn't get 10 heads) for a reason - that is, to help you guess something about the coin. In the same way, the SSA helps you guess things. It's just a procedure to follow which usually helps the people that use it to make correct guesses.
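The coin calculation above can be checked numerically. A minimal sketch, assuming a 50/50 prior over the two hypotheses (the posts do not specify a prior, so that figure is an assumption):

```python
# Bayes' theorem for the two-headed-coin guess. The 50/50 prior p_0 over
# the two hypotheses is an assumption; the likelihoods follow directly.
p0_two_headed = 0.5
p0_regular = 0.5

# Likelihood of observing 10 heads under each hypothesis.
like_two_headed = 1.0     # a 2-headed coin always lands heads
like_regular = 0.5 ** 10  # a regular coin: (1/2)^10

# N = p(got 10 heads), the normalization factor over both hypotheses.
N = like_two_headed * p0_two_headed + like_regular * p0_regular

p_two_headed = like_two_headed * p0_two_headed / N
p_regular = like_regular * p0_regular / N

print(p_two_headed)              # ~0.999: strongly favors the 2-headed coin
print(p_two_headed + p_regular)  # the two posteriors sum to 1
```

With this prior the posterior for the 2-headed coin is 1024/1025, which illustrates the point in the post: the counterfactual case (didn't get 10 heads) enters only through N, yet it is exactly what lets the observation teach you something about the coin.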
Re: FIN Again (was: Re: James Higgo)
From: [EMAIL PROTECTED] Jacques Mallah writes: The problem comes when some people consider death in this context. I'll try to explain the insane view on this, but since I am not myself insane I will probably not do so to the satisfaction of those that are. I have mixed feelings about this line of reasoning, but I can offer some arguments in favor of it. I guess you mean in favor of FIN. How about against it too, since you have mixed feelings? The insane view however holds that the mind of the killed twin somehow leaps into the surviving twin at the moment he would have been killed. Thus, except for the effect on other people who might have known the twins, the apparent death is of no consequence. It's not that the mind leaps. That would imply that minds have location, wouldn't it? And spatial limits? But that notion doesn't work well. Mind is not something that is localized in the universe in the way that physical objects are. You can't pin down the location of a mind. Where in our brains is mind located? In the glial cells? In the neurons? The whole neuron, or just the synapse? It doesn't make sense to imagine that you can assign a numerical value to each point in the brain which represents its degree of mind-ness. Location is not a property of mind. A computationalist would say that the mind is due to the functioning of the brain, and thus is located where the parts that function are. But this is totally irrelevant. Suffice it to say that a mind is associated with that brain, while a different mind would be associated with a different brain. Hence we cannot speak of minds leaping. I remind you that _I_ never said they leap, could leap, or that such a thing is logically possible at all. I said only that the insane hold such a view, which many posters on this list do. Whatever they may mean by what they say, the effect is best described as saying they think minds leap. 
It makes more sense to think of mind as a relational phenomenon, like "greater than" or "next to", but enormously more complicated. In that sense, if there are two identical brains, then they both exhibit the same relational properties. That means that the mind is the same in both brains. It's not that there are two minds each located in a brain, but rather that all copies of that brain implement the mind. Nope. That makes no (0) sense at all. Sure, you could _define_ a mind to be some computation, as you seem to want, rather than being a specific implementation of that computation. But that's a rather silly definition, since it's a specific implementation that would be associated with conscious thinking of a particular brain, and thus with measure. Of course, even a twin who dies could never have the same computation as one that lived, since HALT is obviously a significant difference in the computation. Further support for this model can be found by considering things from the point of view of that mind. Let it consider the question, which brain am I in at this time? Which location in the universe do I occupy? There is no way for the mind to give a meaningful, unique response to this question. There's no way to know for sure, you mean. OK, I agree with that. You can still guess with high confidence. In any case, there's still a fact of the matter, regardless of whether you know that fact. Any answer will be both wrong and right. That makes no sense. The answer will be either wrong XOR right, for a particular mind; but you can't know for sure which of those minds is you. Hence you use indexical Bayesian reasoning or SSA. In this model, if the number of brains increases or decreases, the mind will not notice, it will not feel a change. Surviving minds won't notice a change. Dead minds won't feel a thing, which is the reason death sucks. No introspection will reveal the number of implementations of itself that exist in a universe or a multiverse.
True, although with the SSA you can make some reasonable guesses. This is only dangerous if the belief is wrong, of course. The contrary belief could be said to be dangerous in its way, if it were wrong as well. (For example, it might lead to an urgent desire to build copies.) Even supposing the logical belief to be wrong - what's so dangerous about building copies? In any case, that would require a lot more tech than we have. I have repeatedly pointed out the obvious consequence that if that were true, then a typical observer would find himself to be much older than the apparent lifetime of his species would allow; the fact that you do not find yourself so old gives their hypothesis a probability of about 0 that it is the truth. However, they hold fast to their incomprehensible beliefs. This is a different argument and has nothing to do with the idea of leaping, which is mostly what I want to take issue with. Sure it has to do with it, because it proves
Re: imps.
From: Marchal [EMAIL PROTECTED] Jacques Mallah wrote It doesn't matter, of course. First, the measure of James-like beings (summing over time) is now known to be smaller than we thought it would be; that's true no matter what. Sometimes you speak as if you *have* solved your implementation problem. I'm not sure what you mean. Anyway, I might have. My proposal is on my web page in the Planck Symposium paper http://hammer.prohosting.com/~mathmind/100y.htm How could you know now? What, that his measure was reduced? Surely you heard that news. To be precise, if the fact that he might die this way was something we anticipated, then indeed his total measure would be reduced by the event, but our knowledge of that reduction would not depend on our being in the world where he died. So we would not have extra reason to feel bad about the reduction. But this news came as more of a surprise, wouldn't you say? With the comp hyp., or just the QM hyp., (and this in a completely provable way taking just Everett memory machines in the non-relativistic setting), you should not sum up on time, but you must sum up on *all* consistent neighborhoods. (Time and space emerge on that eventually through comp). I don't know what you mean. A couple of points though: 1) You know that as I've said before, Everett's memory-based formulation is not computationalist and IMO he would have readily admitted that it was only meant as a preliminary ansatz to start studying the MWI. 2) The nonrelativistic setting surely includes space and time a priori. You really speak like a quantum Bohmian, discarding quasi-magically all computational histories but one. I never did anything like that, and have no clue why you think I did. Decoherence explains only why those worlds get rather quickly inaccessible for most of *each* of us, Obviously. Any preschool kid should know that, so why do you bring it up now? Why do you put "Many Worlder" in your signature?
I put "Many Worlder" to let people know a little about my beliefs. The James Higgos of the other worlds are zombies or what? Eh? Why do you ask? How do you distinguish yourself from numerically identical counterparts? Depends what you mean. Are the numbers 5 and 6 identical? I don't think so. But look on a number line at the internal structure of these points. They look the same. They're just located in different spots, in this case in Plato's funhouse. Same with two different implementations of the same computation, whether in Plato's funhouse or a physical world. If you disagree and say they are the same, give me $6 and I'll give you $5.
Re: FIN
From: Saibal Mitra [EMAIL PROTECTED] Jacques Mallah wrote: "I have repeatedly pointed out the obvious consequence that if that were true, then a typical observer would find himself to be much older than the apparent lifetime of his species would allow; the fact that you do not find yourself so old gives their hypothesis a probability of about 0 that it is the truth. However, they hold fast to their incomprehensible beliefs." According to FIN, however, the probability of being alive at all is almost zero, which contradicts our experience of being alive. Whatchya mean? I wouldn't mind acquiring a new argument against FIN to add to the ones I give, but your statement doesn't appear to make any sense.
Re: UDA last question (was UDA step 9 10).
From: Jesse Mazer [EMAIL PROTECTED] Personally, I've never been able to understand the attitude of the anti-measurists--how can anything make sense without one? What possible reason would I have to believe that the future will resemble the past in any way whatsoever? After all, there are an infinite number of possible universes that resemble the one I've experienced up to the present moment, and then suddenly transform into a swarm of white rabbits--should I be bracing myself for such a possibility at every moment? Without some kind of measure on the Plenitude we cannot even talk about the probability that the laws of physics will continue to operate normally a minute from now...you can't really talk about anything but the present moment, in fact. You're right, almost. But what _about_ the present? Without an _objective_ measure on possible experiences, there would be no reason for even the present moment to be as wabbit-free as it is! (e.g. The present moment suggests evolution, etc.) And you can never see the future (maybe you _will_, depending on the definition of "you", but you never have yet!), so clearly it is only the present that supplies the info you have to make such Bayesian deductions. In fact it's simpler to define "you" as just existing now. (Which is not to say your utility function shouldn't care about future guys.) It should be apparent to all that an objective measure is needed on observer-moments. I do not call this a "3rd person" measure because that would falsely imply that some other type of measure is a logical possibility.
lowly complexity
From: Joel Dobrzelewski [EMAIL PROTECTED] All of this may seem academic really, since we all know that any universal computer is as good as any other. [...] But there MAY be some reasons to want to know exactly which algorithm is really being run on the bottom... You guys are going about it all wrong. Sure, some computers seem simpler than others. But there's no one way to pick the simplest. So much for zero information. I have tolerated the talk of UDs because it's about equivalent to saying that all programs should be run. But I have always advocated the latter. The set of all is the simplest possibility, rather than choosing one simple program. (Joel's 3-dimensional cellular automata are particularly ridiculous to me. How could he think the 3-d is not anthropically chosen?) I'll expand that to include hardware (i.e. the bottom algorithm). All possible algorithms should exist: TMs, CAs, etc. The typical machine on the bottom should therefore be of huge dimension, with a huge number of states, etc. As usual, most programs are junk but some will implement universes, or in this case, I should say that most parts of a typical computer are junk but some are useful. I don't think this solves the measure problem (which is the real issue), but is it possible that the measure distribution of computations implemented (see the Planck Symposium paper on my page if you don't know what I mean by that) by these infinitely varied bottom machines is somehow independent of the way the machines themselves are described?
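The "run all programs" idea discussed above (the UD, or universal dovetailer) is usually realized by dovetailing: interleaving steps of every program in an enumeration so that each gets unboundedly many steps, without any single non-halting program blocking the rest. A minimal sketch, where simple counter generators stand in for an assumed enumeration of TMs or CAs:

```python
# Dovetailing sketch: in round r, advance programs 0..r by one step each,
# so every program eventually runs forever even though none halts.
# The "programs" here are toy generators standing in for an enumeration
# of TMs/CAs; any real enumeration would slot in the same way.
def make_counter(start):
    def gen():
        n = start
        while True:
            yield n
            n += 1
    return gen()

def dovetail(programs, rounds):
    """Interleave steps: round r advances programs 0..r by one step."""
    trace = []
    for r in range(rounds):
        for i, prog in enumerate(programs[: r + 1]):
            trace.append((i, next(prog)))  # (program index, step output)
    return trace

programs = [make_counter(i) for i in range(5)]
trace = dovetail(programs, 3)
# Round 0 runs program 0; round 1 runs programs 0-1; round 2 runs 0-2.
print(trace)
```

After r rounds, program i has received r - i steps, so every program's step count grows without bound as the dovetailer runs; that is the sense in which "all programs are run" by a single process.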
Re: M. alors
From: Marchal [EMAIL PROTECTED] Jacques Mallah wrote: I'm not sure what you mean by much of the above. e.g. 3rd person, Just take 3rd person as objective like in Everett thesis, and 1st person as Everett subjective. Everett is discussed below; he does _not_ believe in your "1st person", merely renamed "subjective". BM: Implementation must be some sort of relative computations. I don't know what you mean by the above But what do *you* mean by implementation? A relationship between a computation and another mathematical structure, in which the properties of the latter are similar in the right ways (presumably via a mapping satisfying restrictions) to those of the former, such that for many practical purposes (such as consciousness) we can regard the existence of the latter as having the same effect as the existence of the former, in addition to any other possible effects such as those due to considering more than one implementation mapping. You know where to find my thoughts on how to restrict the mappings. OK. What I was trying to say is that I take arithmetical truth as something independent of myself, although I am in there with all the rest. Sounds like Platonism, but doesn't define existence. It is not always obvious whether a given description is consistent or not, and one way discussion of such a question is often framed is in terms of whether the hypothetical structure that would be described 'exists'. (e.g. Does a power series expansion for f(x) exist?) OK. What's the point? What does exist mean in the above? Does it suggest a weak form of existence? That's the point. I don't take a word like Earth as granted in a metaphysical argument. But I agree that the ten thousand earths and their variants exist in Plato's heaven. Why is 10,000 such an important number for you? Surely 666 is the only # that holds the key to existence. (Just kidding!) Except that U is undefined. All Universes are indeed in Plato's heaven, but none are material.
They appear material to their inhabitants, that's all. Suppose instead they are all material by hypothetical logical necessity. Tell me what would be different. (i.e. define material.) Unlike the classical position, this requires a unique and inevitable *measure distribution* on the set of mathematical structures. No. Measures are relevant for inhabitants relative to their computational histories. And that needs trans-universe reasoning. (A good thing, because without that there would be no hope for a computationalist explanation of quantum interferences). I guess by "relatively" you mean there is no objective measure distribution. Sorry, that won't cut it. Without an objective measure distribution you can say nothing about what a typical observer would see. (For example, you can't explain Darwin.) We would certainly like to be able to do that. Unfortunately, as I have argued in other posts, the AUH is not falsifiable, so even if we prove that F=ma is very unlikely for us to see with the AUH, we are still stuck with the AUH. But it would be very unsatisfactory, of course. We need to derive F=ma not to prove the AUH, but to make ourselves happy with it. The All Computation + comp *is* quasi-falsifiable. Mmmh... How? (I'm not sure what you mean by comp though, since you like to talk about whether "you" survive substitutions but did not define "you".) For me the key word is appearances. I am sure Everett did NOT believe in any discontinuous or probabilistic elements in his interpretation. First I agree with Bryce DeWitt that Everett did not propose an interpretation of QM. Everett proposes only a new formulation which is essentially SE+COMP. Then he explicitly derives the wave reduction as a subjective process, from an analysis of the memories of normal (Gaussian) classical machines, embedded in a universal quantum computation. (I show the last is redundant). You are throwing out undefined terms left and right.
Everett never defined machines, which is why his formulation leads to paradoxes like a stationary state being able to be conscious. My interpretation of Everett is proved by his statement that "The behavior of these observers shall always be treated within the framework of wave mechanics." Exactly. Behavior is typically objective/third person. I agree with this reading of Everett. I do not mean just externally observable behavior, BTW. This indicates that he did NOT intend to introduce any mind-like hidden variables Obviously. - and thus, no 1st person merde about consciousness flowing from one observer-moment to another. The flowing of consciousness is part of the appearances. I still don't figure out why you want to dismiss those appearances so much. Especially talking about Everett, who cares so much about it. First, memory (which he deals with) is nothing like the kind of flow QTI freaks talk about. Certainly 1st person probabilities
Re: another anthropic reasoning
From: Wei Dai [EMAIL PROTECTED] On Wed, Mar 21, 2001 at 09:39:10PM -0500, Jacques Mallah wrote: He thinks he is only 1/101 likely to be in round 1. However, he also knows that if he _is_ in round 1, the effect of his actions will be magnified 100-fold. Thus he will push button 2. You might see this better by thinking of measure as the # of copies of him in operation. If he is in round 1, there is 1 copy operating. The decision that copy makes will affect the fate of all 100 copies of him. If he is in round 2, all 100 copies are running. Thus any one copy of him will effectively only decide its own fate and not that of its 99 brothers. You'll have to define what "effectively decide" means and how to apply that concept generally. (Have you introduced it before? I think this is the first time I've seen it.) I thought the meaning to be obvious in this context. The simplest interpretation of your little experiment is that whatever fraction of him pushes a particular button is the same fraction of him that ends up with the corresponding payoff. That's how I always interpreted it. If the measure of him is the same for the 2nd round as for after both rounds, then it's the same as if each copy gets to influence its own payoff. Suppose in round 2 he gets the $-9 payoff if any of the copies decide to push button 1. Intuitively, each copy affects the fate of every other copy. Now you're changing the game. And it is a game, since as you said yourself, each guy affects the others. How will round 1 work? Am I to assume that there is indeed just 1 copy active in round 1, or did you mean to say he gets -$9 if any fraction of him pushes 1 in round 1 as well? If not, and assuming there are 100 copies in round 2, he may be best advised to push 1 about 2% of the time, randomly chosen. (I have not bothered to find the optimal solution for this game.) That way, most of the time (98%) he will push 2 in round 1 and usually (~87%) still get the full payoff of pushing 1 in round 2.
(On the other hand, if we assume an infinite # of copies and that round 1 is also modified, then it is inevitable that at least some copies will get the idea to push 1 in both rounds. So the thought processes of the majority of his copies are completely irrelevant, and the outcome is certain to be button 1 for both rounds. Unless, that is, all copies are 100% identical with no access to a random # generator (Geiger counter, etc.); in that case, he will simply push button 2 as a matter of principle.) I suggest that he instead think "I'm in both round 1 and round 2, and I should give equal consideration to the effects of my decision in both rounds." I assume you mean he should think "I am, or was, in round 1, and I am, or will be, in round 2." There is no need for him to think that, and it's not true. Only one of the brothers was in round 1. No, I meant he should think himself as being in both round 1 and round 2 simultaneously, and making both decisions at once. I'm sorry to say it but that is ridiculous. It makes no sense. I hope you have not started to smoke the same weeds as the 1st person crowd. First, anyone whose utility function does not depend on measure is definitely insane in my book. One good utility function could have the form U = [sum_(thoughts) f(thought) M(thought)] + V I think anyone whose utility function does have that form is insane. Who isn't? :) I admit it's not a perfect model, though. He's going to spend most of his resources running repeated simulations of himself having the thought that he values most. Unlikely. First, there may be no maximum of f. For example, f could be proportional to the depth of the thought (roughly, the age the guy seems to be) as well as to a happiness factor. Second, it is unlikely that his resources would be such that doing so would maximize the utility. Even if they are, it doesn't seem so strange that he would want to relive his happiest moment.
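The randomized strategy sketched above is just arithmetic and can be checked directly: if each of the 100 independent copies in round 2 pushes button 1 with probability 0.02, the chance that at least one of them does is 1 - 0.98^100. The copy counts and the 2% figure are taken from the post; this is not an optimization, only the computation behind the quoted percentages:

```python
# Probabilities behind the "push 1 about 2% of the time" strategy.
# Figures (2% per copy, 1 copy in round 1, 100 copies in round 2)
# are from the post; the strategy is explicitly not claimed optimal.
p_push1 = 0.02
n_copies = 100

# Round 1 (one copy): chance he pushes button 2 and wins the $10.
p_round1_wins = 1 - p_push1  # 0.98

# Round 2: chance at least one of the 100 independent copies pushes 1,
# triggering the -$9 payoff rather than the -$10 of pushing 2.
p_round2_gets_minus9 = 1 - (1 - p_push1) ** n_copies

print(p_round1_wins)         # 0.98
print(p_round2_gets_minus9)  # about 0.87
```

So with independent per-copy randomization the round-2 figure comes out near 1 - e^-2, roughly 87%.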
Re: another anthropic reasoning
From: Wei Dai [EMAIL PROTECTED] To: Jacques Mallah [EMAIL PROTECTED] On Tue, Mar 20, 2001 at 06:14:58PM -0500, Jacques Mallah wrote: Effectively it is [a game], since Bob has a Bayesian probability of affecting Alice and so on. He doesn't know whether he is Alice or Bob, but he does know that his payoff only depends on his own action. Bob has a Bayesian probability of affecting Alice is true in the sense that Bob doesn't know whether he is Alice or Bob, so he doesn't know whether his action is going to affect Alice or Bob, but that doesn't matter if he cares about himself no matter who he is, rather than either Alice or Bob by name. [Prepare for some parenthetical remarks. (I assume you mean (s)he cares only about his/her implementation, body, gender or the like. A utility function that depends on indexical information. Fine, but tricky. If I care only about my implementation, then I don't care about my brothers. Things will depend on exactly how the experiment works. On the other hand, I don't think it's unreasonable for the utility function to not depend on indexical information. For example, Bob might like Alice and place equal utility on both Alice's money and his own, like in the example I used. In practice, I think people mainly place utility on those who will remember the stuff they are currently experiencing. Thus if there was a way to partially share memories, things could get interesting.) (Note for James Higgo: the concept of self can be defined in various ways. I do not mean to imply that there is any objective reason for him to use these ways. e.g. I might decide, not knowing my gender, that I still care only about people who have the same gender as me. Thus Bob would not care about Alice. Silly, but possible. I guess the drugs also make you forget which body parts go with what, but you can still look, so the experiences are not identical. Just mentioning that in case someone would have jumped in on that point.) 
(I would also say that any wise man - which I am not - will certainly have a utility function that does _not_ depend on indexical information! We are fools.) OK, back to the question. Forget I said a thing.] It's effectively a game. But there's no point in debating semantics. In any case, just choose a utility function (which will also depend on indexical information), analyse it based on that, and out comes the correct answer to maximize expected utility. So let's concentrate on the case with just Bob below. You are correct as far as him thinking he is more likely to be in round 2. However, you are wrong to think he will push button 1. It is much the same as with the Bob and Alice example: You said that in the Bob and Alice example, they would push button 1 if they were selfish, which I'm assuming they are, and you said that the seeming paradox is actually a result of game theory (hence the above discussion). But in this example you're saying that the participant would push button 2. How is that the same? If you're saying that even if they are selfish they would push button 2, I won't argue. I was just using a different utility function for being selfish, one that did not depend on indexical info. Pushing 2 is better anyway, so why complain? He thinks he is only 1/101 likely to be in round 1. However, he also knows that if he _is_ in round 1, the effect of his actions will be magnified 100-fold. Thus he will push button 2. You might see this better by thinking of measure as the # of copies of him in operation. If he is in round 1, there is 1 copy operating. The decision that copy makes will affect the fate of all 100 copies of him. If he is in round 2, all 100 copies are running. Thus any one copy of him will effectively only decide its own fate and not that of its 99 brothers. That actually illustrates my point, which is that the measure of oneself is irrelevant to decision making. It's really the magnitude of the effect of the decision that is relevant.
You say that the participant should think I'm more likely to be in round 2, but if I were in round 1 my decision would have a greater effect. First, it's nice to see that you accept my resolution of the paradox. But I have a hard time believing that your point was, in fact, the above. You brought forth an attack on anthropic reasoning, calling it paradoxical, and I parried it. Now you claim that you were only pointing out that anthropic reasoning is just an innocent bystander? Of course it's just a friendly training exercise, but you do seem to be pulling a switch here. I suggest that he instead think I'm in both round 1 and round 2, and I should give equal consideration to the effects of my decision in both rounds. I assume you mean he should think I am, or was, in round 1, and I am, or will be, in round 2. There is no need for him to think that, and it's
Re: another anthropic reasoning
I am resending this because I sent it several hours ago and it hasn't shown up. So if it posts twice you know why. From: Wei Dai [EMAIL PROTECTED] I'm sorry for this greatly delayed response. I shouldn't have sent off the original message right before I hopped onto a plane and moved to Boston. If anyone can't remember what this thread was about, please see http://www.escribe.com/science/theory/m2514.html. OK. On Thu, Mar 01, 2001 at 03:14:03AM -0500, Jacques Mallah wrote: True, they could be more selfish. Effectively they are playing a prisoner's dilemma type of game where first Bob is given a move, then Alice is. This experiment is not a game, since the action of each participant only affects his or her own payoff, and not the payoff of the other Effectively it is, since Bob has a Bayesian probability of affecting Alice and so on. Actually you can do this with just one participant, and maybe that will make the paradoxical nature of anthropic reasoning clearer. It should make the _non_paradoxical nature clearer. Suppose the new experiment has two rounds. In each round the participant will be given temporary amnesia so he can't tell which round he is in. In round one he will have low measure (1/100 of normal). In round two he will have normal measure. He is also told: If you push button 1, you will lose $9. If you push button 2 and you are in round 1, you will win $10. If you push button 2 and you are in round 2, you will lose $10. According to anthropic reasoning, the participant when faced with the choices should think that he is much more likely to be in round 2, and therefore push button 1 in both rounds, but obviously he would have been better off pushing button 2 in both rounds. You are correct as far as him thinking he is more likely to be in round 2. However, you are wrong to think he will push button 1. It is much the same as with the Bob and Alice example: He thinks he is only 1/101 likely to be in round 1.
However, he also knows that if he _is_ in round 1, the effect of his actions will be magnified 100-fold. Thus he will push button 2. You might see this better by thinking of measure as the # of copies of him in operation. If he is in round 1, there is 1 copy operating. The decision that copy makes will affect the fate of all 100 copies of him. If he is in round 2, all 100 copies are running. Thus any one copy of him will effectively only decide its own fate and not that of its 99 brothers. - - - - - - - Jacques Mallah ([EMAIL PROTECTED]) Physicist / Many Worlder / Devil's Advocate I know what no one else knows - 'Runaway Train', Soul Asylum My URL: http://hammer.prohosting.com/~mathmind/ p.s.Brent, I'm sure you realize by now you should have given your math a once-over before sending that post, and that the correct explanation is as above. _ Get your FREE download of MSN Explorer at http://explorer.msn.com
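The arithmetic in the exchange above can be checked directly. A minimal sketch, using the post's own framing (measure counted as the number of copies: 1 copy in round 1, 100 in round 2, payoffs as quoted), comparing naive anthropic expected utility with the magnitude-weighted version argued for above:

```python
# Two-round experiment from the post above: the round-1 participant has
# measure 1/100 of normal (1 copy); round 2 has normal measure (100 copies).
copies = {1: 1, 2: 100}
total = sum(copies.values())                   # 101
p = {r: c / total for r, c in copies.items()}  # effective probabilities: 1/101, 100/101

# payoff[button][round], in dollars, as stated in the experiment
payoff = {1: {1: -9, 2: -9},
          2: {1: +10, 2: -10}}

def naive_eu(button):
    """Naive anthropic reasoning: weight payoffs only by the
    effective probability of being in each round."""
    return sum(p[r] * payoff[button][r] for r in (1, 2))

def weighted_eu(button):
    """The correction argued above: the lone round-1 copy decides
    for all 100 copies, so its payoff is magnified 100-fold."""
    leverage = {1: 100, 2: 1}
    return sum(p[r] * leverage[r] * payoff[button][r] for r in (1, 2))

# naive_eu ranks button 1 ahead; weighted_eu ranks button 2 ahead,
# matching the conclusion of the post.
```

Under the naive weighting, button 2 looks slightly worse than a sure $9 loss; once the round-1 decision is credited with binding all 100 copies, button 2 nets out to roughly zero and wins.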
Re: Transporter Paradox
From: Brent Meeker [EMAIL PROTECTED] On 17-Mar-01, James Higgo wrote: How many more ingenious 'solutions' will there be to the paradoxes that belief in a 'first person' leads to? Quite a few I imagine, Of course we are hard-wired to perceive the passage of time, three-dimensional space, and the pleasure of sex. Physics and Darwin provide explanations of this. What's your explanation?...oh, never mind, I know...It just is. You know, if James Higgo would relax his empiricism a bit and believe in an absolute measure distribution (which allows for Darwin to be meaningful), and the 1st person crowd would get a clue and forget about the myth of first person measure (which could never explain Darwin anyway), then we could all just get along :) - - - - - - - Jacques Mallah ([EMAIL PROTECTED]) Physicist / Many Worlder / Devil's Advocate I know what no one else knows - 'Runaway Train', Soul Asylum My URL: http://hammer.prohosting.com/~mathmind/
Re: on formally indescribable merde
From: Marchal [EMAIL PROTECTED] Jacques Mallah wrote: Sorry, that doesn't help. What do you mean by a real actual one? What other kind is there, a fake one? Either it exists, or not. OK. In that sense we agree that the DU exist. I am glad to see that you are a classical platonist. An intuitionist wouldn't accept the idea that something exist ... or not. I'm a quantum platonist :) Of course, in your macintosh example, the UD was itself implemented by some other mathematical structure - your local decor. Does that matter? A big part of my reasoning is that it *doesn't matter* indeed. For most people this is a difficulty. You're the one who tried to distinguish it as a concrete UD. I take it, from your above statement, that you do not object to my use of the term implemented. It seems that, in fact, your claim that while I have a problem because I need a precise definition of implementation, you supposedly don't, was totally groundless. Actually, I would say that any mathematical structure that has real existence (in the strong sense) should be called physical. I do not know of any better definition for physical existence. What is that strong sense of existence? It's hard to define existence, isn't it? Certainly, I would say that whatever structure is responsible for my own thoughts must exist in this sense. I only distinguish a strong sense from the weaker sense used in mathematics, which basically just means self-consistent. And why do you want to classify as physical any mathematical structures? That's the everything idea, that all math exists, and that the physics we see is just a subset of that TOE. If you do that (a little like Tegmark) you are obliged to explain how we feel a difference between physicalness and mathematicalness We use physical to refer to the structure that we guess exists in the strong sense. If you believe in the AUH, then the distinction disappears. Most people don't.
Of course, we can also use it to refer to things directly related to what we are seeing. This leads to statements like the branch of the wavefunction that I see is physically real, while the rest aren't. I don't like that kind of statement. Tegmark, like Everett, *does* distinguish the first and third person, which helps to make sense of that idea. Leave Everett alone, he is dead and can't defend himself against your abuse of his name. I'm sure that, otherwise, he would find my approach to be the logical next step for the MWI. The physical would be some mathematical structures sufficiently rich for having inside point of views (through SAS point of views for example). The physical point of view (pov) would correspond to these internal pov. It sounds more like you want to say that the view seen by a conscious observer-moment is his physical world view. If so, this has absolutely nothing to do with your terms first person vs. 3rd person views as it has nothing to do with time evolution or measure. I don't like to call a thought's view his physical world view, but I might call it his effective physical world view. (Ever notice how the main change that is needed in using the English language, to make it correct, is to preface almost every word with effective?) So, your objection is irrelevant. You do believe a UD implements other computations. Sure. Yes. UD implements all computations, and even all implementations of all computations. Great. So you need a precise definition of implementation in order to find the measure distribution. So much for your claim not to need it. Actuality is a first person concept. I have no clue as to what you mean. In Newtonian Physics one could imagine some third person time (objective time), but since relativity I guess most believe that time is either a parameter or refers to some relative measurement done by an observer. Time as a parameter is exactly the same idea as Newtonian time.
Even those who refer to measurements done by observers do so in objective or 3rd person terms. (If an observer does measurement X, he sees Y. That's a perfectly objective statement.) Since, after all, there can never be any other kind of terms! Actuality, modern, here, now, there, elsewhere, are words with meaning dependent of the locutor. Indexicals, as the philosophers call them. I would agree for the other terms, but actuality has the opposite meaning. It refers to the reality that exists. Most are true or false only from a first person point of view. They are words, not statements, so they are neither true nor false. The truth of the statement Bob is here depends on the location of the speaker. If I said it, it would _mean_ Bob is where Jack is and would be objectively false. If you are with Bob, you will not say Yes, you are right. Bob is here. Instead you should say That's a lie. I know because Bob is here
Re: on formally describable universes and measures
From: Marchal [EMAIL PROTECTED] Jacques Mallah wrote: I really don't know what you mean by concrete. Math is math, but is physics math? By a concrete UD I was meaning a real actual one, like the one I have implemented on a macintosh SE/30, and which has been running during two weeks in 1990 at Brussels. Of course I postulate here some physical universe as a local decor. Sorry, that doesn't help. What do you mean by a real actual one? What other kind is there, a fake one? Either it exists, or not. Of course, in your macintosh example, the UD was itself implemented by some other mathematical structure - your local decor. Does that matter? If anything, that makes it more of a virtual UD than the one we discuss for the AUH! Look, to be sure we are using implementation in the same sense, I quote yourself (from http://hammer.prohosting.com/~mathmind/cwia.htm#II3) In turn, a computation is associated with a physical system only if it has been implemented by that system. So either you believe there is only math (including computer science and all computations), then implementation is an emerging concept, as is anything linked to physical predicates. Or you believe there exists something physical per se. Then indeed you can define implementation in a sense relative to that physicalness. Actually, I would say that any mathematical structure that has real existence (in the strong sense) should be called physical. I do not know of any better definition for physical existence. Of course, those who do not believe in the AUH are thus forced to believe that some subset of math has somehow been singled out to be real. Nowhere did I say that _only_ a physical system could implement a computation. But you did bring to my attention the fact that I should make the definition of implementation more clear on this point. In other places, I do point out that one computation can implement another.
(In turn, the second one might implement another, etc.; the first one will therefore implement all of those.) So, your objection is irrelevant. You do believe a UD implements other computations. The third person view is fully capable of describing the entire situation. (Notice that _I_ never use the term 3rd person view; a better term would be actual situation.) Actuality is a first person concept. I have no clue as to what you mean. 3rd person view is everything you can communicate in a scientific manner without taking into account the subjective view of a person. If the person has some set of beliefs, they can be described as part of the true description of the situation. (Which is what I thought you called the 3rd person view.) Hey, what's the french word for crap? I bet it would sound much more elegant ... unless the french just stole it. Crap means merde according to my dictionary. Is it true crap means shit? It's true, but it is not considered as vulgar. Don't ask me why, but the meaning of a word does not seem to determine whether it is vulgar. Thus excrement is considered fine. You know merde, isn't it?, The famous word used by the general Cambronne during the Napoleonic wars ... I'm not familiar with it. Does merde have a special meaning, the way crap does? - - - - - - - Jacques Mallah ([EMAIL PROTECTED]) Physicist / Many Worlder / Devil's Advocate I know what no one else knows - 'Runaway Train', Soul Asylum My URL: http://hammer.prohosting.com/~mathmind/
Re: another anthropic reasoning
From: Wei Dai [EMAIL PROTECTED] Consider the following thought experiment. Two volunteers who don't know each other, Alice and Bob, are given temporary amnesia and placed in identical virtual environments. They are then both presented with three buttons and told the following: If you push 1, you will lose $9 If you push 2 and you are Alice, you will win $10 If you push 2 and you are Bob, you will lose $10 I'll assume that everyone agrees that both people will push button 2. Of course it depends on their utility functions, but we can assume they will push 2. The paradox is what happens if we run Alice and Bob's minds on different substrates, so that Bob's mind has a much higher measure than Alice's. If they apply anthropic reasoning they'll both think they're much more likely to be Bob than Alice, and push button 1. No paradox, but this time the choice of utility function is more important. While they have amnesia, their utility functions will surely be different than what they usually are, since it could be a big giveaway if (for example) the person knows it doesn't give a hoot about Bob. To an outside observer who cares about Bob and Alice equally, the best outcome _would_ be if button #1 is pressed more often in this situation. (Assuming the substrate would not later be switched.) If you don't think this is paradoxical, suppose we repeat the choice but with the payoffs for button 2 reversed, so that Bob wins $10 instead of Alice, and we also swap the two minds so that Alice is running on the substrate that generates more measure instead of Bob. They'll again both push button 1. But notice that at the end both people would have been better off if they pushed button 2 in both rounds. If they knew during the first round that this would happen, they probably wouldn't press #1. What they would have thought is equivalent to I'm probably Bob, and after this round I'm probably going to die while Alice will replace me. Then look at the expected utilities. 
First, assume that they place equal utility on Bob's and Alice's money. Then they will press #2. The expected utility of this is: (-10)(final measure of Bob) + (10)(final measure of Alice) You might think that the current measures of Bob and Alice (during the first round) should be a factor. Although the person in round 1 is more likely to be Bob, it's also true that if he is Bob the effect of his action will be more diluted (afterwards); if she is Alice, the effect will be magnified. The final measure distribution is what counts. True, they could be more selfish. Effectively they are playing a prisoner's dilemma type of game where first Bob is given a move, then Alice is. In this case they might both push 1, but only if they don't expect to interact in the future, and don't care about each other. (And also don't expect to gain a bad reputation.) Still no paradox; this case is an example of game theory. Consider the case where they always have the same measure and never lose memory. They have the choice of 1) hurt yourself a little, or 2) hurt yourself a lot but help the other person by an equal amount. They might both choose 1, but that's no paradox. From: George Levy [EMAIL PROTECTED] Erasing their memory puts a big kibosh on the meaning of who they really are. I could argue that Bob is not really Bob and Alice is not really Alice. Their identity has been reduced to a *bare* I and they are actually identical No, he didn't say they were now identical. For example, if Bob was evil and likes to smoke, while Alice was good and likes to yoyo, we can assume they still have those traits. They just don't remember who they are or who has their traits. Of course, hardware doesn't make the man. Nothing does, really: Bobness is just a loose set of traits. This shows the hurluberlu nature of 1st person merde. Likewise, if you want to base your utility function on what's good for 'you', you will find that this means nothing.
The closest you can come is to place utility upon certain traits within the measure distribution that are based on what you remember. Most naive forms of selfishness are not really possible utility functions, but it is certainly easy to be _effectively_ selfish, e.g. you value traits which are mostly just associated with certain thoughts _highly_ similar to your own. If you could measure your measure you would find the measurement always identical no matter where, when or who you are. If that were true there would be as many white rabbits as there are crackpots. - - - - - - - Jacques Mallah ([EMAIL PROTECTED]) Physicist / Many Worlder / Devil's Advocate I know what no one else knows - 'Runaway Train', Soul Asylum My URL: http://hammer.prohosting.com/~mathmind/
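The expected-utility bookkeeping quoted above, "(-10)(final measure of Bob) + (10)(final measure of Alice)", can be sketched numerically. The measure values here are hypothetical illustrations: per the experiment's second round, the substrate swap leaves Alice on the high-measure substrate (a factor of 100 assumed):

```python
# Measure-weighted expected utility for the Alice/Bob experiment, per the
# post above. Hypothetical final measures: after the described swap,
# Alice ends up on the high-measure substrate.
final_measure = {"Bob": 1.0, "Alice": 100.0}

# Button 2 (round 1 payoffs): Bob loses $10, Alice wins $10, each
# weighted by that person's *final* measure, as argued above.
eu_button2 = (-10) * final_measure["Bob"] + (+10) * final_measure["Alice"]

# Button 1: a flat $9 loss for whoever presses it.
eu_button1 = (-9) * final_measure["Bob"] + (-9) * final_measure["Alice"]

# With equal (non-indexical) utility on both people's money,
# pressing button 2 comes out ahead.
```

This makes the post's point concrete: the *current* measures during round 1 drop out, because high current measure is exactly offset by the later dilution of the decision's effect.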
Re: on formally describable universes and measures
From: Marchal [EMAIL PROTECTED] Jacques Mallah wrote: We discussed it; as I said then, it's wrong. You call it the crackpot proof :-) (hurluberlu in french) Pourquoi hurluberlu? Expliquez-moi ce mot (en anglais), s'il vous plait. (Je ne parle pas francais!) Sorry to break it to you, but you do. A physical universe is not the only (hypothetically real) mathematical structure that should implement computations. Obviously, you believe that a universal dovetailer (a single computation) implements all the computations it dovetails. I don't believe that. Only the concrete (implemented) DU does that, and then enter the crackpot proof, ... , or OCCAM. (see the UDA post): there is no need for a concrete running of the DU. The word concrete appears in the mouth of machine (if I can say) relatively to stable (without wabbits!) stories. Unless you postulate the existence of a concrete world. I don't. The existence of a concrete universe is what needs an explanation (for me). And with comp I got only appearances of The existence of a concrete universe. *Concrete* is just *abstract* made familiar (and seen from inside). I really don't know what you mean by concrete. If you believe there's a UD, you believe there's a UD. If not, stop sounding like you do and tell us in plain anglais what you mean. I am sure the distinction is totally irrelevant. Math is math. In any case, you either believe that it implements the computations, or you believe that it doesn't. If the latter, then it certainly can't be a candidate for any kind of TOE. At least you don't believe (unless you change your mind) in the 1-person/3-person distinction, so I don't need even to try explaining my way, do I? The third person view is fully capable of describing the entire situation. (Notice that _I_ never use the term 3rd person view; a better term would be actual situation.) Anything an observer-moment sees is just a property of his observer-moment.
The measure distribution predicts everything (to the extent possible); one can look at conditional effective probabilities by holding some property of an observation fixed. (Such as the observer thinks his name is Jack and that the time is 10:00 pm.) Simple. Forget your first person probabilities crap, it doesn't mean anything. By the way, computational continuation is also meaningless undefined crap. A computation either halts or doesn't; in either case the only continuation is that it either halts or doesn't. It seems to me that I need to repeat myself a lot here. Hey, what's the french word for crap? I bet it would sound much more elegant ... unless the french just stole it. - - - - - - - Jacques Mallah ([EMAIL PROTECTED]) Physicist / Many Worlder / Devil's Advocate I know what no one else knows - 'Runaway Train', Soul Asylum My URL: http://hammer.prohosting.com/~mathmind/
Re: need for anthropic reasoning
From: Wei Dai [EMAIL PROTECTED] On Fri, Feb 16, 2001 at 10:22:35PM -0500, Jacques Mallah wrote: Any reasonable goal will, like social welfare, involve a function of the (unnormalized) measure distribution of conscious thoughts. What else would social welfare mean? For example, it could be to maximize the number of thoughts with a happiness property greater than life sucks. My current position is that one can care about any property of the entire structure of computation. Beyond that there are no reasonable or unreasonable goals. One can have goals that do not distinguish between conscious or unconscious computations, or goals that treat conscious thoughts in emulated worlds differently from conscious thoughts in real worlds (i.e., in the same level of emulation as the goal-holders). None of these can be said to be unreasonable, in the sense that they are not ill-defined or obviously self-defeating or contradictory. I disagree on two counts. First, I don't consider self-consistency to be the only requirement to call something a reasonable goal. To be honest, I consider a goal reasonable only if it is not too different from my own goals. It is only this type of goal that I am interested in. Second, there is no way of knowing whether you are in a so called real world or in a virtual world. So if I don't care about virtual people, I don't even know whether or not I care about myself. That doesn't seem reasonable to me. In the end, evolution decides what kinds of goals are more popular within the structure of computation, but I don't think they will only involve functions on the measure distribution of conscious thoughts. For example, caring about thoughts that arise in emulations as if they are real (in the sense defined above) is not likely to be adaptive, but the distinction between emulated thoughts and real thoughts can't be captured in a function on the measure distribution of conscious thoughts. 
Evolution is just the process that leads to the measure distribution. (Conversely, those who don't believe in an absolute measure distribution have no reason to expect Darwin to appear in their world to have been correct.) Also, I disagree that caring about others (regardless of who they are) is not likely to be popular. In my speculation, it's likely to occur in intelligent species that divide into groups, and then merge back into one group peacefully. So you also bring in measure that way. By the way, this is a bad idea: if the simulations are too perfect, they will give rise to conscious thoughts of their own! So, you should be careful with it. The very act of using the oracle could create a peculiar multiverse, when you just want to know if you should buy one can of veggies or two. The oracle was not meant to be a realistic example, just to illustrate my proposed decision procedure. However to answer your objection, the oracle could be programmed to ignore conscious thoughts that arise out of its internal computations (i.e., not account for them in its value function) and this would be a value judgement that can't be challenged on purely objective grounds. I've already pointed out a problem with that. Let me add that your solution is also a rather boring solution to what could be an interesting problem, for those who do care about virtual guys (and have the computer resources). Decision theory is not exactly the same as anthropic reasoning. In decision theory, you want to do something to maximize some utility function. By contrast, anthropic reasoning is used when you want to find out some information. Anthropic reasoning can't exist apart from a decision theory, otherwise there is no constraint on what reasoning process you can use. You might as well believe anything if it has no effect on your actions. I find that a very strange statement, especially coming from you. First, I (and other people) value knowledge as an end in itself. 
Even if I were unable to take other actions, I would seek knowledge. (You might argue that it's still an action, but clearly it's the *outcome* of this action that anthropic reasoning will affect, not the decision to take the action.) Further, I do not believe that even in practice my motivation for studying the AUH (or much science) is really so as to make decisions about what actions to take; it is pretty much just out of curiosity. One so motivated could well say you might as well do anything, if it has no effect on your knowledge. (But you can't believe just anything, since you want to avoid errors in your knowledge.) Secondly, it is well known that you believe a static string of bits could be conscious. Such a hypothetical observer would, by definition, be unable to take any actions. (Including thinking, but he would have one thought stuck in his head.) - - - - - - - Jacques Mallah ([EMAIL
Re: no need for anthropic reasoning
From: Wei Dai [EMAIL PROTECTED] The selection of the proper reference class is a serious problem for anthropic reasoning. Yes, though there have been suggestions about it. But perhaps anthropic reasoning is not necessary to take advantage of a theory of everything. Consider how a non-sentient being (excluded by most proposals for the reference class) can use a TOE. Imagine a non-sentient oracle that was built to accomplish some goal (for example to maximize some definition of social welfare) by answering questions people send it. Any reasonable goal will, like social welfare, involve a function of the (unnormalized) measure distribution of conscious thoughts. What else would social welfare mean? For example, it could be to maximize the number of thoughts with a happiness property greater than life sucks. The oracle could work by first locating all instances of itself with significant measure which are about to answer the question it's considering (by simulating all possible worlds and looking for itself in the simulations). So you also bring in measure that way. By the way, this is a bad idea: if the simulations are too perfect, they will give rise to conscious thoughts of their own! So, you should be careful with it. The very act of using the oracle could create a peculiar multiverse, when you just want to know if you should buy one can of veggies or two. Then for every potential answer, it computes the approximate consequences for the meta-universe if all of its instances were to give that answer. Finally it gives the answer that would maximize the value function. I'm glad you said approximate. Of course Godel's theorem implies that it will never be able to exactly model a system containing itself. Sentient beings can follow the same decision procedure used by the oracle. Suppose you are faced with a bet involving a tossed coin. There is no need to consider probabilistic questions like what is the probability that the coin landed heads?
which would involve anthropic reasoning. You know that there are worlds where it landed heads and worlds where it landed tails, and that there are instances of you in all of these worlds which are indistinguishable from each other from the first-person perspective. You can make the decision by considering the consequences of each choice if all instances of you were to make that choice. You need to know which type of thought has greater measure, I saw heads, and ... or I saw tails, and I call the measure of one, divided by the total measure, the *effective* probability, since it (roughly) plays the role of the probability for decision theory. But you have a point in a way ... Decision theory is not exactly the same as anthropic reasoning. In decision theory, you want to do something to maximize some utility function. By contrast, anthropic reasoning is used when you want to find out some information. This difference could be important in terms of the concern over the reference class: while the reference class for anthropic reasoning may leave out all thoughts that don't employ anthropic reasoning, there is no reason that the utility function shouldn't depend on *all* thoughts. (It could also depend on other things, but any sane person will mainly care about thoughts.) For example, suppose if I choose A I will be more likely to later think about anthropic reasoning than if I choose B. This will affect my guess as to how likely I am to choose A, but it will not affect my decision unless I place a special utility on (or have a special dislike of) thinking about it. But the reference class issue has another component that does still come into play. How am I to evaluate my utility function, unless I know how to identify and measure conscious thoughts in the math model? By the way, I don't think you should say first-person perspective. 
Some people on this list think it means something other than just saying you look at all instances of a particular class of thoughts, but it doesn't. But that's for another post and another day. (I guess I'll have to partially back up James Higgo.) If you really want to see a first-person perspective, see http://hammer.prohosting.com/~mathmind/1psqb-a6.exe

- - - - - - - Jacques Mallah ([EMAIL PROTECTED]) Physicist / Many Worlder / Devil's Advocate I know what no one else knows - 'Runaway Train', Soul Asylum My URL: http://hammer.prohosting.com/~mathmind/
new paper on computationalism
Hello. I finally wrote my paper for the Planck Symposium (which I spoke at in October). In it I propose a new criterion for implementation of a computation, and mention the everything-list. I put the paper on my site at http://hammer.prohosting.com/~mathmind/100y.htm Let me know what you think.

The focus of the paper is not intended to be on the AUH, but on the MWI of QM, so don't bother saying "you should have focused more on the AUH". That's not what the Planck Symposium was about.
RE: Time and my proposed model
From: Hal Ruhl [EMAIL PROTECTED]

the full system is completely random.

I'm just wondering (not that I'm interested in your model) what you mean by "random". That is: do you realize that having no information implies determinism, and thus when you say "random" you really mean effectively random, since the ensemble contains all sequences for sure; or are you implying some true stochastic element, thus marking you as yet another complete idiot?
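The distinction being drawn here can be made concrete: the ensemble of *all* sequences is fully deterministic as a whole (regenerating it always gives the same set, so it carries no information), yet its members collectively reproduce fair-coin statistics. A minimal sketch, using length-3 bit strings:

```python
from itertools import product

n = 3
# The full ensemble: every length-n bit string, with certainty.
ensemble = list(product((0, 1), repeat=n))

# Deterministic: rebuilding the ensemble yields exactly the same object;
# specifying "all sequences" requires no information at all.
assert ensemble == list(product((0, 1), repeat=n))

# "Effectively random": averaged over the ensemble, each bit position
# is 1 exactly half the time, just like a fair coin toss.
freq_ones = [sum(s[i] for s in ensemble) / len(ensemble) for i in range(n)]
```

So "random" in the sense of the full ensemble is only effective randomness seen from inside one member, not a stochastic element in the underlying description.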
Re: Natural selection (spinoff from History-less observer moments)
--- Russell Standish [EMAIL PROTECTED] wrote:

As has been previously mentioned, the RSSA is simply measure conditional upon the observer being who it/he/she is.

The RSSA values can't be used because there isn't any way to compare *different* observers in the RSSA.

I would not be so sure about this!

See the first quote. How do you reconcile that? BTW, don't think that just because we're discussing on this thread, you're off the hook with respect to my point on the other thread that Bayesian reasoning is the same as the ASSA.

What rot! Bayesian reasoning underpins both systems. The real difference between them is whether one accepts the concept of history.

The real difference is that you refuse, by some trick of doublethink, to apply Bayesian reasoning to age and across different observers.
Re: Turing Machines Have no Real Time Clock (Was The Game of Life)
--- [EMAIL PROTECTED] wrote:

Turing Machines have no real time clock ... If we assume the comp hypothesis (purely based on Turing machines) and the anthropic principle, then the flow of consciousness can only be constrained by the logical nature of the links permitting transitions from one observer moment to the next. Time therefore is an illusion derived from such a logical flow.

Please!!! Of course Turing Machines have clocks [...]

But they don't have REAL TIME CLOCKS, Jacques. You know, the kind that tells computers the time of day and the date...

OK, so you admit time is real but unknown. I guess your "illusion" claim was due to schizophrenia on your part.
Re: Natural selection (spinoff from History-less observer moments)
--- [EMAIL PROTECTED] wrote:

[EMAIL PROTECTED] writes: http://parallel.hpc.unsw.edu.au/rks/pubs.html

need to apply the self sampling assumption, namely that we expect to find ourselves in an anthropic principle consistent history that is nearly maximal in its measure.

I was surprised to see this, because here RS uses the SSA in a way that cuts across different people, i.e. he uses the ASSA. If he believes the ASSA, then by self-consistency he should not believe the RSSA/QTI.

wabbits (I like this term coined by Jacques).

Thanks, George, and that's the first logical thing you've said in a long time.
Re: Tegmark's big TOE
--- Brent Meeker [EMAIL PROTECTED] wrote:

Tegmark's TOE predicts that our observations are the most generic ones that are consistent with our existence. I'm not sure what this means, because I can't think how (sets of) observations can be put into a partial order except maybe by size - which doesn't seem to capture the idea of "more generic".

For "generic", try "of largest measure within the plenitude". Measure, basically, is the number of observers within a set. Thus if almost all observers within the plenitude were supermen, us being regular Joes would falsify the TOE. As to how to calculate the measure distribution, see the archive for some ideas, and tell us if you find a better solution.
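The falsification logic in the supermen example can be written down directly: given a measure distribution over observer types in the plenitude, the TOE survives only if our type is not wildly atypical. Every number below is invented purely for illustration:

```python
# Crude genericity test against a hypothetical measure distribution.
def our_fraction(measure, our_type):
    """Fraction of total measure carried by observers of our type."""
    return measure[our_type] / sum(measure.values())

threshold = 0.01  # arbitrary cutoff for "wildly atypical"

# Case 1: regular Joes carry almost all the measure -> TOE survives.
world_a = {"superman": 1e-6, "regular_joe": 1.0}
frac_a = our_fraction(world_a, "regular_joe")
toe_survives = frac_a > threshold

# Case 2: almost all observers are supermen -> our observations are
# highly atypical, and the TOE is falsified.
world_b = {"superman": 1.0, "regular_joe": 1e-6}
frac_b = our_fraction(world_b, "regular_joe")
toe_falsified = frac_b <= threshold
```

The hard part, as noted, is computing the actual measure distribution; the test itself is trivial once that is in hand.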
RE: this very moment
--- Higgo James [EMAIL PROTECTED] wrote:

the ideas Bruno, Jacques and I put forward are idealist.

My view is that math is fundamental. Ideas should be derivable from the math of computations. The physical world is real in that it is mathematical.
Re: Quantum Time Travel
--- Russell Standish [EMAIL PROTECTED] wrote:

You're playing with words.

The point is that a measure distribution must be measure *of something*. Thus it makes no sense to speak of "the measure distribution given a wavefunction" unless you state what it is measure *of*. The only measure distribution we have been dealing with in that context is of observer-moments. I call that M(c).

Not at all true. A lot of discussion has taken place in this list re measure of strings in a Schmidhuber plenitude.

Which is not "in that context", of a wavefunction.

Measure is always taken to be the strength or density of a particular object from within an ensemble (continuous or otherwise) of objects. It is readily related to a sampling probability when the measure distribution is normalisable. Schroedinger's equation gives a measure distribution for outcomes of particular observables, given certain constraints (a Hamiltonian and a boundary condition). An observer moment must be the conjunction of some vast array of observables having particular values.

Really? Are you saying that the Schroedinger equation gives a measure distribution for outcomes of observables even when there are no observer-moments? What is that supposed to mean? If by "observables" you mean Hermitian operators, how does the Schroedinger equation do the above?

My view, as I have stated repeatedly, is that it should be possible to derive a measure distribution for computations implemented by a physical system, and that given a wavefunction and the Schroedinger equation, it should be possible to show that the ratios of the measures of appropriate computations that could be conscious (if present, e.g. if there is a brain in the system) to the total measure of such are the usual effective probabilities.

My own preference is to talk about a quantum history, which under some (perhaps rather flaky) assumptions could be identified with the concept of observer moment.

What's a "quantum history"? Any relation to the consistent histories interpretation?
Re: Quantum Time Travel
--- Russell Standish [EMAIL PROTECTED] wrote:

An observer moment is not devoid of information. The mere fact that it is an observer moment implies that the observer can observe itself. Following the logic of my Occam paper, one can conclude that the measure of an observer moment - i.e. _given_ that I am an observer - is highly non-uniform, with greatest measure given for systems indistinguishable from lawlike (hence no WRs).

Sounds OK.

Now with the multiverse, for which there is an objective measure, uniform measures can exist as (rather unnatural) solutions to the SE, e.g. a superposition of all plane waves, \int_{-\infty}^\infty exp(-ipx/\hbar) dp. This beast is clearly unnormalisable, but that is not a problem in itself.

That's also known as a position eigenstate.

However, observers will constrain the form of the universal wavefunction such that the measure is nonuniform, effectively giving it a value.

The above sentence doesn't make sense. First, the term 'measure' is only defined with respect to observer-moments. Second, observers do not change the universal wavefunction, though they do for many practical purposes have to deal only with the relative state.

The more information one has, the more non-uniform will be the measure.

The objective measure distribution itself remains the same, but one can define a conditional measure distribution to reflect the information, useful for Bayesian purposes. (Or one can work with the original objective measure and carefully apply Bayesian analysis.)

But it is still the observer selecting a quantum history that defines the measure.

What's a "quantum history"? An observer-moment doesn't see a history.
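The position-eigenstate identification is just the standard Fourier representation of the Dirac delta:

```latex
\int_{-\infty}^{\infty} e^{-ipx/\hbar}\, dp \;=\; 2\pi\hbar\,\delta(x)
```

So the equal-weight superposition of all plane waves is, up to normalization, the position eigenstate at x = 0: infinitely sharply peaked, and unnormalizable exactly as stated.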