Re: Emotions (was: Indeterminism)
On 27-Jul-06, at 03:21, David Nyman wrote:

Mmmmhh. This sounds a little too idealist for me. Numbers exist with some logico-mathematical priority, and then self-intimacy should emerge from many complex relations among numbers. Also, the many universes (both with comp and/or the quantum) contain some complex giant universes without any self-awareness in them (like a parallel world with different constants, so that the complexity of histories is bounded).

As is regrettably normal in this area, we are having (as you suspected) terminological difficulties.

Thanks for your kind attempt to be clearer. I'm afraid we are not just having terminological difficulties, but then this is what makes a conversation or a discussion interesting. I cc it to the everything list because your theory is close to the one advocated sometimes by George Levy, who seems to like the idea that reality is ultimately first person, which does not really work once we assume comp.

I don't think I'm helping the situation by using different formulations to try to convey the same meaning, since none of them are altogether satisfactory (as you will see from the dialogue with Peter Jones). I think I will abandon 'self-intimacy' in this context and substitute 'first person', with the following restricted sense:

1. I claim, motivated by conceptual economy, that 'first person' is the fundamental ontological situation. That is: the context or field of everything that exists is inherently a first person context or field.

Even if that were the case, do you agree that scientific discourse has to be a third person discourse? From this some scientists infer that we cannot even talk about the first person issue in a scientific manner: they are just making a common category error. Nothing prevents us from choosing some definition of first person, and then communicating about it in a first person way. Now, of course, two scientists wanting to communicate have to agree on some common third-person describable base.
With respect to this, my axioms are:

1) There exists a level of description of myself (whatever really describes me) such that I can survive -- or experience no changes -- when a digital functional substitution is made at that level. I sum this up by "yes doctor": the comp practitioner says yes to his/her doctor when the doctor proposes an artificial digital brain/body.

2) Church's thesis (all universal machines compute the same functions from N to N). I need it just to make the expression "digital" clear enough.

3) Arithmetical realism: it means that a proposition like "5 is divisible by 4" is true or false independently of me. Of course, "5" is a name for the number of vertical strokes in "| | | | |", and "4" is a name for the number of strokes in "| | | |".

Perhaps we will agree, because the first person (and the first person plural) plays a major role in the building of the physical world. But numbers are more fundamental. I will (try to) explain to you, in this post or in another one, how the first person (with her qualia, feelings, suffering, joy, and all that) emerges necessarily and unavoidably from number-theoretical relations once we take the comp hyp sufficiently seriously.

2. I have referred to first person as 'a global feature of reality', but IMO it's not logically coherent to describe it as a 'property', as it isn't something superadded to an already existing situation.

I totally agree. This is a key point. And this is what is cute with the comp hyp (and some of my results there): although I give a completely transparent third person definition of the notion of first person, it will appear that a machine cannot even give a name to its first person. The reason is a generalization of Tarski's theorem, which shows that no correct machine can even name its own truth predicate. Strictly speaking, truth is not even a predicate for the machine, nor is the first person attached to the machine nameable by the machine.

It's really an equivalence to 'existence'.
That is: whatever exists is already potentially 'somebody'. Reality is inherently first personal. That's why we find ourselves here (or anywhere else in the MW, of course).

This does not really make sense to me. Nevertheless, if you are patient enough to follow some reasoning I propose (see my url), it should even be clear why first persons can believe what you say, but comp makes it wrong at some level.

3. Structure arises through whatever processes within the first person field (this is the subject matter of QM, MW and comp, not to speak of chemistry, biology, psychology, sociology, etc.). Some of this structure differentiates 'mini first persons', bounded within 'perceiver/perception dyads'. This is what putatively gives rise to 'phenomenal consciousness' - structures with the 'efficacy' to differentiate the experiential field into a characteristically dense informational coherence. The structure within the 'perceiver'
Re: Bruno's argument
Stathis Papaioannou wrote:

Peter Jones writes (quoting SP):

There is a very important difference between "computations do not require a physical basis" and "computations do not require any *particular* physical basis" (i.e. computations can be physically implemented by a wide variety of systems).

Yes, but any physical system can be seen as implementing any computation with the appropriate rule mapping physical states to computational states.

I don't think such mappings are valid a) without constraints on the simplicity of the mapping rules or b) without attention to counterfactuals/dispositions.

Attempts are made to put constraints on what counts as implementation of a computation in order to avoid this uncomfortable idea, but it doesn't work unless you say that certain implementations are specially blessed by God or something.

I don't know where you get that idea. Dispositions are physically respectable. Simplicity constraints are the lifeblood of science.

The constraints (a) and (b) you mention are ad hoc and an unnecessary complication. Suppose Klingon computers change their internal code every clock cycle according to the well-documented radioactive decay pattern of a sacred stone 2000 years ago. If we got our hands on one of these computers and monitored its internal states it would seem completely random; but if we had the Klingon manual, we would see that the computer was actually multiplying two numbers, or implementing a Klingon AI, or whatever. Would you say that these computations were not valid because it's a dumb way to design a computer?

I'd say that a definition of "computer" that applies to everything is useless.

Would it make any difference if the Klingons were extinct and every copy of the manual destroyed? What about if the exact same states in a malfunctioning human computer arose by chance, before the Klingons came up with their design?
Having the manual is necessary to make the computer useful, so that we can interact with it, but it doesn't magically *create* computation where previously there was just noise. So at least you have to say that every computation is implemented if any physical universe at all exists, even if it is comprised of a single atom which endures for a femtosecond. Hmmm. So much for the quantitative issue.

What a strange view of physics you have.

This says nothing about physics. There may well be a physical universe, with orderly physical laws, and our computers would have to be of the familiar type which consistently handle counterfactuals in order to be of use to us. But I think it is trivially obvious that any computation is hiding in noise, just as any statue is hiding in a block of marble.

There is a quantitative issue. There are only so many bits in a physical system.

This is not very interesting unless you say that computation can lead to consciousness. You could specify that only brains can lead to consciousness, or that only non-solipsistic computations with inputs and outputs based on physical reality can lead to consciousness, but that's not straight computationalism any more.

I can say that a hydrogen atom can't compute an entire virtual universe, because there isn't enough room. And even so, there is the other part of the problem. You can't validly infer from "any computation can be implemented by any physical system" to "any computation can be implemented without any physical basis".

--~--~-~--~~~---~--~~ You received this message because you are subscribed to the Google Groups Everything List group. To post to this group, send email to everything-list@googlegroups.com To unsubscribe from this group, send email to [EMAIL PROTECTED] For more options, visit this group at http://groups.google.com/group/everything-list -~--~~~~--~~--~--~---
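The mapping at issue in the Klingon exchange above can be sketched as a toy program (illustrative only; the states and the chosen computation are invented): given any fixed sequence of "physical" states, one can always construct a post-hoc lookup table that maps those states onto the steps of a chosen computation. The sketch also shows why constraint (b) bites: the table carries all the information and handles no counterfactuals.

```python
import random

# A "physical system": an arbitrary, fixed sequence of distinct states.
random.seed(0)
physical_trace = random.sample(range(10**6), 5)

# A computation we want to "find" in it: the running sums of
# computing 6 * 4 by repeated addition.
computation_trace = [6 * i for i in range(5)]  # [0, 6, 12, 18, 24]

# The "Klingon manual": a post-hoc mapping from physical states
# to computational states.
manual = dict(zip(physical_trace, computation_trace))

# Decoding the physical states through the manual reproduces the
# computation exactly...
decoded = [manual[s] for s in physical_trace]
assert decoded == computation_trace

# ...but the mapping supports no counterfactuals: a state the system
# could have entered but didn't has no image under the manual, so this
# "computer" can never be asked a different question.
other_state = next(s for s in range(10**6) if s not in manual)
assert other_state not in manual
```

The asymmetry is the whole dispute: the mapping exists for any noise whatsoever, which supports Stathis's reductio, while its failure on unvisited states is exactly what Peter's dispositional constraint is meant to rule out.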
Re: Bruno's argument
Please see after your remark/question at the end - John

- Original Message - From: Bruno Marchal [EMAIL PROTECTED] To: everything-list@googlegroups.com Sent: Friday, July 28, 2006 10:48 AM Subject: Re: Bruno's argument

On 28-Jul-06, at 02:52, John M wrote:

Then again, is the 'as-if' really a computation as in our today's vocabulary? Or, if you insist (and Bruno as well) that it IS, is it conceivable as our digital process, that embryonic first approach, or may we hope to understand later on a higher level (I have no better word for it): the analog computation of qualia and meaning? Certainly not the Turing or Church ways, and not on Intel etc. processors.

What makes you so sure? Sometimes you talk as if you were sure we are not digital machines. Is that not a human prejudice? At least I can explain why, if we are machines, we cannot *know* it (just bet on it). There is a mathematical description of machines' prejudices.

Bruno

---

JM: "...we may hope..." does not seem to me so sure. Look please at the -IF- in your offered explanation. How about if not? The mathematical description is part of the human prejudice you mention. You are within a mindset and not responsive to outside ideas. Which is natural. Once I allow my (outside) ideas to be dragged INTO the circle of your mindset, I am lost. Which may not be so bad, but if I am mistaken, I want to get it verified from arguments applicable within my thinking. Just as you cannot argue with a religious belief taken as very 'sufficient evidence' by the adherents. They KNOW, and my agnostic doubt looks to 'them' like a typical "Nescio non est argumentum". (Nor are if-s - I think.)

Best

John
Re: Bruno's argument
Thanks, Colin. I feel we also agree in your last-sentence statement; however, I could not decide whether abstraction is reductionist model-forming or a generalization into wider horizons? Patterns - I feel - are definitely reductive. That scale-game (40-50 orders of magnitude down) seems to me valid within the physical explanatory equationalized circumstances - so I scrutinize it (accept it within physics-thinking). It does not refer to 'time' (whichever you prefer). I had the notion that there 'is' only change, i.e. movement, and space is a time-coordinate of it, while time is a space-coordinate of it, as long as we think in terrestrially (not even of THIS universe) formed explanations of those figments we conclude upon the latest primitive instrumental observations in our reductionist science domain. Matter and its 'behavior' is similarly 'concluded' to reflect the 'personal' experiencing of the unknown effects.

I am deterred by the semantic direction of 'computing'. If it is Bruno's manipulation of ordinary numbers, I feel OK, but then I feel domains of incompetence. Your "as-if" changes that, and I felt lost. Why use a word with an 'other' meaning, 'as-if'? It is a cheap excuse that we have no better one G. Sorry for just multiplying the words in this exchange.

John M

- Original Message - From: Colin Hales [EMAIL PROTECTED] To: everything-list@googlegroups.com Sent: Thursday, July 27, 2006 10:32 PM Subject: RE: Bruno's argument

John M wrote:

Colin, the entire discussion is too much for me; I pick some remarks of yours and ask only about them. I am glad to see that others are also struggling to find better and more fitting words... (I search for better-fitting concepts as well, to be expressed by those better-fitting words). You wrote: "...the rest of the universe that is not 'us' behaves in a way with respect to us that we label 'physical'..." Do I sense a separation: us versus the 'rest of the universe'?
I figure it is not a relation between them (the rest of the universe) and us (what is this? God's children?), especially after your preceding sentence: "whatever the universe is, we are part of it, made of it, not separably 'in it'". I am looking for distinctive features which help us 'feel' as ourselves in the total and universal interconnectedness. The closeness (interrelation?) vs. a more remote connectivity. The 'self', which I do not expropriate for us. I have no idea about 'physical'; it reflects our age-old ways of observing whatever was observable with that poor epistemic-cognitive inventory our ancestors used, reducing mindset, observation and explanation to their models (level of the era).

40 or 50 orders of spatial magnitude down deep, space and matter merge into their common organisational parent. There is no 'separateness'; we have never justified that, only assumed it, and seen no convincing empirical evidence other than a failure of science to sort out consciousness because of the assumption. Whatever the depth of structure, we humans are ALL of it. The existence of consciousness (qualia) is proof that the separateness is virtual (as-if). IMO the separation is merely a delineation - a notional boundary supported by our perception systems. Just because a perceived boundary is closed does not mean that it is not 'open' in some other way, down deep in the structure of the universe. So I guess we are in agreement here.

Then again, is the 'as-if' really a computation as in our today's vocabulary? Or, if you insist (and Bruno as well) that it IS, is it conceivable as our digital process, that embryonic first approach, or may we hope to understand later on a higher level (I have no better word for it): the analog computation of qualia and meaning? Certainly not the Turing or Church ways, and not on Intel etc. processors.

John M

Not sure I follow you here. All abstracted computing everywhere is 'as-if'.
None of the input domains of numbers or anything else are ever reified. We simply declare a place to act like it was there and then behave as if it were. The results work fine! I'm writing this using exactly that process. Looks 'as-if' I'm writing a letter, no? :-)

Qualia require that form of computation executed by the 'natural domain'... IMO it's computation... it just doesn't fit neatly into our limited idealized mathematics, done by creatures constructed of it, from within it. The natural world does not have to comply with our limited abstractions, nor with the apparent existence of an abstraction that seems to act 'as-if' it captures everything in the natural world. Abstractions are just abstractions... ultimately it's all expressed as patterns in the stuff of the universe... IMO. If there's any property intrinsic and implicit to the reality of the universe (whatever it is, it is it!) then the abstraction throws it away.

Cheers,
Colin Hales
Re: Interested in thoughts on this excerpt from Martin Rees
Russell Standish writes, regarding http://arxiv.org/abs/astro-ph/0607227 :

Thanks for giving a digested explanation of the argument. This paper was discussed briefly on A-Void a few weeks ago, but I must admit to not following the argument too well, nor RTFA. My comment on the observer moment issue is that in a Multiverse, the measure of older observer moments is less than that of younger ones. After a certain point in time, the measure probably decreases exponentially or faster, so there will be a mean observer moment age.

I had a similar thought. I am curious to know your reasoning or justification for why this should be true. I have not read the papers referenced by this one, but the authors allude to previous work: "Given some a priori distribution of the values of the fundamental constants across the ensemble, the probability for a 'typical' observer to measure a certain value of one or more of these constants is usually taken to be proportional to the number of observers (or the number of observers per baryon)."

It is this last parenthetical comment I found interesting. Apparently there has been a difference in previous work about whether the measure should be proportional to observers vs. observers per baryon. Consider two cases: one observer in a universe of a given size, or one observer in a universe twice that big. These would be considered the same by a number-of-observers measure, but the first would have twice the measure if it were observers per baryon. I argued some time past, based on some hand-wavey arguments, that the latter measure is better - we attribute a portion of a universe's measure to an observer, proportional to the fraction of the universe that the observer takes up. This came from the UDASSA concept I was describing in detail last year. It amounts to the observers-per-baryon measure. It's interesting that physicists have considered a similar idea.
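The two-universes comparison above can be made concrete in a tiny sketch (all numbers invented; baryon counts are in arbitrary units):

```python
# Toy comparison of the two measures discussed above.
universes = [
    {"name": "U1", "observers": 1, "baryons": 1.0},
    {"name": "U2", "observers": 1, "baryons": 2.0},  # same observers, twice the size
]

by_observers = {u["name"]: u["observers"] for u in universes}
per_baryon = {u["name"]: u["observers"] / u["baryons"] for u in universes}

# Number-of-observers measure: the two universes count equally.
assert by_observers["U1"] == by_observers["U2"]

# Observers-per-baryon measure: the smaller universe gets twice the
# weight, matching the UDASSA-style intuition that an observer carries
# a share of its universe's measure proportional to the fraction of
# the universe it takes up.
assert per_baryon["U1"] == 2 * per_baryon["U2"]
```

The divergence between the two dictionaries is exactly the "last parenthetical comment" at issue: the measures agree within one universe but rank universes of different sizes differently.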
In terms of time, like Russell I would say that late observer-moments should get less measure than early ones, for the same basic reason - it takes more information to specify the location within the universe of the physical system that instantiates the OM. My reasoning, though, would imply that measure should be inversely proportional to age, rather than Russell's suggestion of an exponential decay. So I am curious where he got that. I could describe my reasoning in more detail if there is interest.

So contra all these old OMs dominating the calculation, and giving rise to an expected value of Lambda close to zero, we should expect only a finite contribution, leading to an expected finite value of Lambda. We don't know what the mean age for an observer moment should be, but presumably one could argue anthropically that it is around 10^{10} years. What does this give for an expected value of Lambda?

I don't know if I know enough physics to figure that out. I'll take another look at the paper. I see that I misstated the reason why the CC limits observation. It's not that the universe becomes uninhabitable. Rather, computation and observation are assumed to be proportional to internal energy divided by external universe temperature. It turns out that the optimal strategy is to accumulate and store up as much energy as you can, as the universe expands and cools. Then, when the universe is all cooled down, you go ahead and do all your observations and calculations. In a universe with a high CC, you can't accumulate as much energy, because it expands more quickly and hence mass-energy thins out faster. It cools down sooner and you don't have as much stored up at that time, so you can't do as much.

So what we would have to say is that that strategy is no longer optimal, because such distant observer-moments will have low measure, and we care more about OMs which have high measure.
(I admit that few people take seriously the idea that seemingly undetectable OM measure changes should matter, but I can assume that this is a super-advanced civilization and everyone is smart, so of course they will agree with me!) Instead, the optimal strategy maximizes the total measure of OM-computations, and that requires doing more computations early. OTOH, it is more efficient to wait until the universe is cooler, as we can do more computing with the same amount of energy. Maximizing the product of these two effects would require a detailed model for how quickly measure decays with time. (We'd also have to consider whether measure should change with temperature, which it might in my model; I have to think about it more.)

Of course their argument does sound plausible for a single universe - is this observational evidence in favour of a Multiverse?

I think what you're saying is that if this is the only universe, and if civilizations adopt the strategies advocated in this paper, then most OMs will be far in the future, hence by the ASSA we are unlikely to be experiencing present-day OMs. This was the basic concept of a paper
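The contrast between Russell's exponential decay and the 1/t alternative can be seen in a toy calculation (purely illustrative; both decay laws and the time scale are assumptions, not anything from the paper): under exponential decay the measure-weighted mean OM age converges, while under a 1/t measure it grows without bound as the horizon extends, so no finite "mean observer moment age" exists.

```python
import math

def mean_age(measure, horizon):
    """Measure-weighted mean OM age over ages t = 1..horizon."""
    ages = range(1, horizon + 1)
    weights = [measure(t) for t in ages]
    return sum(t * w for t, w in zip(ages, weights)) / sum(weights)

# Two assumed decay laws (the time scale of 100 is invented):
exp_decay = lambda t: math.exp(-t / 100.0)  # "exponentially or faster"
inverse_t = lambda t: 1.0 / t               # measure inversely proportional to age

for horizon in (10**3, 10**4, 10**5):
    print(horizon, mean_age(exp_decay, horizon), mean_age(inverse_t, horizon))
# Under exponential decay the mean age settles near the decay scale, so
# a mean observer moment age exists. Under 1/t the mean keeps growing
# with the horizon: old OMs never stop dominating, and the argument for
# a finite expected Lambda is correspondingly weaker.
```

This is only a sketch of the disagreement, but it shows why the choice of decay law matters for whether late OMs can be discounted at all.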
Re: Bruno's argument
Thanks, Colin. I feel we also agree in your last-sentence statement; however, I could not decide whether abstraction is reductionist model-forming or a generalization into wider horizons? Patterns - I feel - are definitely reductive.

Abstraction I would characterise as a mapping into a representational domain. As to the level of reduction, that would depend on the domain of symbols and their mapping to the mapped domain. The questions to ask yourself are: a) Who decides what the lowest-level domain is to be? b) What do you lose when you choose?

Let's look at abstracting a whole human:

a-1) Say we 100% abstract a human down to a representation of cells. Cells would be the base-level descriptive domain. Organs would be data patterns that the cells express under the rules of the abstraction. And so on. You are not letting the natural rules run. You are merely moving symbols around. No matter how powerful the computer and how detailed (lowest-level) the domain, the abstraction is just the computer's representation of the symbols being moved around.

a) If you build a squillion little computers, each to act 'as-if' it were, say, the cell level of the abstraction, with a little physical interface that meant it was just like a cell from the outside, then you have reinstated some level of the natural world's involvement... the resulting human may be indistinguishable from a human. Organs are an emergent property of these collaborating 'Turing Cells'.

b) Then again, if you reduced the abstraction level to build tiny computers that become substitute molecules, so that to all intents they looked like molecules... the human would look the same. Cells are an emergent property of collaborations of these 'Turing Molecules'. (Please ignore the need for fluids and food etc. in this body for the moment!)

If you inspected the human a-1) at the molecular level, all you would see is a computer playing with patterns, depending on the chosen abstraction domain.
If you inspected human a) at the molecular level you would see only the computer that runs cells, but the cells would look normal. There are no human molecules here, only the molecules of the computers inside the cells. If you inspected human b) at the molecular level you would see what appear to be real molecules. The cells would look normal. However, look for atoms and you won't find any.

Q. What is it like to be human a-1) cf. a) cf. b), and how well does each human operate cognitively? You could extend the argument to simulated Turing-quarks and Turing-leptons... and so on... at some point the human would acquire consciousness. What level of Turing-granularity is that? My answer would be: probably waay down deep, below where matter and space differentiate their behaviour. We have no justification that any one level of organisation is an end-point.

That scale-game (40-50 orders of magnitude down) seems to me valid within the physical explanatory equationalized circumstances - so I scrutinize it (accept it within physics-thinking). It does not refer to 'time' (whichever you prefer). I had the notion that there 'is' only change, i.e. movement, and space is a time-coordinate of it, while time is a space-coordinate of it.

Change as a structural primitive is quite workable. Imagine being human-shaped water in one place in a waterfall... i.e. regular structure within change. At any instant there is a human, but the water is flowing, so the componentry of the human is dynamically refreshed. Think of humanity: humanity survives where all the humans in it don't. Same thing at all scales. An infinity of potential collaborations of that one tiny change primitive, having a net value of 1 change primitive, can be substituted for any other change primitive. This recursiveness is the basis of a calculus/logic.
In this system, time results merely from the state of the collaboration undergoing a transition as the change primitive does what it does (e.g. changes from state A to B, then back to A). There's no such thing as time in this structure. If the state changes happen at a regular enough rate, then equations with a t in them are possible as descriptors. The universe acts 'as-if' there were time. If you are made of a pile of these changes then, if there were an observation faculty, all you would see is the collaboration evolving according to the rules of the structural primitives. You would see only the structural regularity, not the change primitives. In the waterfall metaphor, two humans as regularity in this waterfall would not see any water. They would see only each other and the space in between. In this structural domain these things are really simple.

Also: if you take a slice _across_ this structure around the level of atoms, photons etc. and devise mathematical descriptions for the behaviour of identified structures, you get quantum mechanics. QM says absolutely nothing about what it is that is behaving quantum mechanically. There
RE: Bruno's argument
Peter Jones writes (quoting SP):

The constraints (a) and (b) you mention are ad hoc and an unnecessary complication. Suppose Klingon computers change their internal code every clock cycle according to the well-documented radioactive decay pattern of a sacred stone 2000 years ago. If we got our hands on one of these computers and monitored its internal states it would seem completely random; but if we had the Klingon manual, we would see that the computer was actually multiplying two numbers, or implementing a Klingon AI, or whatever. Would you say that these computations were not valid because it's a dumb way to design a computer?

I'd say that a definition of "computer" that applies to everything is useless.

I agree, it's completely useless to *us*, because we couldn't interact with it. That would be the end of the matter, unless we say that computation can lead to consciousness, creating as it were its own observer. Are you prepared to argue that the aforementioned Klingon AI suddenly stops being conscious when the last copy of the manual which would allow us to interact with it is destroyed?

... I can say that a hydrogen atom can't compute an entire virtual universe, because there isn't enough room.

If you can map multiple computation states to one physical state, then all the requisite computations can be run in parallel on a very limited physical system.

And even so, there is the other part of the problem. You can't validly infer from "any computation can be implemented by any physical system" to "any computation can be implemented without any physical basis".

Yes, that is a valid point, and the same can be said about mathematical Platonism in general. Perhaps we have to say: all of mathematics is contingent on the existence of a real universe with at least one physical state.

Stathis Papaioannou
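Stathis's "multiple computation states to one physical state" move can be sketched in the same lookup-table style (illustrative only; both "manuals" and the trace are invented): one short physical trace, read through two different manuals, counts as two different computations at once.

```python
# One fixed physical trace (states of some tiny system).
physical_trace = ["s0", "s1", "s2", "s3"]

# Two different "manuals": each maps the same physical states onto the
# steps of a different computation.
manual_evens = dict(zip(physical_trace, [0, 2, 4, 6]))    # counting by twos
manual_squares = dict(zip(physical_trace, [0, 1, 4, 9]))  # successive squares

# The same physics, decoded two ways:
assert [manual_evens[s] for s in physical_trace] == [0, 2, 4, 6]
assert [manual_squares[s] for s in physical_trace] == [0, 1, 4, 9]

# On this view the physical system contributes nothing to which
# computation is "really" running; the manual does all the work. That is
# Peter's ground for demanding simplicity and counterfactual constraints,
# and Stathis's ground for saying the physical basis drops out entirely.
```

Stacking more manuals on the same four states is how "all the requisite computations" would run in parallel on a very limited physical system; whether that counts as computation is exactly what the thread disputes.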