2008/11/27 Brent Meeker <[EMAIL PROTECTED]>:

> Doesn't this antinomy arise because we equivocate on "running Firefox"? Do
> we mean a causal chain of events in the computer according to a certain
> program specification, or do we mean the appearance on the screen of the
> same thing that the causal chain would have produced? We'd say "no" by the
> first meaning, but "yes" by the second. Obviously, the question is not
> black-and-white. If the computer simply dropped a bit or two and
> miscolored a few pixels, no one would notice and no one would assert it
> wasn't running Firefox. So really, when we talk about "running Firefox" we
> are referring to a fuzzy, holistic process that admits of degrees.
A functionally equivalent copy of Firefox behaves in the same way as the
standard copy to which we are comparing it, giving the same output for a
given input. Differences which the program can't "know" about are not
important in this context, and the exact nature of the hardware - whether
solid state or valve, causal or random - is one such difference. Of course,
if the hardware is causal the program will run much more reliably, but if
the random hardware runs appropriately through luck, I don't see how the
program could know this.

> I'm developing a suspicion of arguments that say "suppose by accident...".
> If we say that the (putative) possibility of something happening "by
> accident" destroys the relevance of it happening as part of a causal
> chain, we are, in a sense, rejecting the concept of causal chains and
> relations - and not just in consciousness, as your Firefox example
> illustrates.

I would say that the significance of the causal chain lies in reliability,
not in the experience the computation has, such as it may be.

> I wrote "putative" above because this kind of thought experiment
> hypothesizes events whose probability is infinitesimal. If you take a
> finitist view, there is a lower bound to non-zero probabilities.

Can't we stay finitist and say these improbable things are very likely to
happen given a very big universe, say 3^^^3 metres across in Knuth's
notation?

> It is still trivial in the sense that it could be said to instantiate all
> possible conscious worlds (at least up to some size limit). Since we don't
> know what is necessary to instantiate consciousness, this seems much more
> speculative than saying the block of marble instantiates all computations
> - which we already agree is true only in a trivial sense.

We do know what it takes to instantiate consciousness: chemical reactions
in the brain. If these chemical reactions are computable, then an
appropriate computation should also instantiate consciousness.
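As a toy illustration of the functional-equivalence point (my own sketch,
not part of the original exchange, and with hypothetical names): two
"copies" of a function whose internal mechanisms differ entirely, but which
are indistinguishable by input/output behaviour alone - the program's-eye
view has no access to which mechanism produced the answer.

```python
def square_arithmetic(n):
    # One mechanism: compute the answer directly ("solid state").
    return n * n

# A precomputed answer table standing in for a different mechanism
# ("valve", or even a random process that happens to get it right).
_TABLE = {n: n * n for n in range(100)}

def square_lookup(n):
    # A different mechanism, the same input/output behaviour.
    return _TABLE[n]

# Over the tested range, no input can tell the two apart.
for n in range(100):
    assert square_arithmetic(n) == square_lookup(n)
```

Of course this only demonstrates equivalence over the inputs actually
tested, which is precisely the sense in which a lucky random process could
pass unnoticed.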
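To make the "3^^^3" figure concrete, here is a small sketch (my own, for
illustration) of Knuth's up-arrow notation, which only evaluates the cases
small enough to compute:

```python
def up(a, n, b):
    """Knuth's up-arrow a ^(n) b: up(a, 1, b) = a**b,
    and each extra arrow iterates the previous operation."""
    if b == 0:
        return 1
    if n == 1:
        return a ** b
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3 = 27
print(up(3, 2, 3))  # 3^(3^3) = 3^27 = 7625597484987
# up(3, 3, 3) = 3^^^3 is a power tower of 3s of height 7625597484987 -
# far too large to evaluate, which is the point of invoking it.
```

Even 2^^^3 = 2^2^2^2 = 65536 is computable, but three 3s already exceed
anything physical, so "a universe 3^^^3 metres across" is a finitist stand-in
for "as big as you like".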
If we consider only the case of inputless conscious beings, I still don't
see why they won't be instantiated in randomness.

>> as I see no reason why the
>> consciousness of these observers should be contingent on the
>> possibility of interaction with the environment containing the
>> substrate of their implementation. My conclusion from this is that
>> consciousness, in general, is not dependent on the orderly physical
>> activity which is essential for the computations that we observe.
>
> Yet this is directly contradicted by those specific instances in which
> consciousness is interrupted by disrupting the physical activity.

But if it's all a virtual reality, it isn't a concrete physical disruption
that affects consciousness. It's just that the program takes a turn which
manifests in the virtual world as brain and consciousness disruption.

>> Rather, consciousness must be a property of the abstract computation
>> itself, which leads to the conclusion that the physical world is
>> probably a virtual reality generated by the big computer in Platonia,
>
> This seems to me to be jumping to a conclusion by examining only one side
> of the argument and, finding it flawed, embracing the contrary. Abstract
> computations are atemporal and don't have to be generated. So it amounts
> to saying that the physical world just IS in virtue of there being some
> mapping between the world and some computation.

Yes. But I arrive at this conclusion because I can't think of a reason to
constrain computation so that it is only implemented by conventional
computers, and not by any and every random process.

>> The Fading Qualia argument proves functionalism, assuming that the
>> physical behaviour of the brain is computable (some people like Roger
>> Penrose dispute this). Functionalism then leads to the conclusion that
>> consciousness isn't dependent on physical activity, as discussed in
>> the recent threads.
>> So, either functionalism is wrong, or
>> consciousness resides in the Platonic realm.
>
> Or there's something wrong with the argument that functionalism implies
> consciousness isn't dependent on physical activity.

Yes, but I find the argument convincing.


-- 
Stathis Papaioannou

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en
-~----------~----~----~----~------~----~------~--~---