Brent wrote: ... *"But is causality an implementation detail? There seems to be an implicit assumption that digitally represented states form a sequence just because there is a rule that defines(*) that sequence, but in fact all digital (and other) sequences depend on(**) causal chains." ...*
I would insert at (*): *'in digitality'* - and at (**): *'(the co-interefficiency of) unlimited'* - because in my vocabulary (and I do not expect the rest of the world to accept it) the conventional term *'causality'* - meaning to find *"A CAUSE"* within the (observed) topical etc. model that entails the (observed) 'effect' - gave place to the unlimited interconnections that, in their total interefficiency, result in the effect we observe within a model-domain, irrespective of the limits of the observed domain. "Cause" - IMO - is a limited term of ancient, narrow epistemic (model-based?) views, not fit for discussions in a "TOE"-oriented style. Using obsolete words impresses itself on the conclusions as well.

John Mikes

On Thu, Nov 27, 2008 at 3:43 PM, Brent Meeker <[EMAIL PROTECTED]> wrote:

> Bruno Marchal wrote:
> >
> > On 25 Nov 2008, at 20:16, Brent Meeker wrote:
> >
> >> Bruno Marchal wrote:
> >
> >>>> Brent: I don't see why the mechanist-materialists are
> >>>> logically disallowed from incorporating that kind of physical
> >>>> difference into their notion of consciousness.
> >>>
> >>> Bruno: In our setting, it means that the neuron/logic gates have
> >>> some form of prescience.
> >>
> >> Brent: I'm not sure I agree with that. If consciousness is a process
> >> it may be instantiated in physical relations (causal?). But relations
> >> are in general not attributes of the relata. Distance is an abstract
> >> relation but it is always realized as the distance between two things.
> >> The things themselves don't have "distance". If some neurons encode my
> >> experience of "seeing a rose" might not the experience depend on the
> >> existence of roses, the evolution of sight, and the causal chain as
> >> well as the immediate state of the neurons?
> >
> > With *digital* mechanism, it would just mean that we have not chosen
> > the right level of substitution.
> > Once the level is well chosen, then we can no more give a role to the
> > implementation details. They can no more be relevant, or we introduce
> > prescience in the elementary components.
>
> But is causality an implementation detail? There seems to be an implicit
> assumption that digitally represented states form a sequence just because
> there is a rule that defines that sequence, but in fact all digital (and
> other) sequences depend on causal chains.
>
> >>>
> >>>> Bostrom's views about fractional
> >>>> "quantities" of experience are a case in point.
> >>>
> >>> If that was true, why would you say "yes" to the doctor without
> >>> knowing the thickness of the artificial axons?
> >>> How can you be sure your consciousness will not half diminish when
> >>> the doctor proposes to you the new cheaper brain which uses thinner
> >>> fibers, or half the number of redundant security fibers (thanks to a
> >>> progress in security software)?
> >>> I would no more dare to say "yes" to the doctor if I could lose a
> >>> fraction of my consciousness and become a partial zombie.
> >>
> >> But who would say "yes" to the doctor if he said that he would take a
> >> movie of your brain states and project it? Or if he said he would just
> >> destroy you in this universe and you would continue your experiences
> >> in other branches of the multiverse or in platonia? Not many I think.
> >
> > I agree with you. Not many will say yes to such a doctor! Even
> > rightly so (with MEC). I think MGA 3 should make this clear.
> > The point is just that if we assume both MEC *and* MAT, then the
> > movie is "also" conscious, but of course (well: by MGA 3) it is not
> > conscious "qua computatio", so that we get the (NON COMP or NON MAT)
> > conclusion.
>
> It's not so clear to me. One argument leads to CONSCIOUS and the other
> leads to NON-CONSCIOUS, but there is no direct contradiction - only a
> contradiction of intuitions.
> So it may be a fault of intuition in evaluating the thought experiments.
>
> Brent
>
> > I keep COMP (as my working hypothesis, but of course I find it
> > plausible for many reasons), so I abandon MAT. With comp,
> > consciousness can still supervene on computations (in Platonia, or
> > more concretely in the universal deployment), but not on their physical
> > implementation. By UDA we have indeed the obligation now to explain
> > the physical by the computational. It is the reversal I talked about.
> > Somehow, consciousness does not supervene on brain activity, but brain
> > activity supervenes on consciousness. To be short, because
> > consciousness is now somehow related with the whole of arithmetical
> > truth, and things are not so simple.
> >
> > Bruno
> > http://iridia.ulb.ac.be/~marchal/

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en
-~----------~----~----~----~------~----~------~--~---

