Bruno Marchal writes:

> I will ask you to be patient until next Wednesday because I am busy  
> right now. I think we agree on many things, and this is an opportunity  
> to search for where exactly we diverge, if we diverge.
> For example I disagree with the expression "brains are conscious", but I  
> am reading you more carefully to see if this is just a way of talking,  
> or if this hides some deeper divergence.

Thanks for making the time to reply. I will probably have limited internet 
access for the next couple of weeks.

Regarding consciousness being generated by physical activity, would it help if 
I said that if a conventional computer is conscious, then, to be consistent, a 
rock would also have to be conscious? 
 
> Also, when you say:
> 
> > If consciousness is decoupled from physical activity, this means that  
> > our conscious experience would be unchanged if the apparent physical  
> > activity of our brain were not really there. This would make all the  
> > evidence of our senses on which we base the existence of a physical  
> > world illusory: we would have these same experiences if the physical  
> > world suddenly disappeared or never existed in the first place. We can  
> > keep (1) and (2) because I qualified them with the word "appears", but  
> > the necessity of a separate physical reality accounting for this  
> > appearance goes.
> 
> I really don't understand. It is not because we lose the primitive  
> physical reality that we lose reality altogether. Remember that my point  
> is that IF we keep comp, then indeed we must explain the appearance of  
> physical reality without conjecturing such a reality as a primitive one,  
> and then I show how to do that (including some results encouraging for  
> the comp hypothesis).
> 
> It is a way to sum up my point. If we are machines, then the assumption  
> of a physical reality cannot explain why we believe in a physical  
> reality. Indeed the belief in a physical reality is something emergent.  
> With comp the mind-body problem is twice as difficult as without,  
> because we have to explain the "appearance of matter" without assuming  
> it.
> 
> Why do you think all the evidence of our senses would be illusory?  
> Nobody has seen "primitive matter". I thought even Kant made this  
> clear.
> Papers by physicists never assume "primitive matter", except in some  
> implicit background which plays no role in the ideas. All papers by  
> physicists just propose relations between numbers, and eventually "I"  
> relate those numbers with some of my qualia (like the qualia of being  
> convinced of having seen a needle in such or such a place). Primitive  
> matter is a metaphysical concept today, and with comp it has been made  
> scientific: indeed it is made quasi-contradictory.
> 
> I believe in the moon, but I don't believe the moon is made of atoms  
> (this is just a good local description). I believe in atoms, but I  
> don't believe atoms are made of elementary particles (this is just a  
> good local description). I believe in elementary particles, but I don't  
> believe elementary particles are made of strings (this could be a good  
> local description), etc. (and with comp probably the "etc" is  
> obligatory).
> I believe in strings, but I don't believe strings are made of something  
> else, except in some first approximation.
> But with comp I do believe all that *must* emerge from the "collective  
> behavior" of numbers.
> 
> Regards,
> 
> Bruno

It's difficult to find the right words here. I think we can all agree on the 
appearance of a physical reality as a starting point. The common sense view 
is that there is an underlying primitive physical reality generating this 
appearance, without which the appearance would vanish and relative to which 
dream and illusion can be defined. If this is so, it is not a scientifically 
testable theory. We can't just switch off the physical reality to see 
whether it changes the appearance, and the further we delve into matter all 
we see is more appearance (and stranger and stranger appearance at that). 
Moreover, dream and illusion are defined relative to the appearance of 
regular physical reality, not relative to the postulated primitive physical 
reality.

> Le 08-janv.-07, à 23:36, Stathis Papaioannou a écrit :
> 
> >
> >
> > Bruno Marchal writes:
> >
> > > Le 07-janv.-07, à 19:21, Brent Meeker a écrit :
> > >
> > > > And does it even have to be very good? Suppose it made a sloppy
> > > > copy of me that left out 90% of my memories - would it still be
> > > > "me"? How much fidelity is required for Bruno's argument? I think
> > > > not much.
> > >
> > > The argument does not depend at all on the level of fidelity. Indeed
> > > I make clear (as much as possible) that comp is equivalent to the
> > > belief that there is a level of substitution of myself (3-person)
> > > such that I (1-person) survive a functional substitution done at
> > > that level. Then I show that no machine can know what her level of
> > > substitution is (and thus has to bet or guess about it).
> > > This is also the reason why comp is not jeopardized by the idea that
> > > the environment is needed: just put the environment in the
> > > definition of my "generalized brain".
> > > Imagine someone who says that his brain is the entire galaxy,
> > > described at the level of all interacting quantum strings. This can
> > > be captured by giant (to say the least) but finite, rational complex
> > > matrices. Of course the thought experiment with the "yes doctor"
> > > will look very non-realist, but *in fine*, all that is needed (for
> > > the reversal) is that the Universal Dovetailer get through the state
> > > of my generalized brain, and the UD will get it even if my "state"
> > > is the state of the whole galaxy, or more.
> > > If it happens that my state is the galaxy state AND the galaxy
> > > state cannot be captured in a finite (even giant) way(*), then we
> > > are just out of the scope of the comp reasoning. This is possible
> > > because comp may be wrong.
> > >
> > > > This is right, and it is perhaps a consequence of comp that
> > > > computationalists did not bargain on. If the functional equivalent
> > > > of my brain has to interact with the environment in the same way
> > > > that I do, then that puts a constraint on what sort of machine it
> > > > can be, as well as necessitating of course that it be an actual
> > > > physical machine. For example, if as part of asserting my status
> > > > as a conscious being I decide to lift my hand in the air when I
> > > > see a red ball, then my functional replacement must (at least)
> > > > have photoreceptors which send a signal to a central processor
> > > > which then sends a motor signal to its hand. If it fails the red
> > > > ball test, then it isn't functionally equivalent to me.
> > > > However, what if you put the red ball, the hand and the whole
> > > > environment inside the central processor? You program in data
> > > > which tells it it is seeing a red ball, it sends a signal to what
> > > > it thinks is its hand, and it receives visual and proprioceptive
> > > > data telling it it has successfully raised the hand. Given that
> > > > this self-contained machine was derived from a known computer
> > > > architecture with known sensors and effectors, we would know what
> > > > it was thinking by eavesdropping on its internal processes. But if
> > > > we didn't have this knowledge, is there any way, even in theory,
> > > > that we could figure it out? The answer in general is "no":
> > > > without benefit of environmental interaction, or an instruction
> > > > manual, there is no way to assign meaning to the workings of a
> > > > machine and there is no way to know anything about its
> > > > consciousness.
> > > Up to here I do agree.
> > >
> > > > The corollary of this is that under the right interpretation a
> > > > machine could have any meaning or any consciousness.
> > > I don't think that this corollary follows. Unless you are
> > > postulating a "physical world" having some special property (and
> > > then the question is: what are your axioms for that physical realm,
> > > and where do those axioms come from). Even in classical physics,
> > > where a point can move along "all real numbers", I don't see any
> > > reason such a move can represent any computation. Of course I
> > > consider a computation as being something non-physical, and
> > > essentially discrete, at the start. The physical and continuous
> > > aspect comes from the fact that any computation is "embedded" into
> > > an infinity of "parallel" computations (those generated by the UD).
> > > Brent says that the "evil problem" is a problem only for those who
> > > postulate an omniscient and omnipotent good god. I believe that
> > > somehow a large part of the "mind/body" problem comes from our
> > > (instinctive) assumption of a basic (primitive) physical reality.
> >
> > Yes, but you're assuming here that which I (you?) set out to prove. In  
> > your UDA you do not explicitly start out with "there is no physical  
> > world" but arrive at this as a conclusion. Consider the following  
> > steps:
> >
> > 1. There appears to be a physical world
> > 2. Some of the substructures in this world appear to be conscious,  
> > namely brains
> > 3. The third person behaviour of these brains can be copied by an  
> > appropriate digital computer
> > 4. The first person experience of these brains would also thereby be  
> > copied by such a computer
> > 5. The first person experience of the computer would remain unchanged  
> > if the third person behaviour were made part of the program
> > 6. But this would mean there is no way to attach meaning or  
> > consciousness to the self-contained computer - it could be thinking of  
> > a red ball, blue ball, or no ball
> >
> > This is an unexpected result, which can be resolved several ways:
> >
> > 7. (3) is incorrect, and we need not worry about the rest
> > 8. (4) is incorrect, and we need not worry about the rest
> > 9. (5) is incorrect, and we need not worry about the rest
> >
> > [(7) or (8) would mean that there is something non-computational about  
> > the brain; (9) would mean there is something non-computational about a  
> > computer interacting with its environment, which seems to me even less  
> > plausible.]
> >
> > But if (3) to (5) are all correct, that leaves (6) as correct, which  
> > implies that consciousness is decoupled from physical activity. This  
> > would mean that in those cases where we do associate consciousness  
> > with particular physical activity, such as in brains or computers, it  
> > is not really the physical activity which is "causing" the  
> > consciousness. I think of it as analogous to a computer doing  
> > arithmetic: it is not "causing" the arithmetic, but is harnessing a  
> > mathematical truth to some physical task.
> >
> > If consciousness is decoupled from physical activity, this means that  
> > our conscious experience would be unchanged if the apparent physical  
> > activity of our brain were not really there. This would make all the  
> > evidence of our senses on which we base the existence of a physical  
> > world illusory: we would have these same experiences if the physical  
> > world suddenly disappeared or never existed in the first place. We can  
> > keep (1) and (2) because I qualified them with the word "appears", but  
> > the necessity of a separate physical reality accounting for this  
> > appearance goes.
> >
> > > > You can't avoid the above problem without making changes to
> > > > (standard) computationalism. You can drop computationalism
> > > > altogether and say that the brain + environment is not Turing
> > > > emulable. Or, as Bruno has suggested, you can keep
> > > > computationalism and drop the physical supervenience criterion.
> > >
> > > I am OK with this ('course).
> > >
> > > Bruno
> > > http://iridia.ulb.ac.be/~marchal/

Stathis Papaioannou
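
PS: For anyone on the list who hasn't seen the Universal Dovetailer Bruno 
keeps referring to spelled out, the interleaving trick can be sketched in a 
few lines. This is only a toy illustration under loose assumptions: a 
"program" here is just a Python generator that never halts (program n counts 
upward from n), whereas the real UD enumerates and steps through all programs 
of a universal machine.

```python
# Toy Universal Dovetailer sketch: interleave the execution of an unbounded
# family of programs so that every program eventually receives unboundedly
# many steps, even though no program ever halts.

def make_program(n):
    """Hypothetical stand-in for 'the n-th program': yields (n, step) forever."""
    i = n
    while True:
        yield (n, i)
        i += 1

def dovetail(phases):
    """In phase k, start program k, then advance each of programs 0..k by
    one step. Returns the trace of (program, state) pairs visited."""
    running = []
    trace = []
    for k in range(phases):
        running.append(make_program(k))  # dovetail a new program into the mix
        for prog in running:             # one step for every program so far
            trace.append(next(prog))
    return trace

# Three phases step program 0 three times, program 1 twice, program 2 once.
print(dovetail(3))  # [(0, 0), (0, 1), (1, 1), (0, 2), (1, 2), (2, 2)]
```

The point of the zig-zag ordering is that no single non-halting program can 
starve the others, which is why the UD "gets through" any given machine 
state in the limit.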
--~--~---------~--~----~------------~-------~--~----~
 You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~----------~----~----~----~------~----~------~--~---

