On Monday, February 24, 2014 3:11:47 PM UTC-5, Quentin Anciaux wrote:
>
>
>
>
> 2014-02-24 20:24 GMT+01:00 Craig Weinberg <[email protected]>:
>
>>
>>
>> On Monday, February 24, 2014 2:06:24 PM UTC-5, Quentin Anciaux wrote:
>>>
>>>
>>>
>>>
>>> 2014-02-24 20:02 GMT+01:00 Craig Weinberg <[email protected]>:
>>>
>>>
>>>>
>>>> On Monday, February 24, 2014 1:10:03 PM UTC-5, David Nyman wrote:
>>>>
>>>>> On 24 February 2014 17:38, Craig Weinberg <[email protected]> wrote:
>>>>>
>>>>>> No, that's the point of the analogy, so you can see for yourself
>>>>>> why the question is not reasonable. The question posed over and
>>>>>> over to me here has been some variation of this same "But if the
>>>>>> world didn't work the way that it does, wouldn't you have to agree
>>>>>> that you were wrong and the world was right?"
>>>>>
>>>>>
>>>>> You've lost me. Surely such questions are more like "If the world
>>>>> turned out not to work in the way you predict, wouldn't you have to
>>>>> agree that you were wrong and the world was right?"
>>>>>
>>>>
>>>> It's not the way that I predict though, it is the way that the world
>>>> already is. It is CTM which is predicting a future technology that
>>>> transcends consciousness and can duplicate it.
>>>>  
>>>>
>>>>> IOW I thought I was asking a question capable of a definite answer in 
>>>>> principle. I thought you had a definite view about whether any 
>>>>> significant 
>>>>> part of the brain could be functionally substituted without subjective 
>>>>> consequences for the patient. 
>>>>>
>>>>
>>>> Yes, I have a definite view - some parts of the brain can be
>>>> functionally substituted without subjective consequences for the
>>>> personal experience of the patient, but that has nothing to do with
>>>> the transpersonal and subpersonal experiences of the patient, which
>>>> would be impacted in some way. The overall effect may or may not be
>>>> 'significant' to us personally, but it makes absolutely no difference
>>>> and is a Red Herring to the question of whether consciousness can be
>>>> generated mechanically.
>>>>  
>>>>
>>>>> In fact I assumed that your view was that this wouldn't be possible. 
>>>>> Is that incorrect?  On that assumption, I asked you to consider, 
>>>>> hypothetically, my telling you that I had survived such a substitution 
>>>>> without any loss or difference. If such an eventuality were to occur, 
>>>>> wouldn't you at least consider that this anomaly put your theory in doubt?
>>>>>
>>>>
>>>> Why would it put my theory in doubt? If you can substitute the brake
>>>> pedal on a Rolls Royce with a piece of plywood and duct tape, does
>>>> that mean that a Rolls Royce can be replaced entirely by plywood and
>>>> duct tape? Does it mean that there is some magical point where the
>>>> Rolls stops being a Rolls if you keep replacing parts? If you start
>>>> with the wood and tape, you can never get a Rolls, but if you start
>>>> with a Rolls, you can do quite a bit of modification without it being
>>>> devalued significantly.
>>>>
>>>>
>>> So that amounts to saying that you can't replace the whole brain with
>>> a functionally working replacement... if you start piece by piece,
>>> there will be a point where it is not working anymore and the external
>>> behavior is changed... is that what you mean?
>>>
>>> So if one day you're presented with someone who has endured such a
>>> process and there is absolutely no difference in his external
>>> behavior... would that point to a possibility that your theory is
>>> wrong?
>>>
>>
>> If you see two Rolls Royces and are told that one of them is made of duct 
>> tape and plywood, but you can't tell them apart, would that mean that duct 
>> tape and plywood can be used to build a Rolls Royce?
>>
>
> Yes, if I  "can't tell them apart" then by definition I "can't tell them 
> apart"...
>
> You still didn't answer the question...
>

The answer is that one person not being able to tell them apart at some 
particular moment doesn't mean anything. 

I don't know how much clearer I can make it:
 

<http://multisenserealism.files.wordpress.com/2014/02/simberg.jpg?w=595>

>
>> Think of computation as containment: a universal machine is one which
>> can be programmed to be a box, bag, jar, or bottle. You could make
>> boxes of bottles of bags, but there is nothing about containment in
>> and of itself which conjures something to be contained.
>>
>> Craig
>>  
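To make the containment analogy concrete, here is a minimal toy sketch in
Python (the UniversalContainer class and its shapes are made up purely for
illustration, not anything from the thread): the "program" picks which
container to emulate, but nothing about that program ever supplies the
contents.

    # Toy sketch only: UniversalContainer and its shapes are hypothetical.
    # The shape is the "program" (which container to emulate); the contents
    # can only ever arrive from outside the container.
    class UniversalContainer:
        SHAPES = {"box", "bag", "jar", "bottle"}

        def __init__(self, shape):
            if shape not in self.SHAPES:
                raise ValueError("unknown shape: %s" % shape)
            self.shape = shape     # which container this machine emulates
            self.contents = []     # starts empty, regardless of shape

        def put(self, item):
            self.contents.append(item)  # filling is an external act

    jar = UniversalContainer("jar")
    print(jar.shape, jar.contents)  # jar [] -- containment conjures nothing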
>>
>>>
>>> Quentin
>>>  
>>>
>>>> Craig
>>>>  
>>>>
>>>>>  
>>>>> David
>>>>>
>>>>  -- 
>>>> You received this message because you are subscribed to the Google 
>>>> Groups "Everything List" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send 
>>>> an email to [email protected].
>>>> To post to this group, send email to [email protected].
>>>>
>>>> Visit this group at http://groups.google.com/group/everything-list.
>>>> For more options, visit https://groups.google.com/groups/opt_out.
>>>>
>>>
>>>
>>>
>>>  
>>
>
>
>
> -- 
> All those moments will be lost in time, like tears in rain. (Roy 
> Batty/Rutger Hauer)
>  

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.
