On Saturday, October 20, 2012 1:47:28 PM UTC-4, stathisp wrote:
>
>
>
> On Oct 15, 2012, at 4:10 AM, Craig Weinberg <whats...@gmail.com> 
> wrote:
>
>
> >> But since you misunderstand the first assumption you misunderstand the 
>> >> whole argument. 
>> > 
>> > 
>> > Nope. You misunderstand my argument completely. 
>>
>> Perhaps I do, but you specifically misunderstand that the argument 
>> depends on the assumption that computers don't have consciousness. 
>
>
> No, I do understand that.
>
>
> Good.
>
> You 
>> also misunderstand (or pretend to) the idea that a brain or computer 
>> does not have to know the entire future history of the universe and 
>> how it will respond to every situation it may encounter in order to 
>> function. 
>
>
> Do you have to know the entire history of how you learned English to read 
> these words? It depends on what you mean by 'know'. You don't have to 
> consciously recall learning English, but without that experience, you 
> wouldn't be able to read this. If you had a module implanted in your brain 
> which would allow you to read Chinese, it might give you an acceptable 
> capacity to translate Chinese phonemes and characters, but it would be a 
> generic understanding, not one rooted in decades of human interaction. Do 
> you see the difference? Do you see how words are not only functional data 
> but also names which carry personal significance?
>
>
> The atoms in my brain don't have to know how to read Chinese. They only 
> need to know how to be carbon, nitrogen, oxygen etc. atoms. The complex 
> behaviour which is reading Chinese comes from the interaction of billions 
> of these atoms doing their simple thing. 
>
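
The claim here is the standard emergence argument. A minimal sketch of it, 
using Conway's Game of Life as the stock example (the glider below is 
purely illustrative): each cell follows one trivial local rule, yet the 
grid as a whole produces moving shapes that no individual cell "knows 
about".

    from collections import Counter

    def step(live):
        """Advance one generation; `live` is a set of (x, y) cells."""
        # Count live neighbours of every cell adjacent to a live cell.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1)
                         for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # One trivial rule: alive next tick iff 3 neighbours, or
        # 2 neighbours and already alive.
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in live)}

    # A "glider": five cells whose shape walks diagonally across the
    # grid, though each cell only ever applies the local rule.
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        cells = step(cells)
    print(sorted(cells))  # same shape, shifted one cell down-right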

I don't think that is true. The other way around makes just as much sense, 
if not more: reading Chinese is a simple behavior which drives billions of 
atoms into a complex interaction. To me, it has to be both bottom-up and 
top-down. It seems like a completely arbitrary prejudice to presume one 
over the other just because we think that we understand the bottom-up side 
so well.

Once you can see that it must be both bottom-up and top-down at the same 
time, the next step is to see that there is no possibility of it being a 
cause-effect relationship; it is rather a dual-aspect ontological relation. 
Nothing is translating the functions of neurons into a Cartesian theater of 
experience - there is nowhere in the tissue of the brain to put such a 
theater, and there is no evidence of a translation from neural protocols to 
sensorimotive protocols. They are clearly the same thing. 
 

> If the atoms in my brain were put into a Chinese-reading configuration, 
> either through a lot of work learning the language or through direct 
> manipulation, then I would be able to understand Chinese.
>

It's understandable to assume that, but no, I don't think it works like 
that. You can't transplant a language into a brain instantaneously, because 
there is no personal history of association. Your understanding of language 
is not a lookup table in space; it is made out of you. It would be like 
walking around with Google Translate in your brain: you could enter words 
and phrases and turn them into your language, but you would never know the 
language first hand. The knowledge would be impersonal - accessible, but 
not woven into your proprietary sense.
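
A minimal sketch of the 'lookup table' picture, assuming a toy phrasebook 
(the vocabulary is purely illustrative): the mapping reproduces acceptable 
output, but nothing in it corresponds to the personal history that grounds 
a word for a speaker, and it has no way to generalize beyond the table.

    # A toy "implanted translator": a bare token-to-gloss mapping.
    PHRASEBOOK = {
        "你好": "hello",
        "谢谢": "thank you",
        "再见": "goodbye",
    }

    def translate(tokens):
        # Unknown tokens stay opaque: a lookup has no experience
        # behind it to generalize from.
        return [PHRASEBOOK.get(t, "<?>") for t in tokens]

    print(translate(["你好", "朋友"]))  # ['hello', '<?>']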
 

>
> What are some equivalently simple, uncontroversial things in 
>> what you say that I misunderstand? 
>>
>
> You think that I don't get that Fading Qualia is a story about a world in 
> which the brain cannot be substituted, but I do. Chalmers is saying 'OK, 
> let's say that's true - how would that be? Would your blue be less and less 
> blue? How could you act normally if you...blah, blah, blah'. I get that. 
> It's crystal clear.
>
> What you don't understand is that this carries a priori assumptions about 
> the nature of consciousness, that it is an end result of a distributed 
> process which is monolithic. I am saying NO, THAT IS NOT HOW IT IS.
>
> Imagine that we had one eye in the front of our heads and one ear in the 
> back, and that the whole of human history has been a debate over whether 
> walking forward means that objects are moving toward you or whether it 
> means changes in relative volume of sounds.
>
> Chalmers is saying, 'if we gradually replaced the eye with parts of the 
> ear, how would our sight gradually change to sound, or would it suddenly 
> switch over?' Since both options seem absurd, he concludes that 
> anything that is in the front of the head is an eye and everything on the 
> back is an ear, or that everything has both ear and eye potentials.
>
> In the MR model, these two views are not merely substance-dual or 
> property-dual; they are involuted juxtapositions of each other. The 
> difference between front and back is not merely irreconcilable, 
> it is mutually exclusive by definition in experience. I am not throwing up 
> my hands and saying 'ears can't be eyes because eyes are special', I am 
> positively asserting that there is a way of modeling the eye-ear relation 
> based on an understanding of what time, space, matter, energy, entropy, 
> significance, perception, and participation actually are and how they 
> relate to each other.
>
> The idea that the newly discovered ear-based models out of the back of our 
> head are eventually going to explain the eye view out of the front is not 
> scientific, it's an ideological faith that I understand to be critically 
> flawed. The evidence is all around us, we have only to interpret it that 
> way rather than to keep updating our description of reality to match the 
> narrowness of our fundamental theory. The theory only works for the back 
> view of the world...it says *nothing* useful about the front view. To the 
> True Disbeliever, this is a sign that we need to double down on the back 
> end view because it's the best chance we have. The thinking is that any 
> other position implies that we throw out the back end view entirely and go 
> back to the dark ages of front end fanaticism. I am not suggesting a 
> compromise; I propose a complete overhaul in which we start not from the 
> front and move back, or from the back and move front, but from the split, 
> and see how it can be understood as a double knot - a fold of folds.
>
>
> I'm sorry, but this whole passage is a non sequitur as far as the fading 
> qualia thought experiment goes. You have to explain what you think would 
> happen if part of your brain were replaced with a functional equivalent. 
>

There is no functional equivalent. That's what I am saying. Functional 
equivalence, when it comes to a person, is a non sequitur. Not only is every 
person unique, they are an expression of uniqueness itself. They define 
uniqueness in a never-before-experienced way. This is a completely new way 
of understanding consciousness and signal. Not as mechanism, but as 
animism-mechanism.

 

> A functional equivalent would stimulate the remaining neurons the same as 
> the part that is replaced. 
>
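
What 'functional equivalent' means in the thought experiment can be 
sketched as interface substitution: the rest of the system sees only 
inputs and outputs, so a component that reproduces the same input-output 
mapping is, from the outside, indistinguishable. Both classes below are 
hypothetical stand-ins, not a model of real neurons.

    # Downstream parts only see what fire() returns, so any component
    # with the same stimulus-to-output mapping looks identical to them.
    class BiologicalNeuron:
        def fire(self, stimulus):
            return stimulus > 0.5  # spikes above a threshold

    class ReplacementChip:
        def fire(self, stimulus):
            return stimulus > 0.5  # same mapping, different substrate

    for part in (BiologicalNeuron(), ReplacementChip()):
        assert [part.fire(s) for s in (0.2, 0.7)] == [False, True]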

No such thing. Does any imitation function identically to an original?
 

> The original paper says this is a computer chip but this is not necessary 
> to make the point: we could just say that it is any device that is not the 
> normal biological neurons. If consciousness is substrate-dependent (as you 
> claim) then the device could do its job of stimulating the neurons normally 
> while lacking or differing in consciousness. Since it stimulates the 
> neurons normally, you would behave normally. If you didn't, it would be a 
> miracle, since your muscles would receive their normal signals and would 
> have to contract normally. Do you at 
> least see this point, or do you think that your muscles would do something 
> different?
>

I see the point completely. That's the problem: you keep trying to explain 
to me what is obvious, while I am trying to explain to you something much 
more subtle and sophisticated. I can replace neurons which control my 
muscles, because muscles are among the most distant and replaceable parts 
of 'me'. These nerves are outbound efferent nerves, and the target muscle 
cells are for the most part willing servants. The same goes for amputating 
my arm: I can replace it in theory. What I am saying, though, is that 
amputating my head is not even theoretically possible. Wherever my head is, 
that is where I have to be. If I replace my brain with other parts, the 
more parts there are, the less of me there is left.

The brain isn't like a computer, though. You can't just pull something out 
and put it back in if it doesn't work. In the brain, as soon as you screw 
it up, you get coma, death, dementia, stroke, etc. It's part of a living 
creature made of smaller living creatures. It doesn't matter how closely 
you think your substitute brain acts like my brain: I am never going to be 
found in your substitute brain, and the substitute brain will never even 
get close to working properly.

Computers do not work very well. Every time I turn on my stupid phone there 
are like 25 updates, and I hardly do anything with it. Can you imagine how 
unreliable a network the size of a synthetic brain would be? How easy it 
would be to halt the thalamus program and kill you? It is wildly 
overconfident and factually misguided to think of the self and the brain in 
these terms. I see it like 19th century Jules Verne sci-fi now. It's just 
silly, and every week there are more studies suggesting that our 
neuroscientific models are more and more inadequate. They don't add up.

Craig
 

>
>
> -- Stathis Papaioannou
>
