On Wednesday, September 19, 2012 1:10:10 PM UTC-4, Jason wrote:
>
>
>
>> I didn't mean to say that any information can be functionally useful 
>> without qualia, only that there is a proof of concept for the principle 
>> that some information can be used functionally without qualia. This is why 
>> blindsight is such a big deal in philosophy of mind. It absolutely 
>> disproves the representational theory of qualia, 
>
>
> It doesn't, because we haven't shown no visual qualia exists in the brain 
> of someone with blindsight.  All we know is that the part of the brain 
> responsible for talking is isolated from that qualia.
>
> It is like there being two people sitting side by side, one with their 
> eyes closed, and one with their eyes open. You ask the person with their 
> eyes closed if they can see and from their response conclude that neither 
> person experienced sight.
>
> You haven't proven anything about the person with their eyes open.
>

I don't have to prove anything about them, because the person with their 
eyes closed knows how many fingers I am holding up. He is getting 
information but doesn't know how. He has no experience of qualia associated 
with it.


> in that we know for certain that it is not necessary to experience 
> personal visual qualia in order to receive personally useful information. 
> They are not inseparable on the level of a human person. You can have one 
> without the other.
>  
>
>>  
>>
>>>  
>>>
>>>>  For example, they may still have reflexes, like the ability to avoid 
>>>> obstacles or catch a thrown ball, but the language center of their brain 
>>>> is disconnected, and so the part of the brain that talks says it can't see.
>>>>
>>>
>>> I understand, but people with blindsight don't have a problem with their 
>>> speech centers. 
>>>
>>
>> They don't, but their speech center is "blind" as the data from their 
>> visual sense never makes it to all the parts of the brain it would normally.
>>
>> See the BBC Brain Series: 
>> http://mindhacks.com/2007/08/08/excellent-bbc-brain-story-series-available-online/
>>
>> It has some good explanations of this concept, showing various waves of 
>> activity emanating from different parts of the brain to others, which is 
>> also a good model for attention.
>>
>
> It doesn't matter in this case though, because with blindsight it is only 
> the visual processing which is damaged. The psychology of the person is not 
> split so that what they say is a reflection of what they intend to say. 
>
>
> It depends on the form of brain damage.
>

Sure, but that isn't a consideration in the cases of blindsight that have 
been studied.
 

>
> At the sub-personal level, sure, there is all kinds of specialization and 
> sharing of experience, but I think it is a-mereological 
>
>
> What does mereological mean?
>

Mereology is the study of part-whole relations: 
http://plato.stanford.edu/entries/mereology/

like:

(1) The handle is part of the mug.
(2) This cap is part of my pen.
(3) The left half is your part of the cake.
(4) The cutlery is part of the tableware.
(5) The contents of this bag is only part of what I bought.
(6) That area is part of the living room.
(7) The outermost points are part of the perimeter.
(8) The first act was the best part of the play.

I am saying that phenomenology is, in its purest expression, unlike all of 
these. Irony is and is not part of a story. I am and am not my mind. I am 
and am not a thing. etc. My hypothesis is that subjectivity is time, and 
that time is orthogonal to space, so that by virtue of subjectivity 
occupying no space (or, more precisely, being mereologically agnostic 
toward space), it is free from the constraints that we associate with objects 
and objective conditions. Hence, private imagination is like omnipotence 
except that it completely lacks the satisfaction that it seeks from public 
realism.


> and not a feed-forward information process of activity emanations like you 
> are assuming. If it were, all qualia would be superfluous.
>
>
> No, qualia are necessary. 
>

How do you explain blindsight then? Which qualia are necessary? How do you 
explain synesthesia and anosognosia? Where do these qualia come from? Why 
would they be necessary?
 

>  I don't believe zombies are logically consistent.  
>

What is a person with blindsight but a visual zombie?
 

> It seems you think they are possible.  Read Smullyan's story about the guy 
> who takes a pill that obliterates his awareness and tell me if you think it 
> is possible, and if not, why not.
>

I'm not the one who is curious about this. I understand my position and 
your position completely.
 

>
>  
>
>>  
>>
>>> Why fight it? Why not try looking at the evidence for what it actually 
>>> says? Information doesn't need experience. Even if it did, how would it 
>>> conjure such a thing out of thin air, and why doesn't it do that when we 
>>> are looking? Why does information never appear as a disembodied entity, say 
>>> haunting the internet or appearing spontaneously in a cartoon?
>>>  
>>>
>>>>
>>>> Sure, to us it makes sense that the feeling of pain should have a 
>>>> function, but it makes no sense to a function to have a feeling. None.
>>>>
>>>>
>>>> It can make sense if you think about it long enough.  Think of Google's 
>>>> self-driving cars.  Might they have some quale representing the experience 
>>>> of spotting a green light or a stop sign? 
>>>>
>>>
>>> The only reason to imagine that they would have a quale is because we 
>>> take our own word for the fact that there is a such thing as experience. 
>>> Otherwise there is no reason to bring qualia into it at all.
>>>  
>>>
>>>>
>>>>
>>>>
>>>>
>>>>> According to Minsky, human consciousness involves the interplay 
>>>>> between as many as 400 separate sub-organs of the brain.  One can imagine 
>>>>> a 
>>>>> symphony of activity resulting from these individual regions,
>>>>>
>>>> A symphony of what? Who is there to hear it? 
>>>>
>>>>
>>>> It's a metaphor for a large number of interacting and interfering parts.
>>>>
>>>
>>> But what in this metaphor is receiving the totality of the interaction?
>>>  
>>>
>>
>> All the parts of the brain to some extent, can "hear" the other parts.
>>
>
> Then they each would have to have a sub-brain homunculus to make sense of 
> all of that. 
>
>
> Together they lead to one large informational state.
>

Why would they? Does Bugs Bunny lead to a Looney Tunes state?
 

>
> Not only the symphony but every sub-symphony of participating synapses. 
> Hundreds of billions of notes being played every second on as many 
> micro-instruments. Why have any regions or neurological differences at all? 
>
>
> They are specialized to perform specific functions.
>

Why partition them regionally though if they can all hear each other? It 
would be like saying it makes sense to keep iPhones in one area of the 
country and Androids in another because they are specialized to perform 
specific functions.
 

>
> Why not just use the same neuron over and over?
>
>  
>>
>>>
>>>> Stop imagining things and think of what is actually there once you 
>>>> reduce the universe to unconscious processing of dead data.
>>>>
>>>>
>>>> The difference between dead and alive is a question of the 
>>>> organization, the patterns of the constituent matter.
>>>>
>>>
>>> I don't think that it is. I can make a pattern of a cell out of charcoal 
>>> or chalk and there will be no living organism that comes out of it.
>>>
>>
>> You can take some lumps of coal, some water, some air, and a few trace 
>> elements, and by appropriately arranging those atoms end up with a 
>> bacterium, a rose, or a human being.
>>
>
> Easier said than done, 
>
>
> It may not be easy but it is possible.
>

Not necessarily. The fact that nobody has ever come close to doing anything 
remotely like that might give us pause before asserting that it is 
possible. It may or may not be possible.
 

>
> but even so, once it dies, we haven't figured out how to bring it back to 
> life. 
>
>
> Sure we have, put the parts back where they were when it was alive and it 
> will come back to life.
>
> We just don't have the technical means to do this today.
>

There may not be any such thing. Every change that you make in the system 
influences the rest of it. It may be thermodynamically impossible to 'put the 
parts back where they were', because 'where they were' no longer 
exists; it was a moment in time as well as a place.
 

>
> We haven't been so successful when we have tried to build life from 
> scratch. Since they did Cosmos in the late 70s have we progressed at all in 
> getting a living cell out of primordial ooze?
>
>
> I am not sure.  If we had, would it change your mind?
>

If we were commercially producing living cells from methane and ammonia or 
whatever, then I would have more reason to doubt the significance of the 
difference between the biological and chemical ontological frames. 


>  
>
>>
>>  
>>
>>> The possibility of living organisms has to be inherent in the universe 
>>> to begin with.
>>>  
>>>
>>>>
>>>> You could reduce any life form to "lifeless bouncing around of dead 
>>>> atoms.". But this doesn't get anywhere useful.
>>>>
>>>> All I suggest is the same applies to the difference between 
>>>> consciousness and lack of consciousness.  The organization and patterns of 
>>>> some system determine what it is or can be conscious of.
>>>>
>>>
>>> If that were the case, we should see dead bodies spontaneously 
>>> self-resurrecting from time to time, Boltzmann brains cropping up in the 
>>> clouds, etc.
>>>  
>>>
>>
>> The arrow of time makes such spontaneous constructions very unlikely.  It 
>> is not surprising that we don't see them.
>>
>
> The entire biosphere is a spontaneous construction, so they seem pretty 
> likely on Earth.
>
>
> Our whole biosphere is descended from the same organism, so only the first 
> (rather simple) life form had to come into being spontaneously.
>

But that life form has to keep spontaneously creating mutations of itself 
which don't wipe out everything else.
 

>
>
> There is a difference between replacing a part of the brain that a person 
> uses to hear and replacing the parts of a brain that a person uses to be 
> themselves. 
>
>
> The only difference I see is that we haven't done it.
>

Do you see a difference between getting an arm amputated and a head 
amputated? Why would a hook suffice for one but not the other?
 

>
> This is a case for having a much, much higher standard for replacing core 
> structures than *any other medical technology in history*.
>
>
> When we replace someone's hippocampus with a chip will you tell them they 
> are zombies?
>

You won't need to tell them anything because they will be in a coma.
 

>
>  
>
>>  
>>
>>>  
>>>
>>>>
>>>>
>>>> How would such an experience appear? Where is the point of translation?
>>>>
>>>>  
>>>>>
>>>>>>
>>>>>> If the brain is doing all of the work, why does the top level 
>>>>>> organism have some other worthless abstraction layer of "experience" 
>>>>>> when, 
>>>>>> as blindsight proves, we are perfectly capable of processing information 
>>>>>> without any conscious qualia at all.
>>>>>>
>>>>>
>>>>> It's not worthless at all.  Would you still be able to function if all 
>>>>> you knew were the raw firing data of the millions of photosensitive cells 
>>>>> in your retina?  No, it takes many layers of perception, detecting lines, 
>>>>> depth perception, motion, colors, objects, faces, etc. for the sense of 
>>>>> sight to be as useful as it is to us.
>>>>>
>>>>
>>>> Ugh. I don't know if there is any way that I can show you this blind 
>>>> spot if you don't see it for yourself, but if you are interested I will 
>>>> keep trying to explain it. If you aren't interested, then you are wasting 
>>>> your time talking to me, because what your view says I have known 
>>>> backwards 
>>>> and forwards for many years.
>>>>
>>>> Let's say I am a computer. You are telling me "Would you still be able 
>>>> to function if all you knew were the raw firing data of the millions of 
>>>> electronically sensitive semiconductors in your graphics card?" Yes, I 
>>>> would. 
>>>>
>>>>
>>>> You wouldn't be processing it in the same way as a brain so I would not 
>>>> expect a video card to be conscious in the same way.
>>>>
>>>
>>> The principle is the same though. The level of complexity doesn't change 
>>> anything.
>>>
>>
>> The particular function that is implemented is everything.
>>
>
> The function is being accomplished the same regardless. If I am a graphics 
> card, I don't need to see any graphics.
>
>
> It is no wonder why you have no faith in functionalism, if you see no 
> difference between what a videocard does and what the visual cortex does. 
>

Just the opposite. You are the one who thinks that the visual cortex is a 
computer.
 

>
>
>  
>
>>  
>>
>>>  
>>>
>>>>
>>>> I require no layers of software to organize this data into other kinds 
>>>> of data, nor would it make any sense that there could be any such thing as 
>>>> 'other kinds of data'. To the contrary, the raw firing of the 
>>>> semiconductors is all that is required to render data from the motherboard 
>>>> to be spewed out to a video screen (which would of course be invisible and 
>>>> irrelevant to a computer).
>>>>
>>>>
>>>> The videocard can't recognize objects or faces.
>>>>
>>>
>>> It doesn't need to. As long as we can digitally categorize pixel 
>>> regions, there is no need for 'faces' or 'objects'.
>>>  
>>>
>>
>> Then it will suffer face blindness and visual agnosia; it won't 
>> experience visual sensation in the same way we do.
>>
>
> It won't need to experience anything. The function of recognition 
> continues regardless.
>  
>
>>  
>>
>>>
>>>>
>>>>  
>>>>
>>>>>   After the different layers process this information and share it 
>>>>> with the other brain regions, we lose the ability to explain how it is we 
>>>>> recognize a face, or how red differs from green.  These determinations 
>>>>> were 
>>>>> done by a lower-level module, and its internal processing is not 
>>>>> accessible to other brain regions (such as the brain region that talks), 
>>>>> and so it remains mysterious.
>>>>>
>>>>
>>>> All of that can and would occur without anything like 'experience'.
>>>>
>>>>
>>>> So it is an accident that we can see and know we can see, since we 
>>>> could be zombies?  How do you know I am not a zombie?  Maybe only 
>>>> conscious 
>>>> people can understand your theory and everyone who fails to get it is 
>>>> confused due to their zombiehood.
>>>>
>>>
>>> Not an accident, no. Sense is self-translucent. That's how I know that 
>>> you aren't a zombie and how I know that I don't need to know that you 
>>> aren't a zombie, and how I know that if I wanted to I could make a 
>>> plausible case for how I know you aren't a zombie.
>>>
>>
>> Good, then when computers are conscious this will be self-translucent to 
>> you, and you won't end up treating them as second-class citizens.
>>
>
> Promissory materialism only sounds desperate to me. It weakens the case. 
> "Just wait until Jesus comes...then you'll be sorry!"
>
>
> If you are so certain I am conscious, then you have affirmed Turing's test.
>
> My emails could be the output of a program, and yet my "self-translucent 
> self" has shown through; you know someone is inside.
>
> If/when computer-based minds walk around and marry your daughter, you will 
> similarly come to accept their consciousness.
>

It doesn't matter who is fooled by sophisticated androids; they still have 
no experience of their own. People believe that someone is inside of their 
stuffed animals too. So what? It is easy for people to fool other people.  

> So is it dualism or monism?
>
 
>
> A nested dual aspect monism. 
> http://media.tumblr.com/tumblr_m9vtbj9N0m1qe3q3v.jpg
>  
>
> Can you explain this picture?
>
>
I cannibalized it from this Wikipedia image: 
http://en.wikipedia.org/wiki/File:Dualism-vs-Monism.png

Putting multisense realism into these terms, I show how the neutral monism 
is both the outer pie and the nested pie, with all of the juxtaposition 
relations. The dashed lines are derivative, the hard lines are fundamental, 
so that there are two derivative aspects of the fundamental 
sense-within-sense. The right-hand wedge is the more derivative of the 
derivative, i.e. subjective phenomenology, with its qualitative sensations 
which exist for aesthetic pleasure as well as arithmetic function. The 
left-hand wedge is the opposite, objective or public realism, with no color, 
no jumping to conclusions, just a 'simply is' presentation of stable 
structure. 

>  
>
>>> which is ordinary 'sense'. Matter is a spatial public exterior, 
>>> experience is a temporal private interior. They are the same thing but 
>>> 'rotated 90 degrees'. Sense is what does the rotating and the discerning of 
>>> its own rotations and levels of meta-juxtaposition.
>>>
>>
>> How do you know there is matter (rather than the illusion of matter) if 
>> the only thing that is concrete is experience?
>>
>
> Because illusion only means that there is some alternative explanation 
> which makes more sense. Matter already makes sense.
>
>
> Then maybe other things besides awareness have a concrete independent 
> existence too.
>

If they do, we will never know about them, so what does it matter?

Craig
 
