On Monday, September 30, 2013 8:00:11 AM UTC-4, Pierz wrote:
>
> Yes indeed, and it is compelling. Fading qualia and all that. It's the 
> absurdity of philosophical zombies. Those arguments did have an influence 
> on my thinking. On the other hand the idea that we *can* replicate all the 
> brain's outputs remains an article of faith. I remember that almost the 
> first thing I read in Dennett's book was his claim that rich, detailed 
> hallucinations (perceptions in the absence of physical stimuli) are 
> impossible. Dennett is either wrong on this - or a vast body of research 
> into hallucinogens is. Not to mention NDEs and OBEs. Dennett may be right 
> and these reports may all be mistakes and lies, but I doubt it. If he is 
> wrong, then his arguments become a compelling case in quite the opposite 
> sense to what he intended: the brain not as a manufacturer of consciousness 
> but as something more like a receptor. My instinct tells me we don't know 
> enough about the brain or consciousness to be certain of any conclusions 
> derived from logic alone. We may be like Newtonians arguing cosmology 
> without the benefit of QM and relativity.
>


The key, IMO, lies in drilling down on the fundamentals. What does it mean 
to 'receive'? What is a 'signal' and what is it doing in physics?

Also, (and this is the one Chalmers argument where I think that he missed 
it) we can turn the fading qualia argument around. Is it any less absurd to 
propose that qualia fade in, or suddenly appear due to complex wiring? 

Instead of building a brain which is like our own, couldn't we also build a 
brain that measures and analyzes social data to become a perfect sociopath? 
What if we intentionally want to suppress understanding and emotion and 
build a perfect actor, a p-Zelig, who uses chameleon-like algorithms to 
ingratiate itself in any context?

This paper from Chalmers http://consc.net/papers/combination.pdf does a 
good job of getting more into the different views on the combination 
problem, and how the micro and macro relate. I think that PIP exposes an 
assumption which all of the other approaches listed in the paper do not: 
that there is even a possibility of the phenomenal arising from the 
nonphenomenal. Once 
we take that away, we can see that our personal awareness may not be 
created by microphysical states, rather our personal awareness is a 
particular range of a total awareness that has sub-personal, 
super-personal, and impersonal (public physical) facets. 

Thanks,
Craig


>
> On Monday, September 30, 2013 2:08:23 PM UTC+10, stathisp wrote:
>>
>> On 30 September 2013 11:36, Pierz <pie...@gmail.com> wrote: 
>> > If I might just butt in (said the barman)... 
>> > 
>> > It seems to me that Craig's insistence that "nothing is Turing 
>> emulable, 
>> > only the measurements are" expresses a different ontological assumption 
>> from 
>> > the one that computationalists take for granted. It's evident that if 
>> we 
>> > make a flight simulator, we will never leave the ground, regardless of 
>> the 
>> > verisimilitude of the simulation. So why would a simulated 
>> consciousness be 
>> > expected to actually be conscious? Because of different ontological 
>> > assumptions about matter and consciousness. Science has given up on the 
>> > notion of consciousness as having "being" the same way that matter is 
>> > assumed to. Because consciousness has no place in an objective 
>> description 
>> > of the world (i.e., one which is defined purely in terms of the 
>> measurable), 
>> > contemporary scientific thinking reduces consciousness to those 
>> apparent 
>> > behavioural outputs of consciousness which *can* be measured. This is 
>> > functionalism. Because we can't measure the presence or absence of 
>> > awareness, functionalism gives up on the attempt and presents the 
>> functional 
>> > outputs as the only things that are "really real". Hence we get the 
>> Turing 
>> > test. If we can't tell the difference, the simulator is no longer a 
>> > simulator: it *is* the thing simulated. This conclusion is shored up by 
>> the 
>> > apparently water-tight argument that the brain is made of atoms and 
>> > molecules which are Turing emulable (even if it would take the lifetime 
>> of 
>> > the universe to simulate the behaviour of a protein in a complex 
>> cellular 
>> > environment, but oh well, we can ignore quantum effects because it's 
>> too hot 
>> > in there anyway and just fast forward to the neuronal level, right?). 
>> It's 
>> > also supported by the objectifying mental habit of people conditioned 
>> > through years of scientific training. It becomes so natural to step 
>> into the 
>> > god-level third person perspective that the elision of private 
>> experience 
>> starts to seem like a small matter, and a step that one has no choice but 
>> to 
>> > make. 
>> > 
>> > Of course, the alternative does present problems of its own! Craig 
>> > frequently seems to slip into a kind of naturalism that would have it 
>> that 
>> > brains possess soft, non-mechanical sense because they are soft and 
>> > non-mechanical seeming. They can't be machines because they don't have 
>> > cables and transistors. "Wetware" can't possibly be hardware. A lot of 
>> his 
>> > arguments seem to be along those lines — the refusal to accept 
>> abstractions 
>> > which others accept, as Telmo aptly puts it. He claims to "solve the 
>> hard 
>> > problem of consciousness" but the solution involves manoeuvres like 
>> "putting 
>> > the whole universe into the explanatory gap" between objective and 
>> > subjective: hardly illuminating! I get irritated by neologisms like PIP 
>> > (whatever that stands for now - was "multi-sense realism" not obscure 
>> > enough?), which to me seem to be about trying to add substance to vague 
>> and 
>> > poetic intuitions about reality by attaching big, intellectual-sounding 
>> > labels to them. 
>> > 
>> > However the same grain of sand that seems to get in Craig's eye does 
>> get in 
>> > mine too. It's conceivable that some future incarnation of "cleverbot" 
>> > (cleverbot.com, in case you don't know it) could reach a point of 
>> passing a 
>> > Turing test through a combination of a vast repertoire of recorded 
>> > conversation and some clever linguistic parsing to do a better job of 
>> > keeping track of a semantic thread to the conversation (where the 
>> program 
>> > currently falls down). But in this case, what goes in inside the 
>> machine 
>> > seems to make all the difference, though the functionalists are 
>> committed to 
>> > rejecting that position. Cleverly simulated conversation just doesn't 
>> seem 
>> > to be real conversation if what is going on behind the scenes is just a 
>> > bunch of rules for pulling lines out of a database. It's Craig's clever 
>> > garbage lids. We can make a doll that screams and recoils from damaging 
>> > inputs and learns to avoid them, but the functional outputs of pain are 
>> not 
>> > the experience of pain. Imagine a being neurologically incapable of 
>> pain. 
>> > Like "Mary", the hypothetical woman who lives her life seeing the world 
>> > through a black and white monitor and cannot imagine colour qualia 
>> until she 
>> > is released, such an entity could not begin to comprehend the meaning 
>> of 
>> > screams of pain - beyond possibly recognising a self-protective 
>> function. 
>> > The elision of qualia from functional theories of mind has potentially 
>> very 
>> > serious ethical consequences - for only a subject with access to those 
>> > qualia can truly understand them. Understanding the human condition as it 
>> really 
>> > is involves inhabiting human qualia. Otherwise you end up with Dr 
>> Mengele — 
>> > humans as objects. 
>> > 
>> > I've read Dennett's arguments against the "qualophiles" and I find them 
>> > singularly unconvincing - though to say why is another long post. 
>> Dennett 
>> > says we only "seem" to have qualia, but what can "seem" possibly mean 
>> in the 
>> > absence of qualia? An illusion of a quality is an oxymoron, for the 
>> quality 
>> > *is* only the way it seems. The comp assumption that computations have 
>> > qualia hidden inside them is not much of an answer either in my view. 
>> Why 
>> > not grant the qualia equal ontological status to the computations 
>> > themselves, if they are part and parcel? And if they cannot be known 
>> except 
>> > from the inside, and if the computation's result can't be known in 
>> advance, 
>> > why not say that the "logic" of the qualitative experience is 
>> reflected in 
>> > the mathematics as much as the other way round? 
>> > 
>> > Well enough. I don't have the answer. All I'm prepared to say is we are 
>> > still confronted by mystery. "PIP" seems to me to be more 
>> impressionistic 
>> > than theoretical. Comp still seems to struggle with qualia and zombies. 
>> I 
>> > suspect we still await the unifying perspective. 
>>
>> Have you read this paper by David Chalmers? 
>>
>> http://consc.net/papers/qualia.html 
>>
>> It assumes for the sake of argument that it is possible to make a 
>> device that replicates the externally observable behaviour of a brain 
>> component, but lacking qualia, and then shows that this leads to 
>> absurdity. 
>>
>>
>> -- 
>> Stathis Papaioannou 
>>
>
