On Sunday, September 29, 2013 9:36:28 PM UTC-4, Pierz wrote:
>
> If I might just butt in (said the barman)...
>
> It seems to me that Craig's insistence that "nothing is Turing emulable, 
> only the measurements are" expresses a different ontological assumption 
> from the one that computationalists take for granted. It's evident that if 
> we make a flight simulator, we will never leave the ground, regardless of 
> the verisimilitude of the simulation. So why would a simulated 
> consciousness be expected to actually be conscious? Because of different 
> ontological assumptions about matter and consciousness. Science has given 
> up on the notion of consciousness as having "being" the same way that 
> matter is assumed to. Because consciousness has no place in an objective 
> description of the world (i.e., one which is defined purely in terms of the 
> measurable), contemporary scientific thinking reduces consciousness to 
> those apparent behavioural outputs of consciousness which *can* be 
> measured. This is functionalism. Because we can't measure the presence or 
> absence of awareness, functionalism gives up on the attempt and presents 
> the functional outputs as the only things that are "really real". Hence we 
> get the Turing test. If we can't tell the difference, the simulator is no 
> longer a simulator: it *is* the thing simulated. This conclusion is shored 
> up by the apparently water-tight argument that the brain is made of atoms 
> and molecules which are Turing emulable (even if it would take the lifetime 
> of the universe to simulate the behaviour of a protein in a complex 
> cellular environment, but oh well, we can ignore quantum effects because 
> it's too hot in there anyway and just fast forward to the neuronal level, 
> right?). It's also supported by the objectifying mental habit of people 
> conditioned through years of scientific training. It becomes so natural to 
> step into the god-level third person perspective that the elision of 
> private experience starts to seem like a small matter, and a step that one 
> has no choice but to make. 
>
> Of course, the alternative does present problems of its own! Craig 
> frequently seems to slip into a kind of naturalism that would have it that 
> brains possess soft, non-mechanical sense because they are soft and 
> non-mechanical seeming.
>

Actually not. The aesthetic qualities of living organs do seem 
non-mechanical, and that may be a clue about their nature, but it doesn't 
have to be. We could make slippery, wet machines which were just as bad at 
feeling and experiencing deep qualia as a cell phone is. The naturalism 
that I appeal to arises not from the brain but from the nature of common 
experiences among humans, animals, and organisms and their apparent 
distance from inorganic systems. We can tell that a dog feels more like a 
person than a plant. Maybe it's not true. Maybe a Venus Flytrap feels like 
a dog? If I were going to recreate the universe from scratch, however, and 
had to bet on whether this intuitive hierarchy was important to include, I 
would bet that it was. It seems important, at least to living organisms. We 
need to know what we can eat and what we can impregnate with a high degree 
of accuracy, and there seems to be a very natural understanding of that 
which does not require a Turing test.
 

> They can't be machines because they don't have cables and transistors. 
> "Wetware" can't possibly be hardware.
>

No, that's a straw man of my position - but an understandable and very 
common one. Wetware is hardware, but what is using the hardware is 
different from what uses the hardware of a silicon crystal.
 

> A lot of his arguments seem to be along those lines — the refusal to 
> accept abstractions which others accept, as Telmo aptly puts it. He claims 
> to "solve the hard problem of consciousness" but the solution involves 
> manoeuvres like "putting the whole universe into the explanatory gap" 
> between objective and subjective: hardly illuminating!
>

It is illuminating to me. The universe becomes a continuum of aesthetic 
qualities modulated by physical-sense. There is no gap because there is 
nothing in the universe which does not bridge that gap. 
 

> I get irritated by neologisms like PIP (whatever that stands for now - was 
> "multi-sense realism' not obscure enough?), which to me seem to be about 
> trying to add substance to vague and poetic intuitions about reality by 
> attaching big, intellectual-sounding labels to them. 
>

I'm going to be posting a glossary in the next day or so. I know it sounds 
pretentious, but that's the irony. As with legalese, the point is not to 
obscure but to make absolutely clear. Multisense Realism is about the 
overall picture of experience and reality, while PIP (Primordial Identity 
Pansensitivity) describes the particular way that this approach differs from 
other views, like panpsychism or panexperientialism. Philosophical jargon 
is our friend :)
 

>
> However the same grain of sand that seems to get in Craig's eye does get 
> in mine too. It's conceivable that some future incarnation of "cleverbot" (
> cleverbot.com, in case you don't know it) could reach a point of passing 
> a Turing test through a combination of a vast repertoire of recorded 
> conversation and some clever linguistic parsing to do a better job of 
> keeping track of a semantic thread to the conversation (where the program 
> currently falls down). But in this case, what goes on inside the machine 
> seems to make all the difference, though the functionalists are committed 
> to rejecting that position. Cleverly simulated conversation just doesn't 
> seem to be real conversation if what is going on behind the scenes is just 
> a bunch of rules for pulling lines out of a database. It's Craig's clever 
> garbage lids. We can make a doll that screams and recoils from damaging 
> inputs and learns to avoid them, but the functional outputs of pain are not 
> the experience of pain. Imagine a being neurologically incapable of pain. 
> Like "Mary", the hypothetical woman who lives her life seeing the world 
> through a black and white monitor and cannot imagine colour qualia until 
> she is released, such an entity could not begin to comprehend the meaning 
> of screams of pain - beyond possibly recognising a self-protective 
> function. The elision of qualia from functional theories of mind has 
> potentially very serious ethical consequences - for only a subject with 
> access to those qualia truly understands them. Understanding the human 
> condition as it really is involves inhabiting human qualia. Otherwise you 
> end up with Dr Mengele — humans as objects.
>

Nice.
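
To make Pierz's point concrete: the "bunch of rules for pulling lines out 
of a database" picture is easy to sketch. Below is a minimal, hypothetical 
retrieval-style responder in Python - the canned corpus and the crude 
word-overlap scoring are my own stand-ins, not how Cleverbot actually 
works. It can produce plausible-looking replies while there is plainly 
nothing going on behind them:

    # Hypothetical sketch of a retrieval-based responder: it only
    # matches words and replays stored lines; nothing here "understands".
    corpus = [
        ("how are you", "I'm fine, thanks. How are you?"),
        ("what is pain", "Pain is what tells you to stop."),
        ("do you feel pain", "Of course. It hurts even to think about."),
    ]

    def reply(message):
        words = set(message.lower().split())
        # Score each stored prompt by word overlap with the input.
        def overlap(entry):
            prompt, _ = entry
            return len(words & set(prompt.split()))
        _, best_line = max(corpus, key=overlap)
        return best_line

    print(reply("Do you really feel pain?"))  # replays a canned answer

The screaming doll is the same sketch with sensors and actuators attached: 
the functional outputs of pain, with no pain.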
 

>
> I've read Dennett's arguments against the "qualophiles" and I find them 
> singularly unconvincing - though to say why is another long post. Dennett 
> says we only "seem" to have qualia, but what can "seem" possibly mean in 
> the absence of qualia?
>

Dennett seems to be gradually sliding away from his previous certainty in 
his old age. The last video I saw of him shows him acknowledging that 
brains work differently than he once thought. In time, it seems he may 
move much closer to a Chalmers position. 
 

>
> An illusion of a quality is an oxymoron, for the quality *is* only the way 
> it seems. 
>

Exactly.
 

> The comp assumption that computations have qualia hidden inside them is 
> not much of an answer either in my view. Why not grant the qualia equal 
> ontological status to the computations themselves, if they are part and 
> parcel? And if they cannot be known except from the inside, and if the 
> computation's result can't be known in advance, why not say that the 
> "logic" of the qualitiative experience is reflected in the mathematics as 
> much as the other way round? 
>

You've got it. If anything, wouldn't the unexplainable aesthetic textures 
and modalities of qualia make more sense as mysterious fundamentals? Since 
we know how counting works - how it objectifies anything that we care to 
apply numbers to - it is not mysterious why there would be unity within 
arithmetic truth. Five apples, five fingers, or five times we have 
forgotten our keys all have the same five in common, but that is all they 
have in common. Fiveness doesn't know about apples or fingers or keys.
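
A trivial sketch of that point (Python, purely illustrative): the count 
that comes back is identical in each case, and nothing about it remembers 
where it came from.

    # Counting abstracts away everything except cardinality.
    apples = ["apple"] * 5
    fingers = ["thumb", "index", "middle", "ring", "pinky"]
    forgotten_keys = list(range(5))  # five forgettings, say

    # The same five falls out of each, and that is all they share.
    assert len(apples) == len(fingers) == len(forgotten_keys) == 5

    # The bare number 5 retains nothing of apples, fingers, or keys.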


> Well enough. I don't have the answer. All I'm prepared to say is we are 
> still confronted by mystery. "PIP" seems to me to be more impressionistic 
> than theoretical. Comp still seems to struggle with qualia and zombies. I 
> suspect we still await the unifying perspective.
>
PIP is probably more impressionistic than theoretical, but I think it is 
still the right direction to go in. I'm not really the theory guy, I'm more 
about improving (what I call) the philosophical vacuum, so that our 
beginning assumptions do not leak anthropomorphism or mechanomorphism. I 
guess what I'm really looking for is collaboration. If someone really could 
pick up where MSR/PIP leaves off, and give it more of a 
technical/mathematical treatment, that seems like the best result I could 
hope for. I think I have the frame of the puzzle put together, or at least 
the corners, but I think I can only do that because I have no particular 
talent for grasping the interior.

Thanks,
Craig
 

>
>
> On Thursday, September 26, 2013 8:17:04 PM UTC+10, telmo_menezes wrote:
>>
>> Hi Craig (and all), 
>>
>> Now that I have a better understanding of your ideas, I would like to 
>> confront you with a thought experiment. Some of the stuff you say 
>> looks completely esoteric to me, so I imagine there are three 
>> possibilities: either you are significantly more intelligent than me 
>> or you're a bit crazy, or both. I'm not joking, I don't know. 
>>
>> But I would like to focus on sensory participation as the fundamental 
>> stuff of reality and your claim that strong AI is impossible because 
>> the machines we build are just Frankensteins, in a sense. If I 
>> understand correctly, you still believe these machines have sensory 
>> participation just because they exist, but not in the sense that they 
>> could emulate our human experiences. They have the sensory 
>> participation level of the stuff they're made of and nothing else. 
>> Right? 
>>
>> So let's talk about seeds. 
>>
>> We now know how a human being grows from a seed that we pretty much 
>> understand. We might not be able to model all the complexity involved 
>> in networks of gene expression, protein folding and so on, but we 
>> understand the building blocks. We understand them to a point where we 
>> can actually engineer the outcome to a degree. It is now 2013 and we 
>> are, in a sense, living in the future. 
>>
>> So we can now take a fertilised egg and tweak it somehow. When done 
>> successfully, a human being will grow out of it. Doing this with human 
>> eggs is considered unethical, but I believe it is technically 
>> possible. So a human being grows out of this egg. Is he/she normal? 
>>
>> What if someone actually designs the entire DNA string and grows a 
>> human being out of it? Still normal? 
>>
>> What if we simulate the growth of the organism from a string of 
>> virtual DNA and then just assemble the outcome at some stage? Still 
>> normal? 
>>
>> What if now we do away with DNA altogether and use some other Turing 
>> complete self-modifying system? 
>>
>> What if we never build the outcome but just let it live inside a 
>> simulation? We can even visit this simulation with appropriate 
>> hardware: http://www.oculusvr.com/. What now? 
>>
>> In your view, at what point does this break? And why? 
>>
>> Best, 
>> Telmo. 
>>
>
