On 30 Sep 2013, at 14:00, Pierz wrote:
Yes indeed, and it is compelling. Fading qualia and all that. It's
the absurdity of philosophical zombies. Those arguments did have an
influence on my thinking. On the other hand the idea that we *can*
replicate all the brain's outputs remains an article of faith.
OK. That is behavioral mechanism (Be-Me), and I agree that it asks for
an act of faith, despite the strong evidence that nature already
exploits this.
Comp asks for a much bigger act of faith, as you have to believe that
you survive through the duplication. It is logically conceivable that
we can replicate ourselves, but fail to survive through that
replication. Comp -> Be-Me, but Be-Me does not imply comp.
I remember that almost the first thing I read in Dennett's book was
his claim that rich, detailed hallucinations (perceptions in the
absence of physical stimuli) are impossible. Dennett is either wrong
on this - or a vast body of research into hallucinogens is. Not to
mention NDEs and OBEs. Dennett may be right and these reports may
all be mistakes and lies, but I doubt it. If he is wrong, then his
arguments become a compelling case in quite the opposite sense to
what he intended: the brain not as a manufacturer of consciousness
but as something more like a receptor.
Yes, the brain seems to be (with comp) more a filter of consciousness
than a producer of consciousness.
My instinct tells me we don't know enough about the brain or
consciousness to be certain of any conclusions derived from logic
alone.
In all cases, logic alone is too poor a device to delve into the
matter. But with comp, arithmetic (and its internal meta-arithmetic)
is enough, especially for the negative part (the mystery), which has to
remain a mystery in all possible mechanical extensions of the machine.
That is what comp explains best: that there must be a mystery.
Abstract machines like PA and ZF can be said to know that already.
Bruno
We may be like Newtonians arguing cosmology without the benefit of
QM and relativity.
On Monday, September 30, 2013 2:08:23 PM UTC+10, stathisp wrote:
On 30 September 2013 11:36, Pierz <[email protected]> wrote:
> If I might just butt in (said the barman)...
>
> It seems to me that Craig's insistence that "nothing is Turing
> emulable, only the measurements are" expresses a different ontological
> assumption from the one that computationalists take for granted. It's
> evident that if we make a flight simulator, we will never leave the
> ground, regardless of the verisimilitude of the simulation. So why
> would a simulated consciousness be expected to actually be conscious?
> Because of different ontological assumptions about matter and
> consciousness. Science has given up on the notion of consciousness as
> having "being" the same way that matter is assumed to. Because
> consciousness has no place in an objective description of the world
> (i.e., one defined purely in terms of the measurable), contemporary
> scientific thinking reduces consciousness to those apparent
> behavioural outputs of consciousness which *can* be measured. This is
> functionalism. Because we can't measure the presence or absence of
> awareness, functionalism gives up on the attempt and presents the
> functional outputs as the only things that are "really real". Hence we
> get the Turing test. If we can't tell the difference, the simulator is
> no longer a simulator: it *is* the thing simulated. This conclusion is
> shored up by the apparently water-tight argument that the brain is
> made of atoms and molecules which are Turing emulable (even if it
> would take the lifetime of the universe to simulate the behaviour of a
> protein in a complex cellular environment, but oh well, we can ignore
> quantum effects because it's too hot in there anyway and just fast
> forward to the neuronal level, right?). It's also supported by the
> objectifying mental habit of people conditioned through years of
> scientific training. It becomes so natural to step into the god-level
> third-person perspective that the elision of private experience starts
> to seem like a small matter, and a step that one has no choice but to
> make.
>
> Of course, the alternative does present problems of its own! Craig
> frequently seems to slip into a kind of naturalism that would have it
> that brains possess soft, non-mechanical sense because they are soft
> and non-mechanical seeming. They can't be machines because they don't
> have cables and transistors. "Wetware" can't possibly be hardware. A
> lot of his arguments seem to be along those lines - the refusal to
> accept abstractions which others accept, as Telmo aptly puts it. He
> claims to "solve the hard problem of consciousness", but the solution
> involves manoeuvres like "putting the whole universe into the
> explanatory gap" between objective and subjective: hardly
> illuminating! I get irritated by neologisms like PIP (whatever that
> stands for now - was "multi-sense realism" not obscure enough?), which
> to me seem to be about trying to add substance to vague and poetic
> intuitions about reality by attaching big, intellectual-sounding
> labels to them.
>
> However, the same grain of sand that seems to get in Craig's eye does
> get in mine too. It's conceivable that some future incarnation of
> "cleverbot" (cleverbot.com, in case you don't know it) could reach a
> point of passing a Turing test through a combination of a vast
> repertoire of recorded conversation and some clever linguistic parsing
> to do a better job of keeping track of a semantic thread to the
> conversation (where the program currently falls down). But in this
> case, what goes on inside the machine seems to make all the
> difference, though the functionalists are committed to rejecting that
> position. Cleverly simulated conversation just doesn't seem to be real
> conversation if what is going on behind the scenes is just a bunch of
> rules for pulling lines out of a database. It's Craig's clever garbage
> lids. We can make a doll that screams and recoils from damaging inputs
> and learns to avoid them, but the functional outputs of pain are not
> the experience of pain. Imagine a being neurologically incapable of
> pain. Like "Mary", the hypothetical woman who lives her life seeing
> the world through a black and white monitor and cannot imagine colour
> qualia until she is released, such an entity could not begin to
> comprehend the meaning of screams of pain - beyond possibly
> recognising a self-protective function. The elision of qualia from
> functional theories of mind has potentially very serious ethical
> consequences - for only a subject with access to those qualia truly
> understands them. Understanding the human condition as it really is
> involves inhabiting human qualia. Otherwise you end up with Dr Mengele
> - humans as objects.
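[As a concrete aside: the "bunch of rules for pulling lines out of a database" can be caricatured in a few lines of Python. This is a hypothetical toy, not cleverbot's actual mechanism, but it shows how a purely retrieval-based responder can produce conversation-shaped output with nothing going on behind the scenes beyond matching and lookup:]

```python
# A toy retrieval-based responder: canned replies keyed by words in the
# input, with a fallback line. Nothing here understands anything; it
# only matches keywords and retrieves stored lines.

CANNED_REPLIES = {
    "weather": "Lovely day, isn't it?",
    "pain": "That sounds unpleasant.",
    "hello": "Hello! How are you today?",
}
FALLBACK = "Tell me more."

def reply(utterance: str) -> str:
    """Return the first canned line whose keyword appears in the input."""
    words = utterance.lower().split()
    for keyword, line in CANNED_REPLIES.items():
        if keyword in words:
            return line
    return FALLBACK

print(reply("Hello there"))        # keyword match -> canned greeting
print(reply("What of the soul?"))  # no keyword -> generic fallback
```

[Scaling the database up and the matching rules out is, on the functionalist view, all that separates this from a Turing-test passer.]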
>
> I've read Dennett's arguments against the "qualophiles" and I find
> them singularly unconvincing - though to say why is another long post.
> Dennett says we only "seem" to have qualia, but what can "seem"
> possibly mean in the absence of qualia? An illusion of a quality is an
> oxymoron, for the quality *is* only the way it seems. The comp
> assumption that computations have qualia hidden inside them is not
> much of an answer either in my view. Why not grant the qualia equal
> ontological status to the computations themselves, if they are part
> and parcel? And if they cannot be known except from the inside, and if
> the computation's result can't be known in advance, why not say that
> the "logic" of the qualitative experience is reflected in the
> mathematics as much as the other way round?
>
> Well, enough. I don't have the answer. All I'm prepared to say is we
> are still confronted by mystery. "PIP" seems to me to be more
> impressionistic than theoretical. Comp still seems to struggle with
> qualia and zombies. I suspect we still await the unifying perspective.
Have you read this paper by David Chalmers?
http://consc.net/papers/qualia.html
It assumes, for the sake of argument, that it is possible to make a
device that replicates the externally observable behaviour of a brain
component but lacks qualia, and then shows that this leads to
absurdity.
--
Stathis Papaioannou
--
You received this message because you are subscribed to the Google
Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it,
send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.
http://iridia.ulb.ac.be/~marchal/