On Thursday, February 27, 2014 9:47:33 AM UTC-5, David Nyman wrote:
>
> On 27 February 2014 14:02, Craig Weinberg <[email protected]> wrote:
>
>> In other words, why, in a functionalist/materialist world would we need a 
>> breakable program to keep telling us that our hand is not Alien? When 
>> you start by assuming that I'm always wrong, then it becomes very easy to 
>> justify that with ad hoc straw man accusations.
>
>
> I do not in fact start with that assumption and, if you believe that I do, 
> I suggest you should question it. I do find however that I am unable to 
> draw the same conclusion as you from the examples you give. They simply 
> seem like false inferences to me (and to Stathis, based on his comment).
>

You are unable to draw the same conclusion because you aren't considering 
any part of what I have laid out. I'm treating CTM as if it were true, 
and then proceeding from there to ask whether what we observe (AHS, 
blindsight, etc.) would be consistent with the idea of consciousness as a 
function. What I conclude is that since the function of the limb is not 
interrupted, there is no plausible basis for the program which models the 
limb to add any extra alarm for a condition of 'functional, but not my 
function'. AHS is the same as a philosophical zombie, except that it occurs 
at the level where physiological behavior is exhibited rather than 
psychological behavior.
 

> If you have a compelling argument to the contrary, I wish you would find a 
> way to give it in a clearer form.
>

See above. Hopefully that is clearer.
 

> I can't see that what you say above fits the bill.
>

I don't see that criticism without any details or rebuttals fits the bill 
either. Whenever the criticism is 'It seems to me that your argument 
fails', it only makes me more suspicious that there is no legitimate 
objection. I can't relate to it, since as far as I know, my objections 
always take the form of an explanation: what specifically seems wrong to 
me, and how to see it differently so that what I'm objecting to is not 
overlooked.
 

> You seem to regard rhetorical questions beginning "why would we need..?" 
> as compelling arguments against a functional account, but they seem to me 
> to be beside the point. 
>

That's because you are only considering the modus ponens view, in which, 
since functionalism implies that a malfunctioning brain would produce 
anomalies in conscious experience, AHS would seem to affirm that 
functionalism is true. I'm looking at the modus tollens view: since 
functionalism implies that brain function requires no additional ingredient 
to make conscious machines seem conscious, some extra, non-functional 
ingredient would be required to explain why AHS is alarming to those who 
suffer from it. Since the distress of AHS is observed to be real, and that 
is logically inconsistent with the expectations of functionalism, I 
conclude that the AHS example adds to the list of counterexamples to 
CTM/functionalism. It should not matter whether a limb feels like it's 
'yours'; *functionalism implies that the fact of being able to use a limb 
makes it feel like 'yours' by definition*. This is the entire premise of 
computationalist accounts of qualia: that mathematical relations simply 
taste like raspberries or feel like pain because that is the implicit 
expression of those relations.

 

> They invite the obvious rejoinder that AHS doesn't seem in principle to 
> present any special difficulties to functionalism in explaining the facts 
> in its own terms.
>

It does present special difficulties, though (see above). AHS doesn't make 
much sense under functionalism, particularly combined with blindsight and 
all of the problems with qualia and the hard problem/explanatory gap.
 

> You recently proposed the example of tissue rejection which invited a 
> similar response.
>
> None of this is to say that I don't regard functional / material accounts 
> as problematic, but this is for a different reason; I think they obfuscate 
> the categorical distinctions between two orthogonal versions of "the 
> facts": at the reduced level of function and at the integrated level of 
> sensory awareness / intention. Comp, for example, seeks to remedy this 
> obfuscation by elucidating principled correlations between formal notions 
> of reduction and integration via computational theory. Hence, per comp, the 
> principle of digital substitution is not the terminus of an explanation but 
> the starting point for a deeper theory. ISTM that alternative theories 
> cannot avoid a similar burden of explanation.
>

They can if they begin by accepting that what we cannot explain about 
consciousness is unexplainable for a good reason, namely that consciousness 
cannot be made any more or less plain than it is. Consciousness is what 
makes all things plain, so it is circular to expect that it could be 
subject to its own subjugation.

Craig
 

>
> David
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.