On 27 February 2014 16:43, Craig Weinberg <whatsons...@gmail.com> wrote:

>
>
> On Thursday, February 27, 2014 9:47:33 AM UTC-5, David Nyman wrote:
>
>> On 27 February 2014 14:02, Craig Weinberg <whats...@gmail.com> wrote:
>>
>>> In other words, why, in a functionalist/materialist world, would we need a
>>> breakable program to keep telling us that our hand is not Alien? When
>>> you start by assuming that I'm always wrong, then it becomes very easy to
>>> justify that with ad hoc straw man accusations.
>>
>>
>> I do not in fact start with that assumption and, if you believe that I
>> do, I suggest you should question it. I do find, however, that I am unable
>> to draw the same conclusion as you from the examples you give. They simply
>> seem like false inferences to me (and to Stathis, based on his comment).
>>
>
> You are unable to draw the same conclusion because you aren't considering
> any part of what I have laid out. I'm looking at CTM as if it were true,
> and then proceeding from there to question whether what we observe (AHS,
> blindsight, etc) would be consistent with the idea of consciousness as a
> function.
>

Yes, but functionalism doesn't necessarily force the claim that
consciousness *just is* a function: that is the eliminativist version. More
usually, it is understood as the claim that consciousness *supervenes on*
(or co-varies with) function (i.e. the epiphenomenalist or crypto-dualist
versions). But if we take this latter view, the conundrum is even more
peculiar than you seem to imply with these piecemeal pot-shots. Rather, the
*entire story* of awareness / intention now figures only as a
causally irrelevant "inside" interpretation of a complete and
self-sufficient functional history that neither knows nor cares about it.
You remember my analogy of the dramatis personae instantiated by the pixels
of the LCD screen?

However, it would be self-defeating if our response to such bafflement
resulted in our misrepresenting functionalism's patent successes merely
because it cannot explain everything. We should rather seek a resolution of
the dichotomy between apparently disparate accounts in a more powerful
explanatory framework: one that could, for example, explain how *just this
kind of infrastructure* might emerge as the mise-en-scène for *just these
kinds of dramatis personae*. Comp is a candidate for that framework if one
accepts at the outset that there is some functional level of substitution
for the brain. If one doesn't, there is certainly space for alternatives,
but it is fair to demand a similar reconciliatory account in all cases,
rather than a distortion of particular facts to suit one's preference.

> What I conclude is that since the function of the limb is not interrupted,
> there is no plausible basis for the program that models the limb to add
> any extra alarm for a condition of 'functional, but not *my* function'. AHS
> is the same as a philosophical zombie, except that it is at the level where
> physiological behavior is exhibited rather than psychological behavior.
>
>
>>  If you have a compelling argument to the contrary, I wish you would find
>> a way to give it in a clearer form.
>>
>
> See above. Hopefully that is clearer.
>
>
>>  I can't see that what you say above fits the bill.
>>
>
> I don't see that criticism without any details or rebuttals fits the bill
> either. Whenever the criticism is 'It seems to me that your argument
> fails', it only makes me more suspicious that there is no legitimate
> objection. I can't relate to it, since as far as I know, my objections are
> always in the form of an explanation: what specifically seems wrong to me,
> and how to see it differently so that what I'm objecting to is not
> overlooked.
>
>
>>  You seem to regard rhetorical questions beginning "why would we need..?"
>> as compelling arguments against a functional account, but they seem to me
>> to be beside the point.
>>
>
> That's because you are only considering the modus ponens view, in which,
> since functionalism implies that a malfunctioning brain would produce
> anomalies in conscious experience, it would make sense that AHS confirms
> that functionalism is true. I'm looking at the modus tollens view: since
> functionalism implies that brain function requires no additional
> ingredient to make the function of conscious machines seem conscious, some
> extra, non-functional ingredient would be required to explain why AHS is
> alarming to those who suffer from it. Since the distress of AHS is observed
> to be real, and that is logically inconsistent with the expectations of
> functionalism, I conclude that the AHS example adds to the list of
> counterexamples to CTM/functionalism. It should not matter whether a limb
> feels like it's 'yours': *functionalism implies that the fact of being able
> to use a limb makes it feel like 'yours' by definition*. This is the entire
> premise of computationalist accounts of qualia: that the mathematical
> relations simply taste like raspberries or feel like pain because that is
> the implicit expression of those relations.
>

I think if you consider my comments above, you may concur that the problems
with functionalism are even deeper than this and necessitate a fundamental
paradigm shift if they are to be successfully resolved. But we can't be
content if such a shift results in the distortion or wholesale junking of
the functional account per se (which can only be self-defeating). Rather,
we want to be able to explain how it is possible for a functionally
complete account to emerge as an apparently discrete level of reality and,
moreover, in a manner that makes it possible to understand how the
functional manifestation of sensation might coincide lawfully with its
inner expression. It is an explication of that "coincidence" (the
conjunction of true belief with the truth to which it refers) that, for me,
will be more convincing than any intuition of the ultimate primacy of
sensation; an intuition that surely begs the very questions it seeks to
resolve.

David


>
>
>
>> They invite the obvious rejoinder that AHS doesn't seem in principle to
>> present any special difficulties to functionalism in explaining the facts
>> in its own terms.
>>
>
> It does present special difficulties, though (see above). AHS doesn't make
> much sense for functionalism, particularly when combined with blindsight
> and all of the problems with qualia and the hard problem/explanatory gap.
>
>
>>  You recently proposed the example of tissue rejection which invited a
>> similar response.
>>
>> None of this is to say that I don't regard functional / material accounts
>> as problematic, but this is for a different reason: I think they obfuscate
>> the categorical distinctions between two orthogonal versions of "the
>> facts": at the reduced level of function and at the integrated level of
>> sensory awareness / intention. Comp, for example, seeks to remedy this
>> obfuscation by elucidating principled correlations between formal notions
>> of reduction and integration via computational theory. Hence, per comp, the
>> principle of digital substitution is not the terminus of an explanation but
>> the starting point for a deeper theory. ISTM that alternative theories
>> cannot avoid a similar burden of explanation.
>>
>
> They can if they begin by accepting that what we cannot explain about
> consciousness is unexplainable for a good reason, namely that consciousness
> cannot be made any more or less plain than it is. Consciousness is what
> makes all things plain, so it is circular to expect that it could be
> subject to its own subjugation.
>
> Craig
>
>
>>
>> David
>>
