On Friday, February 28, 2014 8:29:52 AM UTC-5, David Nyman wrote:
>
> On 27 February 2014 16:43, Craig Weinberg <[email protected]> wrote:
>
>>
>>
>> On Thursday, February 27, 2014 9:47:33 AM UTC-5, David Nyman wrote:
>>
>>> On 27 February 2014 14:02, Craig Weinberg <[email protected]> wrote:
>>>
>>> In other words, why, in a functionalist/materialist world would we need 
>>>> a breakable program to keep telling us that our hand is not Alien? When you 
>>>> start by assuming that I'm always wrong, then it becomes very easy 
>>>> to justify that with ad hoc straw man accusations.
>>>
>>>
>>> I do not in fact start with that assumption and, if you believe that I 
>>> do, I suggest you should question it. I do find however that I am unable to 
>>> draw the same conclusion as you from the examples you give. They simply 
>>> seem like false inferences to me (and to Stathis, based on his comment).
>>>
>>
>> You are unable to draw the same conclusion because you aren't considering 
>> any part of what I have laid out. I'm looking at CTM as if it were true, 
>> and then proceeding from there to question whether what we observe (AHS, 
>> blindsight, etc) would be consistent with the idea of consciousness as a 
>> function.
>>
>
> Yes, but functionalism doesn't necessarily force the claim that 
> consciousness *just is* a function: that is the eliminativist version. More 
> usually it is understood as the claim that consciousness *supervenes on* 
> (or co-varies with) function (i.e. the epiphenomenalist or crypto-dualist 
> versions). 
>

That's even more eliminativist IMO. To say that consciousness is identical 
to the function of a machine at least acknowledges that phenomenology is 
causally efficacious. Adding supervenience on non-computational 
epiphenomena is not really functionalism, digital functionalism, or 
computationalism. What is overlooked is that supervenience and emergence 
both depend themselves on consciousness to provide a perspective in which 
some phenomena appear to 'emerge' from the supervening substrate. From the 
point of view of computation, surely computationalism cannot allow that 
consciousness comes as a surprise. From any comp perspective, we humans can 
define consciousness as emergent or supervenient, but surely arithmetic 
itself would not define its own conscious functionality as 
non-computational.
 

> But, if we take this latter view, the conundrum is more peculiar even than 
> you seem to imply by these piecemeal pot-shots. Rather, the *entire story* 
> of awareness / intention now figures only as a causally-irrelevant "inside" 
> interpretation of a complete and self-sufficient functional history that 
> neither knows nor cares about it. 
>

What self-sufficient functional history do you mean? When I use history I'm 
generally talking about a collection of aesthetic resources which have been 
accumulated through direct experience and remain present implicitly locally 
and explicitly in the absolute sense.
 

> You remember my analogy of the dramatis personae instantiated by the 
> pixels of the LCD screen?
>

I semi-remember.
 

>
> However it would be self-defeating if our response to such bafflement 
> resulted in our misrepresenting its patent successes because it cannot 
> explain everything. We should rather seek a resolution of the dichotomy 
> between apparently disparate accounts in a more powerful explanatory 
> framework; one that could, for example, explain how *just this kind of 
> infrastructure* might emerge as the mise-en-scène for *just these kinds of 
> dramatis personae*. Comp is a candidate for that framework if one accepts 
> at the outset that there is some functional level of substitution for the 
> brain. If one doesn't, there is certainly space for alternatives, but it is 
> fair to demand a similar reconciliatory account in all cases, rather than a 
> distortion of particular facts to suit one's preference.
>

I agree. Who is calling for facts to be distorted? Once you have 
pansensitivity as the primordial identity, then computation becomes 
explainable as the skeletal reflection of sense through insensitivity 
(pan-entropy (pan-negentropy)). This replaces UDA and places a limit on 
computation to the context of public facing communication/encapsulation and 
leaves some aspect of privacy trans-measurable and locally omnipotent.


> What I conclude is that since the function of the limb is not interrupted, 
>> there is no plausible basis for the program which models the limb to add in 
>> any extra alarm for a condition of 'functional but not 'my' function'. AHS 
>> is the same as a philosophical zombie, except that it is at the level where 
>> physiological behavior is exhibited rather than psychological behavior.
>>  
>>
>>>  If you have a compelling argument to the contrary, I wish you would 
>>> find a way to give it in a clearer form.
>>>
>>
>> See above. Hopefully that is clearer.
>>  
>>
>>>  I can't see that what you say above fits the bill.
>>>
>>
>> I don't see that criticism without any details or rebuttals fits the bill 
>> either. Whenever the criticism is 'It seems to me that your argument 
>> fails', it only makes me more suspicious that there is no legitimate 
>> objection. I can't relate to it, since as far as I know, my objections are 
>> always in the form of an explanation - what specifically seems wrong to me, 
>> and how to see it differently so that what I'm objecting to is not 
>> overlooked.
>>  
>>
>>>  You seem to regard rhetorical questions beginning "why would we 
>>> need...?" as compelling arguments against a functional account, but they 
>>> seem to me to be beside the point. 
>>>
>>
>> That's because you are only considering the modus ponens view where since 
>> functionalism implies that a malfunctioning brain would produce anomalies 
>> in conscious experience, it would make sense that AHS affirms functionalism 
>> being true. I'm looking at the modus tollens view where since functionalism 
>> implies that brain function requires no additional ingredient to make the 
>> function of conscious machines seem conscious, some extra, non-functional 
>> ingredient is required to explain why AHS is alarming to those who suffer 
>> from it. Since the distress of AHS is observed to be real, and that is 
>> logically inconsistent with the expectations of functionalism, I conclude 
>> that the AHS example adds to the list of counterfactuals to 
>> CTM/Functionalism. It should not matter whether a limb feels like it's 
>> 'yours': *functionalism implies that the fact of being able to use a 
>> limb makes it feel like 'yours' by definition*. This is the entire 
>> premise of computationalist accounts of qualia; that the mathematical 
>> relations simply taste like raspberries or feel like pain because that is 
>> the implicit expression of those relations.
>>
>
> I think if you consider my comments above you may concur that the problems 
> with functionalism are even deeper than this and necessitate a fundamental 
> paradigm shift if they are to be successfully resolved. But we can't be 
> content if such a shift results in the distortion or wholesale junking of 
> the functional account per se 
>

If functionalism asserts that consciousness is fundamentally functional, 
then I think it must be junked. Function is a sensible expectation, but 
sense is not a plausible expectation of function alone.
 

> (which can only be self-defeating). 
>

Why?
 

> Rather we want to be able to explain how it is possible for a 
> functionally-complete account to emerge as an apparently discrete level of 
> reality and
>

It's possible because the functionally complete account is the one which 
excludes the non-functional aesthetic context from which that account 
arises. If you get rid of yourself, then in your estimation, what remains 
appears to be functional. It works the same way with the visual blind-spot, 
or any number of neurological disorders where sense compensates for what is 
missing by eliding the awareness that something is missing and allowing the 
underlying context of expectation to fill in the hole.
 

> , moreover, in a manner that makes it possible to understand how the 
> functional manifestation of sensation 
>

That the manifestation is functional is already only a quality of sense. We 
could dream of being in an Escher painting without ever suspecting any 
unusual functional mismatches.
 

> might coincide lawfully with its inner expression. It is an explication of 
> that "coincidence" - the conjunction of true belief with the truth to which 
> it refers - that, for me, will be more convincing than any intuition of the 
> ultimate primacy of sensation; an intuition that surely begs the very 
> questions it seeks to resolve. 
>

I think it is the intuition of the primacy of sense-independent truth which 
is begging the very questions that it seeks to resolve. Truth about what? 
What 'refers' to truth?

Craig


> David
>
>
>>
>>  
>>
>>> They invite the obvious rejoinder that AHS doesn't seem in principle to 
>>> present any special difficulties to functionalism in explaining the facts 
>>> in its own terms.
>>>
>>
>> It does present special difficulties though (see above). AHS doesn't make 
>> much sense for functionalism, particularly combined with blindsight, and 
>> all of the problems with qualia and the hard problem/explanatory gap.
>>  
>>
>>>  You recently proposed the example of tissue rejection which invited a 
>>> similar response.
>>>
>>> None of this is to say that I don't regard functional / material 
>>> accounts as problematic, but this is for a different reason; I think they 
>>> obfuscate the categorical distinctions between two orthogonal versions of 
>>> "the facts": at the reduced level of function and at the integrated level 
>>> of sensory awareness / intention. Comp, for example, seeks to remedy this 
>>> obfuscation by elucidating principled correlations between formal notions 
>>> of reduction and integration via computational theory. Hence, per comp, the 
>>> principle of digital substitution is not the terminus of an explanation but 
>>> the starting point for a deeper theory. ISTM that alternative theories 
>>> cannot avoid a similar burden of explanation.
>>>
>>
>> They can if they begin by accepting that what we cannot explain about 
>> consciousness is unexplainable for a good reason, namely that consciousness 
>> cannot be made any more or less plain than it is. Consciousness is what 
>> makes all things plain, so it is circular to expect that it could be 
>> subject to its own subjugation.
>>
>> Craig
>>  
>>
>>>
>>> David
>>>
>>  -- 
>> You received this message because you are subscribed to the Google Groups 
>> "Everything List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to [email protected].
>> To post to this group, send email to [email protected].
>> Visit this group at http://groups.google.com/group/everything-list.
>> For more options, visit https://groups.google.com/groups/opt_out.
>>
>
>

