2009/8/27 Brent Meeker <meeke...@dslextreme.com>:

>> I'm questioning something more subtle here, I think.  First, one could
>> simply decide to be eliminativist about experience, and hold that the
>> extrinsic PM account is both exhaustive and singular.  In this case,
>> 'being' anything is simply an extrinsic notion.  But if we're not in
>> this sort of denial, then the idea of 'being' the system subtly
>> encourages the intuition that there's some way to be that
>> simultaneously satisfies two criteria:
>>
>> 1) Point-for-point isomorphism - in some suitable sense - with the
>> extrinsic description.
>> 2) An intrinsic nature that is incommunicable in terms of the
>> extrinsic description alone.
>
> Even if PM and functionalism are true, (1) and (2) are dubious.
> Extrinsic descriptions are necessarily in terms of shared experiences
> and so may not be complete.

You do realise I was arguing *against* any possibility of a monistic
conjunction of 1) and 2)?  I was heading step-by-step for the
conclusion that all our notions of being should supervene on the
intrinsic account, with the extrinsic part representing what is
shareable between contexts. Aside from this, I'm not sure what you
mean by "even if PM and functionalism is true".  I'll assume that
you're not taking the eliminativist line, since then there would be
nothing further for you to claim vis-a-vis mind.  It's difficult for
me to follow arguments on the basis of PM+CTM, because I'm with Bruno
in believing them to be hollow.  So if we're to stay with PM, then for
me it would have to be on the basis of a theory of mind that was
reducible to physical causation in the *hierarchical* - rather than
functional - sense in which 'life' is, to use the typical example of a
higher-order organisational concept you've previously suggested to me.

Your statement "extrinsic descriptions are necessarily in terms of
shared experiences and so may not be complete" is interesting to me,
not least because I happen to agree with it!  Do you agree that
extrinsic descriptions are thereby *necessarily* incomplete, or merely
contingently so - i.e. that we are ignorant?

> "Incommunicable" is ambiguous. It could
> mean impossible in principle or it could mean we haven't developed the
> words or pictures for it.  Assuming there's something incommunicable
> in the latter sense doesn't imply that PM or functionalism are false.

I meant it in the former sense.  To be more precise, I mean that there
is information available in context that can't be directly
incorporated in what can be communicated out-of-context - hence
"incommunicable".  But that's not the end of the story.  The
uninterpreted - and thus incomplete - data are re-instantiable in
another interpretative context, and hence there is the possibility of
re-completing the picture, at least to some tolerance.  So we can
refer *ostensively* to what's left out of the abstractable - and hence,
crucially, shareable - part.  In effect, we're saying to each other
"take this contextless relational dataset and instantiate it in terms
of your local interpretation, take a look at the bit I appear to be
pointing at, and then let's compare notes".

> The idea of 'being' somebody (or thing) else already assumes dualism.
> It assumes some 'I' that could move to be Stathis or a bat and yet
> retain some identity.  But on a functionalist view 'I' already am
> Stathis and a bat - in other words there is no 'I', it's the creation
> of viewpoint by each functional entity.  In that case being someone
> else is incommunicable in principle because the concept is incoherent.

Well, I completely agree with all of that, but what made you think
that what I was saying was anything to do with being somebody else?  I
think I did a bad job of articulating my line of argument.  As I've
said, I can't make any sense of a functionalist view on the basis of
PM.  To be coherent, functionalism must treat physical entities as
mere relational placeholders, and hence the supplementary assumption
of PM or any other primitively non-functional ontology is either
simply redundant or weirdly dualistic AFAICS.  I thought this before
ever encountering Bruno's ideas, but his articulation of comp has
given me another angle of attack on this key intuition.  To be clear:
I'm not per se arguing against functionalist accounts, but like Bruno
I believe that their task is to explain the *appearance* of the
material, not their own spooky emergence from it.

But beyond even that, what I was articulating was my own version of
strict eliminativism.  IOW if we sincerely want to be monists we must
be ready in principle to reduce *all* our various conceptual accounts
to one in terms of the differentiables of a single ontic context.  And
unless we're eliminativists about personal existence, that had better
be the one we already occupy.  There's a tendency to argue this
context away as merely epistemic and not ontic, but this distinction
can be shown to collapse with very little logical effort.  I know not
everyone accepts this view, especially in the 'hard' sciences, but
there are very notable exceptions, some of them amongst its most
distinguished practitioners.  Obviously, for this to have any chance
of success as a programme, all the other accounts must be in principle
paraphraseable as aspects or modalities of this single contextual
account, but again IMO the standard arguments against this seem to
miss the point.

David

>
> David Nyman wrote:
>> 2009/8/26 Brent Meeker <meeke...@dslextreme.com>:
>>
>>> I don't see that.  I conjectured that with sufficient knowledge of the
>>> environment in which the alien functioned and input-outputs at the
>>> corresponding level, one could provide an account of the alien's
>>> experience.  It was my point that simply looking at the alien's brain,
>>> without the context of its function, would not suffice.
>>
>> I can't tell what you mean by "provide an account".  Do you mean that
>> one could provide some account of all this in functional terms that
>> *we could interpret* in ways that made contextual sense *for us* -
>> standing in, as it were, for the alien?  If so, this is what I meant
>> when I said to Stathis that it really becomes equivalent to the
>> problem of other minds, in that if we can coax the data into making
>> sense for us, we can extrapolate this by implication to the alien.
>> But that would tend to make it a rather human alien, wouldn't it?
>>
>>> The question is whether PM is sufficient to describe the system.
>>> Language is almost certainly inadequate to describe what it is like
>>> to 'be' the system - you cannot even fully describe what it is like to
>>> be you.
>>
>> I'm questioning something more subtle here, I think.  First, one could
>> simply decide to be eliminativist about experience, and hold that the
>> extrinsic PM account is both exhaustive and singular.  In this case,
>> 'being' anything is simply an extrinsic notion.  But if we're not in
>> this sort of denial, then the idea of 'being' the system subtly
>> encourages the intuition that there's some way to be that
>> simultaneously satisfies two criteria:
>>
>> 1) Point-for-point isomorphism - in some suitable sense - with the
>> extrinsic description.
>> 2) An intrinsic nature that is incommunicable in terms of the
>> extrinsic description alone.
>
> Even if PM and functionalism are true, (1) and (2) are dubious.
> Extrinsic descriptions are necessarily in terms of shared experiences
> and so may not be complete.  "Incommunicable" is ambiguous. It could
> mean impossible in principle or it could mean we haven't developed the
> words or pictures for it.  Assuming there's something incommunicable
> in the latter sense doesn't imply that PM or functionalism are false.
>
> The idea of 'being' somebody (or thing) else already assumes dualism.
> It assumes some 'I' that could move to be Stathis or a bat and yet
> retain some identity.  But on a functionalist view 'I' already am
> Stathis and a bat - in other words there is no 'I', it's the creation
> of viewpoint by each functional entity.  In that case being someone
> else is incommunicable in principle because the concept is incoherent.
>
> Brent
>
>>
>> This intuition has a lot of work to do to stay monistic - i.e. to
>> claim to refer to a unique existent.  First it has to justify why
>> there's still a gap between the 'extrinsic' system-as-described and
>> the 'intrinsic' system-as-instantiated - i.e. the description can no
>> longer be considered exhaustive.  Then it has to explain the existence
>> of the former as some mode of the latter.  Finally, it has to
>> dispense with any implied referent of the former, except in the guise
>> of the latter - i.e. it has to dispense with any fundamental notion of
>> the extrinsic except as a metaphor or mode of the intrinsic.
>>
>> Dispensing with the extrinsic in this way leaves us with 'being' as a
>> fundamentally intrinsic notion.  Not doing so is an implicit appeal to
>> dualism.
>>
>> David
>>
>>> David Nyman wrote:
>>>> 2009/8/26 Stathis Papaioannou <stath...@gmail.com>:
>>>>
>>>>> With the example of the light, you alter the photoreceptors in the
>>>>> retina so that they respond the same way to a blue light that
>>>>> they would have when exposed to a red light.
>>>> Ah, so the alien has photoreceptors and retinas?  That's an assumption
>>>> worth knowing!  This is why I said "a successful theory wouldn't be
>>>> very distant from the belief that the alien was, in effect, human, or
>>>> alternatively that you were, in effect, alien".
>>>>
>>>>> I think what I have proposed is consistent with functionalism, which
>>>>> may or may not be true. A functionally identical system produces the
>>>>> same outputs for the same inputs, and functionalism says that
>>>>> therefore it will also have the same experiences, such as they may be.
>>>>> But what those experiences are like cannot be known unless you are the
>>>>> system, or perhaps understand it so well that you can effectively run
>>>>> it in your head.
>>>> Well, it's precisely the conjunction of functionalism with a
>>>> primitively material assumption that prompted this part of the thread.
>>>>  Peter asked me if I thought a brain scan at some putatively
>>>> fundamental physical level would be an exhaustive account of all the
>>>> information that was available experientially, and I was attempting to
>>>> respond specifically to that.  Given what you say above, I would again
>>>> say - for all the reasons I've argued up to this point - that a purely
>>>> functional account on the assumption of PM gives me no reason to
>>>> attribute experience of any kind to the system in question.
>>>>
>>>> The way you phrase it rightly emphasises the focus on invariance of
>>>> inputs and outputs as definitive of invariance of experience, rather
>>>> than the variability of the actual PM process that performs the
>>>> transformation.  As Brent has commented, this seems a somewhat
>>>> arbitrary assumption, with the implied rider of "what else could it
>>>> be?"  Well, whatever else could provide an account of experience, this
>>>> particular conjecture happens to fly directly in the face of the
>>>> simultaneous assumption of primitively physical causation.
>>> I don't see that.  I conjectured that with sufficient knowledge of the
>>> environment in which the alien functioned and input-outputs at the
>>> corresponding level, one could provide an account of the alien's
>>> experience.  It was my point that simply looking at the alien's brain,
>>> without the context of its function, would not suffice.
>>>
>>>> There's something trickier here, too.  When you say "unless you are
>>>> the system", this masks an implicit - and dualistic - assumption in
>>>> addition to PM monism.  It is axiomatic that any properly monistic
>>>> materialist account must hold all properties of a system to be
>>>> extrinsic, and hence capable of *exhaustive* extrinsic formulation.
>>>> IOW if it's not extrinsically describable, it doesn't exist in terms
>>>> of PM.  So what possible difference could it make, under this
>>>> restriction, to 'be' the system?
>>> The question is whether PM is sufficient to describe the system.
>>> Language is almost certainly inadequate to describe what it is like
>>> to 'be' the system - you cannot even fully describe what it is like to
>>> be you.  That's why I think the "hard problem" of consciousness will
>>> not be "solved"; it will just wither away.  Eventually we will
>>> understand brains sufficiently to create AI with specifically designed
>>> memories, emotions, and cogitation, as evidenced by their behavior and
>>> the similarity of their processes to human ones.  We won't *know* that
>>> they are conscious, but we'll believe they are.
>>>
>>> Brent
>>>
>>>> If the reply is that it makes just
>>>> the somewhat epoch-making difference of conjuring up an otherwise
>>>> unknowable world of qualitative experience, can we still lay claim to
>>>> a monistic ontology, in any sense that doesn't beggar the term?
>>>>
>>>> David
>>>>
>>>>> 2009/8/26 David Nyman <david.ny...@gmail.com>:
>>>>>> On 25 Aug, 14:32, Stathis Papaioannou <stath...@gmail.com> wrote:
>>>>>>
>>>>>>> Let's say the alien brain in its initial environment produced a
>>>>>>> certain output when it was presented with a certain input, such as a
>>>>>>> red light. The reconstructed brain is in a different environment and
>>>>>>> is presented with a blue light instead of a red light. To deal with
>>>>>>> this, you alter the brain's configuration so that it produces the same
>>>>>>> output with the blue light that it would have produced with the red
>>>>>>> light.
>>>>>> In terms of our discussion on the indispensability of an
>>>>>> interpretative context for assigning meaning to 'raw data', I'm not
>>>>>> sure exactly how much you're presupposing when you say that "you alter
>>>>>> the brain's configuration".  You have a bunch of relational data
>>>>>> purporting to correspond to the existing configuration of the alien's
>>>>>> brain and its relation to its environment.  This is available to you
>>>>>> solely in terms of your interpretation, on the basis of which you
>>>>>> attempt to come up with a theory that correlates the observed 'inputs'
>>>>>> and 'outputs' (assuming these can be unambiguously isolated).  But how
>>>>>> would you know that you had arrived at a successful theory of the
>>>>>> alien's experience?  Even if you somehow succeeded in observing
>>>>>> consistent correlations between inputs and outputs, how could you ever
>>>>>> be sure what this 'means' for the alien brain?
>>>>> With the example of the light, you alter the photoreceptors in the
>>>>> retina so that they respond the same way to a blue light that
>>>>> they would have when exposed to a red light. Photoreceptors are
>>>>> neurons and synapse with other neurons, further up the pathway of
>>>>> visual perception. The alien will compare his perception of the blue
>>>>> sky of Earth with his memory of the red sky of his home planet and
>>>>> declare it looks the same. Now it is possible that it doesn't look the
>>>>> same and he only thinks it looks the same, but the same could be said
>>>>> of ordinary life: perhaps yesterday the sky looked green, and today,
>>>>> when it looks blue, we only think it looks the same because we are
>>>>> deluded.
>>>>>
>>>>>> I would say that in effect what you have posed here is 'the problem of
>>>>>> other minds', and that consequently a 'successful' theory wouldn't be
>>>>>> very distant from the belief that the alien was, in effect, human, or
>>>>>> alternatively that you were, in effect, alien.  And, mutatis
>>>>>> mutandis, I guess this would apply to rocks too.
>>>>> I think what I have proposed is consistent with functionalism, which
>>>>> may or may not be true. A functionally identical system produces the
>>>>> same outputs for the same inputs, and functionalism says that
>>>>> therefore it will also have the same experiences, such as they may be.
>>>>> But what those experiences are like cannot be known unless you are the
>>>>> system, or perhaps understand it so well that you can effectively run
>>>>> it in your head.
>>>>>
>>>>>
>>>>> --
>>>>> Stathis Papaioannou
