On 14 January 2012 18:56, Stephen P. King <stephe...@charter.net> wrote:

> But the Turing Test is a bit of an oxymoron because it
> is impossible to prove the existence of something that is solely 1p. There
> is no 3p of consciousness.

I agree, and in a sense this implies the futility of all attempts to
argue from 3p to 1p.  But there may be other ways to get there.  For
example, I've always tended towards the view that Bruno often calls a
"universal mind" (cf. Schopenhauer, Schrödinger, Hoyle, Dyson,
et al.).  Think of this as a universal 1p.  The argument from this
point of departure begins in such uniquely present conscious
instances, whose internal logic implies the possibility of other,
mutually exclusive, such instances.  As a first approximation, this
internal logic might imply that the present instance is a selection
from a uniquely "personal" serialisation of such instances (i.e. the
RSSA).  However the same logic is consistent with all possible such
instances, the implied personal serialisation now playing a secondary
role to some transcendental, "impersonal" selection (i.e. the ASSA).
This insight offers an escape route from solipsism.

Can one apply a view like this to the problem in hand?  3p is the
label applied to our theoretical proxies for the regularities of 1p
phenomena. These regularities are so compelling that for most purposes
we treat them perfectly naturally as realities independent of the 1p
context in which they manifest. We situate them in an ever more
general explanatory framework, in terms of which we hope to trap even
the 1p localisation to which all such explanation is ultimately
referred.  But frustratingly, attempts to achieve this by the direct
3p route seem always to rely in the end on some sort of unsatisfactory
bait-and-switch.  Nonetheless we cannot deny that there are subsets of
the 3p schema which correlate strongly with the implied serialisation
of 1p moments: those subsets we accept as our local physical
embodiments.  Consequently, it seems reasonable to postulate the
tightest of inter-relations, short of identity, between these two
domains, at least locally.

Returning to the original point of departure with the inference of a
tight local correlation between some appropriate 3p physical
embodiment and the presently selected 1p instance, it might seem a
reasonable experiment to reverse the logic.  If we could but identify
the relevant species of 3p embodiment - given the anti-solipsism
argument derivable from a strictly 1p point of departure - we could
reasonably infer its correlation with an instance of consciousness
mutually exclusive of the present one, entangled with its own coherent
personal serialisation (or more baldly, another person).

But how to identify the "relevant species"?  Ordinarily, we do not
hesitate to ascribe this status to other human embodiments, because it
seems reasonable to suppose that if our own 3p constitution is of the
relevant species, so is theirs.  But as we have no widely-accepted
definitive account of what this entails specifically, we must rely
essentially on the behavioural manifestations of intelligence.
Accordingly, we have little option but to ascribe the "definitively
conscious" 1p-3p correlation to any embodiment that displays
sufficiently intelligent behaviour, by some agreed criterion such as
the TT.

The critical exception to the foregoing is that we would clearly wish
to withdraw this ascription where there is demonstrable evidence of
fraud or pretence.  Hence, FAPP, the vanishing point for controversy
may well be reached when the pretence of intelligence has become
practically indistinguishable, by any available criterion, from its
actuality.

David

> On 1/14/2012 1:15 PM, Evgenii Rudnyi wrote:
>
> On 14.01.2012 18:12 John Clark said the following:
>
> On Fri, Jan 13, 2012 meekerdb <meeke...@verizon.net> wrote:
>
> There is no way consciousness can have a direct Darwinian
> advantage
>
> so it must be a byproduct of something that does have that virtue,
> and the obvious candidate is intelligence.
>
>
>
> That's not so clear, since we don't know exactly what the
> relation of consciousness to intelligence is.  For a social animal,
> having an internal model of one's self and being able to model the
> thought processes of others has obvious reproductive advantage.
>
>
> To do any one of the things you suggest would require intelligence,
> and indeed there is some evidence that in general social animals tend
> to have a larger brain than similar species that are not social. But
> at any rate we both seem to agree that Evolution can only see
> behavior, so consciousness must be a byproduct of some sort of
> complex behavior. Thus the Turing Test must be valid not only for
> intelligence but for consciousness too.
>
>
> How would you generalize the Turing Test for consciousness?
>
> Evgenii
>
> John K Clark
>
>
> Hi,
>
>     Perhaps we can generalize the Turing test by insisting on questions that
> would require for their answer computational resources in excess of those
> available to a computer + power supply in a small room. Think of the
> Bekenstein bound.... But the Turing Test is a bit of an oxymoron because it
> is impossible to prove the existence of something that is solely 1p. There
> is no 3p of consciousness. I recall Leibniz' discussion of this...
>
> Onward!
>
> Stephen
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To post to this group, send email to everything-list@googlegroups.com.
> To unsubscribe from this group, send email to
> everything-list+unsubscr...@googlegroups.com.
> For more options, visit this group at
> http://groups.google.com/group/everything-list?hl=en.
