Re: How would a computer know if it were conscious?

2007-06-20 Thread Colin Hales

down a ways...
===
Russell Standish wrote:
> On Sun, Jun 17, 2007 at 03:47:19PM +1000, Colin Hales wrote:
>> Hi,
>>
>> RUSSELL
>>> All I can say is that I don't understand your distinction. You have
>> introduced a new term "necessary primitive" - what on earth is that? But
>> I'll let this pass, it probably isn't important.
>>
>> COLIN
>> Oh no you don't!! It matters. Bigtime...
>>
>> Take away the necessary primitive: no 'qualitative novelty'
>> Take away the water molecules: No lake.
>> Take away the bricks, no building
>> Take away the atoms: no molecules
>> Take away the cells: no human
>> Take away the humans: no humanity
>> Take away the planets: no solar system
>> Take away the X: No emergent Y
>> Take away the QUALE: No qualia
>>
>> Magical emergence is when you claim Y exists but you can't
>> identify an X. Such as:
>
>
> OK, so by necessary primitive, you mean the syntactic or microscopic
> layer. But take this away, and you no longer have emergence. See
> endless discussions on emergence - my paper, or Jochen Fromm's book for
> instance. Does this mean "magical emergence" is oxymoronic?

I do not think I mean what you suggest. To make it almost tediously
obvious, I could rephrase it as "NECESSARY PRIMITIVE ORGANISATIONAL LAYER".
Necessary, in that if you take it away the 'emergent' is gone. PRIMITIVE
ORGANISATIONAL LAYER = one of the layers of the hierarchy of the natural
world (from strings to atoms to cells and beyond): real,
observable-on-the-benchtop-in-the-lab layers. Not some arm-waving
"syntactic" or "information" or "complexity" or "Computaton" or
"function_atom" or "representon". Magical emergence is, in reality,
specious and exactly what I have said all along:

You claim consciousness arises as a result of ["syntactic" or
"information" or "complexity" or "Computational" or "function_atom"] =
necessary primitive, but that primitive has no scientifically verifiable
correlation with any real natural-world phenomenon that you can stand next
to and have your picture taken with.

>
>
>> You can't use an object derived using the contents of
>> consciousness(observation) to explain why there are any contents of
>> consciousness(observation) at all. It is illogical. (See the Wigner quote
>> below). I find the general failure to recognise this brute reality very
>> exasperating.
>>
>
> People used to think that about life. How can you construct something (e.g. an
> animal) without having a complete description of that animal? So how
> can an animal self-reproduce without having a complete description of
> itself? But this then leads to an infinite regress.
>
> The solution to this conundrum was found in the early 20th century -
> first with such theoretical constructs as combinators and lambda
> calculus, then later the actual genetic machinery of life. If it is
> possible in the case of self-reproduction, then it will also likely be
> possible in the case of self-awareness and consciousness. Stating that
> this is illogical doesn't help. That's what people from the time of
> Descartes thought about self-reproduction.
>
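
To make the fixed-point trick concrete (a standard quine, offered here only as
a minimal sketch and not part of the original exchange): a program can print
its own source without containing a finished copy of itself, because the
"description" is a template that gets applied to itself. This is the same idea
as the fixed points of combinators and the lambda calculus, and, schematically,
the genetic machinery.

# A quine: the two lines below print themselves exactly (comments aside).
# 's' is not a complete copy of the program; it is a template that, filled
# in with its own representation, regenerates the whole program.
s = 's = %r\nprint(s %% s)'
print(s % s)
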
>> COLIN
>> 
>>> So this means that in a computer abstraction.
>>> d(KNOWLEDGE(t))/dt  is already part of KNOWLEDGE(t)
>> RUSSELL
>>> No, it's not. dK/dt is generated by the interaction of the rules with the
>> environment.
>>
>> No. No. No. There is the old assumption thing again.
>>
>> How, exactly, are you assuming that the agent 'interacts' with the
>> environment? This is the world external to the agent, yes? Do not say
>> "through sensory measurement", because that will not do. There are an
>> infinite number of universes that could give rise to the same sensory
>> measurements.
>
> All true, but how does that differ in the case of humans?

The extreme uniqueness of the circumstance alone... We ARE the thing we
describe. We are more entitled than anyone to make such claims...
notwithstanding that...

Because, as I have said over and over... and will say again: We must live
in the kind of universe that delivers, or allows access to, in ways as yet
unexplained, some aspects of the distal world, to which sensory I/O can be
attached and, thus conjoined, be used to form the qualia
representations/fields we experience in our heads.

Forget about HOW... that this is necessarily the case is unavoidable.
Maxwell's equations prove it, QED-style... Without it, the sensory I/O
(ultimately 100% electromagnetic phenomena) could never resolve the distal
world in any unambiguous way. Such disambiguation physically happens...
such qualia representations exist, hence brains must have direct access to
the distal world. QED.
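
A concrete instance of that ambiguity (a textbook shell-theorem sketch, not
part of the original post): a point charge and a uniformly charged spherical
shell carrying the same total charge produce identical fields at every point
outside the shell, so field measurements made entirely outside the sources
cannot tell the two configurations apart.

# Sketch: two different "distal" charge configurations, identical exterior potentials.
import numpy as np

k, Q, R = 1.0, 1.0, 1.0   # Coulomb constant, total charge, shell radius (arbitrary units)

def potential_point_charge(r):
    # Potential at distance r from a single point charge Q at the origin.
    return k * Q / r

def potential_uniform_shell(r, n=20000):
    # Approximate the shell by n equal point charges spread uniformly over a
    # sphere of radius R, and sum their potentials at the observation point.
    rng = np.random.default_rng(0)
    v = rng.normal(size=(n, 3))
    pts = R * v / np.linalg.norm(v, axis=1, keepdims=True)
    d = np.linalg.norm(pts - np.array([r, 0.0, 0.0]), axis=1)
    return float(np.sum(k * (Q / n) / d))

for r in (2.0, 5.0, 10.0):
    print(r, potential_point_charge(r), potential_uniform_shell(r))
# The two columns agree (up to sampling noise): the exterior field does not
# determine which of the two source distributions produced it.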

>
>> We are electromagnetic objects. Basic EM theory. Proven
>> mathematical theorems. The solutions are not unique for an isolated
>> system.
>>
>> Circularity. Circularity. Circularity.
>>
>> There is _no interaction with the environment_ except for that provided by
>> the qualia as an 'as-if' proxy for the environment. The origins of an
>> a

Re: How would a computer know if it were conscious?

2007-06-20 Thread David Nyman

On Jun 5, 3:12 pm, Bruno Marchal <[EMAIL PROTECTED]> wrote:

> Personally I don't think we can be *personally* mistaken about our own
> consciousness even if we can be mistaken about anything that
> consciousness could be about.

I agree with this, but I would prefer to stop using the term
'consciousness' at all.  To make a decision (to whatever degree of
certainty) about whether a machine possessed a 1-person pov analogous
to a human one, we would surely ask it the same sort of questions one
would ask a human.  That is: questions about its personal 'world' -
what it sees, hears, tastes (and perhaps extended non-human
modalities); what its intentions are, and how it carries them into
practice.  From the machine's point-of-view, we would expect it to
report such features of its personal world as being immediately
present (as ours are), and that it be 'blind' to whatever 'rendering
mechanisms' may underlie this (as we are).

If it passed these tests, it would be making similar claims on a
personal world as we do, and deploying this to achieve similar ends.
Since in this case it could ask itself the same questions that we can,
it would have the same grounds for reaching the same conclusion.

However, I've argued in the other bit of this thread against the
possibility of a computer in practice being able to instantiate such a
1-person world merely in virtue of 'soft' behaviour (i.e.
programming).  I suppose I would therefore have to conclude that no
machine could actually pass the tests I describe above - whether self-
administered or not - purely in virtue of running some AI program,
however complex.  This is an empirical prediction, and will have to
await an empirical outcome.

David

On Jun 5, 3:12 pm, Bruno Marchal <[EMAIL PROTECTED]> wrote:
> On 3 June 2007, at 21:52, Hal Finney wrote:
>
>
>
> > Part of what I wanted to get at in my thought experiment is the
> > bafflement and confusion an AI should feel when exposed to human ideas
> > about consciousness.  Various people here have proffered their own
> > ideas, and we might assume that the AI would read these suggestions,
> > along with many other ideas that contradict the ones offered here.
> > It seems hard to escape the conclusion that the only logical response
> > is for the AI to figuratively throw up its hands and say that it is
> > impossible to know if it is conscious, because even humans cannot agree
> > on what consciousness is.
>
> Augustine said about (subjective) *time* that he knows perfectly what it
> is, but that if you ask him to say what it is, then he admits being
> unable to say anything. I think that this applies to "consciousness".
> We know what it is, although only in some personal and uncommunicable
> way.
> Now this happens to be true also for many mathematical concepts.
> Strictly speaking we don't know how to define the natural numbers, and
> we know today that indeed we cannot define them in a communicable way,
> that is, without assuming the listener already knows what they are.
>
> So what can we do? We can do what mathematicians do all the time. We
> can abandon the very idea of *defining* what consciousness is, and try
> instead to focus on principles or statements that we can agree apply
> to consciousness. Then we can search for (mathematical) objects obeying
> such or similar principles. This can be made easier
> by admitting some theory or realm for consciousness, like the idea that
> consciousness could apply to *some* machines or to some *computational
> events*, etc.
>
> We could agree for example that:
> 1) each one of us knows what consciousness is, but nobody can prove
> he/she/it is conscious.
> 2) consciousness is related to inner personal or self-referential
> modality
> etc.
>
> This is how I proceed in "Conscience et Mécanisme".  ("Conscience" is
> the French for consciousness; "conscience morale" is the French for the
> English "conscience").
>
>
>
> > In particular I don't think an AI could be expected to claim that it
> > knows that it is conscious, that consciousness is a deep and intrinsic
> > part of itself, that whatever else it might be mistaken about it could
> > not be mistaken about being conscious.  I don't see any logical way it
> > could reach this conclusion by studying the corpus of writings on the
> > topic.  If anyone disagrees, I'd like to hear how it could happen.
>
> As far as a machine is correct, when she introspects herself, she
> cannot but discover a gap between truth (p) and provability (Bp). The
> machine can discover correctly (but not necessarily in a completely
> communicable way) a gap between provability (which can potentially
> lead to falsities, despite correctness) and the incorrigible
> knowability or knowledgeability (Bp & p), and then the gap between
> those notions and observability (Bp & Dp) and sensibility (Bp & Dp &
> p). Even without using the conventional name of "consciousness",
> machines can discover semantical fixpoints playing the role of non
> 
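
To spell out the modal shorthand in the passage above (a summary of the
notation only, not wording from the original post; B is the provability box
of Gödel-Löb logic and Dp its standard dual, ¬B¬p):

\begin{align*}
\text{truth:} &\quad p\\
\text{provability:} &\quad Bp\\
\text{consistency (dual):} &\quad Dp \equiv \neg B \neg p\\
\text{knowability:} &\quad Bp \land p\\
\text{observability:} &\quad Bp \land Dp\\
\text{sensibility:} &\quad Bp \land Dp \land p
\end{align*}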

Re: Asifism

2007-06-20 Thread David Nyman

On Jun 20, 8:56 am, "Mohsen Ravanbakhsh" <[EMAIL PROTECTED]>
wrote:

> There is no first person experience problem, because there is no first
> person experience."
>
> Once more here you've interpreted the situation from a third person point of
> view. I don't care what YOU can conclude from MY behavior. It's ONE'S own
> perception of one's OWN experience that matters! And it is more obvious than any
> other fact.

Mohsen, I agree with what you're trying to say here, but I wonder
whether the best 'move' against Torgny's little 'game' (I'm sure he's
playing with us!) is actually to accept what he's saying.  I can agree
with him that:

"there is no first person experience"

because I don't find myself 'experiencing' my 'first person
experience' (this would lead to an infinite regression of
'experiencers').  Rather, I find myself always simply participating in
a 1-person world, which is a subset of a larger participatory
actuality.  Torgny is of course equally a participant in this
actuality.  His error is that he confuses 3-person descriptions with
the 'participants' they merely 'represent'.  3-person descriptions are
always proxies for some distal participant, 'external' to our own 1-
person world: they are 'abstractions'.

As soon as one commits this cognitive error, one is of course struck
by the lack of 1-person characteristics from the proxy 3-person 'point
of view'.  Quite correct: the proxy in itself *doesn't have* an
independent point of view: it's just a parasite on one's own 1-person
world. Metaphorically, it's a sort of 'mirror' that 'reflects' an
external actuality.  'Proxy Torgny' *represents* something else: i.e.
'Participatory Torgny' - and *he* of course may well be granted such a
point of view (as you imply) by reflexive analogy.  But the two must
not be confused.  Ironically, Torgny is presenting us with a textbook
case of the category error that arises from mistaking one's
'reflection' for oneself!

David

> What you're referring to is another problem, namely the "other's mind": how
> do we know that another human is experiencing what we do? We actually assume
> that to be true, that everyone has consciousness.
> But it doesn't justify the other mistake. This does not mean you can deny
> your possible(!) consciousness.
>
> "What you call "the subjective experience of first person" is just some sort
> of behaviour.  When you claim that you have "the subjective experience of
> first person", I can see that you are just showing a special kind of
> behaviour.  You behave as if you have "the subjective experience of first
> person".  And it is possible for an enough complicated computer to show up
> the exact same behaviour.  But in the case of the computer, you can see that
> there is no "subjective experience", there are just a lot of electrical
> fenomena interacting with each other.
>
> There is no first person experience problem, because there is no first
> person experience."
>
> Once more here you've interpreted the situation from a third person point of
> view. I don't care what YOU can conclude from MY behavior. It's ONE'S own
> perception of one's OWN experience that matters! And it is more obvious than any
> other fact.
>
> On 6/19/07, Torgny Tholerus <[EMAIL PROTECTED]> wrote:
>
>
>
>
>
> > > On Tuesday 19 June 2007 11:37:09 Torgny Tholerus wrote:
> > >>  What you call "the subjective experience of first person" is just some
> > >> sort of behaviour.  When you claim that you have "the subjective
> > >> experience
> > >> of first person", I can see that you are just showing a special kind of
> > >> behaviour.  You behave as if you have "the subjective experience of
> > >> first
> > >> person".  And it is possible for an enough complicated computer to show
> > >> up
> > >> the exact same behaviour.  But in the case of the computer, you can see
> > >> that there is no "subjective experience", there are just a lot of
> > >> electrical fenomena interacting with each other.
>
> > >>  There is no first person experience problem, because there is no first
> > >> person experience.
>
> > > In all your reasoning you implicitly use "consciousness", for example when
> > > you say "When you claim that you have the subjective experience
> > > of first person, *I* can see that you are just showing a special kind of
> > > behaviour."
>
> > > Who/what is "I"? Who/what is seeing? What does it mean for you to see
> > > if you have no inner representation of what you (hmmm, if you're not
> > > conscious, "you" is not an appropriate word) see? What does it mean to see at all?
>
> > > In all your reasoning you allude to "I"; this is what the 1st pov is about,
> > > not about you (the conscious being/knower) looking at another person as if
> > > there was no observer (meaning you) in the observation.
>
> > > Quentin
>
> > Our language is very primitive.  You cannot describe reality with it.
>
> > If you have a computer robot with a camera and an arm, how should that
> > robot express it

Re: How would a computer know if it were conscious?

2007-06-20 Thread David Nyman

On Jun 20, 3:35 am, Colin Hales <[EMAIL PROTECTED]> wrote:

> Methinks you 'get it'. You are far more eloquent than I am, but we talk of
> the same thing..

Thank you Colin.  'Eloquence' or 'gibberish'?  Hmm...but let us
proceed...

> where I identify <<>> as a "necessary primitive" and comment that
> 'computation' or 'information' or 'complexity' have only the vaguest,
> arm-waving grip on any claim to such a specific role. Such is the 'magical
> emergence' genre.

Just so.  My own 'meta-analysis' is also a (foolhardy?) attempt to
identify the relevant 'necessity' as *logical*.  The (awesome) power
of this would be to render 'pure' 3-person accounts (i.e. so-called
'physical') radically causally incomplete.  Some primitive like yours
would be a *logically necessary* foundation of *any* coherent account
of 'what-is'.

Strawson and Chalmers, as I've understood them, make the (IMO)
fundamental misstep of proposing a superadded 'fundamental property'
to the 'physical' substrate (e.g. 'information').  This has the fatal
effect of rendering such a 'property' *optional* - i.e. it appears
that everything could proceed just as happily without it in the 3-
person account, and hence 'consciousness' can (by some) still airily
be dismissed as an 'illusion'.  The first move here, I think, is to
stop using the term 'consciousness' to denote any 'property'.

My own meta-analysis attempts to pump the intuition that all
processes, whether 0, 1, or 3-person, must from *logical necessity* be
identified with 'participative encounters', which are unintelligible
in the absence of *any* component: namely 'participation', 'sense',
and 'action'.  So, to 'exist' or 'behave', one must be:

1) a participant (i.e. the prerequisite for 'existence')
2) sensible (i.e. differentiating some 'other' in relationship)
3) active (i.e. the exchange of 'motivation' with the related 'other')

and all manifestations of 'participative existence' must be 'fractal'
to these characteristics in both directions (i.e. 'emergence' and
'supervention').  So, to negate these components one-by-one:

1) if not a participant, you don't get to play
2) if not sensible, you can't relate
3) if not active in relationship, you have no 'motivation'

These logical or semantic characteristics are agnostic to the
'primitive base'.  For example, if we are to assume AR as that base,
then the 'realism' part must denote that we 'participate' in AR, that
'numbers' are 'mutually sensible', and that arithmetical relationship
is 'motivational'.  If I've understood Bruno, 'computationalism'
generates 'somethings' at the 1-person plural level.  My arguments
against 'software uploading' then apply at the level of these
'emergent somethings', not to the axiomatic base. This is the nub of
the 'level of substitution' dilemma in the 'yes doctor' puzzle.

In 'somethingist' accounts, 'players' participate in sensory-
motivational encounters between 'fundamental somethings' (e.g.
conceived as vibrational emergents of a modulated continuum).

The critical move in the above argument is that by making the relation
between 0,1, and 3-person accounts and the primitives *self-relation*
or identity, we jettison the logical possibility of 'de-composing'
participative sensory-motivational relationship.  0,1, and 3-person
are then just different povs on this:

0 - the participatory 'arena' itself
1 - the 'world' of a differentiated 'participant'
3 - a 'proxy', parasitising a 1-person world

'Zombies' and 'software' are revealed as being category 3: they
'parasitise' 1-person worlds, sometimes as 'proxies' for distal
participants, sometimes 'stand-alone'.  The imputation of 'soft
behaviour' to a computer, for example, is just such a 'proxy', and has
no relevance whatsoever to the 1-person pov of the distal
'participatory player'.  Such a pov can emerge only fractally from its
*participative* constitution.

> A
> principle of the kind X must exist or we wouldn't be having this
> discussion. There is no way to characterise an explanation through magical
> emergence that enables empirical testing. Not even in principle. Such
> explanations are impotent at prediction. You adopt the position, the whole job is
> done, and it is a matter of belief = NOT SCIENCE.

Well, I'm happy on the above basis to make the empirical prediction:

No 'computer' will ever spontaneously adopt a 1-person pov in virtue
of any 'computation' imputed to it.

You, of course, are working directly on this project.  My breath is
bated!

For me, one of the most important consequences of the foregoing
relates to our intuitions about ourselves.  We hear from various
directions that our 1-person worlds are 'epiphenomenal' or 'illusory'
or simply that they don't 'exist'.  But this can now be seen to be
vacuous, deriving from a narrative fixation on the 'proxy', or
'parasite', rather than the participant.  In fact, it is the tacit
imputation of sense-action to the parasite (e.g. the 'external world')
that is illusory, epiphenomenal and non-ex

Re: Asifism

2007-06-20 Thread Mohsen Ravanbakhsh
What you're referring to is another problem, namely the "other's mind": how
do we know that another human is experiencing what we do? We actually assume
that to be true, that everyone has consciousness.
But it doesn't justify the other mistake. This does not mean you can deny
your possible(!) consciousness.

"What you call "the subjective experience of first person" is just some sort
of behaviour.  When you claim that you have "the subjective experience of
first person", I can see that you are just showing a special kind of
behaviour.  You behave as if you have "the subjective experience of first
person".  And it is possible for an enough complicated computer to show up
the exact same behaviour.  But in the case of the computer, you can see that
there is no "subjective experience", there are just a lot of electrical
fenomena interacting with each other.

There is no first person experience problem, because there is no first
person experience."

Once more here you've interpreted the situation from a third person point of
view. I don't care what YOU can conclude from MY behavior. It's ONE'S own
perception of one's OWN experience that matters! And it is more obvious than any
other fact.

On 6/19/07, Torgny Tholerus <[EMAIL PROTECTED]> wrote:
>
>
> >
> > On Tuesday 19 June 2007 11:37:09 Torgny Tholerus wrote:
> >>  What you call "the subjective experience of first person" is just some
> >> sort of behaviour.  When you claim that you have "the subjective
> >> experience
> >> of first person", I can see that you are just showing a special kind of
> >> behaviour.  You behave as if you have "the subjective experience of
> >> first
> >> person".  And it is possible for an enough complicated computer to show
> >> up
> >> the exact same behaviour.  But in the case of the computer, you can see
> >> that there is no "subjective experience", there are just a lot of
> >> electrical fenomena interacting with each other.
> >>
> >>  There is no first person experience problem, because there is no first
> >> person experience.
> >
> > In all your reasoning you implicitly use "consciousness", for example when
> > you say "When you claim that you have the subjective experience
> > of first person, *I* can see that you are just showing a special kind of
> > behaviour."
> >
> > Who/what is "I"? Who/what is seeing? What does it mean for you to see
> > if you have no inner representation of what you (hmmm, if you're not
> > conscious, "you" is not an appropriate word) see? What does it mean to see at all?
> >
> > In all your reasoning you allude to "I"; this is what the 1st pov is about,
> > not about you (the conscious being/knower) looking at another person as if
> > there was no observer (meaning you) in the observation.
> >
> > Quentin
>
> Our language is very primitive.  You cannot describe reality with it.
>
> If you have a computer robot with a camera and an arm, how should that
> robot express itself to describe what it observes?  Could the robot say: "I
> see a red brick and a blue brick, and when I take the blue brick and
> place it on the red brick, then I see that the blue brick is over the red
> brick."?
>
> But if the robot says this, then you will say that this proves that the
> robot is conscious, because it uses the word "I".
>
> How shall the robot express itself, so it will be correct?  Is this
> possible?  Or is our language incapable of expressing reality?
>
> We human beings are slaves to our language.  The language restricts our
> thinking.
>
> --
> Torgny Tholerus
>
>
> >
>


-- 

Mohsen Ravanbakhsh,
Sharif University of Technology,
Tehran.
