Re: How would a computer know if it were conscious?

2007-06-14 Thread David Nyman

On Jun 15, 1:13 am, "Stathis Papaioannou" <[EMAIL PROTECTED]> wrote:

> What do we lose if we say that it is organisation which is
> intrinsically capable of sense-action, but it takes a substantial amount of
> organisation of the right sort in order to give rise to consciousness?
> This drops the extra assumption that the substrate is important and is
> consistent with functionalism.

The 'substrate' to which I refer is not matter or anything else in
particular, but a logical-semantic 'substrate' from which 'mind' or
'matter' could emerge.  On this basis, 'sense-action' (i.e. two
differentiated 'entities' primitively 'sensing' each other in order to
'interact') is a logical, or at least semantically coherent,
requirement.  For example, if you want to use a particle-force
analogy, then the 'force' would be the medium of exchange of sense-
action - i.e. relationship.  In Leibniz's ontology, his windowless monads
had no such means of exchange (the 'void' prevented it) and
consequently divine intervention had to do the 'trick'.  I'm hoping
that Bruno will help me with the appropriate analogy for AR+COMP.

In this logical sense, the primitive 'substrate' is crucial, and ISTM
that any coherent notion of 'organisation' must include these basic
semantics - indeed the problem with conventional expositions of
functionalism is that they implicitly appeal to this requirement but
explicitly ignore it.  A coherent 'functionalist' account needs to
track the emergence of sense-action from primitive self-motivated
sources in an appropriate explanatory base, analogous to supervenience
in 'physical' accounts.  However, if this requirement is made
explicit, I'm happy to concur that appropriate organisation based on
it is indeed what generates both consciousness and action, and the
causal linkage between the two accounts.

David

> On 15/06/07, David Nyman <[EMAIL PROTECTED]> wrote:
>
> > On Jun 14, 4:46 am, "Stathis Papaioannou" <[EMAIL PROTECTED]> wrote:
>
> > > Of course all that is true, but it doesn't explain why neurons in the
> > cortex
> > > are the ones giving rise to qualia rather than other neurons or indeed
> > > peripheral sense organs.
>
> > Well, you might as well ask why the engine drives the car and not the
> > brakes.  Presumably (insert research programme here) the different
> > neural (or other relevant) organisation of the cortex is the
> > difference that makes the difference.  My account would run like this:
> > the various emergent organs of the brain and sensory apparatus (like
> > everything else) supervene on an infrastructure capable of 'sense-
> > action'.  I'm (somewhat) agnostic about the nature of this
> > infrastructure: conceive it as strings, particles, or even Bruno's
> > numbers.  But however we conceptualise it, it must (logically) be
> > capable of 'sense-action' in order for activity and cognition to
> > supervene on it.  Then what makes the difference in the cortex must be
> > a supremely complex 'mirroring' mode of organisation (a 'remembered
> > present') lacked by other organs.  To demonstrate this will be a
> > supremely difficult empirical programme, but IMO it presents no
> > invincible philosophical problems if conceived in this way.
>
> What you're suggesting is that matter is intrinsically capable of
> sense-action, but it takes substantial amounts of matter of the right kind
> organised in the right way in order to give rise to what we experience as
> consciousness. What do we lose if we say that it is organisation which is
> intrinsically capable of sense-action, but it takes a substantial amount of
> organisation of the right sort in order to give rise to consciousness?
> This drops the extra assumption that the substrate is important and is
> consistent with functionalism.
>
> --
> Stathis Papaioannou





Re: How would a computer know if it were conscious?

2007-06-14 Thread Stathis Papaioannou
On 15/06/07, David Nyman <[EMAIL PROTECTED]> wrote:
>
>
> On Jun 14, 4:46 am, "Stathis Papaioannou" <[EMAIL PROTECTED]> wrote:
>
> > Of course all that is true, but it doesn't explain why neurons in the
> cortex
> > are the ones giving rise to qualia rather than other neurons or indeed
> > peripheral sense organs.
>
> Well, you might as well ask why the engine drives the car and not the
> brakes.  Presumably (insert research programme here) the different
> neural (or other relevant) organisation of the cortex is the
> difference that makes the difference.  My account would run like this:
> the various emergent organs of the brain and sensory apparatus (like
> everything else) supervene on an infrastructure capable of 'sense-
> action'.  I'm (somewhat) agnostic about the nature of this
> infrastructure: conceive it as strings, particles, or even Bruno's
> numbers.  But however we conceptualise it, it must (logically) be
> capable of 'sense-action' in order for activity and cognition to
> supervene on it.  Then what makes the difference in the cortex must be
> a supremely complex 'mirroring' mode of organisation (a 'remembered
> present') lacked by other organs.  To demonstrate this will be a
> supremely difficult empirical programme, but IMO it presents no
> invincible philosophical problems if conceived in this way.
>

What you're suggesting is that matter is intrinsically capable of
sense-action, but it takes substantial amounts of matter of the right kind
organised in the right way in order to give rise to what we experience as
consciousness. What do we lose if we say that it is organisation which is
intrinsically capable of sense-action, but it takes a substantial amount of
organisation of the right sort in order to give rise to consciousness?
This drops the extra assumption that the substrate is important and is
consistent with functionalism.


-- 
Stathis Papaioannou




Re: How would a computer know if it were conscious?

2007-06-14 Thread David Nyman

On Jun 14, 4:46 am, "Stathis Papaioannou" <[EMAIL PROTECTED]> wrote:

> Of course all that is true, but it doesn't explain why neurons in the cortex
> are the ones giving rise to qualia rather than other neurons or indeed
> peripheral sense organs.

Well, you might as well ask why the engine drives the car and not the
brakes.  Presumably (insert research programme here) the different
neural (or other relevant) organisation of the cortex is the
difference that makes the difference.  My account would run like this:
the various emergent organs of the brain and sensory apparatus (like
everything else) supervene on an infrastructure capable of 'sense-
action'.  I'm (somewhat) agnostic about the nature of this
infrastructure: conceive it as strings, particles, or even Bruno's
numbers.  But however we conceptualise it, it must (logically) be
capable of 'sense-action' in order for activity and cognition to
supervene on it.  Then what makes the difference in the cortex must be
a supremely complex 'mirroring' mode of organisation (a 'remembered
present') lacked by other organs.  To demonstrate this will be a
supremely difficult empirical programme, but IMO it presents no
invincible philosophical problems if conceived in this way.

A note here on 'sense-action':  If we think, for example and for
convenience, of particles 'reacting' to each other in terms of the
exchange of 'forces', ISTM quite natural to intuit this as both
'awareness' or 'sensing', and also 'action'.  After all, I can't react
to you if I'm not aware of you.  IOW, the 'forces' *are* the sense-
action.  And at this fundamental level, such motivation must emerge
intrinsically (i.e. *something like* the way we experience it) to
avoid a literal appeal to any extrinsic source ('laws').  Leibniz saw
this clearly in terms of his 'windowless monads', but these, separated
by the 'void', indeed had to be correlated by divine intervention,
since (unaware of each other) they could not interact.  Nowadays, no
longer conceiving the 'void' as 'nothing', we substitute a modulated
continuum, but the same semantic demands apply.

David

> On 14/06/07, Colin Hales <[EMAIL PROTECTED]> wrote:
>
> > Colin
> > This point is poised on the cliff edge of loaded word meanings and their
> > use with the words 'sufficient' and 'necessary'. By technology I mean
> > novel artifacts resulting from the trajectory of causality including human
> > scientists. By that definition 'life', in the sense you infer, is not
> > technology. The resulting logical loop can be thus avoided. There is a
> > biosphere that arose naturally. It includes complexity of sufficient depth
> > to have created observers within it. Those observers can produce
> > technology. Douglas Adams (bless him) had the digital watch as a valid
> > product of evolution - and I agree with him - it's just that humans are
> > necessarily involved in its causal ancestry.
>
> Your argument that only consciousness can give rise to technology loses
> validity if you include "must be produced by a conscious being" as part of
> the definition of technology.
>
>
>
> > COLIN
> > > That assumes that complexity itself (organisation of information) is the
> > > origin of consciousness in some unspecified, unjustified way. This
> > > position is completely unable to make any empirical predictions about the
> > > nature of human consciousness (eg why your cortex generates qualia and your
> > > spinal cord doesn't - a physiologically proven fact).
>
> > STATHIS
> > > Well, why does your eye generate visual qualia and not your big toe?
> > > It's because the big toe lacks the necessary machinery.
>
> > Colin
> > I am afraid you have your physiology mixed up. The eye does NOT generate
> > visual qualia. Your visual cortex  generates it based on measurements in
> > the eye. The qualia are manufactured and simultaneously projected to
> > appear to come from the eye (actually somewhere medial to them). It's how
> > you have 90degrees++ peripheral vision. The same visual qualia can be
> > generated without an eye (hallucination/dream). Some blind (no functioning
> > retina) people have a visual field for numbers. Other cross-modal mixups
> > can occur in synesthesia (you can hear colours, taste words). You can have
> > a "phantom big toe" without having any big toe at all... just because the
> > cortex is still there making the qualia. If you swapped the sensory nerves
> > in two fingers the motor cortex would drive finger A and it would feel
> > like finger B moved and you would see finger A move. The sensation is in
> > your head, not the periphery. It's merely projected at the periphery.
>
> Of course all that is true, but it doesn't explain why neurons in the cortex
> are the ones giving rise to qualia rather than other neurons or indeed
> peripheral sense organs.
>
> --
> Stathis Papaioannou



Re: How would a computer know if it were conscious?

2007-06-14 Thread John Mikes
Colin and partners:

To the subject question: how do you know your own conscious state? (It all
comes back to my 'ceterum censeo': what are we talking about as
'consciousness'? -
if there is a consensus-ready definition for open-minded use at all).

And a second question: what is 'novelty'?
Usually it refers to something not actually 'registered' among the known and
currently listed items in the inventory of presently active cognitive content.
Within the complexity inherently at work in the world, there is no novelty.
(First off: time is not included in complexity, so a 'later' finding is not
'new'.)
Secondly: our (limited) mindset works only with "that much" content, and I
would be cautious about calling the "rest of the world" 'novelty'.
I wonder how Bruno's (omniscient) Löbian machine would handle novelty.
Now I can continue reading your very exciting discussion.
Thanks
John M

On 6/14/07, Colin Hales <[EMAIL PROTECTED]> wrote:
>
>
> Hi,
>
> STATHIS
> Your argument that only consciousness can give rise to technology loses
> validity if you include "must be produced by a conscious being" as part of
> the definition of technology.
>
> COLIN
> There's obvious circularity in the above sentence and it is the same old
> circularity that endlessly haunts discussions like this (see the dialog
> with Russell).
>
> In dealing with the thread
>
> Re: How would a computer know if it were conscious?
>
> my proposition was that successful _novel_ technology
>
> i.e. an entity comprised of matter with a function not previously observed
> and that resulted from new - as in hitherto unknown - knowledge of the
> natural world
>
>  can only result when sourced through agency inclusive of a phenomenal
> consciousness (specifically and currently only that aspect of human
> brain function I have called 'cortical qualia'). Without the qualia,
> generated based on literal connection with the world outside the agent,
> the novelty upon which the new knowledge was based would be invisible.
>
> My proposition was that if the machine can do the science on exquisite
> novelty that subsequently is in the causal ancestry of novel technology
> then that machine must include phenomenal scenes (qualia) that depict the
> external world.
>
> Scientists and science are the way to attain an objective
> scientific position on subjective experience - one that is just as valid as
> any other scientific position AND that a machine could judge itself by. If
> the machine is willing to bet its existence on the novel technology's
> ability to function when the machine is not there doing what it thinks is
> 'observing it'... and it survives - then it can call itself conscious.
> Humans do that.
>
> But the machines have another option. They can physically battle it out
> against humans. The humans will blitz machines without phenomenal scenes
> every time and the machines without them won't even know it because they
> never knew they were in a fight to start with. They wouldn't be able to
> test a hypothesis that they were even in a fight.
>
> and then this looks all circular again, doesn't it? ... this circularity is
> the predictable result... see below...
>
>
> STATHIS
> >>> Well, why does your eye generate visual qualia and not your big toe?
> It's because the big toe lacks the necessary machinery.
>
> COLIN
> >> I am afraid you have your physiology mixed up. The eye does NOT
> generate visual qualia. Your visual cortex  generates it based on
> measurements in the eye. The qualia are manufactured and simultaneously
> projected to appear to come from the eye (actually somewhere medial to
> them). It's how you have 90degrees++ peripheral vision. The same visual
> qualia can be generated without an eye (hallucination/dream). Some blind
> (no functioning retina) people have a visual field for numbers. Other
> cross-modal mixups can occur in synesthesia (you can hear
> colours, taste words). You can have a "phantom big toe" without having any
> big toe at all... just because the cortex is still there making the
> qualia. If you swapped the sensory nerves in two fingers the motor cortex
> would drive finger A and it would feel like finger B moved and you would
> see finger A move. The sensation is in your head, not the periphery. It's
> merely projected at the periphery.
>
> STATHIS
> >Of course all that is true, but it doesn't explain why neurons in the
> cortex are the ones giving rise to qualia rather than other neurons or
> indeed peripheral sense organs.
>
> COLIN
> Was that what you were after?
>
> hmmm firstly. didactic mode
> =
> Qualia are not about 'knowledge'. Any old piece of junk can symbolically
> encode knowledge. Qualia, however, optimally serve _learning_ = _change_
> in knowledge but more specifically change in knowledge about the world
> OUTSIDE the agent. Mathematically: If KNOWLEDGE(t) is what we know at time
> t, then qualia give us an op

Re: Asifism

2007-06-14 Thread Quentin Anciaux

On Thursday 14 June 2007 15:08:15 Torgny Tholerus wrote:
> Quentin Anciaux wrote:
> > 2007/6/14, Torgny Tholerus <[EMAIL PROTECTED]>:
> >> If a rock shows the same behavior as a human being, then you should be
> >> able to use the same words ("know", "believe", "think") to describe this
> >> behaviour.
> >
> > If the rock knows something and it behaves like it knows it, then it is
> > conscious.
>
> If the rock does *not* know anything, *but* the rock behaves as if it
> knows it, then it is reasonable to say that "the rock knows it".


I don't understand at all what that could mean... The only thing you can
account for from a 3rd person pov is *behaviour*, and only that! So if it
acts like it, it is.

Quentin




Re: Asifism

2007-06-14 Thread David Nyman

On Jun 14, 2:08 pm, Torgny Tholerus <[EMAIL PROTECTED]> wrote:

> If the rock does *not* know anything, *but* the rock behaves as if it
> knows it, then it is reasonable to say that "the rock knows it".

Ah, but of course it is *not* reasonable to say this.  Your account is
an 'action-only' account.  Consequently, it is 'reasonable' in such an
account to say only that the rock *acts* in a certain way.  You are
falling into a massive category error in appropriating an outcome such
as 'knowing', which supervenes on 'sensing', the prerequisite of
action, to a partial 'action-only' account.  Such 'action-only'
accounts are abstractions mediated by mental constructs - they are
*not* the reality to which they (partially) refer: if they were, such
a reality would be posited as 'relating' in the absence of 'sensing',
and thus 'knowing' would be cut out at the start.  But ask yourself:
are the semantics of a 'reality' that self-relates without self-
sensing coherent?  Can you 'react' to me without 'sensing' me?  If
not, then neither can the fundamental components on which you
supervene.

BTW, you are able to fall prey to such perceptual errors only because
your own mental activity supervenes on a sense-action substrate, like
the rest of us.  Get used to it!

David

> Quentin Anciaux wrote:
> > 2007/6/14, Torgny Tholerus <[EMAIL PROTECTED]>:
>
> >> If a rock shows the same behavior as a human being, then you should be able
> >> to use the same words ("know", "believe", "think") to describe this
> >> behaviour.
>
> > If the rock knows something and it behaves like it knows it, then it is
> > conscious.
>
> If the rock does *not* know anything, *but* the rock behaves as if it
> knows it, then it is reasonable to say that "the rock knows it".
>
> --
> Torgny Tholerus





Re: Asifism

2007-06-14 Thread Torgny Tholerus

Quentin Anciaux wrote:
> 2007/6/14, Torgny Tholerus <[EMAIL PROTECTED]>:
>   
>> If a rock shows the same behavior as a human being, then you should be able
>> to use the same words ("know", "believe", "think") to describe this
>> behaviour.
>> 
> If the rock knows something and it behaves like it knows it, then it is
> conscious.
>   
If the rock does *not* know anything, *but* the rock behaves as if it 
knows it, then it is reasonable to say that "the rock knows it".

-- 
Torgny Tholerus





Re: Asifism

2007-06-14 Thread David Nyman

On Jun 14, 12:19 pm, "Quentin Anciaux" <[EMAIL PROTECTED]> wrote:

> Sure, but I still don't understand what 'to know' or 'to believe' could
> mean for an entity which is not conscious. Also, if you're not
> conscious, there is no 'me', no 'I', so there exists no 'person like
> you', because then you're not a person.

Quentin, ISTM that your exchanges with Torgny and Stathis demonstrate
at points the all too prevalent experience of determinedly using the
same words to mean divergent things, often with a consequent lack of
definite result.  In my dialogue with Bruno, I'm attempting to re-construct
'from the ground up' the semantics of 'exist', 'sense' and 'act',
amongst other key terms, in order that it may then be possible to re-
construct consistent meanings of 'know', 'believe', etc.  If there is
no agreement on such fundamentals, then these higher-order 'emergents'
are simply undefined.

From this perspective, I agree with you that a non-conscious entity
can neither 'know' nor 'believe'.  This is because a 'conscious'
entity is a participatory emergent supervening directly on fundamental
'sense-action', whereas Torgny's 'action-only' account could supervene
only on a domain in which 'action' is conceived as occurring in the
absence of 'sensing' between elements (i.e. like 'windowless monads'
that would require divine coordination).  If this is coherent
semantically (in other words logically tenable - which I doubt), such
a domain would necessarily be disconnected from our own in such a way
that Occam would demand its total discount by us.  Torgny, of course,
could not be communicating with us were he a participant in such a
domain, and in any case it is a category error of the first magnitude
to appropriate to such a domain outcomes (e.g. 'knowing') that
supervene on the 'sense' prerequisite of 'action'.

A computer or a rock could be counted as 'knowing' or 'believing' if
its behaviour were consistent with this, and moreover if the internal
causal organisation generating the knowing-believing-action sequence
emerged directly from (i.e. supervened on) fundamental levels of sense-
action.  Insofar as its behaviour was dependent on a 'software'
account, this would not hold, as 'software causality' is merely an
external imputation supplied by us, not one emerging organically from
the entity itself.  Our own knowing-believing-action sequences have
evolved from (and supervene on) such fundamental sense-action, and can
rely on no distinguished 'software account' (as an infinite number of
such accounts could be imputed to the activity of our brains).
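
(A toy illustration of the 'imputation' point - my own sketch in ordinary
Python, not anything drawn from the thread: one and the same 'physical'
state, here a fixed string of four bytes, supports several mutually
incompatible 'software' readings, and nothing in the state itself
privileges any one of them.)

import struct

state = b"\x40\x49\x0f\xdb"  # one fixed 'physical' configuration of four bytes

as_float = struct.unpack(">f", state)[0]  # read as a big-endian IEEE float (~3.14159)
as_int = int.from_bytes(state, "big")     # read as an unsigned integer (1078530011)
as_pair = struct.unpack(">hh", state)     # read as two 16-bit integers (16457, 4059)

print(as_float, as_int, as_pair)
# Each reading is a perfectly good 'account' of the same state; which one
# applies is imputed by an external interpreter, not by the bytes themselves.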

David

> 2007/6/14, Torgny Tholerus <[EMAIL PROTECTED]>:
>
> >  Bruno Marchal wrote:
>
> > On 07-Jun-07, at 15:47, Torgny Tholerus wrote:
>
> > What is the philosophical term for persons like me, that totally deny the
> > existence of the consciousness?
> >  An eliminativist.
> >  "Eliminativist" is not a good term for persons like me, because that term
> > implies that you are eliminating an important part of reality.  But you
> > can't eliminate something that does not exist.  If you don't believe in
> > ghosts, are you then an eliminativist?  If you don't believe in Santa Claus,
> > are you then an eliminativist, eliminating Santa Claus?
>
> >  --
> >  Torgny Tholerus
>
> Sure, but I still don't understand what 'to know' or 'to believe' could
> mean for an entity which is not conscious. Also, if you're not
> conscious, there is no 'me', no 'I', so there exists no 'person like
> you', because then you're not a person.
>
> Quentin





Re: Asifism

2007-06-14 Thread Quentin Anciaux

2007/6/14, Torgny Tholerus <[EMAIL PROTECTED]>:
>
>  Quentin Anciaux wrote:
>  2007/6/14, Stathis Papaioannou <[EMAIL PROTECTED]>:
>
>
>  On 14/06/07, Quentin Anciaux <[EMAIL PROTECTED]> wrote:
>
>  Sure, but I still don't understand what 'to know' or 'to believe' could
> mean for an entity which is not conscious. Also, if you're not
> conscious, there is no 'me', no 'I', so there exists no 'person like
> you', because then you're not a person.
>  Sure, but Torgny is just displaying the person-like behaviour of claiming
> to be a person.
>
>  Yes, in this case his writing is just garbage because it doesn't have
> any meaning. I can't understand what it means for an unconscious thing
> (for example a rock) to know something, to believe in something, to
> have thought (especially this one, because it could be a definition of
> consciousness, ie: something which has thought).
>
>  If the rock behaves as if it knows something (if you say something to the
> rock, and the rock gives you an intelligent answer), then you can say that
> the rock knows something.  When the rock behaves as if it believes in
> something, then you can say that the rock believes in something.  If the
> rock behaves as if it has thought, then you can say that the rock has
> thought.
>
>  If a rock shows the same behavior as a human being, then you should be able
> to use the same words ("know", "believe", "think") to describe this
> behaviour.
>
>  --
>  Torgny Tholerus

If the rock knows something and it behaves like it knows it, then it is
conscious.

Consciousness is that from a third person pov... nobody can know
another's consciousness. Conscious experience is a 1st person pov, and is
thus not communicable in its entirety. I will never know what it is
like to be Torgny, just as you'll never know what it is like to be me;
these things are not 3rd person communicable in their entirety. You
must be it to know it.

Quentin




Re: Asifism

2007-06-14 Thread Torgny Tholerus

Quentin Anciaux wrote:
> 2007/6/14, Stathis Papaioannou <[EMAIL PROTECTED]>:
> > On 14/06/07, Quentin Anciaux <[EMAIL PROTECTED]> wrote:
> >
> > > Sure, but I still don't understand what 'to know' or 'to believe' could
> > > mean for an entity which is not conscious. Also, if you're not
> > > conscious, there is no 'me', no 'I', so there exists no 'person like
> > > you', because then you're not a person.
> >
> > Sure, but Torgny is just displaying the person-like behaviour of claiming
> > to be a person.
>
> Yes, in this case his writing is just garbage because it doesn't have
> any meaning. I can't understand what it means for an unconscious thing
> (for example a rock) to know something, to believe in something, to
> have thought (especially this one, because it could be a definition of
> consciousness, ie: something which has thought).

If the rock behaves as if it knows something (if you say something to
the rock, and the rock gives you an intelligent answer), then you can
say that the rock knows something.  When the rock behaves as if it
believes in something, then you can say that the rock believes in
something.  If the rock behaves as if it has thought, then you can say
that the rock has thought.

If a rock shows the same behavior as a human being, then you should be
able to use the same words ("know", "believe", "think") to describe this
behaviour.
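
(A rough programming analogy of the behavioural criterion above - offered
only as an illustration, with hypothetical classes, not as part of the
argument: in 'duck typing' a program ascribes a capability purely on the
basis of observed behaviour, without asking what the object 'really' is.)

class Human:
    def answer(self, question: str) -> str:
        return "Thoughtful reply to: " + question

class Rock:
    # Hypothetical: a rock wired up so that it produces the same
    # input-output behaviour as the human.
    def answer(self, question: str) -> str:
        return "Thoughtful reply to: " + question

def behaves_as_if_it_knows(thing, question: str) -> bool:
    # Only behaviour is inspected; the implementation is irrelevant.
    return "Thoughtful reply" in thing.answer(question)

for thing in (Human(), Rock()):
    # Same behaviour, so on this criterion we use the same word: "knows".
    print(type(thing).__name__, behaves_as_if_it_knows(thing, "Are you conscious?"))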

-- 
Torgny Tholerus








Re: How would a computer know if it were conscious?

2007-06-14 Thread David Nyman

On Jun 14, 3:47 am, Colin Hales <[EMAIL PROTECTED]> wrote:

> 4) Belief in 'magical emergence' ... qualitative novelty of a kind
> utterly unrelated to the componentry.

Hi Colin

I think there's a link here with the dialogue in the 'Asifism' thread
between Bruno and me. I've been reading Galen Strawson's
"Consciousness and its place in Nature", which has re-ignited some of
the old hoo-hah over 'panpsychism', with the usual attendant
embarrassment and name-calling.  It motivated me to try to unpack the
basic semantic components that are difficult to pin down in these
debates, and for this reason tend to lead to mutual incomprehension.

Strawson refers to the 'magical emergence' you mention, and what is in
his view (and mine) the disanalogy of 'emergent' accounts of
consciousness with, say, how 'liquidity' supervenes on molecular
behaviour.  So I started from the question: what would have to be the
case at the 'component' level for such 'emergence' to make sense (and
I'm aiming at the semantics here, not 'ultimate truth', whatever that
might be).  My answer is simply that for 'sensing' and 'acting' to
'emerge' (i.e. supervene on) some lower level, that lower level must
itself 'sense' and 'act' (or 'grasp', a word that can carry the
meaning of both).

What sense does it make to say that, for example, sub-atomic
particles, strings, or even Bruno's numbers, 'grasp' each other?
Well, semantically, the alternative would be that they would shun and
ignore each other, and we wouldn't get very far on that basis.  They
clearly seem to relate according to certain 'rules', but we're not so
naive (are we?) as to suppose that these are actually 'laws' handily
supplied from some 'external' domain.  Since we're talking 'primitives
here', then such relating, such mutual 'grasping', must just *be*.
There's nothing wrong conceptually here, we always need an axiomatic
base, the question is simply where to situate it, and semantically IMO
the buck stops here or somewhere closely adjacent.

The cool thing about this is that if we start from such primitive
'grasping', then higher-level emergent forms of full sense-action can
now emerge organically by (now entirely valid) analogy with purely
action-related accounts such as liquidity, or for that matter, the
emergence of living behaviour from 'dead matter'.  And the obvious
complexity of the relation between, say, quantum mechanics and, say,
the life cycle of the sphex wasp, should alert us to an equivalent
complexity in the relationship between primitive 'grasp' and its fully
qualitative (read: participatory) emergents - so please let's have no
(oh-so-embarrassing) 'conscious electrons' here.

Further, it shows us in what way 'software consciousness' is
disanalogous with the evolved kind. A computer, or a rock for that
matter, is of course also a natural emergent from primitive grasping,
and this brings with it sense-action, but in the case of these objects
more action than sense at the emergent level.  The software level of
description, however, is merely an imputation, supplied externally
(i.e. by us) and imposed as an interpretation (one of infinitely many)
on the fundamental grasped relations of the substrate components.  By
contrast, the brain (and here comes the research programme) must have
evolved (crucially) to deploy a supremely complex set of 'mirroring'
processes that is (per evolution) genuinely emergent from the
primitive 'grasp' of the component level.

From this comes (possibly) the coolest consequence of these semantics:
our intrinsic 'grasp' of our own motivation (i.e. will, whether 'free'
or not), our participative qualitative modalities, the relation of our
suffering to subsequent action, and so forth, emerge as indeed
'something like' the primitive roots from which they inherit these
characteristics.  This is *real* emergence, not magical, and at one
stroke demolishes epiphenomenalism, zombies, uploading fantasies and
all the other illusory consequences of confusing the 'external
world' (i.e. a projection) with the participatory one in which we are
included.

Cheers

David

> Hi,
>
> >> COLIN
> >> I don't think we need a new word... I'll stick to the far less
> >> ambiguous term 'organisational complexity', I think. The word
> >> 'creativity' is so loaded that its use in general discourse is bound
> >> to be prone to misconstrual, especially in any discussion which
> >> purports to be assessing the relationship between 'organisational
> >> complexity' and consciousness.
>
> RUSSELL
>
> > What sort of misconstruals do you mean? I'm interested...
> > 'organisational complexity' does not capture the concept I'm after.
>
> COLIN
> 1) Those associated with religious 'creation' myths - the creativity
> ascribed to an omniscient/omnipotent entity.
> 2) The creativity ascribed to the act of procreation.
> 3) The pseudo-magical aspects of human creativity (the scientific ah-ha
> moment and the artistic gestalt moment).
> and perhaps...
> 4) Belief in 'magical emergence' .

Re: Asifism

2007-06-14 Thread Quentin Anciaux

2007/6/14, Stathis Papaioannou <[EMAIL PROTECTED]>:
>
>
>
> On 14/06/07, Quentin Anciaux <[EMAIL PROTECTED]> wrote:
>
> > >  "Eliminativist" is not a good term for persons like me, because that
> term
> > > implies that you are eliminating an important part of reality.  But you
> > > can't eliminate something that does not exist.  If you don't believe in
> > > ghosts, are you then an eliminativist?  If you don't believe in Santa
> Claus,
> > > are you then an eliminativist, eliminating Santa Claus?
> > >
> > >  --
> > >  Torgny Tholerus
> >
> > Sure, but I still don't understand what 'to know' or 'to believe' could
> > mean for an entity which is not conscious. Also, if you're not
> > conscious, there is no 'me', no 'I', so there exists no 'person like
> > you', because then you're not a person.
> >
> Sure, but Torgny is just displaying the person-like behaviour of claiming to
> be a person.

Yes, in this case his writing is just garbage because it doesn't have
any meaning. I can't understand what it means for an unconscious thing
(for example a rock) to know something, to believe in something, to
have thought (especially this one, because it could be a definition of
consciousness, ie: something which has thought).

Quentin




Re: Asifism

2007-06-14 Thread Stathis Papaioannou
On 14/06/07, Quentin Anciaux <[EMAIL PROTECTED]> wrote:


> >  "Eliminativist" is not a good term for persons like me, because that
> term
> > implies that you are eliminating an important part of reality.  But you
> > can't eliminate something that does not exist.  If you don't believe in
> > ghosts, are you then an eliminativist?  If you don't believe in Santa
> Claus,
> > are you then an eliminativist, eliminating Santa Claus?
> >
> >  --
> >  Torgny Tholerus
>
> Sure, but I still don't understand what 'to know' or 'to believe' could
> mean for an entity which is not conscious. Also, if you're not
> conscious, there is no 'me', no 'I', so there exists no 'person like
> you', because then you're not a person.
>
Sure, but Torgny is just displaying the person-like behaviour of claiming to
be a person.


-- 
Stathis Papaioannou




Re: Asifism

2007-06-14 Thread Quentin Anciaux

2007/6/14, Torgny Tholerus <[EMAIL PROTECTED]>:
>
>  Bruno Marchal wrote:
>
> On 07-Jun-07, at 15:47, Torgny Tholerus wrote:
>
> What is the philosophical term for persons like me, that totally deny the
> existence of the consciousness?
>  An eliminativist.
>  "Eliminativist" is not a good term for persons like me, because that term
> implies that you are eliminating an important part of reality.  But you
> can't eliminate something that does not exist.  If you don't believe in
> ghosts, are you then an eliminativist?  If you don't believe in Santa Claus,
> are you then an eliminativist, eliminating Santa Claus?
>
>  --
>  Torgny Tholerus

Sure, but I still don't understand what 'to know' or 'to believe' could
mean for an entity which is not conscious. Also, if you're not
conscious, there is no 'me', no 'I', so there exists no 'person like
you', because then you're not a person.

Quentin




Re: Asifism

2007-06-14 Thread Torgny Tholerus

Bruno Marchal wrote:
> On 07-Jun-07, at 15:47, Torgny Tholerus wrote:
>
> > What is the philosophical term for persons like me, that totally deny
> > the existence of the consciousness?
>
> An eliminativist.

"Eliminativist" is not a good term for persons like me, because that
term implies that you are eliminating an important part of reality. 
But you can't eliminate something that does not exist.  If you don't
believe in ghosts, are you then an eliminativist?  If you don't believe
in Santa Claus, are you then an eliminativist, eliminating Santa Claus?

-- 
Torgny Tholerus

