Russell, 

For me to join such a discussion with the temperature over 90F, the
dewpoint over 80F, and the mosquitoes as big as vultures was a terrible
folly.  So I shall probably wimp out at some point.  

While the morning is yet fresh let me say that I THINK I think that most of
the properties we are talking about ARE modeling properties in that they
are seen from a viewpoint by an agent with a purpose.   I am big on models
because I am big on metaphors.  I don't try to find differences between
animals and humans, but if I were looking, it would be something like the
capacity to work our metaphors over with formal operations such as language
and mathematics.  Everything else is as banal as dirt.  Metaphor making
without language is just classical conditioning.  

It is looking more and more like I may quit my day job and move to Santa
Fe in January.  Then I would like to move slowly through this stuff.  Who
was it who said, "If you aren't moving slowly, the chances are you aren't
moving at all"?

Nicholas Thompson
[EMAIL PROTECTED]
http://home.earthlink.net/~nickthompson


> [Original Message]
> From: Russell Standish <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>; The Friday Morning Applied Complexity Coffee Group <friam@redfish.com>
> Cc: echarles <[EMAIL PROTECTED]>
> Date: 7/17/2006 2:03:34 AM
> Subject: Re: [FRIAM] Intentionality is the mark of the vital
>
> I think that intentionality is a modelling property - something has
> intentionality because it is useful to model a given system as if it
> had a mind like ours, more useful than any other model we might have.
>
> So we can say a computer has intentionality if it is useful to model the
> machine as having a mind. This obviously depends on the software
> application, and on how technical the person is (someone who programs a
> computer - like me - is more likely to have a machine model, rather
> than a mind model, of a computer).
>
> So - in terms of answering your question about whether intentionality
> is the mark of the vital, I would have to answer no. I do not see much
> intentional behaviour amongst simple animals (eg insects) or plants -
> rather I tend to think of these as complex machines. By contrast, to a
> well-designed artificial human (as in a computer game character) I will
> assign intentionality, even though I know it is only the output of algorithms.
>
> Cheers
>
> On Fri, Jul 14, 2006 at 09:36:54PM -0400, Nicholas Thompson wrote:
> > Jochen, 
> > 
> > Thanks for your kind response. 
> > 
> > Your question churns my head.  I was keen to argue that intentionality is a
> > property not only of thinking things but of any biological thing.  But I
> > never imagined that intentionality could be used as a criterion of
> > vitality.  I do believe that every living system displays intentionality,
> > but I now have to think about whether I think that all intentional systems
> > are living.  I guess NOT.  However, my reasons for holding this belief are
> > probably robotophobic.  
> > 
> > Nick 
> > 
> > 
> > PS  My first response to your question was to write the following 100
> > words of baffle-gab, like the good academic I am.  It might be marginally
> > interesting in and of itself, but it didn't seem to answer your question.
> > I had put too much effort into it to throw it away, so I stuck it below.
> > Feel free to ignore it.  
> > 
> > BEGIN BAFFLEGAB =======================================================
> > 
> > Intentionality is one of those words that leads to endless confusion.  It
> > can refer to having an intention, or it can refer to a peculiar property of
> > assertions containing verbs of mentation: wanting, thinking, feeling, etc.
> > The sentence "Jones's intention was that the books be placed on the table"
> > is intentional in both senses: intentional in sense one because it tells us
> > something about what Jones is up to, and intentional in the second sense
> > because it displays the odd property of referential opacity.  Unlike the
> > statement "the books are on the table", the statement about Jones's
> > intentions can neither be verified nor disconfirmed by gathering information
> > about the location of the books.  
> > 
> > The two are intimately connected.  Any statement one makes about the
> > intentions of others in sense one is inevitably an intensional utterance in
> > sense two, because the truth value of the statement lies in the organization
> > of Jones's behavior rather than in whether Jones's intention is ever
> > fulfilled.  
> > 
> > It is in this second, perhaps strained, philosophic sense that I think
> > the cue relation is necessarily intentional.  When we say that C is a cue
> > to X, we mean that, from the point of view of the system we are interested
> > in, C stands in for X.  ("In the human respiratory system, blood acidity
> > is a cue for blood oxygenation.")  To the extent that robots use cues, they
> > MUST be intentional in this sense.  
> > 
> > ===========================================================
> > end  BAFFLEGAB.  
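A minimal toy sketch of the cue relation described above, with an invented set
point and gain: the controller only ever reads the cue C (blood acidity, as pH),
yet its behavior is organized around what C stands in for (blood oxygenation).

    # Toy cue-relation sketch; the numbers are invented.
    # The controller never senses oxygenation (X) directly; it responds to
    # acidity (C), which stands in for X from the system's point of view.
    def ventilation_drive(blood_ph, set_point=7.4, gain=10.0):
        """Ventilation rate driven by the cue (pH), not by oxygenation itself."""
        error = set_point - blood_ph   # acidic blood (low pH) -> positive error
        return max(0.0, gain * error)  # breathe harder when the cue says so

    if __name__ == "__main__":
        # The cue can be fooled (hyperventilating at altitude raises pH while
        # oxygenation stays poor), which is why the relation is "intentional":
        # the system is about oxygenation only via the stand-in.
        for ph in (7.45, 7.40, 7.30):
            print(ph, ventilation_drive(ph))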
> > 
> > Nicholas Thompson
> > [EMAIL PROTECTED]
> > http://home.earthlink.net/~nickthompson
> > 
> > 
> > > [Original Message]
> > > From: <[EMAIL PROTECTED]>
> > > To: <friam@redfish.com>
> > > Date: 7/14/2006 12:00:29 PM
> > > Subject: Friam Digest, Vol 37, Issue 17
> > >
> > >
> > > Today's Topics:
> > >
> > >    1. Re: 100 billion neurons (George Duncan)
> > >    2. Re: 100 billion neurons (Jim Rutt)
> > >    3. Re: 100 billion neurons (Frank Wimberly)
> > >    4. Intentionality - the mark of the vital (Jochen Fromm)
> > >
> > >
> > > ----------------------------------------------------------------------
> > >
> > > Message: 1
> > > Date: Thu, 13 Jul 2006 10:38:47 -0600
> > > From: "George Duncan" <[EMAIL PROTECTED]>
> > > Subject: Re: [FRIAM] 100 billion neurons
> > > To: "The Friday Morning Applied Complexity Coffee Group"
> > >   <friam@redfish.com>
> > >
> > > Shall this conversation be neuronic rather than neurotic?
> > >
> > > Or try this
> > > http://www.technologyreview.com/read_article.aspx?id=17164&ch=infotech
> > >
> > >
> > > On 7/13/06, Giles Bowkett <[EMAIL PROTECTED]> wrote:
> > > >
> > > > I'm inclined to agree. The model I use is nonlinear fluid dynamics.
> > > > Say you've got a thought which you began thinking when you were young.
> > > > That thought is a fluid in motion. Over the course of your life you
> > > > revisit certain ideas and revise certain opinions. The motion
> > > > continues for decades. The way you think is like an information
> > > > processing system which evolves over the course of your life, and it's
> > > > true enough to call that software, not hardware, but the flow of data
> > > > through that system is entirely organic, and creating an exact copy of
> > > > a given flow in nonlinear fluid dynamics is impossible. The structure
> > > > of your mode of thinking -- your "software" -- is shaped tremendously
> > > > by the things that you think about; therefore replicating the
> > > > processor without replicating the data can only be of partial
> > > > usefulness, if the processor is shaped by and for the data. It's like
> > > > copying a river by duplicating exactly every last rock and pebble, but
> > > > leaving out the water.
> > > >
> > > > On 7/10/06, Frank Wimberly <[EMAIL PROTECTED]> wrote:
> > > > > Back in the 1980's Hans and I had offices next to each other in the
> > > > > Robotics Institute at Carnegie Mellon.  Over a period of a couple of
> > > > > years we had numerous arguments about whether machines could realize
> > > > > consciousness; whether a human mind could be transferred to a machine,
> > > > > etc.  I remember saying that if somehow my "mind" were transferred from
> > > > > my body to some robot--which I felt was impossible--it might be that
> > > > > everyone else would agree that it was a remarkable likeness but that I
> > > > > would be gone.  Hans replied that I undervalued myself--that I am
> > > > > software not hardware.  After many arguments along these lines I said,
> > > > > "Hans, I now understand why you don't understand what I am saying about
> > > > > consciousness--you don't have it."  This was all in good humor and later
> > > > > when I was teaching a course in AI to MBA students I invited Hans to
> > > > > continue our debate in class.  A good time was had by all, I hope.
> > > > >
> > > > > Frank
> > > > >
> > > > > ---
> > > > > Frank C. Wimberly
> > > > > 140 Calle Ojo Feliz              (505) 995-8715 or (505) 670-9918 (cell)
> > > > > Santa Fe, NM 87505           [EMAIL PROTECTED]
> > > > >
> > > > > -----Original Message-----
> > > > > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
> > > > > Behalf Of Martin C. Martin
> > > > > Sent: Saturday, July 08, 2006 7:16 PM
> > > > > To: The Friday Morning Applied Complexity Coffee Group
> > > > > Subject: Re: [FRIAM] 100 billion neurons
> > > > >
> > > > > I suspect you'd like Hans Moravec's books:
> > > > >
> > > > > http://www.amazon.com/gp/product/0674576187
> > > > > http://www.amazon.com/gp/product/0195136306
> > > > >
> > > > > He uses Moore's law and estimates of the brain's computing power to
> > > > > calculate when we'll have human equivalence in "a computer."  I forget
> > > > > the date, but it's not far.  He also talks about a number of very
> > > > > interesting consequences of this.
> > > > >
> > > > > - Martin
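A back-of-the-envelope version of the kind of Moore's-law crossover estimate
described above; the constants are round-number assumptions rather than
Moravec's own figures, so the printed year is only illustrative.

    import math

    # Rough crossover estimate under assumed constants (not Moravec's figures).
    BRAIN_OPS_PER_SEC = 1e14       # assumed brain-equivalent compute
    PC_OPS_PER_SEC_2006 = 1e10     # assumed desktop machine, circa 2006
    DOUBLING_TIME_YEARS = 1.5      # assumed Moore's-law doubling time

    doublings = math.log2(BRAIN_OPS_PER_SEC / PC_OPS_PER_SEC_2006)
    year = 2006 + doublings * DOUBLING_TIME_YEARS
    print(f"{doublings:.1f} doublings -> rough parity around {year:.0f}")
    # ~13.3 doublings -> around 2026, under these assumptions.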
> > > > >
> > > >
> > > >
> > > > --
> > > > Giles Bowkett
> > > > http://www.gilesgoatboy.org
> > > >
> > >
> > >
> > >
> > > -- 
> > > George T. Duncan
> > > Professor of Statistics
> > > Heinz School of Public Policy and Management
> > > Carnegie Mellon University
> > > Pittsburgh, PA 15213
> > > (412) 268-2172
> > >
> > > ------------------------------
> > >
> > > Message: 2
> > > Date: Thu, 13 Jul 2006 16:57:49 -0600
> > > From: Jim Rutt <[EMAIL PROTECTED]>
> > > Subject: Re: [FRIAM] 100 billion neurons
> > > To: The Friday Morning Applied Complexity Coffee Group
> > >   <friam@redfish.com>
> > >
> > > as an interesting argument that the old hardware/software argument about
> > > consciousness is often malformed, take a look see at:
> > >
> > > Damasio, Antonio: _The Feeling of What Happens: Body and Emotion in the
> > > Making of Consciousness_
> > >
> > >
> > >
> > >
> > >
> > > ===================================
> > > Jim Rutt
> > > voice:  505-989-1115
> > >
> > >
> > > ------------------------------
> > >
> > > Message: 3
> > > Date: Thu, 13 Jul 2006 18:59:23 -0600
> > > From: "Frank Wimberly" <[EMAIL PROTECTED]>
> > > Subject: Re: [FRIAM] 100 billion neurons
> > > To: "'The Friday Morning Applied Complexity Coffee Group'"
> > >   <friam@redfish.com>
> > >
> > > At the risk of being neurotic, here is a link to a review of Damasio's
> > > book:
> > >
> > > http://dir.salon.com/story/books/review/1999/09/21/damasio/index.html
> > >
> > >
> > > Frank
> > >
> > > ---
> > > Frank C. Wimberly
> > > 140 Calle Ojo Feliz              (505) 995-8715 or (505) 670-9918 (cell)
> > > Santa Fe, NM 87505           [EMAIL PROTECTED]
> > > ------------------------------
> > >
> > > Message: 4
> > > Date: Fri, 14 Jul 2006 09:42:14 +0200
> > > From: "Jochen Fromm" <[EMAIL PROTECTED]>
> > > Subject: [FRIAM] Intentionality - the mark of the vital
> > > To: "'The Friday Morning Applied Complexity Coffee Group'"
> > >   <friam@redfish.com>
> > >
> > >  
> > > I have finally read the article "Intentionality is 
> > > the mark of the vital". It contains interesting 
> > > remarks about the mind/body problem, about the
> > > relationship between mental and material "substance",
> > > and nice illustrations (for example about lions and gnus).
> > > Well written. 
> > >
> > > If "intentionality is the mark of the vital",
> > > are artificial agents with intentions the first 
> > > step towards vital, living systems?  Agents are
> > > of course used in artificial life, but in the
> > > context of the article the question seems to
> > > gain new importance.
> > >
> > > -J.
> > > ________________________________
> > >
> > > From: Nicholas Thompson
> > > Sent: Monday, June 26, 2006 3:20 AM
> > > To: friam@redfish.com
> > > Subject: [FRIAM] self-consciousness
> > >
> > > For those rare few of you who are INTENSELY interested in the recent
> > > discussion on self-consciousness, here is a paper on the subject which
> > > asserts that every organism must have a point of view.  
> > >  
> > > http://home.earthlink.net/~nickthompson/id14.html
> > >  
> > >
> > >
> > >
> > >
> > > ------------------------------
> > >
> > >
> > >
> > > End of Friam Digest, Vol 37, Issue 17
> > > *************************************
> > 
> > 
> > 
>
> -- 
> *PS: A number of people ask me about the attachment to my email, which
> is of type "application/pgp-signature". Don't worry, it is not a
> virus. It is an electronic signature, that may be used to verify this
> email came from me if you have PGP or GPG installed. Otherwise, you
> may safely ignore this attachment.
>
>
> ----------------------------------------------------------------------------
> A/Prof Russell Standish                  Phone 8308 3119 (mobile)
> Mathematics                                  0425 253119 (")
> UNSW SYDNEY 2052                       [EMAIL PROTECTED]
> Australia                               http://parallel.hpc.unsw.edu.au/rks
>             International prefix  +612, Interstate prefix 02
> ----------------------------------------------------------------------------



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
