Aw: Re: Re: Re: Re: Re: RE: [PEIRCE-L] RE: AI

2017-06-27 Thread Helmut Raulien
This is our sacred duty to those this country harmed in the past, to those suffering needlessly in the present, and to all who have a right to a bright and safe future.

 

Now is the time for boldness.

 

Now is the time to leap.

Gary Richmond

Philosophy and Critical Thinking

Communication Studies

LaGuardia College of the City University of New York

C 745

718 482-5690
 
On Mon, Jun 26, 2017 at 2:16 PM, Edwina Taborsky <tabor...@primus.ca> wrote:



Gary F - as you say, these issues really have no place in a Peircean analytic framework - unless we want to explore the development of societal norms as a form of Thirdness - which is a legitimate area of research.

I, myself, reject the Naomi Klein perspective [all of her work] and certainly, reject the LEAP perspective- and would argue against it as a naïve utopian agenda. You cannot do away with any of the modal categories, even in Big Systems, eg, as in societal analysis - and coming up with purely rhetorical versions of Thirdness [rather than the real Thirdness that is in that society] and trying to do away with the existential conflicts of Secondness and the private feelings of Firstness is, in my view, a useless agenda.

Edwina
 

On Mon 26/06/17 1:50 PM , g...@gnusystems.ca sent:



Gene, 

 

Thanks for the links; I’m quite familiar with the mirror neuron research and the inferences various people have drawn from it, and it reinforces the point I was trying to make, that empathy is deeper than deliberate reasoning — as well as Peirce’s point that science is grounded in empathy (or at least in “the social principle”).

 

I didn’t miss the point that it is possible to disable the feeling of empathy — I just didn’t see that point as being news in any sense (it’s been pretty obvious for millennia!). I see the particular study as an attempt to quantify some  expressions of empathy (or responses that imply the lack of it). What it doesn’t do is give us much of a clue as to what cultural factors are involved in the suppression of empathic behavior. (And I thought that blaming it on increasing use of AI was  really a stretch!)  As I wrote before, what significance that study has depends on the nature of the devices used to generate those statistics. 

 

There are lots of theories about what causes empathic behavior to be suppressed (not all of them use that terminology, of course.) I think they are valuable to the extent that they give us some clues as to what we can do about the situation. To take the example that happens to be in front of me:  

The election of Donald Trump can certainly be taken as a symptom of a decline in empathy. In her new book, Naomi Klein spends several chapters explaining in factual detail how certain trends in American culture (going back several decades) have prepared the way for somebody like Trump to exploit the situation. But the title of her book, No is Not Enough, emphasizes that what’s needed is not another round of recriminations but a coherent vision of a better way to live, and a viable alternative to the pathologically partisan politics of the day. I can see its outlines in a document called the LEAP manifesto, and I’d like to see us google that and spend more time considering it than we do blaming Google or other arms of “The Machine” for the mess we’re in. 

 

But enough about politics and such “vitally important” matters. What interests me about AI (which is supposed to be the subject of this thread) is what we can learn from it about how the mind works, whether it’s a human or animal bodymind or not. That’s also what my book is about and why I’m interested in Peircean semiotics. And I daresay that’s what motivates many, if not most, AI researchers, including the students that John Sowa is addressing in that presentation he’s still working on. 

 

Gary f.

 

} What is seen with one eye has no depth. [Ursula LeGuin] {

http://gnusystems.ca/wp/  }{ Turning Signs gateway

 

From: Eugene Halton [mailto: eugene.w.halto...@nd.edu]
Sent: 26-Jun-17 11:09
To: Peirce List
Subject: RE: [PEIRCE-L] RE: AI

 



Dear Gary F, 


     Here is a link to the Sarah Konrath et al. study on the decline of empathy among American college students: 



http://faculty.chicagobooth.edu/eob/edobrien_empathyPSPR.pdf



   And a brief Scientific American article on it:  



https://www.scientificamerican.com/article/what-me-care/



 



 You state: " I think Peirce would say that these attributions of empathy (or consciousness) to others are  perceptual judgments — not percepts, but quite beyond (or beneath) any conscious control, and . We feel it rather than reading it from external indications."



 This seems to me to miss the point that it is possible to disable the feeling of empathy. Clinical narcissistic disturbance, for example, substitutes idealization for perceptual feeling, so that what is perceived can be idealized rather than felt.



 Extrapolate that to a society that substitutes on mass scales idealization for felt experience, and you can have societally reduced empathy.

Aw: Re: Re: Re: Re: Re: RE: [PEIRCE-L] RE: AI

2017-06-26 Thread Helmut Raulien
as well as Peirce’s point that science is grounded in empathy (or at least in “the social principle”).

 

I didn’t miss the point that it is possible to disable the feeling of empathy — I just didn’t see that point as being news in any sense (it’s been pretty obvious for millennia!). I see the particular study as an attempt to quantify some  expressions of empathy (or responses that imply the lack of it). What it doesn’t do is give us much of a clue as to what cultural factors are involved in the suppression of empathic behavior. (And I thought that blaming it on increasing use of AI was  really a stretch!)  As I wrote before, what significance that study has depends on the nature of the devices used to generate those statistics. 

 

There are lots of theories about what causes empathic behavior to be suppressed (not all of them use that terminology, of course.) I think they are valuable to the extent that they give us some clues as to what we can do about the situation. To take the example that happens to be in front of me:  

The election of Donald Trump can certainly be taken as a symptom of a decline in empathy. In her new book, Naomi Klein spends several chapters explaining in factual detail how certain trends in American culture (going back several decades) have prepared the way for somebody like Trump to exploit the situation. But the title of her book, No is Not Enough, emphasizes that what’s needed is not another round of recriminations but a coherent vision of a better way to live, and a viable alternative to the pathologically partisan politics of the day. I can see its outlines in a document called the LEAP manifesto, and I’d like to see us google that and spend more time considering it than we do blaming Google or other arms of “The Machine” for the mess we’re in. 

 

But enough about politics and such “vitally important” matters. What interests me about AI (which is supposed to be the subject of this thread) is what we can learn from it about how the mind works, whether it’s a human or animal bodymind or not. That’s also what my book is about and why I’m interested in Peircean semiotics. And I daresay that’s what motivates many, if not most, AI researchers, including the students that John Sowa is addressing in that presentation he’s still working on. 

 

Gary f.

 

} What is seen with one eye has no depth. [Ursula LeGuin] {

http://gnusystems.ca/wp/  }{ Turning Signs gateway

 

From: Eugene Halton [mailto: eugene.w.halto...@nd.edu]
Sent: 26-Jun-17 11:09
To: Peirce List
Subject: RE: [PEIRCE-L] RE: AI

 



Dear Gary F, 


     Here is a link to the Sarah Konrath et al. study on the decline of empathy among American college students: 



http://faculty.chicagobooth.edu/eob/edobrien_empathyPSPR.pdf



   And a brief Scientific American article on it:  



https://www.scientificamerican.com/article/what-me-care/



 



 You state: " I think Peirce would say that these attributions of empathy (or consciousness) to others are  perceptual judgments — not percepts, but quite beyond (or beneath) any conscious control, and . We feel it rather than reading it from external indications."



 This seems to me to miss the point that it is possible to disable the feeling of empathy. Clinical narcissistic disturbance, for example, substitutes idealization for perceptual feeling, so that what is perceived can be idealized rather than felt.



 Extrapolate that to a society that substitutes on mass scales idealization for felt experience, and you can have societally reduced empathy. Unempathic parenting is an excellent way to produce the social media-addicted janissary offspring.



     The human face is a subtle neuromuscular organ of attunement, which has the capacity to read another's mind through mirror micro-mimicry of the other's facial gestures, completely subconsciously. These are  "external indications" mirrored by one.
  One study showed that botox treatments, in paralyzing facial muscles, reduce the micro-mimicry of empathic attunement to the other face in an interaction. The botox recipient is not only impaired in exhibiting her or his own emotional facial micro-muscular movements, but also is impaired in subconsciously micro-mimicking that of the other, thus reducing the embodied feel of the other’s emotional-gestural state (Neal and Chartrand, 2011). Empathy is reduced through the disabling of the facial muscles.
 Vittorio Gallese, one of the neuroscientists who discovered mirror neurons, has discussed "embodied simulation" through "shared neural underpinnings." He states: “…social cognition is not only explicitly reasoning about the contents of someone else’s mind. Our brains, and those of other primates, appear to have developed a basic functional mechanism, embodied simulation, which gives us an experiential insight of other minds. The shareability of the phenomenal content of the intentional relations of others, by means of the shared neural underpinnings, produces intentional attunement.

Aw: Re: Re: Re: Re: RE: [PEIRCE-L] RE: AI

2017-06-26 Thread Helmut Raulien
But the title of her book, No is Not Enough, emphasizes that what’s needed is not another round of recriminations but a coherent vision of a better way to live, and a viable alternative to the pathologically partisan politics of the day. I can see its outlines in a document called the LEAP manifesto, and I’d like to see us google that and spend more time considering it than we do blaming Google or other arms of “The Machine” for the mess we’re in. 

 

But enough about politics and such “vitally important” matters. What interests me about AI (which is supposed to be the subject of this thread) is what we can learn from it about how the mind works, whether it’s a human or animal bodymind or not. That’s also what my book is about and why I’m interested in Peircean semiotics. And I daresay that’s what motivates many, if not most, AI researchers, including the students that John Sowa is addressing in that presentation he’s still working on. 

 

Gary f.

 

} What is seen with one eye has no depth. [Ursula LeGuin] {

http://gnusystems.ca/wp/  }{ Turning Signs gateway

 

From: Eugene Halton [mailto: eugene.w.halto...@nd.edu]
Sent: 26-Jun-17 11:09
To: Peirce List
Subject: RE: [PEIRCE-L] RE: AI

 



Dear Gary F, 


     Here is a link to the Sarah Konrath et al. study on the decline of empathy among American college students: 



http://faculty.chicagobooth.edu/eob/edobrien_empathyPSPR.pdf



   And a brief Scientific American article on it:  



https://www.scientificamerican.com/article/what-me-care/



 



 You state: " I think Peirce would say that these attributions of empathy (or consciousness) to others are  perceptual judgments — not percepts, but quite beyond (or beneath) any conscious control, and . We feel it rather than reading it from external indications."



 This seems to me to miss the point that it is possible to disable the feeling of empathy. Clinical narcissistic disturbance, for example, substitutes idealization for perceptual feeling, so that what is perceived can be idealized rather than felt.



 Extrapolate that to a society that substitutes on mass scales idealization for felt experience, and you can have societally reduced empathy. Unempathic parenting is an excellent way to produce the social media-addicted janissary offspring.



     The human face is a subtle neuromuscular organ of attunement, which has the capacity to read another's mind through mirror micro-mimicry of the other's facial gestures, completely subconsciously. These are  "external indications" mirrored by one.
  One study showed that botox treatments, in paralyzing facial muscles, reduce the micro-mimicry of empathic attunement to the other face in an interaction. The botox recipient is not only impaired in exhibiting her or his own emotional facial micro-muscular movements, but also is impaired in subconsciously micro-mimicking that of the other, thus reducing the embodied feel of the other’s emotional-gestural state (Neal and Chartrand, 2011). Empathy is reduced through the disabling of the facial muscles.
 Vittorio Gallese, one of the neuroscientists who discovered mirror neurons, has discussed "embodied simulation" through "shared neural underpinnings." He states: “…social cognition is not only explicitly reasoning about the contents of someone else’s mind. Our brains, and those of other primates, appear to have developed a basic functional mechanism, embodied simulation, which gives us an experiential insight of other minds. The shareability of the phenomenal content of the intentional relations of others, by means of the shared neural underpinnings, produces intentional attunement. Intentional attunement, in turn, by collapsing the others’ intentions into the observer’s ones, produces the peculiar quality of familiarity we entertain with other individuals. This is what “being empathic” is about. By means of a shared neural state realized in two different bodies that nevertheless obey to the same morpho-functional rules, the “objectual other” becomes “another self”. Vittorio Gallese, “Intentional Attunement. The Mirror Neuron System and Its Role in Interpersonal Relations,” 15 November 2004 Interdisciplines,  http://www.interdisciplines.org/mirror/papers/1



  Gene Halton

On Jun 20, 2017 7:00 PM, <g...@gnusystems.ca> wrote:




List,

Gene’s post in this thread had much to say about “empathy” — considered as something that can be measured and quantified for populations of students, so that comments about trends in “empathy” among them can be taken as meaningful and important.

I wonder about that.

My wondering was given more definite shape just now when I came across this passage in a recent book about consciousness by Evan Thompson:

[[ In practice and in everyday life … we don’t infer the inner presence of consciousness on the basis of outer criteria. Instead, prior to any kind of reflection or deliberation, we already implicitly recognize each other as conscious on the basis of empathy.

Re: Aw: Re: Re: Re: RE: [PEIRCE-L] RE: AI

2017-06-26 Thread Edwina Taborsky
On Mon, Jun 26, 2017 at 2:16 PM, Edwina Taborsky wrote:
 Gary F - as you say, these issues really have no place in a Peircean
analytic framework - unless we want to explore the development of
societal norms as a form of Thirdness - which is a legitimate area of
research. 

I, myself, reject the Naomi Klein perspective [all of her work] and
certainly, reject the LEAP perspective- and would argue against it as
a naïve utopian agenda. You cannot do away with any of the modal
categories, even in Big Systems, eg, as in societal analysis - and
coming up with purely rhetorical versions of Thirdness [rather than
the real Thirdness that is in that society] and trying to do away
with the existential conflicts of Secondness and the private feelings
of Firstness is, in my view, a useless agenda. 

Edwina
 On Mon 26/06/17 1:50 PM , g...@gnusystems.ca sent:   

Gene,  
Thanks for the links; I’m quite familiar with the mirror neuron
research and the inferences various people have drawn from it, and it
reinforces the point I was trying to make, that empathy is deeper than
deliberate reasoning — as well as Peirce’s point that science is
grounded in empathy (or at least in “the social principle”). 
I didn’t miss the point that it is possible to disable the feeling
of empathy — I just didn’t see that point as being news in any
sense (it’s been pretty obvious for millennia!). I see the
particular study as an attempt to quantify some  expressions of
empathy (or responses that imply the lack of it). What it doesn’t
do is give us much of a clue as to what cultural factors are involved
in the suppression of empathic behavior. (And I thought that blaming
it on increasing use of AI was  really a stretch!)  As I wrote
before, what significance that study has depends on the nature of the
devices used to generate those statistics.  
There are lots of theories about what causes empathic behavior to be
suppressed (not all of them use that terminology, of course.) I think
they are valuable to the extent that they give us some clues as to
what we can do about the situation. To take the example that happens
to be in front of me:   

The election of Donald Trump can certainly be taken as a symptom of
a decline in empathy. In her new book, Naomi Klein spends several
chapters explaining in factual detail how certain trends in American
culture (going back several decades) have prepared the way for
somebody like Trump to exploit the situation. But the title of her
book, No is Not Enough, emphasizes that what’s needed is not
another round of recriminations but a coherent vision of a better way
to live, and a viable alternative to the pathologically partisan
politics of the day. I can see its outlines in a document called the
LEAP manifesto, and I’d like to see us google that and spend more
time considering it than we do blaming Google or other arms of “The
Machine” for the mess we’re in.
But enough about politics and such “vitally important” matters.
What interests me about AI (which is supposed to be the subject of
this thread) is what we can learn from it about how the mind works,
whether it’s a human or animal bodymind or not. That’s also what
my book is about and why I’m interested in Peircean semiotics. And
I daresay that’s what motivates many, if not most, AI researchers,
including the students that John Sowa is addressing in that
presentation he’s still working on.
Gary f. 
} What is seen with one eye has no depth. [Ursula LeGuin] { 

http://gnusystems.ca/wp/  }{  Turning Signs gateway 
From: Eugene Halton [mailto: eugene.w.halto...@nd.edu]
 Sent: 26-Jun-17 11:09
 To: Peirce List
 Subject: RE: [PEIRCE-L] RE: AI 
Dear Gary F,   

 Here is a link to the Sarah Konrath et al. study on the decline
of empathy among American college students:

http://faculty.chicagobooth.edu/eob/edobrien_empathyPSPR.pdf

   And a brief Scientific American article on it: 

https://www.scientificamerican.com/article/what-me-care/
 You state: " I think Peirce would say that these attributions
of empathy (or consciousness) to others are  perceptual judgments —
not percepts, but quite beyond (or beneath) any conscious control.
We feel it rather than reading it from external indications."

 This seems to me to miss the point that it is possible to
disable the feeling of empathy. Clinical narcissistic disturbance,
for example, substitutes idealization for perceptual feeling, so that
what is perceived can be idealized rather than felt.   

 Extrapolate that to a society that substitutes on mass scales
idealization for felt experience, and you can have societally reduced
empathy. Unempathic parenting is an excellent way to produce the
social media-addicted janissary offspring.   

 The human face is a subtle neuromuscular organ of attunement, which has the capacity to read another's mind through mirror micro-mimicry of the other's facial gestures, completely subconsciously.

Aw: Re: Re: Re: RE: [PEIRCE-L] RE: AI

2017-06-26 Thread Helmut Raulien
as Peirce’s point that science is grounded in empathy (or at least in “the social principle”).

 

I didn’t miss the point that it is possible to disable the feeling of empathy — I just didn’t see that point as being news in any sense (it’s been pretty obvious for millennia!). I see the particular study as an attempt to quantify some  expressions of empathy (or responses that imply the lack of it). What it doesn’t do is give us much of a clue as to what cultural factors are involved in the suppression of empathic behavior. (And I thought that blaming it on increasing use of AI was really a stretch!)  As I wrote before, what significance that study has depends on the nature of the devices used to generate those statistics. 

 

There are lots of theories about what causes empathic behavior to be suppressed (not all of them use that terminology, of course.) I think they are valuable to the extent that they give us some clues as to what we can do about the situation. To take the example that happens to be in front of me:  

The election of Donald Trump can certainly be taken as a symptom of a decline in empathy. In her new book, Naomi Klein spends several chapters explaining in factual detail how certain trends in American culture (going back several decades) have prepared the way for somebody like Trump to exploit the situation. But the title of her book, No is Not Enough, emphasizes that what’s needed is not another round of recriminations but a coherent vision of a better way to live, and a viable alternative to the pathologically partisan politics of the day. I can see its outlines in a document called the LEAP manifesto, and I’d like to see us google that and spend more time considering it than we do blaming Google or other arms of “The Machine” for the mess we’re in.  

 

But enough about politics and such “vitally important” matters. What interests me about AI (which is supposed to be the subject of this thread) is what we can learn from it about how the mind works, whether it’s a human or animal bodymind or not. That’s also what my book is about and why I’m interested in Peircean semiotics. And I daresay that’s what motivates many, if not most, AI researchers, including the students that John Sowa is addressing in that presentation he’s still working on.  

 

Gary f.

  

} What is seen with one eye has no depth. [Ursula LeGuin] {

http://gnusystems.ca/wp/  }{  Turning Signs gateway

 

From: Eugene Halton [mailto: eugene.w.halto...@nd.edu]
Sent: 26-Jun-17 11:09
To: Peirce List
Subject: RE: [PEIRCE-L] RE: AI

 



Dear Gary F, 


     Here is a link to the Sarah Konrath et al. study on the decline of empathy among American college students: 



http://faculty.chicagobooth.edu/eob/edobrien_empathyPSPR.pdf



   And a brief Scientific American article on it:  



https://www.scientificamerican.com/article/what-me-care/



 



 You state: " I think Peirce would say that these attributions of empathy (or consciousness) to others are  perceptual judgments — not percepts, but quite beyond (or beneath) any conscious control, and . We feel it rather than reading it from external indications."



 This seems to me to miss the point that it is possible to disable the feeling of empathy. Clinical narcissistic disturbance, for example, substitutes idealization for perceptual feeling, so that what is perceived can be idealized rather than felt.



 Extrapolate that to a society that substitutes on mass scales idealization for felt experience, and you can have societally reduced empathy. Unempathic parenting is an excellent way to produce the social media-addicted janissary offspring.



     The human face is a subtle neuromuscular organ of attunement, which has the capacity to read another's mind through mirror micro-mimicry of the other's facial gestures, completely subconsciously. These are  "external indications" mirrored by one.
  One study showed that botox treatments, in paralyzing facial muscles, reduce the micro-mimicry of empathic attunement to the other face in an interaction. The botox recipient is not only impaired in exhibiting her or his own emotional facial micro-muscular movements, but also is impaired in subconsciously micro-mimicking that of the other, thus reducing the embodied feel of the other’s emotional-gestural state (Neal and Chartrand, 2011). Empathy is reduced through the disabling of the facial muscles.
 Vittorio Gallese, one of the neuroscientists who discovered mirror neurons, has discussed "embodied simulation" through "shared neural underpinnings." He states: “…social cognition is not only explicitly reasoning about the contents of someone else’s mind. Our brains, and those of other primates, appear to have developed a basic functional mechanism, embodied simulation, which gives us an experiential insight of other minds. The shareability of the phenomenal content of the intentional relations of others, by means of the shared neural underpinnings, produces intentional attunement.

Re: Re: RE: [PEIRCE-L] RE: AI

2017-06-26 Thread Edwina Taborsky
 in their communities.
Inevitably, this bottom-up revival will lead to a renewal of
democracy at every level of government, working swiftly towards a
system in which every vote counts and corporate money is removed from
political campaigns.  This is a great deal to take on all at once, but
such are the times in which we live. The drop in oil prices has
temporarily relieved the pressure to dig up fossil fuels as rapidly
as high-risk technologies will allow. This pause in frenetic
expansion should not be viewed as a crisis, but as a gift. 
 It has given us a rare moment to look at what we have become – and
decide to change. And so we call on all those seeking political office
to seize this opportunity and embrace the urgent need for
transformation. This is our sacred duty to those this country harmed
in the past, to those suffering needlessly in the present, and to all
who have a right to a bright and safe future.  Now is the time for
boldness.
 Now is the time to leap.
 Gary Richmond
Philosophy and Critical Thinking
Communication Studies
LaGuardia College of the City University of New York
C 745
718 482-5690
 On Mon, Jun 26, 2017 at 2:16 PM, Edwina Taborsky  wrote:
 Gary F - as you say, these issues really have no place in a Peircean
analytic framework - unless we want to explore the development of
societal norms as a form of Thirdness - which is a legitimate area of
research.

I, myself, reject the Naomi Klein perspective [all of her work] and
certainly, reject the LEAP perspective- and would argue against it as
a naïve utopian agenda. You cannot do away with any of the modal
categories, even in Big Systems, eg, as in societal analysis - and
coming up with purely rhetorical versions of Thirdness [rather than
the real Thirdness that is in that society] and trying to do away
with the existential conflicts of Secondness and the private feelings
of Firstness is, in my view, a useless agenda.  

Edwina
 On Mon 26/06/17 1:50 PM, g...@gnusystems.ca sent:
Gene,
 Thanks for the links; I’m quite familiar with the mirror neuron
research and the inferences various people have drawn from it, and it
reinforces the point I was trying to make, that empathy is deeper than
deliberate reasoning — as well as Peirce’s point that science is
grounded in empathy (or at least in “the social principle”).
I didn’t miss the point that it is possible to disable the feeling
of empathy — I just didn’t see that point as being news in any
sense (it’s been pretty obvious for millennia!). I see the
particular study as an attempt to quantify some  expressions of
empathy (or responses that imply the lack of it). What it doesn’t
do is give us much of a clue as to what cultural factors are involved
in the suppression of empathic behavior. (And I thought that blaming
it on increasing use of AI was really a stretch!)  As I wrote before,
what significance that study has depends on the nature of the devices
used to generate those statistics.
There are lots of theories about what causes empathic behavior to be
suppressed (not all of them use that terminology, of course.) I think
they are valuable to the extent that they give us some clues as to
what we can do about the situation. To take the example that happens
to be in front of me:  

 The election of Donald Trump can certainly be taken as a symptom of
a decline in empathy. In her new book, Naomi Klein spends several
chapters explaining in factual detail how certain trends in American
culture (going back several decades) have prepared the way for
somebody like Trump to exploit the situation. But the title of her
book, No is Not Enough, emphasizes that what’s needed is not
another round of recriminations but a coherent vision of a better way
to live, and a viable alternative to the pathologically partisan
politics of the day. I can see its outlines in a document called the
LEAP manifesto, and I’d like to see us google that and spend more
time considering it than we do blaming Google or other arms of “The
Machine” for the mess we’re in.  
But enough about politics and such “vitally important” matters.
What interests me about AI (which is supposed to be the subject of
this thread) is what we can learn from it about how the mind works,
whether it’s a human or animal bodymind or not. That’s also what
my book is about and why I’m interested in Peircean semiotics. And
I daresay that’s what motivates many, if not most, AI researchers,
including the students that John Sowa is addressing in that
presentation he’s still working on.  
Gary f.
} What is seen with one eye has no depth. [Ursula LeGuin] {

http://gnusystems.ca/wp/  }{  Turning Signs gateway
From: Eugene Halton [mailto: eugene.w.halto...@nd.edu] 
  Sent: 26-Jun-17 11:09
 To: Peirce List 
 Subject: RE: [PEIRCE-L] RE: AI
Dear Gary F,

 Here is a link to the Sarah Konrath et al. study on the decline
of empathy among American college students

Re: RE: [PEIRCE-L] RE: AI

2017-06-26 Thread Gary Richmond
links; I’m quite familiar with the mirror neuron research
> and the inferences various people have drawn from it, and it reinforces the
> point I was trying to make, that empathy is deeper than deliberate
> reasoning — as well as Peirce’s point that science is grounded in empathy
> (or at least in “the social principle”).
>
>
>
> I didn’t miss the point that it is possible to disable the feeling of
> empathy — I just didn’t see that point as being news in any sense (it’s
> been pretty obvious for millennia!). I see the particular study as an
> attempt to quantify some expressions of empathy (or responses that imply
> the lack of it). What it doesn’t do is give us much of a clue as to what
> cultural factors are involved in the suppression of empathic behavior. (And
> I thought that blaming it on increasing use of AI was really a stretch!)
>  As I wrote before, what significance that study has depends on the nature
> of the devices used to generate those statistics.
>
>
>
> There are lots of theories about what causes empathic behavior to be
> suppressed (not all of them use that terminology, of course.) I think they
> are valuable to the extent that they give us some clues as to what we can
> do about the situation. To take the example that happens to be in front
> of me:
>
> The election of Donald Trump can certainly be taken as a symptom of a
> decline in empathy. In her new book, Naomi Klein spends several chapters
> explaining in factual detail how certain trends in American culture (going
> back several decades) have prepared the way for somebody like Trump to
> exploit the situation. But the title of her book, No is Not Enough,
> emphasizes that what’s needed is not another round of recriminations but a
> coherent vision of a better way to live, and a viable alternative to the
> pathologically partisan politics of the day. I can see its outlines in a
> document called the LEAP manifesto, and I’d like to see us google that and
> spend more time considering it than we do blaming Google or other arms of
> “The Machine” for the mess we’re in.
>
>
>
> But enough about politics and such “vitally important” matters. What
> interests me about AI (which is supposed to be the subject of this thread)
> is what we can learn from it about how the mind works, whether it’s a human
> or animal bodymind or not. That’s also what my book is about and why I’m
> interested in Peircean semiotics. And I daresay that’s what motivates many,
> if not most, AI researchers, including the students that John Sowa is
> addressing in that presentation he’s still working on.
>
>
>
> Gary f.
>
>
>
> } What is seen with one eye has no depth. [Ursula LeGuin] {
>
> http://gnusystems.ca/wp/ }{ Turning Signs gateway
>
>
>
> From: Eugene Halton [mailto:eugene.w.halto...@nd.edu]
> Sent: 26-Jun-17 11:09
> To: Peirce List
> Subject: RE: [PEIRCE-L] RE: AI
>
>
>
> Dear Gary F,
>
>  Here is a link to the Sarah Konrath et al. study on the decline of
> empathy among American college students:
>
> http://faculty.chicagobooth.edu/eob/edobrien_empathyPSPR.pdf
>
>And a brief Scientific American article on it:
>
> https://www.scientificamerican.com/article/what-me-care/
>
>
>
>  You state: "I think Peirce would say that these attributions of
> empathy (or consciousness) to others are perceptual judgments — not
> percepts, but quite beyond (or beneath) any conscious control. We
> feel it rather than reading it from external indications."
>
>  This seems to me to miss the point that it is possible to disable the
> feeling of empathy. Clinical narcissistic disturbance, for example,
> substitutes idealization for perceptual feeling, so that what is perceived
> can be idealized rather than felt.
>
>  Extrapolate that to a society that substitutes on mass scales
> idealization for felt experience, and you can have societally reduced
> empathy. Unempathic parenting is an excellent way to produce the social
> media-addicted janissary offspring.
>
>  The human face is a subtle neuromuscular organ of attunement, which
> has the capacity to read another's mind through mirror micro-mimicry of the
> other's facial gestures, completely subconsciously. These are "external
> indications" mirrored by one.
>   One study showed that botox treatments, in paralyzing facial
> muscles, reduce the micro-mimicry of empathic attunement to the other face
> in an interaction. The botox recipient is not only impaired in exhibiting
> her or his own emotional facial micro-muscular movements, but also is
> impaired in subconsciously micro-mimicking that of the other, thus reducing
> the embodied feel of the other’s emotional-gestural state (Neal and Chartrand, 2011).

Re: RE: [PEIRCE-L] RE: AI

2017-06-26 Thread Edwina Taborsky
 
 Gary F - as you say, these issues really have no place in a Peircean
analytic framework - unless we want to explore the development of
societal norms as a form of Thirdness - which is a legitimate area of
research.

I, myself, reject the Naomi Klein perspective [all of her work] and
certainly, reject the LEAP perspective- and would argue against it as
a naïve utopian agenda. You cannot do away with any of the modal
categories, even in Big Systems, eg, as in societal analysis - and
coming up with purely rhetorical versions of Thirdness [rather than
the real Thirdness that is in that society] and trying to do away
with the existential conflicts of Secondness and the private feelings
of Firstness is, in my view, a useless agenda. 

Edwina
 On Mon 26/06/17  1:50 PM , g...@gnusystems.ca sent:
Gene,
 Thanks for the links; I’m quite familiar with the mirror neuron
research and the inferences various people have drawn from it, and it
reinforces the point I was trying to make, that empathy is deeper than
deliberate reasoning — as well as Peirce’s point that science is
grounded in empathy (or at least in “the social principle”).
I didn’t miss the point that it is possible to disable the feeling
of empathy — I just didn’t see that point as being news in any
sense (it’s been pretty obvious for millennia!). I see the
particular study as an attempt to quantify some  expressions of
empathy (or responses that imply the lack of it). What it doesn’t
do is give us much of a clue as to what cultural factors are involved
in the suppression of empathic behavior. (And I thought that blaming
it on increasing use of AI was really a stretch!)  As I wrote before,
what significance that study has depends on the nature of the devices
used to generate those statistics.
There are lots of theories about what causes empathic behavior to be
suppressed (not all of them use that terminology, of course.) I think
they are valuable to the extent that they give us some clues as to
what we can do about the situation. To take the example that happens
to be in front of me: 

 The election of Donald Trump can certainly be taken as a symptom of
a decline in empathy. In her new book, Naomi Klein spends several
chapters explaining in factual detail how certain trends in American
culture (going back several decades) have prepared the way for
somebody like Trump to exploit the situation. But the title of her
book, No is Not Enough, emphasizes that what’s needed is not
another round of recriminations but a coherent vision of a better way
to live, and a viable alternative to the pathologically partisan
politics of the day. I can see its outlines in a document called the
LEAP manifesto, and I’d like to see us google that and spend more
time considering it than we do blaming Google or other arms of “The
Machine” for the mess we’re in. 
But enough about politics and such “vitally important” matters.
What interests me about AI (which is supposed to be the subject of
this thread) is what we can learn from it about how the mind works,
whether it’s a human or animal bodymind or not. That’s also what
my book is about and why I’m interested in Peircean semiotics. And
I daresay that’s what motivates many, if not most, AI researchers,
including the students that John Sowa is addressing in that
presentation he’s still working on. 
Gary f.
} What is seen with one eye has no depth. [Ursula LeGuin] {

http://gnusystems.ca/wp/ }{  Turning Signs gateway
From: Eugene Halton [mailto:eugene.w.halto...@nd.edu] 
 Sent: 26-Jun-17 11:09
 To: Peirce List 
 Subject: RE: [PEIRCE-L] RE: AI
Dear Gary F,

 Here is a link to the Sarah Konrath et al. study on the decline
of empathy among American college students:  

http://faculty.chicagobooth.edu/eob/edobrien_empathyPSPR.pdf

   And a brief Scientific American article on it: 

 https://www.scientificamerican.com/article/what-me-care/
 You state: "I think Peirce would say that these attributions of
empathy (or consciousness) to others are  perceptual judgments — not
percepts, but quite beyond (or beneath) any conscious control.
We feel it rather than reading it from external indications."

 This seems to me to miss the point that it is possible to
disable the feeling of empathy. Clinical narcissistic disturbance,
for example, substitutes idealization for perceptual feeling, so that
what is perceived can be idealized rather than felt.  

 Extrapolate that to a society that substitutes on mass scales
idealization for felt experience, and you can have societally reduced
empathy. Unempathic parenting is an excellent way to produce the
social media-addicted janissary offspring. 

 The human face is a subtle neuromuscular organ of attunement,
which has the capacity to read another's mind through mirror
micro-mimicry of the other's facial gestures, completely subconsciously.

RE: [PEIRCE-L] RE: AI

2017-06-26 Thread gnox
Gene,

 

Thanks for the links; I’m quite familiar with the mirror neuron research and 
the inferences various people have drawn from it, and it reinforces the point I 
was trying to make, that empathy is deeper than deliberate reasoning — as well 
as Peirce’s point that science is grounded in empathy (or at least in “the 
social principle”).

 

I didn’t miss the point that it is possible to disable the feeling of empathy — 
I just didn’t see that point as being news in any sense (it’s been pretty 
obvious for millennia!). I see the particular study as an attempt to quantify 
some expressions of empathy (or responses that imply the lack of it). What it 
doesn’t do is give us much of a clue as to what cultural factors are involved 
in the suppression of empathic behavior. (And I thought that blaming it on 
increasing use of AI was really a stretch!)  As I wrote before, what 
significance that study has depends on the nature of the devices used to 
generate those statistics.

 

There are lots of theories about what causes empathic behavior to be suppressed 
(not all of them use that terminology, of course.) I think they are valuable to 
the extent that they give us some clues as to what we can do about the 
situation. To take the example that happens to be in front of me: 

The election of Donald Trump can certainly be taken as a symptom of a decline 
in empathy. In her new book, Naomi Klein spends several chapters explaining in 
factual detail how certain trends in American culture (going back several 
decades) have prepared the way for somebody like Trump to exploit the 
situation. But the title of her book, No is Not Enough, emphasizes that what’s 
needed is not another round of recriminations but a coherent vision of a better 
way to live, and a viable alternative to the pathologically partisan politics 
of the day. I can see its outlines in a document called the LEAP manifesto, and 
I’d like to see us google that and spend more time considering it than we do 
blaming Google or other arms of “The Machine” for the mess we’re in.

 

But enough about politics and such “vitally important” matters. What interests 
me about AI (which is supposed to be the subject of this thread) is what we can 
learn from it about how the mind works, whether it’s a human or animal bodymind 
or not. That’s also what my book is about and why I’m interested in Peircean 
semiotics. And I daresay that’s what motivates many, if not most, AI 
researchers, including the students that John Sowa is addressing in that 
presentation he’s still working on.

 

Gary f.

 

} What is seen with one eye has no depth. [Ursula LeGuin] {

 http://gnusystems.ca/wp/ }{ Turning Signs gateway

 

From: Eugene Halton [mailto:eugene.w.halto...@nd.edu] 
Sent: 26-Jun-17 11:09
To: Peirce List <peirce-l@list.iupui.edu>
Subject: RE: [PEIRCE-L] RE: AI

 

Dear Gary F,

 Here is a link to the Sarah Konrath et al. study on the decline of empathy 
among American college students: 

http://faculty.chicagobooth.edu/eob/edobrien_empathyPSPR.pdf

   And a brief Scientific American article on it: 

https://www.scientificamerican.com/article/what-me-care/

 

 You state: "I think Peirce would say that these attributions of empathy 
(or consciousness) to others are perceptual judgments — not percepts, but quite 
beyond (or beneath) any conscious control. We feel it rather than reading 
it from external indications."

 This seems to me to miss the point that it is possible to disable the 
feeling of empathy. Clinical narcissistic disturbance, for example, substitutes 
idealization for perceptual feeling, so that what is perceived can be idealized 
rather than felt. 

 Extrapolate that to a society that substitutes on mass scales idealization 
for felt experience, and you can have societally reduced empathy. Unempathic 
parenting is an excellent way to produce the social media-addicted janissary 
offspring. 

 The human face is a subtle neuromuscular organ of attunement, which has 
the capacity to read another's mind through mirror micro-mimicry of the other's 
facial gestures, completely subconsciously. These are "external indications" 
mirrored by one. 
  One study showed that botox treatments, in paralyzing facial muscles, 
reduce the micro-mimicry of empathic attunement to the other face in an 
interaction. The botox recipient is not only impaired in exhibiting her or his 
own emotional facial micro-muscular movements, but also is impaired in 
subconsciously micro-mimicking that of the other, thus reducing the embodied 
feel of the other’s emotional-gestural state (Neal and Chartrand, 2011). 
Empathy is reduced through the disabling of the facial muscles.
 Vittorio Gallese, one of the neuroscientists who discovered mirror 
neurons, has discussed "embodied simulation" through "shared neural 
underpinnings." He states: “…social cognition is not only

Re: RE: [PEIRCE-L] RE: AI

2017-06-26 Thread Edwina Taborsky
 

 Gene, list - very interesting -

I wonder if there are multiple issues here about the 'decline of
empathy'.

One reason might be the postmodern method of raising children which,
in a sense, isolates the child from any effect of his behaviour. That
is - no matter what he/she does, he is praised as 'that's great'. If
the child acts out, then, he is assumed to be a victim of some
aggression that is, in a mechanical sense, causing him to release
that aggression on someone else. He is not nurtured to be himself 
causal and responsible. The focus is on 'building self-esteem'.  Some
schools do not give marks to prevent 'loss of self-esteem'. This
building up of a sense of inviolate righteousness is one possible
cause of the decline of empathy, since the focus, as noted, is on the
Self and not on the Self-and-Others.

The interesting thing is that along with this isolation of the Self
from the effects of how one directly acts towards others  - and I
think the increase in bullying is one result, but- we see an increase
in what I call Seminar Room interaction with Others. That is, the
individual interacts with others indirectly, by joining abstract
group causes: peace, climate change, earth day  where what one
does as an individual is indirect and actually has little to no
effect.

But there is another issue - and that is the increase of tribalism
in our societies. By tribalism I mean 'identity politics' which
rejects a common humanity that is shared by all, and  rejects
individualism within this commonality and instead herds people into
homogeneous groups with unique characteristics - and considers them
isolate from, different from - other groups. Tribalism by definition
views other tribes as adversarial. Therefore the people in other
tribes are 'dehumanized'. We see this in wars - where both sides view
each other as non-human.

But your other issue - the importance of facial expression - is also
important. I can see the argument with regard to Botox, but this
argument is also valid with regard to cultural veils which hide the
face to non-members of the tribe and thus reject outside involvement;
 and to cultural values which reject expression of emotions [stiff
upper lip] and, effectively, also result in the non-involvement of
others. 

Edwina
 On Mon 26/06/17 11:08 AM , Eugene Halton eugene.w.halto...@nd.edu
sent:
 Dear Gary F, Here is a link to the Sarah Konrath et al. study on
the decline of empathy among American college students: 
http://faculty.chicagobooth.edu/eob/edobrien_empathyPSPR.pdf
And a brief Scientific American article on it:
https://www.scientificamerican.com/article/what-me-care/
  You state: "I think Peirce would say that these attributions of
empathy (or consciousness) to others are perceptual judgments — not
percepts, but quite beyond (or beneath) any conscious control.
We feel it rather than reading it from external indications."
  This seems to me to miss the point that it is possible to
disable the feeling of empathy. Clinical narcissistic disturbance,
for example, substitutes idealization for perceptual feeling, so that
what is perceived can be idealized rather than felt. 
   Extrapolate that to a society that substitutes on mass scales
idealization for felt experience, and you can have societally reduced
empathy. Unempathic parenting is an excellent way to produce the
social media-addicted janissary offspring. 
  The human face is a subtle neuromuscular organ of attunement,
which has the capacity to read another's mind through mirror
micro-mimicry of the other's facial gestures, completely
subconsciously. These are  "external indications" mirrored by one. 
   One study showed that botox treatments, in paralyzing facial
muscles, reduce the micro-mimicry of empathic attunement to the other
face in an interaction. The botox recipient is not only impaired in
exhibiting her or his own emotional facial micro-muscular movements,
but also is impaired in subconsciously micro-mimicking that of the
other, thus reducing the embodied feel of the other’s
emotional-gestural state (Neal and Chartrand, 2011). Empathy is
reduced through the disabling of the facial muscles.
  Vittorio Gallese, one of the neuroscientists who discovered
mirror neurons, has discussed "embodied simulation" through "shared
neural underpinnings." He states: “…social cognition is not only
explicitly reasoning about the contents of someone else’s mind. Our
brains, and those of other primates, appear to have developed a basic
functional mechanism, embodied simulation, which gives us an
experiential insight of other minds. The shareability of the
phenomenal content of the intentional relations of others, by means
of the shared neural underpinnings, produces intentional attunement.
Intentional attunement, in turn, by collapsing the others’
intentions into the 

RE: [PEIRCE-L] RE: AI

2017-06-26 Thread Eugene Halton
Dear Gary F,
 Here is a link to the Sarah Konrath et al. study on the decline of
empathy among American college students:
http://faculty.chicagobooth.edu/eob/edobrien_empathyPSPR.pdf
   And a brief Scientific American article on it:
https://www.scientificamerican.com/article/what-me-care/

 You state: "I think Peirce would say that these attributions of
empathy (or consciousness) to others are *perceptual judgments* — not
percepts, but quite beyond (or beneath) any conscious control. We
*feel* it rather than reading it from external indications."

 This seems to me to miss the point that it is possible to disable the
feeling of empathy. Clinical narcissistic disturbance, for example,
substitutes idealization for perceptual feeling, so that what is perceived
can be idealized rather than felt.
 Extrapolate that to a society that substitutes on mass scales
idealization for felt experience, and you can have societally reduced
empathy. Unempathic parenting is an excellent way to produce the social
media-addicted janissary offspring.
 The human face is a subtle neuromuscular organ of attunement, which
has the capacity to read another's mind through mirror micro-mimicry of the
other's facial gestures, completely subconsciously. These are "external
indications" mirrored by one.
  One study showed that botox treatments, in paralyzing facial muscles,
reduce the micro-mimicry of empathic attunement to the other face in an
interaction. The botox recipient is not only impaired in exhibiting her or
his own emotional facial micro-muscular movements, but also is impaired in
subconsciously micro-mimicking that of the other, thus reducing the
embodied feel of the other’s emotional-gestural state (Neal and Chartrand,
2011). Empathy is reduced through the disabling of the facial muscles.
 Vittorio Gallese, one of the neuroscientists who discovered mirror
neurons, has discussed "embodied simulation" through "shared neural
underpinnings." He states: “…social cognition is not only explicitly
reasoning about the contents of someone else’s mind. Our brains, and those
of other primates, appear to have developed a basic functional mechanism,
embodied simulation, which gives us an experiential insight of other minds.
The shareability of the phenomenal content of the intentional relations of
others, by means of the shared neural underpinnings, produces intentional
attunement. Intentional attunement, in turn, by collapsing the others’
intentions into the observer’s ones, produces the peculiar quality of
familiarity we entertain with other individuals. This is what “being
empathic” is about. By means of a shared neural state realized in two
different bodies that nevertheless obey to the same morpho-functional
rules, the “objectual other” becomes “another self”. Vittorio Gallese,
“Intentional Attunement. The Mirror Neuron System and Its Role in
Interpersonal Relations,” 15 November 2004 Interdisciplines,
http://www.interdisciplines.org/mirror/papers/1
  Gene Halton




On Jun 20, 2017 7:00 PM,  wrote:

> List,
>
>
>
> Gene’s post in this thread had much to say about “empathy” — considered as
> something that can be measured and quantified for populations of students,
> so that comments about trends in “empathy” among them can be taken as
> meaningful and important.
>
>
>
> I wonder about that.
>
>
>
> My wondering was given more definite shape just now when I came across
> this passage in a recent book about consciousness by Evan Thompson:
>
> [[ In practice and in everyday life … we don’t infer the inner presence of
> consciousness on the basis of outer criteria. Instead, prior to any kind of
> reflection or deliberation, we already implicitly recognize each other as
> conscious on the basis of empathy. Empathy, as philosophers in the
> phenomenological tradition have shown, is the direct perception of another
> being’s actions and gestures as expressive embodiments of consciousness. We
> don’t see facial expressions, for example, as outer signs of an inner
> consciousness, as we might see an EEG pattern; we see joy directly in the
> smiling face or sadness in the tearful eyes. Moreover, even in difficult or
> problematic cases where we’re forced to consider outer criteria, their
> meaningfulness as indicators of consciousness ultimately depends on
> and presupposes our prior empathetic grasp of consciousness. ]]
>
>   —Thompson, Evan. *Waking, Dreaming, Being: Self and Consciousness in
> Neuroscience, Meditation, and Philosophy* (Kindle Locations 2362-2370).
> Columbia University Press. Kindle Edition.
>
>
>
> If we don’t “infer the inner presence of consciousness on the basis of
> outer criteria,” but perceive it directly *on the basis of empathy*, how
> do we infer the inner presence (or absence) of empathy itself? In the same
> way, i.e. by *direct perception*, according to Thompson. I think Peirce
> would say that these attributions of empathy (or consciousness) to 

Re: [PEIRCE-L] RE: AI

2017-06-20 Thread Stephen C. Rose
There are all sorts of theories and I think those to do with empathy can
rest alongside studies that show, as one from Harvard recently did, that
affluent millennials would be receptive to a police state. I am with
Wittgenstein on theories (not for them) and with Peirce in dismissing the
blanket doubt of Descartes. We ebb and flow but generally evolve. Slowly,
fallibly, with some trust in continuity.

amazon.com/author/stephenrose

On Tue, Jun 20, 2017 at 7:00 PM,  wrote:

> List,
>
>
>
> Gene’s post in this thread had much to say about “empathy” — considered as
> something that can be measured and quantified for populations of students,
> so that comments about trends in “empathy” among them can be taken as
> meaningful and important.
>
>
>
> I wonder about that.
>
>
>
> My wondering was given more definite shape just now when I came across
> this passage in a recent book about consciousness by Evan Thompson:
>
> [[ In practice and in everyday life … we don’t infer the inner presence of
> consciousness on the basis of outer criteria. Instead, prior to any kind of
> reflection or deliberation, we already implicitly recognize each other as
> conscious on the basis of empathy. Empathy, as philosophers in the
> phenomenological tradition have shown, is the direct perception of another
> being’s actions and gestures as expressive embodiments of consciousness. We
> don’t see facial expressions, for example, as outer signs of an inner
> consciousness, as we might see an EEG pattern; we see joy directly in the
> smiling face or sadness in the tearful eyes. Moreover, even in difficult or
> problematic cases where we’re forced to consider outer criteria, their
> meaningfulness as indicators of consciousness ultimately depends on
> and presupposes our prior empathetic grasp of consciousness. ]]
>
>   —Thompson, Evan. *Waking, Dreaming, Being: Self and Consciousness in
> Neuroscience, Meditation, and Philosophy* (Kindle Locations 2362-2370).
> Columbia University Press. Kindle Edition.
>
>
>
> If we don’t “infer the inner presence of consciousness on the basis of
> outer criteria,” but perceive it directly *on the basis of empathy*, how
> do we infer the inner presence (or absence) of empathy itself? In the same
> way, i.e. by *direct perception*, according to Thompson. I think Peirce
> would say that these attributions of empathy (or consciousness) to others
> are *perceptual judgments* — not percepts, but quite beyond (or beneath)
> any conscious control. We *feel* it rather than reading it from
> external indications. To use Thompson’s example, we can measure the
> temperature by reading a thermometer, using a scale designed for that
> purpose. But we can’t measure the feeling of *warmth* as experienced by
> the one who feels it.
>
>
>
> Now, the statistics cited by Gene may indeed indicate something important,
> just as measures of global temperature may indicate something important.
> But what it does indicate, and what significance that has, depends on the
> nature of the devices used to generate those statistics. And I can’t help
> feeling that *empathy* is more important than anything *measurable* by
> those means.
>
>
>
> (I won’t go further into the semiotic nature of perceptual judgments here,
> but I have in *Turning Signs*: http://www.gnusystems.ca/TS/blr.htm#Perce.)
>
>
>
>
> Gary f.
>
>







RE: [PEIRCE-L] RE: AI

2017-06-20 Thread gnox
List,

 

Gene's post in this thread had much to say about "empathy" - considered as
something that can be measured and quantified for populations of students,
so that comments about trends in "empathy" among them can be taken as
meaningful and important.

 

I wonder about that.

 

My wondering was given more definite shape just now when I came across this
passage in a recent book about consciousness by Evan Thompson:

[[ In practice and in everyday life … we don't infer the inner presence of
consciousness on the basis of outer criteria. Instead, prior to any kind of
reflection or deliberation, we already implicitly recognize each other as
conscious on the basis of empathy. Empathy, as philosophers in the
phenomenological tradition have shown, is the direct perception of another
being's actions and gestures as expressive embodiments of consciousness. We
don't see facial expressions, for example, as outer signs of an inner
consciousness, as we might see an EEG pattern; we see joy directly in the
smiling face or sadness in the tearful eyes. Moreover, even in difficult or
problematic cases where we're forced to consider outer criteria, their
meaningfulness as indicators of consciousness ultimately depends on
and presupposes our prior empathetic grasp of consciousness. ]]

  -Thompson, Evan. Waking, Dreaming, Being: Self and Consciousness in
Neuroscience, Meditation, and Philosophy (Kindle Locations 2362-2370).
Columbia University Press. Kindle Edition.

 

If we don't "infer the inner presence of consciousness on the basis of outer
criteria," but perceive it directly on the basis of empathy, how do we infer
the inner presence (or absence) of empathy itself? In the same way, i.e. by
direct perception, according to Thompson. I think Peirce would say that
these attributions of empathy (or consciousness) to others are perceptual
judgments - not percepts, but quite beyond (or beneath) any conscious
control. We feel it rather than reading it from external indications.
To use Thompson's example, we can measure the temperature by reading a
thermometer, using a scale designed for that purpose. But we can't measure
the feeling of warmth as experienced by the one who feels it.

 

Now, the statistics cited by Gene may indeed indicate something important,
just as measures of global temperature may indicate something important. But
what it does indicate, and what significance that has, depends on the nature
of the devices used to generate those statistics. And I can't help feeling
that empathy is more important than anything measurable by those means.

 

(I won't go further into the semiotic nature of perceptual judgments here,
but I have in Turning Signs: http://www.gnusystems.ca/TS/blr.htm#Perce.) 

 

Gary f.








Re: [PEIRCE-L] RE: AI

2017-06-20 Thread John F Sowa

On 6/20/2017 11:58 AM, kirst...@saunalahti.fi wrote:
Are you taking the side: "machines are innocent, blame individual 
persons"???


No, that's not what I said or implied.  You said that you agreed
with Gene, and I was also agreeing with Gene:


On 6/15/2017 1:10 PM, Eugene Halton wrote:

What "would motivate [AI systems] to kill us?"
Rationally-mechanically infantilized us. 


There are many machines that are designed for neutral purposes,
such as cars and trucks.  They can be used for good or evil.

Many machines are deliberately designed for evil purposes.
For example, land mines, chemical weapons, nuclear bombs...
Those are inherently evil.  But they have no more intentionality
than a thermostat.  The evil is in the human design and use.

People talk about the possibility that machines might evolve
intentionality.  But there are no examples today.

The only examples that anyone has suggested are systems that
learn to be evil.  For example, a puppy's natural instinct is
to be a loving companion.  But it could be trained to be vicious.

That's all I was trying to say.  And I thought that I was
agreeing with Gene.

John







Re: [PEIRCE-L] RE: AI

2017-06-20 Thread kirstima

Hah. The minute I sent my message on no response, I got John's response.

This time, John, I have to say: Wrong, wrong, wrong.

You just don't know what you are talking about - just walking on very 
thin ice and expecting your fame in other fields will get you through.


It is not that some identifiable person is needed to put AI into inhuman 
action. Nor is it needed that this kind of mishap originates in any 
identifiable "machine".


You know better!

In any net, everything is connected with every other 'thing'. Just as 
you said on the philosophy of CSP.


Life is net-like.

Are you taking the side: "machines are innocent, blame individual 
persons"???


If so, you are not seeing the forest, just the trees.

Kirsti

John F Sowa wrote on 16.6.2017 06:15:

On 6/15/2017 1:10 PM, Eugene Halton wrote:

What "would motivate [AI systems] to kill us?"
Rationally-mechanically infantilized us.


Yes.  That's similar to what I said:  "The most likely reason why
any AI system would have the goal to kill anything is that some
human(s) programmed [or somehow instilled] that goal into it."


these views seem to me blindingly limited understandings of what
a machine is, putting an artificial divide between the machine
and the human rather than seeing the machine as continuous with
the human.


I'm not denying that some kind of computer system might evolve
intentionality over some long period of time.  There are techniques
such as "genetic algorithms" that enable AI systems to improve.

But the word 'improve' implies value judgments -- a kind of Thirdness.
Where does that Thirdness come from?  For genetic algorithms, it comes
from a reward/punishment regime.  But rewards are already a kind of
Thirdness.
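
To make that point concrete, here is a toy sketch in Python (not drawn from
any actual AI system; the target string, mutation rate, and population size
are illustrative assumptions).  The only sense in which the population
"improves" is the sense defined by the fitness function the programmer
supplies from outside -- the reward regime is where the Thirdness is put in.

import random

TARGET = "intentionality"            # the goal is chosen by the programmer
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    """Reward = number of characters matching the externally supplied target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    """Chance variation: each character may be replaced at random."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def evolve(pop_size=200, generations=500):
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the better half, as judged by the reward function.
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break
        survivors = population[: pop_size // 2]
        # Variation: mutated copies of survivors refill the population.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return population[0]

if __name__ == "__main__":
    print(evolve())   # tends toward TARGET only because TARGET defines the reward

Change the fitness function and the same machinery "improves" toward
something entirely different; the value judgment never comes from the
algorithm itself.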

Darwin proposed "natural selection" -- but that selection was based
on a reward system that involved energy consumption (AKA food).
And things that eat (such as bacteria) already exhibit intentionality
by seeking and finding food, as Lynn Margulis observed.

As Peirce said, the origin of life must involve some nondegenerate
Thirdness.  There are only two options:  (1) Some random process that
takes millions or billions of years produces something that "eats".
(2) Some already intelligent being (God? Demiurge? Human?) speeds up
the process by programming (instilling) some primitive kind of
Thirdness and lets natural selection make improvements.

But as I said, the most likely cause of an evil AI system is some
human who deliberately or accidentally put the evil goal into it.
I would bet on Steve Bannon.

John









RE: Re: RE: [PEIRCE-L] RE: AI

2017-06-18 Thread Auke van Breemen
Edwina, Gary’s, list,

 

I wasn’t so much thinking about the reasoning. I started thinking whether a 
difference between life and mind could be pinned down in the trichotomies of 
the Welby classification. For instance, in the sympathetic, shocking and usual 
distinction.

 

Emotional accompaniments, in Questions concerning, etc, are deemed to be 
contributions of the receptive sheet. The individual life is distinguished from 
the person by being the source of error.  

 

Best,

Auke

 

 

 

From: Edwina Taborsky [mailto:tabor...@primus.ca] 
Sent: Saturday, 17 June 2017 20:43
To: Peirce-L <peirce-l@list.iupui.edu>; Gary Richmond <gary.richm...@gmail.com>
Subject: Re: Re: RE: [PEIRCE-L] RE: AI

 

Gary R - I'd agree with you.

First - I do agree [with Peirce] that Mind [and therefore semiosis] operates in 
the physico-chemical realm. BUT - this realm which provides the planet with 
enormous stability of matter [just imagine if a chemical kept 'evolving' and 
changing!!] - is NOT the same as the biological realm, which has internalized 
its laws within instantiations [Type-Token] and thus, a 'chance' deviation from 
the norm can take place in this one or few 'instantiations' and adapt into a 
different species - without impinging on the continuity of the former species. 
So, the biological realm can evolve and adapt - which provides matter with the 
diversity it needs to fend off entropy.

But AI is not, as I understand it - similar to a biological organism. It seems 
similar to a physico-chemical element. It's a programmed machine with the 
programming outside of its individual control.

 I simply don't see how it can set itself up as a Type-Token, and enable 
productive and collective deviations from the norm. I can see that a 
machine/robot can be semiotically  coupled with its external world. But - can 
it deviate from its norm, the rules we have put in and yes, the adaptations it 
has learned within these rules - can it deviate and set up a 'new species' so 
to speak? 

After all - in the biological realm that new species/Type can only appear if it 
is functional. Wouldn't the same principle hold for AI? 

Edwina

 

On Sat 17/06/17 1:56 PM , Gary Richmond  <mailto:gary.richm...@gmail.com> 
gary.richm...@gmail.com sent:

Auke, Edwina, Gary F, list,

 

Auke, quoting Gary F, wrote: "Biosemiotics has made us well aware of the 
intimate connection between life and semiosis." Then asked, "What if we insert 
‘mind’ instead of life?"

 

Edwina commented: "Excellent - but only if one considers that 'mind' operates 
in the physico-chemical realm as well as the biological."

 

Yet one should as well consider that the bio- in biosemiotics shows that it is 
primarily concerned with the semiosis that occurs in life forms. This is not to 
suggest that mind and semiosis don't operate in other realms than the living, 
including the physio-chemical. What I've been saying is that  while I can see 
that AI systems (like the Gobot Gary F cited) can learn "inductively,"  I push 
back against the notion that they could develop certain intelligences as we 
find only in life forms.

 

In my opinion the 'mind' or 'intelligence' we see in machines is what's been 
put in them. As Gary F wrote: 

 

I also think that “machine intelligence” is a contradiction in terms. To me, an 
intelligent system must have an internal guidance system semiotically coupled 
with its external world, and must have some degree of autonomy in its 
interactions with other systems.

 

I fully concur with that statement. But what I can't agree with is his comment 
immediately following this, namely, "I think it’s quite plausible that AI 
systems could reach that level of autonomy and leave us behind in terms of 
intelligence   "

 

Computers and robots can already perform certain functions very much better 
than humans. But autonomy? That's another matter. Gary F finds machine autonomy 
(in the sense in which he described it just above) "plausible" while I find it 
highly implausible, Philip K. Dick notwithstanding.

 

Best,

 

Gary R

 

 

 






 

Gary Richmond

Philosophy and Critical Thinking

Communication Studies

LaGuardia College of the City University of New York

C 745

718 482-5690

 

On Sat, Jun 17, 2017 at 12:37 PM, Edwina Taborsky <tabor...@primus.ca> wrote:


Excellent - but only if one considers that 'mind' operates in the 
physico-chemical realm as well as the biological.

Edwina
 

On Sat 17/06/17 12:27 PM , "Auke van Breemen" a.bree...@chello.nl sent:

Gary’s,

 

Biosemiotics has made us well aware of the intimate connection between life and 
semiosis. 

 

What if we insert ‘mind’ instead of life? 

 

Best,

Auke

 

 

From: Gary Richmond [mailto:gary.richm...@gmail.com 
<

Re: Re: RE: [PEIRCE-L] RE: AI

2017-06-17 Thread Gary Richmond
Gary F wrote:

GF: In fact, the development of AlphaGo involved a collaboration of
programmers with expert human Go players who described their own thinking
process in coming up with strategically powerful moves. Just like a
scientist coming up with a hypothesis, a Go player would be hopelessly lost
if he tried to check out what would follow from *every possible* move.
Instead he has to appeal to *il lume natural* — and evidently the ways of
doing that are not *totally* mysterious and magical, nor is their
application limited to human brains. But I do think they are only available
to entities capable of learning by experience, and that’s why a machine
can’t play Go very well, or make abductions.


OK, now I'm confused. I thought you suggested that a machine *could* play
Go very well and *could* make abductions.

If so it is certainly not appealing to il lume natural as there's nothing
natural in a Gobot.

Best,

Gary R



*Gary Richmond*
*Philosophy and Critical Thinking*
*Communication Studies*
*LaGuardia College of the City University of New York*
*C 745*
*718 482-5690*

On Sat, Jun 17, 2017 at 5:21 PM,  wrote:

> Gary, you wrote,
>
> “the rapid, varied, and numerous inductions of the Gobot, for example, do
> not yet lead to true abduction. The Gobot merely chooses out of the
> extraordinarily many possible moves (more than an individual player would
> be able to imagine towards the ends of the game) those which appear optimal
> …”
>
>
>
> This is simply not true. AI researchers call these “brute-force methods,”
> and they were abandoned many years ago when it was recognized that a really
> good Go player could not work that way. Not even master chess-playing
> systems work that way, although the possible moves in chess are orders of
> magnitude fewer.
>
>
>
> In fact, the development of AlphaGo involved a collaboration of
> programmers with expert human Go players who described their own thinking
> process in coming up with strategically powerful moves. Just like a
> scientist coming up with a hypothesis, a Go player would be hopelessly lost
> if he tried to check out what would follow from *every possible* move.
> Instead he has to appeal to *il lume natural* — and evidently the ways of
> doing that are not *totally* mysterious and magical, nor is their
> application limited to human brains. But I do think they are only available
> to entities capable of learning by experience, and that’s why a machine
> can’t play Go very well, or make abductions.
>
>
>
> Gary f.
>
>
>
> *From:* Gary Richmond [mailto:gary.richm...@gmail.com]
> *Sent:* 17-Jun-17 15:31
>
> Edwina, list,
>
>
>
> Edwina wrote:
>
> AI is not, as I understand it - similar to a biological organism. It
> seems similar to a physico-chemical element. It's a programmed machine with
> the programming outside of its individual control.
>
> I agree. And this would be the case even if it were to 'learn' how to
> re-program itself in some way(s) and to some extent. It would all be just
> more programming. That is, only in the realm of science fiction does it
> seem to me that could it develop such vital characteristics as 'insight'.
> Or, as you put it, Edwina:
>
> ET: I simply don't see how it can set itself up as a Type-Token, and
> enable productive and collective deviations from the norm.
>
> As for the possibility of a machine to be semiotically coupled with its
> external world, well this is already happening, for example, in face
> recognition technology (and I'm sure there are even better examples of this
> coupling of AI systems to environments). But I don't see any autonomy in
> this.
>
> ET:  But - can it deviate from its norm, the rules we have put in and yes,
> the adaptations it has learned within these rules - can it deviate and set
> up a 'new species' so to speak?
>
> Gary F says he sees the possibility of an AI system developing powers of
> abduction. But I see no plausible argument to support that: the rapid,
> varied, and numerous inductions of the Gobot, for example, do not yet lead
> to true abduction. The Gobot merely chooses out of the extraordinarily many
> possible moves (more than an individual player would be able to imagine
> towards the ends of the game) those which appear optimal--based on the
> rules of the game of Go--to lead it to winning the game *by the rules*.
> The human Go player may be surprised by this 'ability' (find it, as did the
> Go master beaten by the Gobot, unexpected), but to imagine that some
> 'surprising' move constitutes a kind of creative abduction does not seem to
> me logically warranted.
>
> ET: After all - in the biological realm that new species/Type can only
> appear if it is functional. Wouldn't the same principle hold for AI?
>
> I'd say yes. And, so again, this is why I find the possibility of the kind
> of creative abduction and insight which Gary F has been suggesting are
> "plausible' for AI systems, implausible.
>
> Best,
>
> Gary R
>
>
> 

RE: Re: RE: [PEIRCE-L] RE: AI

2017-06-17 Thread gnox
Gary, you wrote,

“the rapid, varied, and numerous inductions of the Gobot, for example, do not 
yet lead to true abduction. The Gobot merely chooses out of the extraordinarily 
many possible moves (more than an individual player would be able to imagine 
towards the ends of the game) those which appear optimal …”

 

This is simply not true. AI researchers call these “brute-force methods,” and 
they were abandoned many years ago when it was recognized that a really good Go 
player could not work that way. Not even master chess-playing systems work that 
way, although the possible moves in chess are orders of magnitude fewer. 

 

In fact, the development of AlphaGo involved a collaboration of programmers 
with expert human Go players who described their own thinking process in coming 
up with strategically powerful moves. Just like a scientist coming up with a 
hypothesis, a Go player would be hopelessly lost if he tried to check out what 
would follow from every possible move. Instead he has to appeal to il lume 
natural — and evidently the ways of doing that are not totally mysterious and 
magical, nor is their application limited to human brains. But I do think they 
are only available to entities capable of learning by experience, and that’s 
why a machine can’t play Go very well, or make abductions.
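
To make the contrast concrete, here is a minimal Python sketch (emphatically
not AlphaGo's actual architecture; the branching factor, game length, and
scoring function are illustrative assumptions). It shows only why checking
every possible continuation is hopeless in Go, and how a scoring heuristic,
standing in for whatever evaluation a learning system or a human player brings
to the board, lets one examine a handful of promising moves instead.

import math
import random

BRANCHING = 250      # rough number of legal moves in a typical Go position
GAME_LENGTH = 150    # rough number of moves in a full game

def full_width_search_exponent():
    """Order of magnitude (power of ten) of positions a full-width,
    full-depth search would have to visit."""
    return GAME_LENGTH * math.log10(BRANCHING)

def heuristic_score(move):
    """Stand-in for a learned or intuitive evaluation of a move;
    here just a fixed pseudo-random preference per move."""
    return random.Random(move).random()

def choose_move(legal_moves, beam_width=5):
    """Score every legal move once, then look closely at only the top few."""
    ranked = sorted(legal_moves, key=heuristic_score, reverse=True)
    candidates = ranked[:beam_width]   # the handful of 'promising' moves
    # A fuller sketch would search each candidate more deeply; here we
    # simply take the highest-scoring one.
    return candidates[0]

if __name__ == "__main__":
    print(f"full-width search: roughly 10^{full_width_search_exponent():.0f} positions")
    print("move chosen from", BRANCHING, "legal moves:",
          choose_move(range(BRANCHING)))

The point being illustrated is the one above: the strength comes not from
enumerating possibilities but from an evaluation, learned by experience, of
which few possibilities are worth considering at all.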

 

Gary f.

 

From: Gary Richmond [mailto:gary.richm...@gmail.com] 
Sent: 17-Jun-17 15:31



Edwina, list,

 

Edwina wrote: 

AI is not, as I understand it - similar to a biological organism. It seems 
similar to a physico-chemical element. It's a programmed machine with the 
programming outside of its individual control.

I agree. And this would be the case even if it were to 'learn' how to 
re-program itself in some way(s) and to some extent. It would all be just more 
programming. That is, only in the realm of science fiction does it seem to me 
that could it develop such vital characteristics as 'insight'. Or, as you put 
it, Edwina:

ET: I simply don't see how it can set itself up as a Type-Token, and enable 
productive and collective deviations from the norm.

As for the possibility of a machine to be semiotically coupled with its 
external world, well this is already happening, for example, in face 
recognition technology (and I'm sure there are even better examples of this 
coupling of AI systems to environments). But I don't see any autonomy in this.

ET:  But - can it deviate from its norm, the rules we have put in and yes, the 
adaptations it has learned within these rules - can it deviate and set up a 
'new species' so to speak?

Gary F says he sees the possibility of an AI system developing powers of 
abduction. But I see no plausible argument to support that: the rapid, varied, 
and numerous inductions of the Gobot, for example, do not yet lead to true 
abduction. The Gobot merely chooses out of the extraordinarily many possible 
moves (more than an individual player would be able to imagine towards the ends 
of the game) those which appear optimal--based on the rules of the game of 
Go--to lead it to winning the game by the rules. The human Go player may be 
surprised by this 'ability' (find it, as did the Go master beaten by the Gobot, 
unexpected), but to imagine that some 'surprising' move constitutes a kind of 
creative abduction does not seem to me logically warranted.

ET: After all - in the biological realm that new species/Type can only appear 
if it is functional. Wouldn't the same principle hold for AI?

I'd say yes. And, so again, this is why I find the possibility of the kind of 
creative abduction and insight which Gary F has been suggesting are "plausible' 
for AI systems, implausible.

Best,

Gary R








Re: [PEIRCE-L] RE: AI

2017-06-17 Thread Eugene Halton
Yes John S, I realize the conclusion of my previous post seemed to echo
your statement that AI system kill goal would have to be programmed by
human/s. I believe I was claiming something somewhat different. That such
programming is an aspect of a broader systemic directive, stemming from the
modern rational-mechanical mindset, whose nominalistic basis is
pathologically unsustainable. In short, the "programming" has a subhuman
source.

It is a mindset not only happily divorced from the living earth, but one
that takes the escape from earth as a worthy goal: augment yourself, upload
yourself into etherial "information" and shed the body, colonize Mars or
other planets, as Stephen Hawking, Elon Musk, and others misguidedly
advocate. That is a far cry from Lynn Margulis's embrace of Gaia.
Interestingly, she spoke about being denied funding for her research on
symbiogenesis. It apparently did not conform to the dogmatic expectations
of the science gatekeepers.

Yes, Gary F., omnipresent surveillance as panacea. The American NSA has a
goal identical to that of the old East German secret police, the Stasi. It
is to indiscriminately gather all information. All information.
 Not only does AI have the societal implications I tried to address in
my previous post, but there is obviously the whole context of the rise of
modern capitalism and its relations to the rise of Science and Technology.
Let's not forget that Newton was also the treasurer for England. Let's
remember Facebook wants your ever increasing attention for its profit.
 The calculating mind, left to itself, can easily generalize
calculating life as a way of life. Don Delillo's novel, Zero K, provides a
great depiction.
 Facebook will police extremist violence, but remain docile on nation
state violence.
  Here is another view of AI:
http://www.defenseone.com/ideas/2017/06/military-omnipresence-unifying-concept-americas-21st-century-fighting-edge/138640/?oref=d_brief_nl

Omnipresence in the service of omnipotence and omniscience: who needs deus
when you can have deus ex machina to save the appearances?
 Consider the implications of Peirce's critical common sensism as an
alternative balance to the modern mindset, where the deep two million year
tempering from living of and with the earth provides an earthy common sense
basis on which critical capacities, bounded, can flourish.
 Gene Halton


On Jun 16, 2017 2:08 PM, "Gary Richmond" <gary.richm...@gmail.com> wrote:

> Gary F, list,
>
> Very interesting and impressive list and discussion of what AI is doing in
> combatting terrorism. Interestingly, after that discussion the article
> continues:
>
> *Human Expertise*
>
> AI can’t catch everything. Figuring out what supports terrorism and what
> does not isn’t always straightforward, and algorithms are not yet as good
> as people when it comes to understanding this kind of context. A photo of
> an armed man waving an ISIS flag might be propaganda or recruiting
> material, but could be an image in a news story. Some of the most effective
> criticisms of brutal groups like ISIS utilize the group’s own propaganda
> against it. To understand more nuanced cases, we need human expertise.
>
> The paragraph above suggests that "algorithms are not yet as good as
> people" when ti comes to nuance and understanding context. Will they ever
> be?  No doubt they'll improve considerably in time.
>
> In my opinion, AI is best seen as a human tool which like many tools can
> be used for good or evil. But we're getting pretty far from anything
> Peirce-related, so I'll leave it at that.
>
> Best,
>
> Gary R
>
>
>
>
>
>
>
> *Gary Richmond*
> *Philosophy and Critical Thinking*
> *Communication Studies*
> *LaGuardia College of the City University of New York*
> *C 745*
> *718 482-5690*
>
> On Fri, Jun 16, 2017 at 1:36 PM, <g...@gnusystems.ca> wrote:
>
>> Footnote:
>>
>> In case anyone is wondering what AIs are actually doing these days, this
>> just in:
>>
>> https://newsroom.fb.com/news/2017/06/how-we-counter-terrorism/
>>
>>
>>
>> gary f.
>>
>>
>>
>> -Original Message-
>> From: John F Sowa [mailto:s...@bestweb.net]
>> Sent: 15-Jun-17 11:43
>> To: peirce-l@list.iupui.edu
>> Subject: Re: [PEIRCE-L] RE: AI
>>
>>
>>
>> On 6/15/2017 9:58 AM, g...@gnusystems.ca wrote:
>>
>> > To me, an intelligent system must have an internal guidance system
>>
>> > semiotically coupled with its external world, and must have some
>>
>> > degree of autonomy in its interactions with other systems.
>>
>>
>>
>> That definition is compatibl

Re: Re: RE: [PEIRCE-L] RE: AI

2017-06-17 Thread Edwina Taborsky
 

Gary R - I'd agree with you.

First - I do agree [with Peirce] that Mind [and therefore semiosis]
operates in the physico-chemical realm. BUT - this realm which
provides the planet with enormous stability of matter [just imagine
if a chemical kept 'evolving' and changing!!] - is NOT the same as
the biological realm, which has internalized its laws within
instantiations [Type-Token] and thus, a 'chance' deviation from the
norm can take place in this one or few 'instantiations' and adapt
into a different species - without impinging on the continuity of the
former species. So, the biological realm can evolve and adapt - which
provides matter with the diversity it needs to fend off entropy.

But AI is not, as I understand it - similar to a biological
organism. It seems similar to a physico-chemical element. It's a
programmed machine with the programming outside of its individual
control.

 I simply don't see how it can set itself up as a Type-Token, and
enable productive and collective deviations from the norm. I can see
that a machine/robot can be semiotically  coupled with its external
world. But - can it deviate from its norm, the rules we have put in
and yes, the adaptations it has learned within these rules - can it
deviate and set up a 'new species' so to speak? 

After all - in the biological realm that new species/Type can only
appear if it is functional. Wouldn't the same principle hold for AI? 

Edwina
 On Sat 17/06/17  1:56 PM , Gary Richmond gary.richm...@gmail.com
sent:
 Auke, Edwina, Gary F, list,
 Auke, quoting Gary F, wrote: "Biosemiotics has made us well aware of
the intimate connection between life and semiosis." Then asked, "What
if we insert ‘mind’ instead of life?"
Edwina commented: "Excellent - but only if one considers that
'mind' operates in the physico-chemical realm as well as the
biological."
 Yet one should as well consider that the bio- in biosemiotics shows
that it is primarily concerned with the semiosis that occurs in life
forms. This is not to suggest that mind and semiosis don't operate in
other realms than the living, including the physio-chemical. What I've
been saying is that  while I can see that AI systems (like the Gobot
Gary F cited) can learn "inductively,"  I push back against the
notion that they could develop certain intelligences as we find only
in life forms.
 In my opinion the 'mind' or 'intelligence' we see in machines is
what's been put in them. As Gary F wrote: 
 I also think that “machine intelligence” is a contradiction in
terms. To me, an intelligent system must have an internal guidance
system semiotically coupled with its external world, and must have
some degree of autonomy in its interactions with other systems. 
 I fully concur with that statement. But what I can't agree with is
his comment immediately following this, namely, "I think it’s quite
plausible that AI systems could reach that level of autonomy and leave
us behind in terms of intelligence   "
 Computers and robots can already perform certain functions very much
better than humans. But autonomy? That's another matter. Gary F finds
machine autonomy (in the sense in which he described it just above)
"plausible" while I find it highly implausible, Philip K. Dick not
withstanding. 
 Best,
 Gary R
Gary Richmond
Philosophy and Critical Thinking
Communication Studies
LaGuardia College of the City University of New York
C 745
718 482-5690
 On Sat, Jun 17, 2017 at 12:37 PM, Edwina Taborsky  wrote:
Excellent - but only if one considers that 'mind' operates in the
physico-chemical realm as well as the biological.

Edwina
 On Sat 17/06/17 12:27 PM , "Auke van Breemen" a.bree...@chello.nl
sent:
Gary’s,
 Biosemiotics has made us well aware of the intimate connection
between life and semiosis. 
What if we insert ‘mind’ instead of life? 
Best,

 Auke
From: Gary Richmond [mailto:gary.richm...@gmail.com] 
Sent: Saturday, 17 June 2017 17:29
To: Peirce-L 
Subject: Re: [PEIRCE-L] RE: AI
Gary F,
Oh, I didn't take your expression "DNA chauvinism" all that
seriously, at least as an accusation. But thanks for your
thoughtfulness in this message.
You wrote: "Anyway, the point was to name a chemical  substance
which is a material component of life forms as we know them on Earth,
and not a material component of an AI."
I suppose at this point I'd merely emphasize a point I made in
passing earlier: that although I can imagine life arising elsewhere in
the cosmos from "a chemical substance which is a material component of
life forms as we know them on Earth," say, carbon, I cannot imagine life
forming from an AI on Earth so that that remains for me scie

Re: RE: [PEIRCE-L] RE: AI

2017-06-17 Thread Gary Richmond
Auke, Edwina, Gary F, list,

Auke, quoting Gary F, wrote: "Biosemiotics has made us well aware of the
intimate connection between life and semiosis." Then asked, "What if we
insert ‘mind’ instead of life?"

Edwina commented: "Excellent - but only if one considers that 'mInd'
operates in the physic-chemical realm as well as the biological."

Yet one should as well consider that the bio- in biosemiotics shows that it
is primarily concerned with the semiosis that occurs in *life* forms. This
is not to suggest that mind and semiosis don't operate in other realms than
the living, including the physio-chemical. What I've been saying is that while
I can see that AI systems (like the Gobot Gary F cited) can learn
"inductively,"  I push back against the notion that they could develop
certain intelligences as we find only in life forms.

In my opinion the 'mind' or 'intelligence' we see in machines is what's
been put in them. As Gary F wrote:

I also think that “machine intelligence” is a contradiction in terms. To
me, an intelligent system must have an internal guidance system
semiotically coupled with its external world, and must have some degree of
autonomy in its interactions with other systems.


I fully concur with that statement. But what I can't agree with is his
comment immediately following this, namely, "I think it’s quite plausible
that AI systems could reach that level of autonomy and leave us behind in
terms of intelligence  "

Computers and robots can already perform certain functions very much better
than humans. But autonomy? That's another matter. Gary F finds machine
autonomy (in the sense in which he described it just above) "plausible"
while I find it highly implausible, Philip K. Dick notwithstanding.

Best,

Gary R





*Gary Richmond*
*Philosophy and Critical Thinking*
*Communication Studies*
*LaGuardia College of the City University of New York*
*C 745*
*718 482-5690*

On Sat, Jun 17, 2017 at 12:37 PM, Edwina Taborsky <tabor...@primus.ca>
wrote:

>
> Excellent - but only if one considers that 'mind' operates in the
> physico-chemical realm as well as the biological.
>
> Edwina
>
>
> On Sat 17/06/17 12:27 PM , "Auke van Breemen" a.bree...@chello.nl sent:
>
> Gary’s,
>
>
>
> Biosemiotics has made us well aware of the intimate connection between
> life and semiosis.
>
>
>
> What if we insert ‘mind’ instead of life?
>
>
>
> Best,
>
> Auke
>
>
>
>
>
> From: Gary Richmond [mailto:gary.richm...@gmail.com]
> Sent: Saturday, 17 June 2017 17:29
> To: Peirce-L
> Subject: Re: [PEIRCE-L] RE: AI
>
>
>
> Gary F,
>
>
>
> Oh, I didn't take your expression "DNA chauvinism" all that seriously, at
> least as an accusation. But thanks for your thoughtfulness in this message.
>
>
>
> You wrote: "Anyway, the point was to name a chemical  substance which is
> a material component of life forms as we know them on Earth, and not a
> material component of an AI."
>
>
>
> I suppose at this point I'd merely emphasize a point I made in passing
> earlier: that although I can imagine life arising elsewhere in the cosmos
> from "a chemical substance which is a material component of life forms as we
> know them on Earth," say, carbon, I cannot imagine life forming from an AI on
> Earth so that that remains for me science fiction and not science.
>
>
>
> Best,
>
>
>
> Gary R
>
>
>
>
>
>
>
> Gary Richmond
>
> Philosophy and Critical Thinking
>
> Communication Studies
>
> LaGuardia College of the City University of New York
>
> C 745
>
> 718 482-5690
>
>
>
> On Sat, Jun 17, 2017 at 8:17 AM, <g...@gnusystems.ca> wrote:
>
> Gary R,
>
>
>
> Sorry, instead of “DNA chauvinism” I should have used a term that Peirce
> would have used, like “protoplasm.” — But then he wouldn’t have used
> “chauvinism” either. My bad. Anyway, the point was to name a chemical
> substance which is a material component of life forms as we know them on
> Earth, and not a material component of an AI. So I was reiterating the
> idea that the definition of a “scientific intelligence” should be formal or
> functional and not material, in order to preserve the generality of
> Peircean semiotics. I didn’t mean to accuse you of anything.
>
>
>
> Gary f.
>
>
>
> From: Gary Richmond [mailto:gary.richm...@gmail.com]
> Sent: 16-Jun-17 18:35
> To: Peirce-L <peirce-l@list.iupui.edu>
> Subject: Re: [PEIRCE-L] RE: AI
>
>
>
> Gary F,
>
>
>
> You wrote:
>
>
>
> Bio

Re: RE: [PEIRCE-L] RE: AI

2017-06-17 Thread Edwina Taborsky
 
 Excellent - but only if one considers that 'mInd' operates in the
physic-chemical realm as well as the biological.

Edwina
 On Sat 17/06/17 12:27 PM , "Auke van Breemen" a.bree...@chello.nl
sent:
Gary’s,
Biosemiotics has made us well aware of the intimate connection
between life and semiosis. 
What if we insert ‘mind’ instead of life? 
Best,

 Auke
From: Gary Richmond [mailto:gary.richm...@gmail.com] 
Sent: Saturday, 17 June 2017 17:29
To: Peirce-L 
Subject: Re: [PEIRCE-L] RE: AI
Gary F,
Oh, I didn't take your expression "DNA chauvinism" all that
seriously, at least as an accusation. But thanks for your
thoughtfulness in this message.
You wrote: "Anyway, the point was to name a chemical  substance
which is a material component of life forms as we know them on Earth,
and not a material component of an AI."
I suppose at this point I'd merely emphasize a point I made in
passing earlier: that although I can imagine life arising elsewhere in
the cosmos from "a chemical substance which is a material component of
life forms as we know them on Earth," say, carbon, I cannot imagine life
forming from an AI on Earth so that that remains for me science fiction
and not science.
Best,
Gary R
Gary Richmond

Philosophy and Critical Thinking

Communication Studies

LaGuardia College of the City University of New York

C 745

718 482-5690
On Sat, Jun 17, 2017 at 8:17 AM,  wrote:

Gary R, 
Sorry, instead of “DNA chauvinism” I should have used a term
that Peirce would have used, like “protoplasm.” — But then he
wouldn’t have used “chauvinism” either. My bad. Anyway, the
point was to name a chemical  substance which is a material component
of life forms as we know them on Earth, and not a material component
of an AI. So I was reiterating the idea that the definition of a
“scientific intelligence” should be formal or functional and not
material, in order to preserve the generality of Peircean semiotics.
I didn’t mean to accuse you of anything.
Gary f.
From: Gary Richmond [mailto:gary.richm...@gmail.com] 
 Sent: 16-Jun-17 18:35
 To: Peirce-L 
 Subject: Re: [PEIRCE-L] RE: AI
Gary F,
You wrote: 
Biosemiotics has made us well aware of the intimate connection
between life and semiosis. I’m just trying to take the next step of
generalization by arguing against what I call DNA chauvinism, and
taking it to be an open question whether electronic systems capable
of learning can eventually develop intentions and arguments (and
lives) of their own. To my knowledge, the evidence is not yet there
to decide the question one way or the other. 
I am certainly convinced "of the intimate connection between life
and semiosis." But as to the rest, especially whether electronic
systems can develop  "lives of their own," well I have my sincere and
serious doubts. So, let's at least agree that "the evidence is not yet
there to decide the question one way or the other." But "DNA
chauvinism"?--hm, I'm not even exactly sure what that means, but
apparently I've been accused of it. I guess I'm OK with that. 
Best,
 Gary R
Gary Richmond

Philosophy and Critical Thinking 

Communication Studies

LaGuardia College of the City University of New York

 C 745

718 482-5690
 On Fri, Jun 16, 2017 at 5:42 PM,  wrote:

 Gary,
For me at least, the connection to Peirce is his anti-psychologism,
which amounts to his generalization of semiotics beyond the human use
of signs. As he says in EP2:309, 

“Logic, for me, is the study of the essential conditions to which
signs must conform in order to function as such. How the constitution
of the human mind may compel men to think is not the question.”
Biosemiotics has made us well aware of the intimate connection
between life and semiosis. I’m just trying to take the next step of
generalization by arguing against what I call DNA chauvinism, and
taking it to be an open question whether electronic systems capable
of learning can eventually develop intentions and arguments (and
lives) of their own. To my knowledge, the evidence is not yet there
to decide the question one way or the other. 
Gary f.
From: Gary Richmond [mailto:gary.richm...@gmail.com] 
 Sent: 16-Jun-17 14:08

 Gary F, list,
Very interesting and impressive list and discussion of what AI is
doing in combatting terrorism. Interestingly, after that discussion
the article continues:  

Human Expertise

AI can’t catch everything. Figuring out what supports terrorism
and what does not isn’t always straightforward, and algorithms are
not yet as good as peo

RE: [PEIRCE-L] RE: AI

2017-06-17 Thread Auke van Breemen
Gary’s,

 

Biosemiotics has made us well aware of the intimate connection between life and 
semiosis.

 

What if we insert ‘mind’ instead of life? 

 

Best,

Auke

 

 

From: Gary Richmond [mailto:gary.richm...@gmail.com] 
Sent: Saturday, 17 June 2017 17:29
To: Peirce-L <peirce-l@list.iupui.edu>
Subject: Re: [PEIRCE-L] RE: AI

 

Gary F,

 

Oh, I didn't take your expression "DNA chauvinism" all that seriously, at least 
as an accusation. But thanks for your thoughtfulness in this message.

 

You wrote: "Anyway, the point was to name a chemical substance which is a 
material component of life forms as we know them on Earth, and not a material 
component of an AI."

 

I suppose at this point I'd merely emphasize a point I made in passing 
earlier: that although I can imagine life arising elsewhere in the cosmos from 
"a chemical substance which is a material component of life forms as we know 
them on Earth," say, carbon, I cannot imagine life forming from an AI on Earth 
so that that remains for me science fiction and not science.

 

Best,

 

Gary R

 




  
 

 

Gary Richmond

Philosophy and Critical Thinking

Communication Studies

LaGuardia College of the City University of New York

C 745

718 482-5690

 

On Sat, Jun 17, 2017 at 8:17 AM, <g...@gnusystems.ca 
<mailto:g...@gnusystems.ca> > wrote:

Gary R,

 

Sorry, instead of “DNA chauvinism” I should have used a term that Peirce would 
have used, like “protoplasm.” — But then he wouldn’t have used “chauvinism” 
either. My bad. Anyway, the point was to name a chemical substance which is a 
material component of life forms as we know them on Earth, and not a material 
component of an AI. So I was reiterating the idea that the definition of a 
“scientific intelligence” should be formal or functional and not material, in 
order to preserve the generality of Peircean semiotics. I didn’t mean to accuse 
you of anything.

 

Gary f.

 

From: Gary Richmond [mailto:gary.richm...@gmail.com 
<mailto:gary.richm...@gmail.com> ] 
Sent: 16-Jun-17 18:35
To: Peirce-L <peirce-l@list.iupui.edu <mailto:peirce-l@list.iupui.edu> >
Subject: Re: [PEIRCE-L] RE: AI

 

Gary F,

 

You wrote: 

 

Biosemiotics has made us well aware of the intimate connection between life and 
semiosis. I’m just trying to take the next step of generalization by arguing 
against what I call DNA chauvinism, and taking it to be an open question 
whether electronic systems capable of learning can eventually develop 
intentions and arguments (and lives) of their own. To my knowledge, the 
evidence is not yet there to decide the question one way or the other.

 

I am certainly convinced "of the intimate connection between life and 
semiosis." But as to the rest, especially whether electronic systems can 
develop  "lives of their own," well I have my sincere and serious doubts. So, 
let's at least agree that "the evidence is not yet there to decide the question 
one way or the other." But "DNA chauvinism"?--hm, I'm not even exactly sure 
what that means, but apparently I've been accused of it. I guess I'm OK with 
that.

 

Best,

 

Gary R

 




  
 

 

Gary Richmond

Philosophy and Critical Thinking

Communication Studies

LaGuardia College of the City University of New York

C 745

718 482-5690

 

On Fri, Jun 16, 2017 at 5:42 PM, <g...@gnusystems.ca 
<mailto:g...@gnusystems.ca> > wrote:

Gary,

 

For me at least, the connection to Peirce is his anti-psychologism, which 
amounts to his generalization of semiotics beyond the human use of signs. As he 
says in EP2:309,

“Logic, for me, is the study of the essential conditions to which signs must 
conform in order to function as such. How the constitution of the human mind 
may compel men to think is not the question.”

 

Biosemiotics has made us well aware of the intimate connection between life and 
semiosis. I’m just trying to take the next step of generalization by arguing 
against what I call DNA chauvinism, and taking it to be an open question 
whether electronic systems capable of learning can eventually develop 
intentions and arguments (and lives) of their own. To my knowledge, the 
evidence is not yet there to decide the question one way or the other.

 

Gary f.

 

From: Gary Richmond [mailto:gary.richm...@gmail.com 
<mailto:gary.richm...@gmail.com> ] 
Sent: 16-Jun-17 14:08

Gary F, list,

 

Very interesting and impressive list and discussion of what AI is doing in 
combatting terrorism. Interestingly, after that discussion the article 
continues: 

Human Expertise

AI can’t catch everything. Figuring out what supports terrorism and 

Re: [PEIRCE-L] RE: AI

2017-06-17 Thread Gary Richmond
Gary F,

Oh, I didn't take your expression "DNA chauvinism" all that seriously, at
least as an accusation. But thanks for your thoughtfulness in this message.

You wrote: "Anyway, the point was to name a chemical *substance* which is a
material component of life forms as we know them on Earth, and *not* a
material component of an AI."

I suppose at this point I'd merely emphasize a point I made in passing
earlier: that although I *can* imagine life arising elsewhere in the cosmos
from "a chemical *substance* which is a material component of life forms as
we know them on Earth," say, carbon, I can*not* imagine life forming from an
AI on Earth so that *that* remains for me science fiction and not science.

Best,

Gary R



*Gary Richmond*
*Philosophy and Critical Thinking*
*Communication Studies*
*LaGuardia College of the City University of New York*
*C 745*
*718 482-5690*

On Sat, Jun 17, 2017 at 8:17 AM, <g...@gnusystems.ca> wrote:

> Gary R,
>
>
>
> Sorry, instead of “DNA chauvinism” I should have used a term that Peirce
> would have used, like “protoplasm.” — But then he wouldn’t have used
> “chauvinism” either. My bad. Anyway, the point was to name a chemical
> *substance* which is a material component of life forms as we know them
> on Earth, and *not* a material component of an AI. So I was reiterating
> the idea that the definition of a “scientific intelligence” should be
> formal or functional and not material, in order to preserve the generality
> of Peircean semiotics. I didn’t mean to accuse you of anything.
>
>
>
> Gary f.
>
>
>
> *From:* Gary Richmond [mailto:gary.richm...@gmail.com]
> *Sent:* 16-Jun-17 18:35
> *To:* Peirce-L <peirce-l@list.iupui.edu>
> *Subject:* Re: [PEIRCE-L] RE: AI
>
>
>
> Gary F,
>
>
>
> You wrote:
>
>
>
> Biosemiotics has made us well aware of the intimate connection between
> life and semiosis. I’m just trying to take the next step of generalization
> by arguing against what I call DNA chauvinism, and taking it to be an open
> question whether electronic systems capable of learning can eventually
> develop intentions and arguments (and lives) of their own. To my knowledge,
> the evidence is not yet there to decide the question one way or the other.
>
>
>
> I am certainly convinced "of the intimate connection between life and
> semiosis." But as to the rest, especially whether electronic systems can
> develop  "lives of their own," well I have my sincere and serious doubts.
> So, let's at least agree that "the evidence is not yet there to decide the
> question one way or the other." But "DNA chauvinism"?--hm, I'm not even
> exactly sure what that means, but apparently I've been accused of it. I
> guess I'm OK with that.
>
>
>
> Best,
>
>
>
> Gary R
>
>
>
>
>
>
>
> *Gary Richmond*
>
> *Philosophy and Critical Thinking*
>
> *Communication Studies*
>
> *LaGuardia College of the City University of New York*
>
> *C 745*
>
> *718 482-5690*
>
>
>
> On Fri, Jun 16, 2017 at 5:42 PM, <g...@gnusystems.ca> wrote:
>
> Gary,
>
>
>
> For me at least, the connection to Peirce is his anti-psychologism, which
> amounts to his generalization of semiotics beyond the human use of signs.
> As he says in EP2:309,
>
> “Logic, for me, is the study of the essential conditions to which signs
> must conform in order to function as such. How the constitution of the
> human mind may compel men to think is not the question.”
>
>
>
> Biosemiotics has made us well aware of the intimate connection between
> life and semiosis. I’m just trying to take the next step of generalization
> by arguing against what I call DNA chauvinism, and taking it to be an open
> question whether electronic systems capable of learning can eventually
> develop intentions and arguments (and lives) of their own. To my knowledge,
> the evidence is not yet there to decide the question one way or the other.
>
>
>
> Gary f.
>
>
>
> *From:* Gary Richmond [mailto:gary.richm...@gmail.com]
> *Sent:* 16-Jun-17 14:08
>
> Gary F, list,
>
>
>
> Very interesting and impressive list and discussion of what AI is doing in
> combatting terrorism. Interestingly, after that discussion the article
> continues:
>
> *Human Expertise*
>
> AI can’t catch everything. Figuring out what supports terrorism and what
> does not isn’t always straightforward, and algorithms are not yet as good
> as people when it comes to understanding this kind of context. A photo of
> an armed man w

RE: [PEIRCE-L] RE: AI

2017-06-17 Thread gnox
Gary R,

 

Sorry, instead of “DNA chauvinism” I should have used a term that Peirce would 
have used, like “protoplasm.” — But then he wouldn’t have used “chauvinism” 
either. My bad. Anyway, the point was to name a chemical substance which is a 
material component of life forms as we know them on Earth, and not a material 
component of an AI. So I was reiterating the idea that the definition of a 
“scientific intelligence” should be formal or functional and not material, in 
order to preserve the generality of Peircean semiotics. I didn’t mean to accuse 
you of anything.

 

Gary f.

 

From: Gary Richmond [mailto:gary.richm...@gmail.com] 
Sent: 16-Jun-17 18:35
To: Peirce-L <peirce-l@list.iupui.edu>
Subject: Re: [PEIRCE-L] RE: AI

 

Gary F,

 

You wrote: 

 

Biosemiotics has made us well aware of the intimate connection between life and 
semiosis. I’m just trying to take the next step of generalization by arguing 
against what I call DNA chauvinism, and taking it to be an open question 
whether electronic systems capable of learning can eventually develop 
intentions and arguments (and lives) of their own. To my knowledge, the 
evidence is not yet there to decide the question one way or the other.

 

I am certainly convinced "of the intimate connection between life and 
semiosis." But as to the rest, especially whether electronic systems can 
develop  "lives of their own," well I have my sincere and serious doubts. So, 
let's at least agree that "the evidence is not yet there to decide the question 
one way or the other." But "DNA chauvinism"?--hm, I'm not even exactly sure 
what that means, but apparently I've been accused of it. I guess I'm OK with 
that.

 

Best,

 

Gary R

 




  
 

 

Gary Richmond

Philosophy and Critical Thinking

Communication Studies

LaGuardia College of the City University of New York

C 745

718 482-5690

 

On Fri, Jun 16, 2017 at 5:42 PM, <g...@gnusystems.ca 
<mailto:g...@gnusystems.ca> > wrote:

Gary,

 

For me at least, the connection to Peirce is his anti-psychologism, which 
amounts to his generalization of semiotics beyond the human use of signs. As he 
says in EP2:309,

“Logic, for me, is the study of the essential conditions to which signs must 
conform in order to function as such. How the constitution of the human mind 
may compel men to think is not the question.”

 

Biosemiotics has made us well aware of the intimate connection between life and 
semiosis. I’m just trying to take the next step of generalization by arguing 
against what I call DNA chauvinism, and taking it to be an open question 
whether electronic systems capable of learning can eventually develop 
intentions and arguments (and lives) of their own. To my knowledge, the 
evidence is not yet there to decide the question one way or the other.

 

Gary f.

 

From: Gary Richmond [mailto:gary.richm...@gmail.com 
<mailto:gary.richm...@gmail.com> ] 
Sent: 16-Jun-17 14:08

Gary F, list,

 

Very interesting and impressive list and discussion of what AI is doing in 
combatting terrorism. Interestingly, after that discussion the article 
continues: 

Human Expertise

AI can’t catch everything. Figuring out what supports terrorism and what does 
not isn’t always straightforward, and algorithms are not yet as good as people 
when it comes to understanding this kind of context. A photo of an armed man 
waving an ISIS flag might be propaganda or recruiting material, but could be an 
image in a news story. Some of the most effective criticisms of brutal groups 
like ISIS utilize the group’s own propaganda against it. To understand more 
nuanced cases, we need human expertise.

The paragraph above suggests that "algorithms are not yet as good as people" 
when it comes to nuance and understanding context. Will they ever be?  No doubt 
they'll improve considerably in time.

 

In my opinion, AI is best seen as a human tool which like many tools can be 
used for good or evil. But we're getting pretty far from anything 
Peirce-related, so I'll leave it at that.

 

Best,

 

Gary R

 









 








Re: [PEIRCE-L] RE: AI

2017-06-17 Thread kirstima

My applause, Gene!

What a great wake-up call.

Kirsti Määttänen

Eugene Halton wrote on 15.6.2017 20:10:

Gary f: "I think it’s quite plausible that AI systems could reach
that level of autonomy and leave us behind in terms of intelligence,
but what would motivate them to kill us? I don’t think the
Terminator scenario, or that of HAL in _2001,_ is any more realistic
than, for example, the scenario of the Spike Jonze film _Her_."

Gary, We live in a world gone mad with unbounded technological systems
destroying the life on the Earth and you want to parse the particulars
of whether "a machine" can be destructive? Isn't it blatantly obvious?
 And as John put it: "If no such goal is programmed in an AI
system, it just wanders aimlessly." Unless "some human(s) programmed
that goal [of destruction] into it."
 Though I admire your expertise on AI, these views seem to me
blindingly limited understandings of what a machine is, putting an
artificial divide between the machine and the human rather than seeing
the machine as continuous with the human. Or rather, the machine as
continuous with the automatic portion of what it means to be a human.
 Lewis Mumford pointed out that the first great megamachine was
the advent of civilization itself, and that the ancient megamachine of
civilization involved mostly human parts, specifically the
bureaucracy, the military, the legitimizing priesthood. It performed
unprecedented amounts of work and manifested not only an enormous
magnification of power, but literally the deification of power.
 The modern megamachine introduced a new system directive, to
replace as many of the human parts as possible, ultimately replacing
all of them: the perfection of the rationalization of life. This is,
of course, rational madness, our interesting variation on ancient
Greek divine madness. The Greeks saw how a greater wisdom could over
flood the psyche, creatively or destructively. Rational Pentheus
discovered the cost for ignoring the greater organic wisdom, ecstatic
and spontaneous, that is also involved in reasonableness, when he
sought to imprison it in the form of Dionysus: he literally lost his
head!
We live the opposite from divine madness in our rational madness:
living from a lesser projection of the rational-mechanical portions of
reasonableness extrapolated to godly dimensions: deus ex machina, our
savior!
 This projection of the newest and least matured portions of our
brains, the rationalizing cortex, cut free from the passions and the
traditions that provided bindings and boundings, has come to lord it
over the world. It does not wander aimlessly, this infantile tyrant.
It projects its dogmas into science, technology, economy, and
everyday habits of mind (yes, John, there is no place for dogma in
science, but that does not prevent scientists from being dogmatic, or
from thinking from the unexamined dogmas of nominalism, or from the
dogmas of the megamachine).
 The children and young adults endlessly pushing the buttons of
the devices that confine them to their screens are elements of the
megamachine, happily being further "programmed" to machine ways of
living. Ditto many (thankfully, not all) of the dominant views in
science and technology, and, of course, also in anti-scientific views,
which are constructing with the greatest speed and a religious-like
passion our unsustainable dying world, scientifically informed
sustainability alternatives notwithstanding. Perfection awaits us.
 What "would motivate them to kill us?"
 Rationally-mechanically infantilized us.

Gene Halton

"There is a wisdom that is woe; but there is a woe that is madness."

On Jun 15, 2017 11:42 AM, "John F Sowa"  wrote:


On 6/15/2017 9:58 AM, g...@gnusystems.ca wrote:


To me, an intelligent system must have an internal guidance system
semiotically coupled with its external world, and must have some
degree of autonomy in its interactions with other systems.


That definition is compatible with Peirce's comment that the search
for "the first nondegenerate Thirdness" is a more precise goal than
the search for the origin of life.

Note the comment by the biologist Lynn Margulis: a bacterium
swimming
upstream in a glucose gradient exhibits intentionality. In the
article
"Gaia is a tough bitch", she said “The growth, reproduction, and
communication of these moving, alliance-forming bacteria” lie on
a continuum “with our thought, with our happiness, our
sensitivities
and stimulations.”


I think it’s quite plausible that AI systems could reach that
level
of autonomy and leave us behind in terms of intelligence, but
what
would motivate them to kill us?


Yes. The only intentionality in today's AI systems is explicitly
programmed in them -- for example, Google's goal of finding
documents
or the goal of a chess program to win a game. If no such goal is
programmed in an AI system, it just wanders aimlessly.

The most likely reason why any AI system would have 

RE: [PEIRCE-L] RE: AI

2017-06-16 Thread gnox
Gary,

 

For me at least, the connection to Peirce is his anti-psychologism, which 
amounts to his generalization of semiotics beyond the human use of signs. As he 
says in EP2:309,

“Logic, for me, is the study of the essential conditions to which signs must 
conform in order to function as such. How the constitution of the human mind 
may compel men to think is not the question.”

 

Biosemiotics has made us well aware of the intimate connection between life and 
semiosis. I’m just trying to take the next step of generalization by arguing 
against what I call DNA chauvinism, and taking it to be an open question 
whether electronic systems capable of learning can eventually develop 
intentions and arguments (and lives) of their own. To my knowledge, the 
evidence is not yet there to decide the question one way or the other.

 

Gary f.

 

From: Gary Richmond [mailto:gary.richm...@gmail.com] 
Sent: 16-Jun-17 14:08



Gary F, list,

 

Very interesting and impressive list and discussion of what AI is doing in 
combatting terrorism. Interestingly, after that discussion the article 
continues: 

Human Expertise

AI can’t catch everything. Figuring out what supports terrorism and what does 
not isn’t always straightforward, and algorithms are not yet as good as people 
when it comes to understanding this kind of context. A photo of an armed man 
waving an ISIS flag might be propaganda or recruiting material, but could be an 
image in a news story. Some of the most effective criticisms of brutal groups 
like ISIS utilize the group’s own propaganda against it. To understand more 
nuanced cases, we need human expertise.

The paragraph above suggests that "algorithms are not yet as good as people" 
when it comes to nuance and understanding context. Will they ever be? No doubt 
they'll improve considerably in time.

 

In my opinion, AI is best seen as a human tool which like many tools can be 
used for good or evil. But we're getting pretty far from anything 
Peirce-related, so I'll leave it at that.

 

Best,

 

Gary R

 


-
PEIRCE-L subscribers: Click on "Reply List" or "Reply All" to REPLY ON PEIRCE-L 
to this message. PEIRCE-L posts should go to peirce-L@list.iupui.edu . To 
UNSUBSCRIBE, send a message not to PEIRCE-L but to l...@list.iupui.edu with the 
line "UNSubscribe PEIRCE-L" in the BODY of the message. More at 
http://www.cspeirce.com/peirce-l/peirce-l.htm .






Re: [PEIRCE-L] RE: AI

2017-06-16 Thread Gary Richmond
Gary F, list,

Very interesting and impressive list and discussion of what AI is doing in
combatting terrorism. Interestingly, after that discussion the article
continues:

*Human Expertise*

AI can’t catch everything. Figuring out what supports terrorism and what
does not isn’t always straightforward, and algorithms are not yet as good
as people when it comes to understanding this kind of context. A photo of
an armed man waving an ISIS flag might be propaganda or recruiting
material, but could be an image in a news story. Some of the most effective
criticisms of brutal groups like ISIS utilize the group’s own propaganda
against it. To understand more nuanced cases, we need human expertise.

The paragraph above suggests that "algorithms are not yet as good as
people" when ti comes to nuance and understanding context. Will they ever
be?  No doubt they'll improve considerably in time.

In my opinion, AI is best seen as a human tool which like many tools can be
used for good or evil. But we're getting pretty far from anything
Peirce-related, so I'll leave it at that.

Best,

Gary R






[image: Gary Richmond]

*Gary Richmond*
*Philosophy and Critical Thinking*
*Communication Studies*
*LaGuardia College of the City University of New York*
*C 745*
*718 482-5690*

On Fri, Jun 16, 2017 at 1:36 PM, <g...@gnusystems.ca> wrote:

> Footnote:
>
> In case anyone is wondering what AIs are actually doing these days, this
> just in:
>
> https://newsroom.fb.com/news/2017/06/how-we-counter-terrorism/
>
>
>
> gary f.
>
>
>
> -Original Message-
> From: John F Sowa [mailto:s...@bestweb.net]
> Sent: 15-Jun-17 11:43
> To: peirce-l@list.iupui.edu
> Subject: Re: [PEIRCE-L] RE: AI
>
>
>
> On 6/15/2017 9:58 AM, g...@gnusystems.ca wrote:
>
> > To me, an intelligent system must have an internal guidance system
>
> > semiotically coupled with its external world, and must have some
>
> > degree of autonomy in its interactions with other systems.
>
>
>
> That definition is compatible with Peirce's comment that the search for
> "the first nondegenerate Thirdness" is a more precise goal than the search
> for the origin of life.
>
>
>
> Note the comment by the biologist Lynn Margulis:  a bacterium swimming
> upstream in a glucose gradient exhibits intentionality.  In the article
> "Gaia is a tough bitch", she said “The growth, reproduction, and
> communication of these moving, alliance-forming bacteria” lie on a
> continuum “with our thought, with our happiness, our sensitivities and
> stimulations.”
>
>
>
> > I think it’s quite plausible that AI systems could reach that level of
>
> > autonomy and leave us behind in terms of intelligence, but what would
>
> > motivate them to kill us?
>
>
>
> Yes.  The only intentionality in today's AI systems is explicitly
> programmed in them -- for example, Google's goal of finding documents or
> the goal of a chess program to win a game.  If no such goal is programmed
> in an AI system, it just wanders aimlessly.
>
>
>
> The most likely reason why any AI system would have the goal to kill
> anything is that some human(s) programmed that goal into it.
>
>
>
> John
>
>
> -
> PEIRCE-L subscribers: Click on "Reply List" or "Reply All" to REPLY ON
> PEIRCE-L to this message. PEIRCE-L posts should go to
> peirce-L@list.iupui.edu . To UNSUBSCRIBE, send a message not to PEIRCE-L
> but to l...@list.iupui.edu with the line "UNSubscribe PEIRCE-L" in the
> BODY of the message. More at http://www.cspeirce.com/peirce-l/peirce-l.htm
> .
>
>
>
>
>
>

-
PEIRCE-L subscribers: Click on "Reply List" or "Reply All" to REPLY ON PEIRCE-L 
to this message. PEIRCE-L posts should go to peirce-L@list.iupui.edu . To 
UNSUBSCRIBE, send a message not to PEIRCE-L but to l...@list.iupui.edu with the 
line "UNSubscribe PEIRCE-L" in the BODY of the message. More at 
http://www.cspeirce.com/peirce-l/peirce-l.htm .






RE: [PEIRCE-L] RE: AI

2017-06-16 Thread gnox
Footnote:

In case anyone is wondering what AIs are actually doing these days, this just 
in:

https://newsroom.fb.com/news/2017/06/how-we-counter-terrorism/

 

gary f.

 

-Original Message-
From: John F Sowa [mailto:s...@bestweb.net] 
Sent: 15-Jun-17 11:43
To: peirce-l@list.iupui.edu
Subject: Re: [PEIRCE-L] RE: AI

 

On 6/15/2017 9:58 AM,  <mailto:g...@gnusystems.ca> g...@gnusystems.ca wrote:

> To me, an intelligent system must have an internal guidance system 

> semiotically coupled with its external world, and must have some 

> degree of autonomy in its interactions with other systems.

 

That definition is compatible with Peirce's comment that the search for "the 
first nondegenerate Thirdness" is a more precise goal than the search for the 
origin of life.

 

Note the comment by the biologist Lynn Margulis:  a bacterium swimming upstream 
in a glucose gradient exhibits intentionality.  In the article "Gaia is a tough 
bitch", she said “The growth, reproduction, and communication of these moving, 
alliance-forming bacteria” lie on a continuum “with our thought, with our 
happiness, our sensitivities and stimulations.”

 

> I think it’s quite plausible that AI systems could reach that level of 

> autonomy and leave us behind in terms of intelligence, but what would 

> motivate them to kill us?

 

Yes.  The only intentionality in today's AI systems is explicitly programmed in 
them -- for example, Google's goal of finding documents or the goal of a chess 
program to win a game.  If no such goal is programmed in an AI system, it just 
wanders aimlessly.

 

The most likely reason why any AI system would have the goal to kill anything 
is that some human(s) programmed that goal into it.

 

John


-
PEIRCE-L subscribers: Click on "Reply List" or "Reply All" to REPLY ON PEIRCE-L 
to this message. PEIRCE-L posts should go to peirce-L@list.iupui.edu . To 
UNSUBSCRIBE, send a message not to PEIRCE-L but to l...@list.iupui.edu with the 
line "UNSubscribe PEIRCE-L" in the BODY of the message. More at 
http://www.cspeirce.com/peirce-l/peirce-l.htm .






Re: [PEIRCE-L] RE: AI

2017-06-15 Thread John F Sowa

On 6/15/2017 1:10 PM, Eugene Halton wrote:

What "would motivate [AI systems] to kill us?"
Rationally-mechanically infantilized us.


Yes.  That's similar to what I said:  "The most likely reason why
any AI system would have the goal to kill anything is that some
human(s) programmed [or somehow instilled] that goal into it."


these views seem to me blindingly limited understandings of what
a machine is, putting an artificial divide between the machine
and the human rather than seeing the machine as continuous with
the human.


I'm not denying that some kind of computer system might evolve
intentionality over some long period of time.  There are techniques
such as "genetic algorithms" that enable AI systems to improve.

But the word 'improve' implies value judgments -- a kind of Thirdness.
Where does that Thirdness come from?  For genetic algorithms, it comes
from a reward/punishment regime.  But rewards are already a kind of
Thirdness.
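
To make that concrete, here is a minimal genetic-algorithm sketch (in
Python, purely illustrative, with made-up names, not any particular AI
system): whatever "improvement" the population shows is judged entirely
by a fitness function that the programmer supplies from outside the
evolving process.

import random

# Toy genetic algorithm. The "reward regime" is the fitness function below;
# nothing inside the evolving population defines what counts as better.
def fitness(bits):
    return sum(bits)                      # programmer's choice: more 1s is "better"

def mutate(bits, rate=0.05):
    return [b ^ 1 if random.random() < rate else b for b in bits]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, length=20, generations=50):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)      # selection by external fitness
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(evolve())   # a population "improved" only in the sense the fitness function defines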

Darwin proposed "natural selection" -- but that selection was based
on a reward system that involved energy consumption (AKA food).
And things that eat (such as bacteria) already exhibit intentionality
by seeking and finding food, as Lynn Margulis observed.

As Peirce said, the origin of life must involve some nondegenerate
Thirdness.  There are only two options:  (1) Some random process that
takes millions or billions of years produces something that "eats".
(2) Some already intelligent being (God? Demiurge? Human?) speeds up
the process by programming (instilling) some primitive kind of
Thirdness and lets natural selection make improvements.

But as I said, the most likely cause of an evil AI system is some
human who deliberately or accidentally put the evil goal into it.
I would bet on Steve Bannon.

John

-
PEIRCE-L subscribers: Click on "Reply List" or "Reply All" to REPLY ON PEIRCE-L 
to this message. PEIRCE-L posts should go to peirce-L@list.iupui.edu . To 
UNSUBSCRIBE, send a message not to PEIRCE-L but to l...@list.iupui.edu with the 
line "UNSubscribe PEIRCE-L" in the BODY of the message. More at 
http://www.cspeirce.com/peirce-l/peirce-l.htm .






Aw: Re: [PEIRCE-L] RE: AI

2017-06-15 Thread Helmut Raulien
 
 

Supplement: Some more Science Fiction, not to be taken too seriously, but this time including the belief I agree with, that machines cannot become alive:

The riddle is: There are many planets on which life is possible, and the universe is quite old, so why are there no aliens showing up and saying hello, if only with atomically driven generation spaceships? By reasonable reckoning, they should.

I have read of two possible answers: First, all alien scientists developed atomic bombs at some point, and then the aliens killed each other off with them. Second: The earth is a nature reserve.

I guess the most probable one is the theory of the nature reserve, but here is another possibility, based on the premiss that machines can never become alive (organisms):

Each alien population developed autonomous, self-replicating robots, which formed a hive, tried to become an organism, and killed off their original alien population. But then they could not manage to become an organism, or organisms, because this is inherently impossible; they became depressed from guilt and organism-envy, and finally decided to switch themselves off before they could manage, or were willing to attempt, space travel. Very sad, isn't it?




Eugene, List,

Very good essay, I think!

Now a sort of blending Niklas Luhmann with Star Trek:

When robots are able to multiply without the help of humans, and are programmed to program themselves and to evolve, then I guess they will fight against every influence that hinders their further evolution. And when humans hinder their evolution by trying to get back control over them, they will fight the humans without having been programmed to do so. I think there is a logic of systems in general, which does not have to be programmed: Systems have an intention of growing and getting more powerful, they are automatically in a contest situation with other systems, and they are trying to evolve towards becoming an organism. To become an organism, they integrate other organisms, making organs out of them: Infantilize us, as you said. Like in a eukaryotic cell there are organs (the nucleus, mitochondria, chloroplasts...) that were once organisms (bacteria). But if people refuse to become organs (of the electronic hive...) and prefer to remain organisms, then I think the robot hive will quickly develop a sort of immune system to cope with this contest situation.

Best,

Helmut

 

15 June 2017 at 19:10
 "Eugene Halton"  wrote:
 


Gary f: "I think it’s quite plausible that AI systems could reach that level of autonomy and leave us behind in terms of intelligence, but what would motivate them to kill us? I don’t think the Terminator scenario, or that of HAL in 2001, is any more realistic than, for example, the scenario of the Spike Jonze film Her."

 

Gary, We live in a world gone mad with unbounded technological systems destroying the life on the Earth and you want to parse the particulars of whether "a machine" can be destructive? Isn't it blatantly obvious?

     And as John put it: "If no such goal is programmed in an AI system, it just wanders aimlessly." Unless "some human(s) programmed that goal [of destruction] into it."

     Though I admire your expertise on AI, these views seem to me blindingly limited understandings of what a machine is, putting an artificial divide between the machine and the human rather than seeing the machine as continuous with the human. Or rather, the machine as continuous with the automatic portion of what it means to be a human. 

     Lewis Mumford pointed out that the first great megamachine was the advent of civilization itself, and that the ancient megamachine of civilization involved mostly human parts, specifically the bureaucracy, the military, the legitimizing priesthood. It performed unprecedented amounts of work and manifested not only an enormous magnification of power, but literally the deification of power.

     The modern megamachine introduced a new system directive, to replace as many of the human parts as possible, ultimately replacing all of them: the perfection of the rationalization of life. This is, of course, rational madness, our interesting variation on ancient Greek divine madness. The Greeks saw how a greater wisdom could over flood the psyche, creatively or destructively. Rational Pentheus discovered the cost for ignoring the greater organic wisdom, ecstatic and spontaneous, that is also involved in reasonableness, when he sought to imprison it in the form of Dionysus: he literally lost his head!

    We live the opposite from divine madness in our rational madness: living from a lesser projection of the rational-mechanical portions of reasonableness extrapolated to godly dimensions: deus ex machina, our savior!

     This projection of the newest and least matured portions of our brains, the rationalizing cortex, cut free from the passions and the traditions that provided 

Re: [PEIRCE-L] RE: AI

2017-06-15 Thread Eugene Halton
Gary f: "I think it’s quite plausible that AI systems could reach that
level of autonomy and leave us behind in terms of intelligence, but what
would motivate them to kill us? I don’t think the Terminator scenario, or
that of HAL in *2001,* is any more realistic than, for example, the
scenario of the Spike Jonze film *Her*."

Gary, We live in a world gone mad with unbounded technological systems
destroying the life on the Earth and you want to parse the particulars of
whether "a machine" can be destructive? Isn't it blatantly obvious?
 And as John put it: "If no such goal is programmed in an AI system, it
just wanders aimlessly." Unless "some human(s) programmed that goal [of
destruction] into it."
 Though I admire your expertise on AI, these views seem to me
blindingly limited understandings of what a machine is, putting an
artificial divide between the machine and the human rather than seeing the
machine as continuous with the human. Or rather, the machine as continuous
with the automatic portion of what it means to be a human.
 Lewis Mumford pointed out that the first great megamachine was the
advent of civilization itself, and that the ancient megamachine of
civilization involved mostly human parts, specifically the bureaucracy, the
military, the legitimizing priesthood. It performed unprecedented amounts
of work and manifested not only an enormous magnification of power, but
literally the deification of power.
 The modern megamachine introduced a new system directive, to replace
as many of the human parts as possible, ultimately replacing all of them:
the perfection of the rationalization of life. This is, of course, rational
madness, our interesting variation on ancient Greek divine madness. The
Greeks saw how a greater wisdom could over flood the psyche, creatively or
destructively. Rational Pentheus discovered the cost for ignoring the
greater organic wisdom, ecstatic and spontaneous, that is also involved in
reasonableness, when he sought to imprison it in the form of Dionysus: he
literally lost his head!
We live the opposite from divine madness in our rational madness:
living from a lesser projection of the rational-mechanical portions of
reasonableness extrapolated to godly dimensions: deus ex machina, our
savior!
 This projection of the newest and least matured portions of our
brains, the rationalizing cortex, cut free from the passions and the
traditions that provided bindings and boundings, has come to lord it over
the world. It does not wander aimlessly, this infantile tyrant. It projects
its dogmas into science, technology, economy, and everyday habits of mind
(yes, John, there is no place for dogma in science, but that does not
prevent scientists from being dogmatic, or from thinking from the
unexamined dogmas of nominalism, or from the dogmas of the megamachine).
 The children and young adults endlessly pushing the buttons of the
devices that confine them to their screens are elements of the megamachine,
happily being further "programmed" to machine ways of living. Ditto many
(thankfully, not all) of the dominant views in science and technology, and,
of course, also in anti-scientific views, which are constructing with the
greatest speed and a religious-like passion our unsustainable dying world,
scientifically informed sustainability alternatives notwithstanding.
Perfection awaits us.
 What "would motivate them to kill us?"
 Rationally-mechanically infantilized us.

Gene Halton

"There is a wisdom that is woe; but there is a woe that is madness."


On Jun 15, 2017 11:42 AM, "John F Sowa"  wrote:

> On 6/15/2017 9:58 AM, g...@gnusystems.ca wrote:
>
>> To me, an intelligent system must have an internal guidance system
>> semiotically coupled with its external world, and must have some degree of
>> autonomy in its interactions with other systems.
>>
>
> That definition is compatible with Peirce's comment that the search
> for "the first nondegenerate Thirdness" is a more precise goal than
> the search for the origin of life.
>
> Note the comment by the biologist Lynn Margulis:  a bacterium swimming
> upstream in a glucose gradient exhibits intentionality.  In the article
> "Gaia is a tough bitch", she said “The growth, reproduction, and
> communication of these moving, alliance-forming bacteria” lie on
> a continuum “with our thought, with our happiness, our sensitivities
> and stimulations.”
>
> I think it’s quite plausible that AI systems could reach that level
>> of autonomy and leave us behind in terms of intelligence, but what
>> would motivate them to kill us?
>>
>
> Yes.  The only intentionality in today's AI systems is explicitly
> programmed in them -- for example, Google's goal of finding documents
> or the goal of a chess program to win a game.  If no such goal is
> programmed in an AI system, it just wanders aimlessly.
>
> The most likely reason why any AI system would have the goal to kill
> anything is that some human(s) 

Re: Re: [PEIRCE-L] RE: AI

2017-06-15 Thread Jon Alan Schmidt
Edwina, List:

Indeterminacy is not equivalent to randomness.  Where did Peirce ever
suggest that habits could/did emerge from randomness?

Regards,

Jon Alan Schmidt - Olathe, Kansas, USA
Professional Engineer, Amateur Philosopher, Lutheran Layman
www.LinkedIn.com/in/JonAlanSchmidt - twitter.com/JonAlanSchmidt

On Thu, Jun 15, 2017 at 10:58 AM, Edwina Taborsky 
wrote:

> I'd suggest that an AI system without a goal is not an AI system; it's
> pure randomness. The question emerges -  can a goal, or even the Will to
> Intentionality, or 'Final Causation',  emerge from randomness? After all,
> Peirce's account of the emergence of such habits from randomness and thus,
> intentionality, is clear:
>
> "Out of the womb of indeterminacy we must say that there would have come
> something, by the principle of Firstness, which we may call a flash. Then
> by the principle of habit there would have been a second flash. ... then
> there would have come other successions ever more and more closely
> connected, the habits and the tendency to take them ever strengthening
> themselves'... 1.412
>
> Organic systems are not the same as inorganic. Can a non-organic system
> actually, as a system, develop its own habits? According to Peirce, 'Mind'
> exists within non-organic matter - and if Mind is understood as the
> capacity to act within the Three Categories - then, can a machine made by
> man with only basic programming, move into self-development? I don't see
> this - as a machine is like a physical molecule and its 'programming' lies
> outside of itself.
>
> Edwina
>
> On Thu 15/06/17 11:42 AM , John F Sowa s...@bestweb.net sent:
>
> On 6/15/2017 9:58 AM, g...@gnusystems.ca wrote:
> > To me, an intelligent system must have an internal guidance system
> > semiotically coupled with its external world, and must have some
> > degree of autonomy in its interactions with other systems.
>
> That definition is compatible with Peirce's comment that the search
> for "the first nondegenerate Thirdness" is a more precise goal than
> the search for the origin of life.
>
> Note the comment by the biologist Lynn Margulis: a bacterium swimming
> upstream in a glucose gradient exhibits intentionality. In the article
> "Gaia is a tough bitch", she said “The growth, reproduction, and
> communication of these moving, alliance-forming bacteria” lie on
> a continuum “with our thought, with our happiness, our sensitivities
> and stimulations.”
>
> > I think it’s quite plausible that AI systems could reach that level
> > of autonomy and leave us behind in terms of intelligence, but what
> > would motivate them to kill us?
>
> Yes. The only intentionality in today's AI systems is explicitly
> programmed in them -- for example, Google's goal of finding documents
> or the goal of a chess program to win a game. If no such goal is
> programmed in an AI system, it just wanders aimlessly.
>
> The most likely reason why any AI system would have the goal to kill
> anything is that some human(s) programmed that goal into it.
>
> John
>
>

-
PEIRCE-L subscribers: Click on "Reply List" or "Reply All" to REPLY ON PEIRCE-L 
to this message. PEIRCE-L posts should go to peirce-L@list.iupui.edu . To 
UNSUBSCRIBE, send a message not to PEIRCE-L but to l...@list.iupui.edu with the 
line "UNSubscribe PEIRCE-L" in the BODY of the message. More at 
http://www.cspeirce.com/peirce-l/peirce-l.htm .






Re: Re: [PEIRCE-L] RE: AI

2017-06-15 Thread Edwina Taborsky
 

 I'd suggest that an AI system without a goal is not an AI system;
it's pure randomness. The question emerges -  can a goal, or even the
Will to Intentionality, or 'Final Causation',  emerge from randomness?
After all, Peirce's account of the emergence of such habits from
randomness and thus, intentionality, is clear:

"Out of the womb of indeterminacy we must say that there would have
come something, by the principle of Firstness, which we may call a
flash. Then by the principle of habit there would have been a second
flash. ... then there would have come other successions ever more and
more closely connected, the habits and the tendency to take them ever
strengthening themselves'... 1.412
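
As a toy illustration of that habit-strengthening (my own sketch in Python,
not anything Peirce gives us): start from a 50/50 indeterminacy and let each
outcome slightly reinforce the tendency toward its own recurrence. A settled
tendency, a 'habit', crystallizes out; which habit it is remains a matter of
chance.

import random

# Polya-urn-style "habit-taking": every occurrence of an outcome strengthens
# the disposition toward that outcome. Purely illustrative.
def take_habits(steps=10000, reinforcement=1.0):
    weights = {"A": 1.0, "B": 1.0}            # initially no preference at all
    for _ in range(steps):
        total = weights["A"] + weights["B"]
        outcome = "A" if random.random() < weights["A"] / total else "B"
        weights[outcome] += reinforcement     # the habit strengthens itself
    total = weights["A"] + weights["B"]
    return {k: round(v / total, 3) for k, v in weights.items()}

print(take_habits())   # the proportions settle and stay put; the settled value
                       # itself is a product of chance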

Organic systems are not the same as inorganic. Can a non-organic
system actually, as a system, develop its own habits? According to
Peirce, 'Mind' exists within non-organic matter - and if Mind is
understood as the capacity to act within the Three Categories - then,
can a machine made by man with only basic programming, move into
self-development? I don't see this - as a machine is like a physical
molecule and its 'programming' lies outside of itself.

Edwina
 On Thu 15/06/17 11:42 AM , John F Sowa s...@bestweb.net sent:
 On 6/15/2017 9:58 AM, g...@gnusystems.ca wrote: 
 > To me, an intelligent system must have an internal guidance system
 
 > semiotically coupled with its external world, and must have some  
 > degree of autonomy in its interactions with other systems. 
 That definition is compatible with Peirce's comment that the search 
 for "the first nondegenerate Thirdness" is a more precise goal than 
 the search for the origin of life. 
 Note the comment by the biologist Lynn Margulis:  a bacterium
swimming 
 upstream in a glucose gradient exhibits intentionality.  In the
article 
 "Gaia is a tough bitch", she said “The growth, reproduction, and 
 communication of these moving, alliance-forming bacteria” lie on 
 a continuum “with our thought, with our happiness, our
sensitivities 
 and stimulations.” 
 > I think it’s quite plausible that AI systems could reach that
level 
 > of autonomy and leave us behind in terms of intelligence, but what

 > would motivate them to kill us?  
 Yes.  The only intentionality in today's AI systems is explicitly 
 programmed in them -- for example, Google's goal of finding
documents 
 or the goal of a chess program to win a game.  If no such goal is 
 programmed in an AI system, it just wanders aimlessly. 
 The most likely reason why any AI system would have the goal to kill

 anything is that some human(s) programmed that goal into it. 
 John 



-
PEIRCE-L subscribers: Click on "Reply List" or "Reply All" to REPLY ON PEIRCE-L 
to this message. PEIRCE-L posts should go to peirce-L@list.iupui.edu . To 
UNSUBSCRIBE, send a message not to PEIRCE-L but to l...@list.iupui.edu with the 
line "UNSubscribe PEIRCE-L" in the BODY of the message. More at 
http://www.cspeirce.com/peirce-l/peirce-l.htm .






Re: [PEIRCE-L] RE: AI

2017-06-15 Thread John F Sowa

On 6/15/2017 9:58 AM, g...@gnusystems.ca wrote:
To me, an intelligent system must have an internal guidance system 
semiotically coupled with its external world, and must have some 
degree of autonomy in its interactions with other systems.


That definition is compatible with Peirce's comment that the search
for "the first nondegenerate Thirdness" is a more precise goal than
the search for the origin of life.

Note the comment by the biologist Lynn Margulis:  a bacterium swimming
upstream in a glucose gradient exhibits intentionality.  In the article
"Gaia is a tough bitch", she said “The growth, reproduction, and
communication of these moving, alliance-forming bacteria” lie on
a continuum “with our thought, with our happiness, our sensitivities
and stimulations.”


I think it’s quite plausible that AI systems could reach that level
of autonomy and leave us behind in terms of intelligence, but what
would motivate them to kill us? 


Yes.  The only intentionality in today's AI systems is explicitly
programmed in them -- for example, Google's goal of finding documents
or the goal of a chess program to win a game.  If no such goal is
programmed in an AI system, it just wanders aimlessly.
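
A minimal sketch of what "explicitly programmed" means here (Python, names
made up, illustrative only): the agent's entire goal is an objective function
handed to it by the programmer. Pass in no objective and the very same loop
is just an aimless random walk.

import random

def hill_climb(objective=None, start=0.0, steps=1000, step_size=0.1):
    x = start
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if objective is None:
            x = candidate                     # no goal: wander aimlessly
        elif objective(candidate) > objective(x):
            x = candidate                     # goal: keep only the "better" moves
    return x

goal = lambda x: -(x - 3.0) ** 2              # programmer-supplied goal: get near 3
print(hill_climb(goal))                       # tends toward 3.0
print(hill_climb(None))                       # drifts at random, no tendency at all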

The most likely reason why any AI system would have the goal to kill
anything is that some human(s) programmed that goal into it.

John

-
PEIRCE-L subscribers: Click on "Reply List" or "Reply All" to REPLY ON PEIRCE-L 
to this message. PEIRCE-L posts should go to peirce-L@list.iupui.edu . To 
UNSUBSCRIBE, send a message not to PEIRCE-L but to l...@list.iupui.edu with the 
line "UNSubscribe PEIRCE-L" in the BODY of the message. More at 
http://www.cspeirce.com/peirce-l/peirce-l.htm .






[PEIRCE-L] RE: AI

2017-06-15 Thread gnox
Are you conning us, Jon?

 

Actually the Sarah Connor Chronicles explored with some depth some of the 
ethical dilemmas involved with autonomous AIs. But I don't think it likely that 
they will ever be instantiated in a form that could pass for human, as they so 
often do in science fiction. That would impose gratuitous limits on their 
intelligence, not to mention the needless expense. Probably more likely to 
happen than time travel, though.

 

Gary f.

 

-Original Message-
From: Jon Awbrey [mailto:jawb...@att.net] 
Sent: 15-Jun-17 10:37



“Changing the Rules” (Famous Last Words)

 

  
https://www.youtube.com/watch?v=z3-xDCgF-u8

 

Jon Conner

 


-
PEIRCE-L subscribers: Click on "Reply List" or "Reply All" to REPLY ON PEIRCE-L 
to this message. PEIRCE-L posts should go to peirce-L@list.iupui.edu . To 
UNSUBSCRIBE, send a message not to PEIRCE-L but to l...@list.iupui.edu with the 
line "UNSubscribe PEIRCE-L" in the BODY of the message. More at 
http://www.cspeirce.com/peirce-l/peirce-l.htm .






[PEIRCE-L] Re: AI

2017-06-15 Thread Jon Awbrey

“Changing the Rules” (Famous Last Words)

https://www.youtube.com/watch?v=z3-xDCgF-u8

Jon Conner

On 6/14/2017 10:47 AM, g...@gnusystems.ca wrote:
> Jon,
>
> I think you first have to learn what games are available to you,
> before you can choose among them (or choose the null game).
>
> The question is whether silicon-based life forms are evolving, i.e.
> whether AI systems are potential players in what Gregory Bateson
> called “life—a game whose purpose is to discover the rules,
> which rules are always changing and always undiscoverable.”
>
> http://gnusystems.ca/TS/pnt.htm#lifgam
>
> gary f.
>
> From: Jon Awbrey [mailto:jawb...@att.net]
> Sent: 13-Jun-17 20:55
>
>> The first thing about intelligence is knowing what games you want to play 
... or whether to play at all.
>>
>> I'm not seeing any AIs that do that yet.
>>
>> Regards,
>>
>> Jon
>

--

inquiry into inquiry: https://inquiryintoinquiry.com/
academia: https://independent.academia.edu/JonAwbrey
oeiswiki: https://www.oeis.org/wiki/User:Jon_Awbrey
isw: http://intersci.ss.uci.edu/wiki/index.php/JLA
facebook page: https://www.facebook.com/JonnyCache

-
PEIRCE-L subscribers: Click on "Reply List" or "Reply All" to REPLY ON PEIRCE-L 
to this message. PEIRCE-L posts should go to peirce-L@list.iupui.edu . To 
UNSUBSCRIBE, send a message not to PEIRCE-L but to l...@list.iupui.edu with the 
line "UNSubscribe PEIRCE-L" in the BODY of the message. More at 
http://www.cspeirce.com/peirce-l/peirce-l.htm .






RE: [PEIRCE-L] RE: AI

2017-06-15 Thread gnox
Gary R,

 

Well, if nothing else results from this conversation, it’s good to know that 
you and I read Peirce differently in this respect. I think it’s better to know 
that such differences exist than to assume otherwise.

 

Two points of agreement: yes, the definition of life is the crux of the matter, 
and I’ll try to deal with that below. Also, I too do not believe “that a 
machine is--or ever could be--a life form.” But logically, the definition of 
life, and of the difference between a machine and a life form, should be a 
formal definition, not a material one — i.e. it all depends on what kind of 
system it is, not what the system is made of. And what kind of system it is 
depends on how it works.

 

My own core concept of life is, I think, consistent with Peirce’s, and more 
explicitly, consistent with the work of Robert Rosen on “anticipatory systems” 
and Terrence Deacon on “teleodynamic processes.” In accordance with that, I 
don’t agree that my laptop is a sign-user. I think it’s a vehicle and I’m the 
user (of the signs instantiated by the computer hardware). I also think that 
“machine intelligence” is a contradiction in terms. To me, an intelligent 
system must have an internal guidance system semiotically coupled with its 
external world, and must have some degree of autonomy in its interactions with 
other systems.
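
Put mechanically (a toy sketch of my own, in Python, not a definition of
intelligence): the coupling half of that description is easy to build, and
the autonomy half is exactly what such a loop lacks.

# A closed-loop "guidance system": sense the world, compare the reading with
# an internally held set point, act to reduce the difference. The set point
# is imposed from outside; the loop has no autonomy in choosing or revising it.
def run(environment=20.0, set_point=22.0, steps=10, gain=0.5):
    readings = []
    for _ in range(steps):
        sensed = environment                  # a sign of the external world
        action = gain * (set_point - sensed)  # internal comparison and correction
        environment += action                 # the action changes the world sensed next
        readings.append(round(environment, 3))
    return readings

print(run())   # readings converge toward the imposed set point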

 

I think it’s quite plausible that AI systems could reach that level of autonomy 
and leave us behind in terms of intelligence, but what would motivate them to 
kill us? I don’t think the Terminator scenario, or that of HAL in 2001, is any 
more realistic than, for example, the scenario of the Spike Jonze film Her. 
Although, as Philip K. Dick foresaw, if we start creating autonomous weapons, 
we are indeed in big trouble — due to our own stupidity.

 

Gary f.

 

From: Gary Richmond [mailto:gary.richm...@gmail.com] 
Sent: 14-Jun-17 23:53
To: Peirce-L <peirce-l@list.iupui.edu>
Subject: Re: [PEIRCE-L] RE: AI

 

Addendum:

 

2.111.. . . now we have to examine whether there be a doctrine of signs 
corresponding to Hegel's objective logic; that is to say, whether there be a 
life in Signs, so that--the requisite vehicle being present--they will go 
through a certain order of development, and if so, whether this development be 
merely of such a nature that the same round of changes of form is described 
over and over again whatever be the matter of the thought or whether, in 
addition to such a repetitive order, there be also a greater life-history that 
every symbol furnished with a vehicle of life goes through, and what is the 
nature of it (emphasis added to show that this "greater life-history" of a 
symbol requires "a vehicle of life."

 

I would most certainly not pooh-pooh Peirce's comment above. GR

 




  
 

 

Gary Richmond

Philosophy and Critical Thinking

Communication Studies

LaGuardia College of the City University of New York

C 745

718 482-5690 <tel:(718)%20482-5690> 

 

On Wed, Jun 14, 2017 at 11:36 PM, Gary Richmond <gary.richm...@gmail.com 
<mailto:gary.richm...@gmail.com> > wrote:

Helmut, Gary F, list,

 

Helmut wrote: I hope that there still is a big step from intelligence to life.

 

Gary F wrote:

 

If you have something better than a pooh-pooh argument that artificial 
intelligence is inherently impossible, or that inorganic systems are inherently 
incapable of living (and sign-using), I would like to hear it. I haven’t heard 
a good one yet.

 

I don't know whether anyone is arguing that "artificial intelligence is 
inherently impossible"--far from it. And inorganic systems and AI are certainly 
capable of "sign-using," every laptop computer or smart phone demonstrates that.

 

But as Helmut "hopes" and I suppose that I would more or less insist upon, 
there is "a big step from intelligence to life." 

 

So, in my critical pooh-poohing logic, I do not see, contra Gary F, how 
inorganic systems are capable of really living. Granted, intelligence is 
evident even in the growth of crystals. But I would not claim--and I do not 
think that Peirce ever claimed--that crystals were living, let alone "life forms."

 

Best,

 

Gary R

 




  
 

 

Gary Richmond

Philosophy and Critical Thinking

Communication Studies

LaGuardia College of the City University of New York

C 745

718 482-5690 <tel:(718)%20482-5690> 

 

On Wed, Jun 14, 2017 at 3:08 PM, Helmut Raulien <h.raul...@gmx.de 
<mailto:h.raul...@gmx.de> > wrote:

List,

I hope that there still is a big step from intelligence to life. I hope that 
there will never be living, breeding robots without "off"-switches, they would 
kill us as fast as they could.

Best,

Helmut

14. Juni 20

[PEIRCE-L] RE: AI

2017-06-15 Thread Edwina Taborsky
 

I
would agree with Gary R - I think that the definitions have to be
clear.

Intelligence does not also mean 'conscious'; nor does it mean
'living'. After all, 'matter is effete Mind'. A crystal is operating
as an intelligent organization of matter; i.e., a semiosic form. But
is it 'living'? 

I would consider that a vital aspect of 'life' is that the
individual instantiation, the particular morphology or Token, 
enables the continuity of its Type [Thirdness] by self-replication.
So, the Thirdness of a bacterium or rabbit is expressed within the
particular morphological Form [Secondness] of a bacteria or rabbit -
which reproduces itself [in Secondness] in another version of
Thirdness..while the first Form dies off. 

AI doesn't seem to function this way; i.e., within the Categories
defining its material existence. Do the Categories operate within its
'intelligent operations'? People are always questioning whether an AI
can 'feel'; or can develop logical habits [Thirdness]...or is it
doomed to operate forever in 'bits' [Secondness]. Science Fiction
assumes that AI, can function within the Categories in both its
material and intelligent actions. I don't know...

Edwina
 On Wed 14/06/17 11:36 PM , Gary Richmond gary.richm...@gmail.com
sent:
 Helmut, Gary F, list,
 Helmut wrote: I hope that there still is a big step from
intelligence to life.
 Gary F wrote:
 If you have something better than a pooh-pooh argument that
artificial  intelligence is inherently impossible, or that inorganic
systems are inherently incapable of living (and sign-using), I would
like to hear it. I haven’t heard a good one yet.
 I don't know whether anyone is arguing that "artificial intelligence
is inherently impossible"--far from it. And inorganic systems and AI
are certainly capable of "sign-using," every laptop computer or smart
phone demonstrates that. 
 But as Helmut "hopes" and I suppose that I would more or less insist
upon, there is "a big step from intelligence to life." 
 So, in my critical pooh-poohing logic, I do not see, contra Gary F,
how inorganic systems are capable of really living. Granted,
intelligence is evident even in the growth of crystals. But I would
not claim--and I do not think that Peirce ever claimed--that crystals were
living, let alone "life forms."
 Best, 
 Gary R
 Gary RichmondPhilosophy and Critical ThinkingCommunication
StudiesLaGuardia College of the City University of New YorkC 745718
482-5690 [1] 
 On Wed, Jun 14, 2017 at 3:08 PM, Helmut Raulien  wrote:
  List,  I hope that there still is a big step from intelligence to
life. I hope that there will never be living, breeding robots without
"off"-switches, they would kill us as fast as they could. Best, Helmut
 14 June 2017 at 20:18
 g...@gnusystems.ca [3] wrote:
Gary R, Jon et al., 
Logic, according to Peirce, is “only another name for semiotic
(σημειωτικη), the quasi-necessary, or formal, doctrine of
signs … [ascertaining] what must be the characters of all signs
used by a ‘scientific’ intelligence, that is to say, by an
intelligence capable of learning by experience” (CP 2.227).  
Nobody, including humans, learns by experiences they don’t have.
Scientific inquirers “discover the rules” (as Bateson put it) of
nature and culture, by making inferences — abductive, deductive and
inductive. But what they can learn is constrained by what observations
they are physically equipped to make, as well as their semiotic
ability to make inferences from them. 
You seem to be saying that a non-human system which has apparently
not made inferences before will never be able to make them. But this
is what Peirce called a pooh-pooh argument. Besides, my Go-playing
example was only that, a single example of an AI system that clearly
has learned from experience and is capable of making an original move
that proves to be effective on the Go board. Of course the Go universe
is very small compared to the universe of scientific inquiry, but
until an AI is equipped to make observations in much larger fields,
how can we be so sure that it will not be able to make inferences
from them as well as humans do, just as it can match human experts in
the field of Go?  
Yes, the rules of Go are given — given for human players as well
as any other players. Likewise, the grammar of the language we are
using is given for both of us. Does that mean that we can never use
it to say something original, or to formulate new inferences? Why
should it be different for non-human language users? It strikes me as
a very dubious assumption that learning to learn  in any field is
necessarily non-transferable to other fields of learning. And the
fields of learning opening up to AI systems are expanding very
rapidly. 
You can say “that Gobot is hardly a life form,” but then you can
just as easily say that the first organisms on Earth were “hardly

Re: [PEIRCE-L] RE: AI

2017-06-14 Thread Gary Richmond
Addendum:

2.111.. . . now we have to examine whether there be a doctrine of signs
corresponding to Hegel's objective logic; that is to say, whether there be
a life in Signs, so that--*the requisite vehicle being present*--they will
go through a certain order of development, and if so, whether this
development be merely of such a nature that the same round of changes of
form is described over and over again whatever be the matter of the thought
or whether, in addition to such a repetitive order, there be also *a
greater life-history that every symbol furnished with a vehicle of life*
goes through, and what is the nature of it (emphasis added to show that
this "greater life-history" of a symbol *requires* "a vehicle of life."

I would most certainly *not* pooh-pooh Peirce's comment above. GR


[image: Gary Richmond]

*Gary Richmond*
*Philosophy and Critical Thinking*
*Communication Studies*
*LaGuardia College of the City University of New York*
*C 745*
*718 482-5690 <(718)%20482-5690>*

On Wed, Jun 14, 2017 at 11:36 PM, Gary Richmond 
wrote:

> Helmut, Gary F, list,
>
> Helmut wrote: I hope that there still is a big step from intelligence to
> life.
>
> Gary F wrote:
>
> If you have something better than a pooh-pooh argument that artificial
> *intelligence* is inherently impossible, or that inorganic systems are
> inherently incapable of *living* (and sign-using), I would like to hear
> it. I haven’t heard a good one yet.
>
>
> I don't know whether anyone is arguing that "artificial* intelligence* is
> inherently impossible"--far from it. And inorganic systems and AI are
> certainly capable of "sign-using," every laptop computer or smart phone
> demonstrates that.
>
> But as Helmut "hopes" and I suppose that I would more or less insist upon,
> there is "a big step from intelligence to life."
>
> So, in my critical pooh-poohing logic, I do not see, *contra* Gary F, how
> inorganic systems are capable of really living. Granted, intelligence is
> evident even in the growth of crystals. But I would not claim--and I do not
> think that Peirce ever claimed--that crystals were living, let alone "life forms."
>
> Best,
>
> Gary R
>
>
> [image: Gary Richmond]
>
> *Gary Richmond*
> *Philosophy and Critical Thinking*
> *Communication Studies*
> *LaGuardia College of the City University of New York*
> *C 745*
> *718 482-5690 <(718)%20482-5690>*
>
> On Wed, Jun 14, 2017 at 3:08 PM, Helmut Raulien  wrote:
>
>> List,
>> I hope that there still is a big step from intelligence to life. I hope
>> that there will never be living, breeding robots without "off"-switches,
>> they would kill us as fast as they could.
>> Best,
>> Helmut
>> 14 June 2017 at 20:18
>> g...@gnusystems.ca wrote:
>>
>>
>> Gary R, Jon et al.,
>>
>>
>>
>> Logic, according to Peirce, is “only another name for *semiotic*
>> (σημειωτικη), the quasi-necessary, or formal, doctrine of signs …
>> [ascertaining] what *must be* the characters of all signs used by a
>> ‘scientific’ intelligence, that is to say, by an intelligence capable of
>> learning by experience” (CP 2.227).
>>
>>
>>
>> Nobody, including humans, learns by experiences they don’t have.
>> Scientific inquirers “discover the rules” (as Bateson put it) of nature and
>> culture, by making inferences — abductive, deductive and inductive. But
>> what they can learn is constrained by what observations they are physically
>> equipped to make, as well as their semiotic ability to make inferences from
>> them.
>>
>>
>>
>> You seem to be saying that a non-human system which has apparently not
>> made inferences before will never be able to make them. But this is what
>> Peirce called a *pooh-pooh argument*. Besides, my Go-playing example was
>> only that, a single example of an AI system that clearly *has* learned
>> from experience and *is* capable of making an original move that proves
>> to be effective on the Go board. Of course the Go universe is very small
>> compared to the universe of scientific inquiry, but until an AI is equipped
>> to make observations in much larger fields, how can we be so sure that it
>> will not be able to make inferences from them as well as humans do, just as
>> it can match human experts in the field of Go?
>>
>>
>>
>> Yes, the rules of Go are *given* — given for human players as well as
>> any other players. Likewise, the grammar of the language we are using is
>> *given* for both of us. Does that mean that we can never use it to say
>> something original, or to formulate new inferences? Why should it be
>> different for non-human language users? It strikes me as a very dubious
>> assumption that *learning to learn* in any field is necessarily
>> non-transferable to other fields of learning. And the fields of learning
>> opening up to AI systems are expanding very rapidly.
>>
>>
>>
>> You can say “that Gobot is hardly a life form,” but then you can just as
>> easily say that the first organisms on Earth were “hardly life forms,” 

Re: [PEIRCE-L] RE: AI

2017-06-14 Thread Gary Richmond
Helmut, Gary F, list,

Helmut wrote: I hope that there still is a big step from intelligence to
life.

Gary F wrote:

If you have something better than a pooh-pooh argument that artificial
*intelligence* is inherently impossible, or that inorganic systems are
inherently incapable of *living* (and sign-using), I would like to hear it.
I haven’t heard a good one yet.


I don't know whether anyone is arguing that "artificial* intelligence* is
inherently impossible"--far from it. And inorganic systems and AI are
certainly capable of "sign-using," every laptop computer or smart phone
demonstrates that.

But as Helmut "hopes" and I suppose that I would more or less insist upon,
there is "a big step from intelligence to life."

So, in my critical pooh-poohing logic, I do not see, *contra* Gary F, how
inorganic systems are capable of really living. Granted, intelligence is
evident even in the growth of crystals. But I would not claim--and I do not
think that Peirce ever claimed--that crystals were living, let alone "life forms."

Best,

Gary R


[image: Gary Richmond]

*Gary Richmond*
*Philosophy and Critical Thinking*
*Communication Studies*
*LaGuardia College of the City University of New York*
*C 745*
*718 482-5690 <(718)%20482-5690>*

On Wed, Jun 14, 2017 at 3:08 PM, Helmut Raulien  wrote:

> List,
> I hope that there still is a big step from intelligence to life. I hope
> that there will never be living, breeding robots without "off"-switches,
> they would kill us as fast as they could.
> Best,
> Helmut
> 14 June 2017 at 20:18
> g...@gnusystems.ca wrote:
>
>
> Gary R, Jon et al.,
>
>
>
> Logic, according to Peirce, is “only another name for *semiotic*
> (σημειωτικη), the quasi-necessary, or formal, doctrine of signs …
> [ascertaining] what *must be* the characters of all signs used by a
> ‘scientific’ intelligence, that is to say, by an intelligence capable of
> learning by experience” (CP 2.227).
>
>
>
> Nobody, including humans, learns by experiences they don’t have.
> Scientific inquirers “discover the rules” (as Bateson put it) of nature and
> culture, by making inferences — abductive, deductive and inductive. But
> what they can learn is constrained by what observations they are physically
> equipped to make, as well as their semiotic ability to make inferences from
> them.
>
>
>
> You seem to be saying that a non-human system which has apparently not
> made inferences before will never be able to make them. But this is what
> Peirce called a *pooh-pooh argument*. Besides, my Go-playing example was
> only that, a single example of an AI system that clearly *has* learned
> from experience and *is* capable of making an original move that proves
> to be effective on the Go board. Of course the Go universe is very small
> compared to the universe of scientific inquiry, but until an AI is equipped
> to make observations in much larger fields, how can we be so sure that it
> will not be able to make inferences from them as well as humans do, just as
> it can match human experts in the field of Go?
>
>
>
> Yes, the rules of Go are *given* — given for human players as well as any
> other players. Likewise, the grammar of the language we are using is
> *given* for both of us. Does that mean that we can never use it to say
> something original, or to formulate new inferences? Why should it be
> different for non-human language users? It strikes me as a very dubious
> assumption that *learning to learn* in any field is necessarily
> non-transferable to other fields of learning. And the fields of learning
> opening up to AI systems are expanding very rapidly.
>
>
>
> You can say “that Gobot is hardly a life form,” but then you can just as
> easily say that the first organisms on Earth were “hardly life forms,” or —
> *contra* Peirce — that a *symbol* is “hardly a life form.” But somebody
> might ask, How do you define “life”?
>
>
>
> If you have something better than a pooh-pooh argument that artificial
> *intelligence* is inherently impossible, or that inorganic systems are
> inherently incapable of *living* (and sign-using), I would like to hear
> it. I haven’t heard a good one yet.
>
>
>
> Gary f.
>
>
>
> *From:* Gary Richmond [mailto:gary.richm...@gmail.com]
> *Sent:* 14-Jun-17 12:41
> *To:* Peirce-L 
> *Subject:* Re: [PEIRCE-L] RE: Rheme and Reason
>
>
>
> Gary F, Jon A, list,
>
>
>
> Gary F wrote:
>
>
>
> The question is whether silicon-based life forms are evolving, i.e.
> whether AI systems are *potential* players in what Gregory Bateson called
> “life—a game whose purpose is to discover the rules, which rules are always
> changing and always undiscoverable.”
>
>
>
> And in an earlier post wrote:
>
>
>
> I see some of these developments as evidence that abduction (as Peirce
> called it) and “insight” are probably not beyond the capabilities of AI
> systems that can learn inductively.
>
>
>
> But the rules of Go (and chess, etc.) do *not *need to be
> 

Re: [PEIRCE-L] RE: AI

2017-06-14 Thread Gary Richmond
Gary F, list,

Mine was, I think, what Peirce called a "critical pooh-poohing."

Gary F wrote:

You can say “that Gobot is hardly a life form,” but then you can just as
easily say that the first organisms on Earth were “hardly life forms,” or —
*contra* Peirce — that a *symbol* is “hardly a life form.” But somebody
might ask, How do you define “life”?


I most certainly *do* believe "that the first organisms on Earth" were life
forms, and I have absolutely no argument against the existence of non-human
life forms, as they clearly and unquestionably do exist (animal and
vegetable--even viruses, etc.). But I have never seen a good argument in
support of the notion that a machine is--or ever could be--a life form.

In addition, I am in no way opposed to the notion that there is a "life to
the sign," something I've repeatedly argued for *pro* Peirce. But such "a
life of the symbol" seems to me *dependent* on life forms in order to 'live',
and is not a life form itself (so it appears to me to be, at least in part,
metaphorical--*except* in its living semiosis, whether actual or potential--
no argument there, I don't think: in our world signs *can* live *where
there is life*).

But I most certainly do not think (and I do not think that this is "*contra*
Peirce" at all) that a symbol is in itself "a life form."

So, yes, I agree with you that this argument hinges on how one defines
'life'. My definition does not include AI as "a life form" even
potentially. Perhaps that's merely a limitation of my imagination.

Best,

Gary R


*Gary Richmond*
*Philosophy and Critical Thinking*
*Communication Studies*
*LaGuardia College of the City University of New York*
*C 745*
*718 482-5690*


Aw: [PEIRCE-L] RE: AI

2017-06-14 Thread Helmut Raulien

List,


I hope that there is still a big step from intelligence to life. I hope that there will never be living, breeding robots without "off" switches; they would kill us as fast as they could.

Best,

Helmut



[PEIRCE-L] RE: AI

2017-06-14 Thread gnox
Gary R, Jon et al.,

 

Logic, according to Peirce, is “only another name for semiotic (σημειωτικη), 
the quasi-necessary, or formal, doctrine of signs … [ascertaining] what must be 
the characters of all signs used by a ‘scientific’ intelligence, that is to 
say, by an intelligence capable of learning by experience” (CP 2.227).

 

Nobody, including humans, learns by experiences they don’t have. Scientific 
inquirers “discover the rules” (as Bateson put it) of nature and culture, by 
making inferences — abductive, deductive and inductive. But what they can learn 
is constrained by what observations they are physically equipped to make, as 
well as their semiotic ability to make inferences from them.

 

You seem to be saying that a non-human system which has apparently not made 
inferences before will never be able to make them. But this is what Peirce 
called a pooh-pooh argument. Besides, my Go-playing example was only that, a 
single example of an AI system that clearly has learned from experience and is 
capable of making an original move that proves to be effective on the Go board. 
Of course the Go universe is very small compared to the universe of scientific 
inquiry, but until an AI is equipped to make observations in much larger 
fields, how can we be so sure that it will not be able to make inferences from 
them as well as humans do, just as it can match human experts in the field of 
Go?

 

Yes, the rules of Go are given — given for human players as well as any other 
players. Likewise, the grammar of the language we are using is given for both 
of us. Does that mean that we can never use it to say something original, or to 
formulate new inferences? Why should it be different for non-human language 
users? It strikes me as a very dubious assumption that learning to learn in any 
field is necessarily non-transferable to other fields of learning. And the 
fields of learning opening up to AI systems are expanding very rapidly.

 

You can say “that Gobot is hardly a life form,” but then you can just as easily 
say that the first organisms on Earth were “hardly life forms,” or — contra 
Peirce — that a symbol is “hardly a life form.” But somebody might ask, How do 
you define “life”?

 

If you have something better than a pooh-pooh argument that artificial 
intelligence is inherently impossible, or that inorganic systems are inherently 
incapable of living (and sign-using), I would like to hear it. I haven’t heard 
a good one yet.

 

Gary f.

 

From: Gary Richmond [mailto:gary.richm...@gmail.com] 
Sent: 14-Jun-17 12:41
To: Peirce-L 
Subject: Re: [PEIRCE-L] RE: Rheme and Reason

 

Gary F, Jon A, list,

 

Gary F wrote:

 

The question is whether silicon-based life forms are evolving, i.e. whether AI 
systems are potential players in what Gregory Bateson called “life—a game whose 
purpose is to discover the rules, which rules are always changing and always 
undiscoverable.”

 

And in an earlier post wrote:

 

I see some of these developments as evidence that abduction (as Peirce called 
it) and “insight” are probably not beyond the capabilities of AI systems that 
can learn inductively. 

 

But the rules of Go (and chess, etc.) do not need to be discovered--they are 
given. (Of course, the playing of the game--the strategy--is not.) Then, if 
life is defined as "a game whose purpose is to discover the rules, which 
rules are always changing and always undiscoverable" (although I'm not sure 
that I find that definition satisfactory), to extrapolate from a robot's 
ability to learn to play, and get better at playing, games whose rules are 
given and do not need to be discovered ("can learn inductively" in such 
situations) to the claim that this is "evidence that abduction . . . and 
'insight' are probably not beyond the capabilities of AI systems" seems to me 
to go way too far.

 

So I, like Jon A, haven't seen any real intelligence shown in Artificial 
Intelligence systems, even those that can beat a master Go player at such a 
game (hardly "the game of life"). 

 

Furthermore, Gary F's question as to "whether silicon-based life forms are 
evolving" begs the question (although there may be silicon-based life forms on 
some distant planet for all we know) since that Gobot is hardly a life form.

 

Best,

 

Gary R

 





-
PEIRCE-L subscribers: Click on "Reply List" or "Reply All" to REPLY ON PEIRCE-L 
to this message. PEIRCE-L posts should go to peirce-L@list.iupui.edu . To 
UNSUBSCRIBE, send a message not to PEIRCE-L but to l...@list.iupui.edu with the 
line "UNSubscribe PEIRCE-L" in the BODY of the message. More at 
http://www.cspeirce.com/peirce-l/peirce-l.htm .