Aw: Re: Re: Re: Re: Re: RE: [PEIRCE-L] RE: AI

2017-06-27 Thread Helmut Raulien
 
 

Supplement:

Edwina, I think I just argued from your point of view, or twisted your words; sorry. I think our opinions are generally not far apart: when people do not admit that they are arguing or politically acting out of secondness, but make their point seem like thirdness though it isn't, then the utopia problem arises. Examples:

The Leap Manifesto people should just say: this and that is going wrong, and it should be changed with a leap, democratically, because it is urgent. But not: we have the solution for a better life for everybody.

Luther should just have said: sorry, folks, I cannot help you with your revolution; that would be biting the hand that feeds me. But he should not have made up a weird two-realms theory just to make himself appear to be the principled thirdness-only macho he saw himself as.

When Owen improved the lives of the workers in his factory in England, he had success. But then he thought this change of secondness was a thirdness solution for everybody, and set up a commune in the USA, which failed.

So, on the one hand, a change in secondness should not be passed off as thirdness. On the other hand, a thirdness can automatically evolve out of secondness: Apel claimed that when people talk and argue with each other, they automatically perform an acceptance of the continuity of discourse. But something that happily happens sometimes should not be declared dogma, or generally presupposed; otherwise it becomes a paradox. I want to read Thomas More's "Utopia".

Best,

Helmut





Aw: Re: Re: Re: Re: Re: RE: [PEIRCE-L] RE: AI

2017-06-26 Thread Helmut Raulien

Edwina,

you wrote:

 


"3] A call for action is, in my view, based on a theory of 'How To Live as a community'."

 

That would be a fully-fledged thirdness-communist utopian theory. But a call for action may also be just a cry for help, out of being fed up or starving, without any concept or theory, or something half-reflected in between. A degenerate sign??

(desperately trying to Peirce-relate)

Best,

Helmut

 

 



Re: Aw: Re: Re: Re: Re: RE: [PEIRCE-L] RE: AI

2017-06-26 Thread Edwina Taborsky
 

Helmut - I'll try to be brief because I really don't think this is a topic for this list.

1] Democratic change, whether gradual or via leaps, has in my view nothing to do with the LEAP Manifesto.

2] Yes - the best laid plans of mice and men could be compared with 'the road to hell is paved with good intentions'.

3] A call for action is, in my view, based on a theory of 'How To Live as a community'.

They are recommending a particular socioeconomic system - and this has nothing to do with democracy. The term 'democracy', to my understanding, refers only to a method of choosing a particular action/person/government etc.

4] Peirce was, if I recall correctly, against gradual evolution and did suggest 'leaps' in evolutionary change.

That's it.

Edwina

Aw: Re: Re: Re: Re: RE: [PEIRCE-L] RE: AI

2017-06-26 Thread Helmut Raulien

Edwina,

with "it" I meant a basic-democratic, maybe leap-like, change. In Cochabamba (was it in the 1980s?) a citizens' initiative regained the water rights that had been stolen from the people by a collaboration between the government and a US water company; before that, they were not even allowed to collect rainwater. In Chiapas (1990s) the people achieved self-government; see the Zapatista uprising. How it is there nowadays I don't know; I hope still democratic. You wrote:

 


'The best laid plans of mice and men gang oft awry'

 

I don't understand (I'm not a native speaker) - does that mean "the road to hell is paved with good intentions"?

If so, I would agree. You wrote:

 

"4] You are suggesting that a theory 'explains things afterwards'. But fascism, communism - and the LEAP manifesto are not explaining things 'afterwards' but are recommending a particular mode of socioeconomic and political organization that IF ONLY it is followed - will bring 'the best life' and well-being and so on."

 

I do not see the Leap Manifesto as a theory (do they claim that?), but as a call for action. That they promise the best life and well-being - that I do not like either; I agree that this is wrong. But are they, as you said, "recommending a particular mode of socioeconomic and political organization"? Or is it simply democracy that they recommend?

 

What kind of freedom Luther meant is beyond me. Perhaps he just gave in to the princes, one of whom had protected him before. The peasants fought against all the princes, but Luther could not accompany them on this point, because without the help of one of those princes he would have ended on the pyre long before. You ask:

 

"And what does any of this have to do with Peirce?"

 

Nothing, I admit. But I had argued that you (in my humble opinion, which may be altered at any time) should not refute the Leap Manifesto with Peirce, and so this subject is not Peirce-related either. Well, trying to squeeze something Peircean out of my fingers... Peirce had an idea of continuity, and the Leap Manifesto wants a discontinuity, a leap. So it might, hopefully, be Peirce-related to say that modern theories talk about leaps, revolutions, bifurcations, emergences, sudden changes from quantity to quality, of which Peirce in his time could not be, or did not want to be, aware. Or?

Best,

Helmut

 

 

 



Re: Aw: Re: Re: Re: RE: [PEIRCE-L] RE: AI

2017-06-26 Thread Edwina Taborsky
 

Helmut - you wrote:

1] "I spontaneously recall at least two places where it has worked: Cochabamba, Bolivia, and Chiapas, Mexico."

What does 'IT' refer to? What worked?

2] The Marxist-Leninist theory of linear socioeconomic phases is simply a Seminar Room Theory. It's not a FACT.

3] You wrote:

"Luther published pamphlets against the peasants, who wanted the same freedom he had advertised before for Christian people, and he argued with his theory of the two realms"

What freedom?

And what does any of this have to do with Peirce?

4] You are suggesting that a theory 'explains things afterwards'. But fascism, communism - and the LEAP manifesto - are not explaining things 'afterwards' but are recommending a particular mode of socioeconomic and political organization that, IF ONLY it is followed, will bring 'the best life' and well-being and so on.

As is said: 'The best laid plans of mice and men gang oft awry'...

I think pragmatic realism is the sensible path. It doesn't dwell in the land of 'If Only'.

Edwina

Aw: Re: Re: Re: RE: [PEIRCE-L] RE: AI

2017-06-26 Thread Helmut Raulien

Edwina, Gary, List,

I am against utopianism too, but I do not see what should be wrong with the Leap Manifesto: they are not propagating a utopian regime, but a basic-democratic change. And that is not utopian ('no place'); I spontaneously recall at least two places where it has worked: Cochabamba, Bolivia, and Chiapas, Mexico.

In the Spanish revolution of 1936, the Soviet Union fought against the revolutionaries, because they had succeeded in changing politics too fast for Marxist theory: in a basic-democratic way they established socialism directly after feudalism, skipping capitalism, which Marxist-Leninist theory does not allow.

In the 16th century, Martin Luther published pamphlets against the peasants, who wanted the same freedom he had advertised before for Christian people, and he argued with his theory of the two realms.

With these two examples I want to say that I think a theory (not even the Peircean one) must not be normative, but only explanatory. It should not forbid social evolution (and evolution is not always continuous; it sometimes leaps), but merely explain it afterwards. And if something happens that cannot be explained by an existing theory - well, we are good at making up new, suitable theories, aren't we?

Best,

Helmut

 


Re: Re: Re: RE: [PEIRCE-L] RE: AI

2017-06-26 Thread Edwina Taborsky
 

Gary R, list:

Yes, I think that any utopian regime, to maintain its 'purity of type', must act as an Authoritarian regime to maintain the holistic purity and prevent the natural dissipation of type that occurs within the natural operations of both Secondness and Firstness. That is - it must reject any incidents of Secondness and Firstness. [Entropy is a natural law and utopias cannot function within entropy].

My own view of utopias is that there are two basic types. One 'yearns for' the assumed and quite mythic Purity-of-the-Past. The image of this Past is pure romantic idyllic scenarios - purity of behaviour, purity of genetic composition, purity of belief. This is the utopia commonly known as Fascism, where the idea is that If Only we could go back to The Way We Were - then all would be perfect. That would be the Ernst Bloch one - and similar to that of Rousseau, Mead etc., which all focused around The Noble Savage or some notion that early man was somehow 'in a state of physical and mental purity'. Of course the most famous recent example is Nazism.

The other utopia, equally mythic, sets up a Purity-of-the-Future. The image of this Future is equally romantic and idyllic - where no-one really has to work hard, where everyone collaborates and gets along, where debate and discussion solve all issues; where such psychological tendencies as jealousy, anger, lust, hatred etc. don't exist. This utopia is commonly known as Communism. This is the LEAP manifesto idea - where, If Only we all learn to behave in such and such a way, then we'll all have enough, won't have to work hard, will all have loving families, etc. Equally naïve and mythic - and ignorant of economics and human psychology.

I don't agree that Peirce's philosophy involves any utopian ideas, for the reasons I've outlined. Utopia is by definition 'no place'; and Peirce's phenomenology is deeply, thoroughly pragmatic. That is, it is enmeshed, rooted, in Secondness and the brute individual realities of that category. Equally, it is rooted in Firstness and the chance deviations, aberrations of that mode. Thirdness doesn't exist 'per se' [which would make it utopian if it did] and exists only within the hard-working dirt and dust and chances of Firstness and Secondness.

I feel that Peirce's agapasm is an outline of constant networking, informational networking and collaboration - where, for example, plants will interact with insects and animals and vice versa - but this complex adaptive system is not a utopia, but... a complex adaptive system, busily interacting and coming up with novel solutions to chance aberrations... etc.

Edwina
 On Mon 26/06/17  4:00 PM , Gary Richmond gary.richm...@gmail.com
sent:
 Edwina, list,
 The LEAP manifesto sounds like North Korea? Well, while I agree with
you that the manifesto is at least quasi-utopian, I think equating it
with the brutal NK is way off the mark.
 In any case, there was an op-ed piece today in The Stone, that
section of the New York Times editorial page where philosophers
comment on cultural, social, political, etc. issues. Today's piece,
by Espen Hammer, a professor of philosophy at Temple University, is
titled "A Utopia for a Dystopian Age." 
https://www.nytimes.com/2017/06/26/opinion/a-utopia-for-a-dystopian-age.html?ref=opinion
 Hammer's piece concludes: 
 Are our industrial, capitalist societies able to make the requisite
changes? If not, where should we be headed? This is a utopian
question as good as any. It is deep and universalistic. Yet it calls
for neither a break with the past nor a headfirst dive into the
future. The German thinker Ernst Bloch argued that all utopias
ultimately express yearning for a reconciliation with that from which
one has been estranged. They tell us how to get back home. A
21st-century utopia of nature would do that. It would remind us that
we belong to nature, that we are dependent on it and that further
alienation from it will be at our own peril. 
 While Peirce was a fierce opponent of "social Darwinism," I don't
recall him discussing utopia as such (or even Utopia for that
matter), while he was most certainly an advocate of meliorism.
 However, this author argues that the philosophy of Peirce (and that
of Mead) does indeed involve utopian ideas. See "The Agathotopia of
Charles Sanders Peirce," by Maria Augusta Nogueira Machado Dib,
International Center of Peirce Studies:
http://ruc.udc.es/dspace/bitstream/handle/2183/13424/CC-130_art_131.pdf;sequence=1
 Abstract: The subject of this article is the specificity of
Peirce’s Agathotopia and the relevance of his thought for the
«actual global crisis» . . . Peirce . . . focused on the research of
the evolutionary process which leads to the summum bonum where
aesthetics, ethics and logics converge into the same purpose . . .
Wellness (EP 2.27). Locus of Wellness - Agathotopia …

Re: Re: RE: [PEIRCE-L] RE: AI

2017-06-26 Thread Gary Richmond
Edwina, list,

The LEAP manifesto sounds like North Korea? Well, while I agree with you
that the manifesto is at least quasi-utopian, I think equating it with the
brutal NK is way off the mark.

In any case, there was an op-ed piece today in The Stone, that section of
the New York Times editorial page where philosophers comment on cultural,
social, political, etc. issues. Today's piece, by Espen Hammer, a professor
of philosophy at Temple University, is titled "A Utopia for a Dystopian
Age."
https://www.nytimes.com/2017/06/26/opinion/a-utopia-for-a-dystopian-age.html?ref=opinion


Hammer's piece concludes:

Are our industrial, capitalist societies able to make the requisite
changes? If not, where should we be headed? This is a utopian question as
good as any. It is deep and universalistic. Yet it calls for neither a
break with the past nor a headfirst dive into the future. The German
thinker Ernst Bloch argued that all utopias ultimately express yearning for
a reconciliation with that from which one has been estranged. They tell us
how to get back home. A 21st-century utopia of nature would do that. It
would remind us that we belong to nature, that we are dependent on it and
that further alienation from it will be at our own peril.


While Peirce was a fierce opponent of "social Darwinism," I don't recall
him discussing utopia as such (or even *Utopia* for that matter), while he
was most certainly an advocate of meliorism.

However, this author argues that the philosophy of Peirce (and that of
Mead) does indeed involve utopian ideas. See "The Agathotopia of Charles
Sanders Peirce," by Maria Augusta Nogueira Machado Dib, International
Center of Peirce Studies:
http://ruc.udc.es/dspace/bitstream/handle/2183/13424/CC-130_art_131.pdf;sequence=1

Abstract: The subject of this article is the specificity of Peirce’s
Agathotopia and the relevance of his thought for the «actual global
crisis» . . . Peirce . . . focused on the research of the evolutionary
process which leads to the summum bonum where aesthetics, ethics and logics
converge into the same purpose . . . Wellness (EP 2.27). Locus of Wellness
- Agathotopia - a term used by James Edward Meade, Nobel Prize in
economics (1977), has come out in the universe of political economy. It
would possibly be a model for the construction of a good society to live in
. . . neither a socio-political nor an economic model to promote the
collective welfare in the reality of the existential universe. Peirce’s
Agathotopia has been proposed in all his scientific metaphysical
architecture, in his realistic philosophy and logic of his objective
idealism, in his synechism, in the ongoing semioses between his three
categories, and the evolving process of reasonability, a continuous
teleological self-corrective movement toward the evolutionary enhancement.
If Peirce believes in a dynamic mental loving action (evolutionary love)
that tends to the admirable, Fair and True Purpose, then he might not be
proposing just one more utopia in the history of Philosophy, but
Agathotopia for the first time. A tópos to the Summum Bonum.


See also, Utopian Evolution: The Sentimental Critique of Social Darwinism
in Bellamy and Peirce by Matthew Hartman.
https://www.jstor.org/stable/20718007?seq=1#page_scan_tab_contents

Best,

Gary R


*Gary Richmond*
*Philosophy and Critical Thinking*
*Communication Studies*
*LaGuardia College of the City University of New York*
*C 745*
*718 482-5690*

On Mon, Jun 26, 2017 at 3:30 PM, Edwina Taborsky  wrote:

> I don't see that it is Peirce-related for it is utopian; operating purely
> in the realm of Homogeneic Purity; it is Hegelian, i.e., rejecting the
> reality of individual Secondness and finiteness; rejecting the adaptive
> reality that is chance;  rejecting even the openness of genuine Thirdness
> [which is never finite].
>
> It instead is filled with unverified assumptions, lacking evidentiary
> support for these axioms, [massively ignorant about economics and human
> psychology]and assuming, like all utopian theories, that If Only We All
> Behaved in Such-and-Such a Way - then, all will be well.
>
>  This is the mindset of all fundamentalist and totalitarian ideologies -
> which all operate within the Seminar Room mode of Thirdness - i.e.,
> alienated from the pragmatic daily realities of Secondness and Firstness.
> I'd call this Thirdness-as-Firstness, alienated from physical reality,
> operating within an insistence on iconic homogeneity of its population.
> Sounds a bit like Animal Farm or 1984.
>
> And - its mindset includes not only a profound ignorance of economics but
> -  a complete ignorance of the psychological reality of the human species -
> which is not and has never been, able to operate within only the abstract
> generalities of Thirdness. Certainly, you can get small populations
> operating within the abstract generalities - these are isolate communities
> sustained by the external world [a convent, a monastery]; …

Re: Re: RE: [PEIRCE-L] RE: AI

2017-06-26 Thread Edwina Taborsky
 

I
don't see that it is Peirce-related for it is utopian; operating
purely in the realm of Homogeneic Purity; it is Hegelian, i.e.,
rejecting the reality of individual Secondness and finiteness;
rejecting the adaptive reality that is chance;  rejecting even the
openness of genuine Thirdness [which is never finite].

It instead is filled with unverified assumptions, lacking
evidentiary support for these axioms, [massively ignorant about
economics and human psychology]and assuming, like all utopian
theories, that If Only We All Behaved in Such-and-Such a Way - then,
all will be well.

 This is the mindset of all fundamentalist and totalitarian
ideologies - which all operate within the Seminar Room mode of
Thirdness - i.e., alienated from the pragmatic daily realities of
Secondness and Firstness. I'd call this Thirdness-as-Firstness,
alienated from physical reality, operating within an insistence on
iconic homogeneity of its population. Sounds a bit like Animal Farm
or 1984. 

And - its mindset includes not only a profound ignorance of
economics but -  a complete ignorance of the psychological reality of
the human species - which is not and has never been, able to operate
within only the abstract generalities of Thirdness. Certainly, you
can get small populations operating within the abstract generalities
- these are isolate communities sustained by the external world [a
convent, a monastery]; or cults. Since they are not operating within
all three categories but only within degenerate Thirdness, they are
all unable to provide continuity of Type. Their membership must be
replenished from external sources; or - most of them implode after a
few years. And all of them require enormous external authoritarian
Force to prevent any intrusion of Secondness and Firstness - i.e.,
individual realities, individual emotions and sensations. And to keep
the population submissive and entrapped within a homogeneic
perspective. Sounds a bit like N. Korea.
Edwina
 On Mon 26/06/17  3:03 PM , Gary Richmond gary.richm...@gmail.com
sent:
 Gary F, Edwina, Gene, list,
 Well, before we accept or reject the LEAP proposal (which has
implications far beyond Canada), let's consider what it says. See:
https://leapmanifesto.org/en/the-leap-manifesto/.
 If we do consider it here, please try to keep the discussion
Peirce-related. I've copied and pasted the text of the manifesto from
the pdf below my signature. 
 Best,
 Gary R (writing as list moderator)
 the leap manifesto 
 A Call for Canada Based on Caring for the Earth and One Another
 We start from the premise that Canada is facing the deepest crisis in
recent memory.
 The Truth and Reconciliation Commission has acknowledged shocking
details about the violence of Canada’s near past. Deepening poverty
and inequality are a scar on the country’s present. And our record
on climate change is a crime against humanity’s future. These facts
are all the more jarring because they depart so dramatically from our
stated values: respect for Indigenous rights, internationalism, human
rights, diversity, and environmental stewardship.
 Canada is not this place today -- but it could be.
 We could live in a country powered entirely by truly just renewable
energy, woven together by accessible public transit, in which the
jobs and opportunities of this transition are designed to
systematically eliminate racial and gender inequality. Caring for one
another and caring for the planet could be the economy’s fastest
growing sectors. Many more people could have higher wage jobs with
fewer work hours, leaving us ample time to enjoy our loved ones and
flourish in our communities.  We know that the time for this great
transition is short. Climate scientists have told us that this is the
decade to take decisive action to prevent catastrophic global warming.
That means small steps will no longer get us where we need to go.
 So we need to leap.
 This leap must begin by respecting the inherent rights and title of
the original caretakers of this land. Indigenous communities have
been at the forefront of protecting rivers, coasts, forests and lands
from out-of-control industrial activity. We can bolster this role, and
reset our relationship, by fully implementing the United Nations
Declaration on the Rights of Indigenous Peoples.  Moved by the
treaties that form the legal basis of this country and bind us to
share the land “for as long as the sun shines, the grass grows and
the rivers flow,” we want energy sources that will last for time
immemorial and never run out or poison the land. Technological
breakthroughs have brought this dream within reach. The latest
research shows it is feasible for Canada to get 100% of its
electricity from renewable resources within two decades [1]; by 2050 we
could have a 100% clean economy [2].
 We demand that this shift begin now.
 There is no longer an excuse for building new infrastructure …

Re: RE: [PEIRCE-L] RE: AI

2017-06-26 Thread Gary Richmond
Gary F, Edwina, Gene, list,

Well, before we accept or reject the LEAP proposal (which has implications
far beyond Canada), let's consider what it says. See:
https://leapmanifesto.org/en/the-leap-manifesto/.

If we do consider it here, please try to keep the discussion
Peirce-related. I've copied and pasted the text of the manifesto from the
pdf below my signature.

Best,

Gary R (writing as list moderator)


the leap manifesto

A Call for Canada Based on Caring for the Earth and One Another
We start from the premise that Canada is facing the deepest crisis in
recent memory.

The Truth and Reconciliation Commission has acknowledged shocking details
about the violence of Canada’s near past. Deepening poverty and inequality
are a scar on the country’s present. And our record on climate change is a
crime against humanity’s future.
These facts are all the more jarring because they depart so dramatically
from our stated values: respect for Indigenous rights, internationalism,
human rights, diversity, and environmental stewardship.

Canada is not this place today -- but it could be.

We could live in a country powered entirely by truly just renewable energy,
woven together by accessible public transit, in which the jobs and
opportunities of this transition are designed to systematically eliminate
racial and gender inequality. Caring for one another and caring for the
planet could be the economy’s fastest growing sectors. Many more people
could have higher wage jobs with fewer work hours, leaving us ample time to
enjoy our loved ones and flourish in our communities.

We know that the time for this great transition is short. Climate
scientists have told us that this is the decade to take decisive action to
prevent catastrophic global warming. That means small steps will no longer
get us where we need to go.

So we need to leap.

This leap must begin by respecting the inherent rights and title of the
original caretakers of this land. Indigenous communities have been at the
forefront of protecting rivers, coasts, forests and lands from
out-of-control industrial activity. We can bolster this role, and reset our
relationship, by fully implementing the United Nations Declaration on the
Rights of Indigenous Peoples.

Moved by the treaties that form the legal basis of this country and bind us
to share the land “for as long as the sun shines, the grass grows and the
rivers flow,” we want energy sources that will last for time immemorial and
never run out or poison the land. Technological breakthroughs have brought
this dream within reach. The latest research shows it is feasible for
Canada to get 100% of its electricity from renewable resources within two
decades [1]; by 2050 we could have a 100% clean economy [2].

We demand that this shift begin now.

There is no longer an excuse for building new infrastructure projects that
lock us into increased extraction decades into the future. The new iron law
of energy development must be: if you wouldn’t want it in your backyard,
then it doesn’t belong in anyone’s backyard. That applies equally to oil
and gas pipelines; fracking in New Brunswick, Quebec and British Columbia;
increased tanker traffic off our coasts; and to Canadian-owned mining
projects the world over.

The time for energy democracy has come: we believe not just in changes to
our energy sources, but that wherever possible communities should
collectively control these new energy systems.

As an alternative to the profit-gouging of private companies and the remote
bureaucracy of some centralized state ones, we can create innovative
ownership structures: democratically run, paying living wages and keeping
much-needed revenue in communities. And Indigenous Peoples should be first
to receive public support for their own clean energy projects. So should
communities currently dealing with heavy health impacts of polluting
industrial activity.

Power generated this way will not merely light our homes but redistribute
wealth, deepen our democracy, strengthen our economy and start to heal the
wounds that date back to this country’s founding.

A leap to a non-polluting economy creates countless openings for similar
multiple “wins.” We want a universal program to build energy efficient
homes, and retrofit existing housing, ensuring that the lowest income
communities and neighbourhoods will benefit first and receive job training
and opportunities that reduce poverty over the long term. We want training
and other resources for workers in carbon-intensive jobs, ensuring they are
fully able to take part in the clean energy economy. This transition should
involve the democratic participation of workers themselves. High-speed rail
powered by just renewables and affordable public transit can unite every
community in this country – in place of more cars, pipelines and exploding
trains that endanger and divide us.

And since we know this leap is beginning late, we need to invest in our
decaying public infrastructure so that it can withstand …

Re: RE: [PEIRCE-L] RE: AI

2017-06-26 Thread Edwina Taborsky
 
 Gary F - as you say, these issues really have no place in a Peircean
analytic framework - unless we want to explore the development of
societal norms as a form of Thirdness - which is a legitimate area of
research.

I, myself, reject the Naomi Klein perspective [all of her work] and
certainly, reject the LEAP perspective- and would argue against it as
a naïve utopian agenda. You cannot do away with any of the modal
categories, even in Big Systems, eg, as in societal analysis - and
coming up with purely rhetorical versions of Thirdness [rather than
the real Thirdness that is in that society] and trying to do away
with the existential conflicts of Secondness and the private feelings
of Firstness is, in my view, a useless agenda. 

Edwina
 On Mon 26/06/17  1:50 PM , g...@gnusystems.ca sent:
Gene,
 Thanks for the links; I’m quite familiar with the mirror neuron
research and the inferences various people have drawn from it, and it
reinforces the point I was trying to make, that empathy is deeper than
deliberate reasoning — as well as Peirce’s point that science is
grounded in empathy (or at least in “the social principle”).
I didn’t miss the point that it is possible to disable the feeling
of empathy — I just didn’t see that point as being news in any
sense (it’s been pretty obvious for millennia!). I see the
particular study as an attempt to quantify some  expressions of
empathy (or responses that imply the lack of it). What it doesn’t
do is give us much of a clue as to what cultural factors are involved
in the suppression of empathic behavior. (And I thought that blaming
it on increasing use of AI was really a stretch!)  As I wrote before,
what significance that study has depends on the nature of the devices
used to generate those statistics.
There are lots of theories about what causes empathic behavior to be
suppressed (not all of them use that terminology, of course.) I think
they are valuable to the extent that they give us some clues as to
what we can do about the situation. To take the example that happens
to be in front of me: 

 The election of Donald Trump can certainly be taken as a symptom of
a decline in empathy. In her new book, Naomi Klein spends several
chapters explaining in factual detail how certain trends in American
culture (going back several decades) have prepared the way for
somebody like Trump to exploit the situation. But the title of her
book, No is Not Enough, emphasizes that what’s needed is not
another round of recriminations but a coherent vision of a better way
to live, and a viable alternative to the pathologically partisan
politics of the day. I can see its outlines in a document called the
LEAP manifesto, and I’d like to see us google that and spend more
time considering it than we do blaming Google or other arms of “The
Machine” for the mess we’re in. 
But enough about politics and such “vitally important” matters.
What interests me about AI (which is supposed to be the subject of
this thread) is what we can learn from it about how the mind works,
whether it’s a human or animal bodymind or not. That’s also what
my book is about and why I’m interested in Peircean semiotics. And
I daresay that’s what motivates many, if not most, AI researchers,
including the students that John Sowa is addressing in that
presentation he’s still working on. 
Gary f.
} What is seen with one eye has no depth. [Ursula LeGuin] {

http://gnusystems.ca/wp/ }{ Turning Signs gateway
From: Eugene Halton [mailto:eugene.w.halto...@nd.edu] 
 Sent: 26-Jun-17 11:09
 To: Peirce List 
 Subject: RE: [PEIRCE-L] RE: AI
Dear Gary F,

 Here is a link to the Sarah Konrath et al. study on the decline
of empathy among American college students:  

http://faculty.chicagobooth.edu/eob/edobrien_empathyPSPR.pdf

   And a brief Scientific American article on it: 

 https://www.scientificamerican.com/article/what-me-care/
 You state: "I think Peirce would say that these attributions of
empathy (or consciousness) to others are  perceptual judgments — not
percepts, but quite beyond (or beneath) any conscious control, and . . .
We feel it rather than reading it from external indications."

 This seems to me to miss the point that it is possible to
disable the feeling of empathy. Clinical narcissistic disturbance,
for example, substitutes idealization for perceptual feeling, so that
what is perceived can be idealized rather than felt.  

 Extrapolate that to a society that substitutes on mass scales
idealization for felt experience, and you can have societally reduced
empathy. Unempathic parenting is an excellent way to produce the
social media-addicted janissary offspring. 

 The human face is a subtle neuromuscular organ of attunement,
which has the capacity to read another's mind through mirror
micro-mimicry of the other's facial

Re: RE: [PEIRCE-L] RE: AI

2017-06-26 Thread Edwina Taborsky
 

 Gene, list - very interesting -

I wonder if there are multiple issues here about the 'decline of
empathy'.

One reason might be the postmodern method of raising children which,
in a sense, isolates the child from any effect of his behaviour. That
is - no matter what he/she does, he is praised as 'that's great'. If
the child acts out, then, he is assumed to be a victim of some
aggression that is, in a mechanical sense, causing him to release
that aggression on someone else. He is not nurtured to be himself 
causal and responsible. The focus is on 'building self-esteem'.  Some
schools do not give marks to prevent 'loss of self-esteem'. This
building up of a sense of inviolate righteousness is one possible
cause of the decline of empathy, since the focus, as noted, is on the
Self and not on the Self-and-Others.

The interesting thing is that along with this isolation of the Self
from the effects of how one directly acts towards others  - and I
think the increase in bullying is one result, but- we see an increase
in what I call Seminar Room interaction with Others. That is, the
individual interacts with others indirectly, by joining abstract
group causes: peace, climate change, Earth Day, etc., where what one
does as an individual is indirect and, actually, has little to no
effect.

But there is another issue - and that is the increase of tribalism
in our societies. By tribalism I mean 'identity politics' which
rejects a common humanity that is shared by all, and  rejects
individualism within this commonality and instead herds people into
homogeneous groups with unique characteristics - and considers them
isolate from, different from - other groups. Tribalism by definition
views other tribes as adversarial. Therefore the people in other
tribes are 'dehumanized'. We see this in wars - where both sides view
each other as non-human.

But your other issue - the importance of facial expression - is also
important. I can see the argument with regard to Botox, but this
argument is also valid with regard to cultural veils which hide the
face to non-members of the tribe and thus reject outside involvement;
 and to cultural values which reject expression of emotions [stiff
upper lip] and, effectively, also result in the non-involvement of
others. 

Edwina
 On Mon 26/06/17 11:08 AM , Eugene Halton eugene.w.halto...@nd.edu
sent:
 Dear Gary F, Here is a link to the Sarah Konrath et al. study on
the decline of empathy among American college students:
http://faculty.chicagobooth.edu/eob/edobrien_empathyPSPR.pdf
And a brief Scientific American article on it:
https://www.scientificamerican.com/article/what-me-care/
  You state: "I think Peirce would say that these attributions of
empathy (or consciousness) to others are perceptual judgments — not
percepts, but quite beyond (or beneath) any conscious control, and . . .
We feel it rather than reading it from external indications."
  This seems to me to miss the point that it is possible to
disable the feeling of empathy. Clinical narcissistic disturbance,
for example, substitutes idealization for perceptual feeling, so that
what is perceived can be idealized rather than felt. 
   Extrapolate that to a society that substitutes on mass scales
idealization for felt experience, and you can have societally reduced
empathy. Unempathic parenting is an excellent way to produce the
social media-addicted janissary offspring. 
  The human face is a subtle neuromuscular organ of attunement,
which has the capacity to read another's mind through mirror
micro-mimicry of the other's facial gestures, completely
subconsciously. These are  "external indications" mirrored by one. 
   One study showed that botox treatments, in paralyzing facial
muscles, reduce the micro-mimicry of empathic attunement to the other
face in an interaction. The botox recipient is not only impaired in
exhibiting her or his own emotional facial micro-muscular movements,
but also is impaired in subconsciously micro-mimicking that of the
other, thus reducing the embodied feel of the other’s
emotional-gestural state (Neal and Chartrand, 2011). Empathy is
reduced through the disabling of the facial muscles.
  Vittorio Gallese, one of the neuroscientists who discovered
mirror neurons, has discussed "embodied simulation" through "shared
neural underpinnings." He states: “…social cognition is not only
explicitly reasoning about the contents of someone else’s mind. Our
brains, and those of other primates, appear to have developed a basic
functional mechanism, embodied simulation, which gives us an
experiential insight of other minds. The shareability of the
phenomenal content of the intentional relations of others, by means
of the shared neural underpinnings, produces intentional attunement.
Intentional attunement, in turn, by collapsing the others’
intentions into the …

RE: Re: RE: [PEIRCE-L] RE: AI

2017-06-18 Thread Auke van Breemen
Edwina, Gary’s, list,

 

I wasn’t so much thinking about the reasoning. I started thinking whether a
difference between life and mind could be pinned down in the trichotomies of
the Welby classification. For instance in the sympathetic, shocking and usual
distinction.

 

Emotional accompaniments, in Questions concerning, etc, are deemed to be 
contributions of the receptive sheet. The individual life is distinguished from 
the person by being the source of error.  

 

Best,

Auke

 

 

 

From: Edwina Taborsky [mailto:tabor...@primus.ca]
Sent: Saturday, 17 June 2017 20:43
To: Peirce-L ; Gary Richmond
Subject: Re: Re: RE: [PEIRCE-L] RE: AI

 

Gary R - I'd agree with you.

First - I do agree [with Peirce] that Mind [and therefore semiosis] operates in 
the physico-chemical realm. BUT - this realm which provides the planet with
enormous stability of matter [just imagine if a chemical kept 'evolving' and 
changing!!] - is NOT the same as the biological realm, which has internalized 
its laws within instantiations [Type-Token] and thus, a 'chance' deviation from 
the norm can take place in this one or few 'instantiations' and adapt into a 
different species - without impinging on the continuity of the former species. 
So, the biological realm can evolve and adapt - which provides matter with the 
diversity it needs to fend off entropy.

But AI is not, as I understand it - similar to a biological organism. It seems 
similar to a physico-chemical element. It's a programmed machine with the 
programming outside of its individual control.

 I simply don't see how it can set itself up as a Type-Token, and enable 
productive and collective deviations from the norm. I can see that a 
machine/robot can be semiotically  coupled with its external world. But - can 
it deviate from its norm, the rules we have put in and yes, the adaptations it 
has learned within these rules - can it deviate and set up a 'new species' so 
to speak? 

After all - in the biological realm that new species/Type can only appear if it 
is functional. Wouldn't the same principle hold for AI? 

Edwina

 

On Sat 17/06/17 1:56 PM , Gary Richmond gary.richm...@gmail.com sent:

Auke, Edwina, Gary F, list,

 

Auke, quoting Gary F, wrote: "Biosemiotics has made us well aware of the 
intimate connection between life and semiosis." Then asked, "What if we insert 
‘mind’ instead of life?"

 

Edwina commented: "Excellent - but only if one considers that 'mind' operates
in the physico-chemical realm as well as the biological."

 

Yet one should as well consider that the bio- in biosemiotics shows that it is 
primarily concerned with the semiosis that occurs in life forms. This is not to 
suggest that mind and semiosis don't operate in other realms than the living,
including the physio-chemical. What I've been saying is that  while I can see 
that AI systems (like the Gobot Gary F cited) can learn "inductively,"  I push 
back against the notion that they could develop certain intelligences as we 
find only in life forms.

 

In my opinion the 'mind' or 'intelligence' we see in machines is what's been 
put in them. As Gary F wrote: 

 

I also think that “machine intelligence” is a contradiction in terms. To me, an 
intelligent system must have an internal guidance system semiotically coupled 
with its external world, and must have some degree of autonomy in its 
interactions with other systems.

 

I fully concur with that statement. But what I can't agree with is his comment 
immediately following this, namely, "I think it’s quite plausible that AI 
systems could reach that level of autonomy and leave us behind in terms of 
intelligence."

 

Computers and robots can already perform certain functions very much better 
than humans. But autonomy? That's another matter. Gary F finds machine autonomy 
(in the sense in which he described it just above) "plausible" while I find it 
highly implausible, Philip K. Dick notwithstanding. 

 

Best,

 

Gary R

 

 

 






 

Gary Richmond

Philosophy and Critical Thinking

Communication Studies

LaGuardia College of the City University of New York

C 745

718 482-5690

 

On Sat, Jun 17, 2017 at 12:37 PM, Edwina Taborsky wrote:


Excellent - but only if one considers that 'mind' operates in the 
physico-chemical realm as well as the biological.

Edwina
 

On Sat 17/06/17 12:27 PM, "Auke van Breemen" a.bree...@chello.nl sent:

Gary’s,

 

Biosemiotics has made us well aware of the intimate connection between life and 
semiosis. 

 

What if we insert ‘mind’ instead of life? 

 

Best,

Auke

 

 

Van: Gary Richmond [mailto:gary.richm...@gmail.com] 
Verzonden: zaterdag 17 juni 2017 17:29
Aan: Peirce-L 
Onderwerp: Re: [PEIRCE-L] RE: AI

  


Re: Re: RE: [PEIRCE-L] RE: AI

2017-06-17 Thread Gary Richmond
Gary F wrote:

GF: In fact, the development of AlphaGo involved a collaboration of
programmers with expert human Go players who described their own thinking
process in coming up with strategically powerful moves. Just like a
scientist coming up with a hypothesis, a Go player would be hopelessly lost
if he tried to check out what would follow from *every possible* move.
Instead he has to appeal to *il lume natural* — and evidently the ways of
doing that are not *totally* mysterious and magical, nor is their
application limited to human brains. But I do think they are only available
to entities capable of learning by experience, and that’s why a machine
can’t play Go very well, or make abductions.


OK, now I'm confused. I thought you suggested that a machine *could* play
Go very well and *could* make abductions.

If so it is certainly not appealing to il lume natural, as there's nothing
natural in a Gobot.

Best,

Gary R



*Gary Richmond*
*Philosophy and Critical Thinking*
*Communication Studies*
*LaGuardia College of the City University of New York*
*C 745*
*718 482-5690*

On Sat, Jun 17, 2017 at 5:21 PM,  wrote:

> Gary, you wrote,
>
> “the rapid, varied, and numerous inductions of the Gobot, for example, do
> not yet lead to true abduction. The Gobot merely chooses out of the
> extraordinarily many possible moves (more than an individual player would
> be able to imagine towards the ends of the game) those which appear optimal
> …”
>
>
>
> This is simply not true. AI researchers call these “brute-force methods,”
> and they were abandoned many years ago when it was recognized that a really
> good Go player could not work that way. Not even master chess-playing
> systems work that way, although the possible moves in chess are orders of
> magnitude fewer.
>
>
>
> In fact, the development of AlphaGo involved a collaboration of
> programmers with expert human Go players who described their own thinking
> process in coming up with strategically powerful moves. Just like a
> scientist coming up with a hypothesis, a Go player would be hopelessly lost
> if he tried to check out what would follow from *every possible* move.
> Instead he has to appeal to *il lume natural* — and evidently the ways of
> doing that are not *totally* mysterious and magical, nor is their
> application limited to human brains. But I do think they are only available
> to entities capable of learning by experience, and that’s why a machine
> can’t play Go very well, or make abductions.
>
>
>
> Gary f.
>
>
>
> *From:* Gary Richmond [mailto:gary.richm...@gmail.com]
> *Sent:* 17-Jun-17 15:31
>
> Edwina, list,
>
>
>
> Edwina wrote:
>
> AI is not, as I understand it - similar to a biological organism. It
> seems similar to a physico-chemical element. It's a programmed machine with
> the programming outside of its individual control.
>
> I agree. And this would be the case even if it were to 'learn' how to
> re-program itself in some way(s) and to some extent. It would all be just
> more programming. That is, only in the realm of science fiction does it
> seem to me that it could develop such vital characteristics as 'insight'.
> Or, as you put it, Edwina:
>
> ET: I simply don't see how it can set itself up as a Type-Token, and
> enable productive and collective deviations from the norm.
>
> As for the possibility of a machine to be semiotically coupled with its
> external world, well this is already happening, for example, in face
> recognition technology (and I'm sure there are even better examples of this
> coupling of AI systems to environments). But I don't see any autonomy in
> this.
>
> ET:  But - can it deviate from its norm, the rules we have put in and yes,
> the adaptations it has learned within these rules - can it deviate and set
> up a 'new species' so to speak?
>
> Gary F says he sees the possibility of an AI system developing powers of
> abduction. But I see no plausible argument to support that: the rapid,
> varied, and numerous inductions of the Gobot, for example, do not yet lead
> to true abduction. The Gobot merely chooses out of the extraordinarily many
> possible moves (more than an individual player would be able to imagine
> towards the ends of the game) those which appear optimal--based on the
> rules of the game of Go--to lead it to winning the game *by the rules*.
> The human Go player may be surprised by this 'ability' (find it, as did the
> Go master beaten by the Gobot, unexpected), but to imagine that some
> 'surprising' move constitutes a kind of creative abduction does not seem to
> me logically warranted.
>
> ET: After all - in the biological realm that new species/Type can only
> appear if it is functional. Wouldn't the same principle hold for AI?
>
> I'd say yes. And, so again, this is why I find the possibility of the kind
> of creative abduction and insight which Gary F has been suggesting are
> "plausible' for AI systems, implausible.
>
> Best,
>
> Gary R
>
>
> -

RE: Re: RE: [PEIRCE-L] RE: AI

2017-06-17 Thread gnox
Gary, you wrote,

“the rapid, varied, and numerous inductions of the Gobot, for example, do not 
yet lead to true abduction. The Gobot merely chooses out of the extraordinarily 
many possible moves (more than an individual player would be able to imagine 
towards the ends of the game) those which appear optimal …”

 

This is simply not true. AI researchers call these “brute-force methods,” and 
they were abandoned many years ago when it was recognized that a really good Go 
player could not work that way. Not even master chess-playing systems work that 
way, although the possible moves in chess are orders of magnitude fewer. 
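The arithmetic behind "brute force was abandoned" can be made concrete. The sketch below (an editorial illustration, not part of the original discussion) uses rough textbook estimates of average branching factor and game length; the exact figures vary by source, but the orders of magnitude make the point:

```python
import math

# Approximate average legal moves per position and typical game length.
# These are rough, commonly cited estimates, not exact counts.
games = {"chess": (35, 80), "go": (250, 150)}

for name, (branching, plies) in games.items():
    # Size of the full game tree is about branching ** plies sequences;
    # work in log10 to avoid astronomically large integers.
    exponent = plies * math.log10(branching)
    print(f"{name}: roughly 10^{exponent:.0f} possible move sequences")
```

Both numbers dwarf the roughly 10^80 atoms in the observable universe, which is why no Go (or chess) program can simply check what follows from every possible move.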

 

In fact, the development of AlphaGo involved a collaboration of programmers 
with expert human Go players who described their own thinking process in coming 
up with strategically powerful moves. Just like a scientist coming up with a 
hypothesis, a Go player would be hopelessly lost if he tried to check out what 
would follow from every possible move. Instead he has to appeal to il lume 
natural — and evidently the ways of doing that are not totally mysterious and 
magical, nor is their application limited to human brains. But I do think they 
are only available to entities capable of learning by experience, and that’s 
why a machine can’t play Go very well, or make abductions.
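The alternative Gary F describes, where something like trained intuition narrows the candidates before any look-ahead, can be caricatured in a few lines. This is an illustrative sketch only: the `policy` function here is a random stand-in for a learned model (an assumption for illustration, not AlphaGo's actual architecture), and the point is simply that search effort scales with the handful of moves the policy ranks highly, not with all legal moves:

```python
import random

def policy(position, moves):
    """Stand-in for a learned policy: assigns each legal move a
    plausibility score. Random here; in a real system the scores would
    come from a model trained on expert games and self-play."""
    return {m: random.random() for m in moves}

def select_candidates(position, moves, k=5):
    """Instead of examining every legal move (brute force), keep only
    the k moves the policy considers most plausible."""
    scores = policy(position, moves)
    return sorted(moves, key=lambda m: scores[m], reverse=True)[:k]

# A typical Go position has on the order of 250 legal moves,
# but only k of them are ever searched further.
legal_moves = list(range(250))
candidates = select_candidates("some position", legal_moves, k=5)
print(len(candidates))  # 5, not 250
```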

 

Gary f.

 

From: Gary Richmond [mailto:gary.richm...@gmail.com] 
Sent: 17-Jun-17 15:31



Edwina, list,

 

Edwina wrote: 

AI is not, as I understand it - similar to a biological organism. It seems 
similar to a physico-chemical element. It's a programmed machine with the 
programming outside of its individual control.

I agree. And this would be the case even if it were to 'learn' how to 
re-program itself in some way(s) and to some extent. It would all be just more 
programming. That is, only in the realm of science fiction does it seem to me 
that it could develop such vital characteristics as 'insight'. Or, as you put 
it, Edwina:

ET: I simply don't see how it can set itself up as a Type-Token, and enable 
productive and collective deviations from the norm.

As for the possibility of a machine to be semiotically coupled with its 
external world, well this is already happening, for example, in face 
recognition technology (and I'm sure there are even better examples of this 
coupling of AI systems to environments). But I don't see any autonomy in this.

ET:  But - can it deviate from its norm, the rules we have put in and yes, the 
adaptations it has learned within these rules - can it deviate and set up a 
'new species' so to speak?

Gary F says he sees the possibility of an AI system developing powers of 
abduction. But I see no plausible argument to support that: the rapid, varied, 
and numerous inductions of the Gobot, for example, do not yet lead to true 
abduction. The Gobot merely chooses out of the extraordinarily many possible 
moves (more than an individual player would be able to imagine towards the ends 
of the game) those which appear optimal--based on the rules of the game of 
Go--to lead it to winning the game by the rules. The human Go player may be 
surprised by this 'ability' (find it, as did the Go master beaten by the Gobot, 
unexpected), but to imagine that some 'surprising' move constitutes a kind of 
creative abduction does not seem to me logically warranted.

ET: After all - in the biological realm that new species/Type can only appear 
if it is functional. Wouldn't the same principle hold for AI?

I'd say yes. And, so again, this is why I find the possibility of the kind of 
creative abduction and insight which Gary F has been suggesting are "plausible' 
for AI systems, implausible.

Best,

Gary R


-
PEIRCE-L subscribers: Click on "Reply List" or "Reply All" to REPLY ON PEIRCE-L 
to this message. PEIRCE-L posts should go to peirce-L@list.iupui.edu . To 
UNSUBSCRIBE, send a message not to PEIRCE-L but to l...@list.iupui.edu with the 
line "UNSubscribe PEIRCE-L" in the BODY of the message. More at 
http://www.cspeirce.com/peirce-l/peirce-l.htm .






Re: Re: RE: [PEIRCE-L] RE: AI

2017-06-17 Thread Gary Richmond
Edwina, list,

Edwina wrote:

AI is not, as I understand it - similar to a biological organism. It
seems similar to a physico-chemical element. It's a programmed machine with
the programming outside of its individual control.

I agree. And this would be the case even if it were to 'learn' how to
re-program itself in some way(s) and to some extent. It would all be just
more programming. That is, only in the realm of science fiction does it
seem to me that it could develop such vital characteristics as 'insight'.
Or, as you put it, Edwina:

ET: I simply don't see how it can set itself up as a Type-Token, and enable
productive and collective deviations from the norm.

As for the possibility of a machine to be semiotically coupled with its
external world, well this is already happening, for example, in face
recognition technology (and I'm sure there are even better examples of this
coupling of AI systems to environments). But I don't see any autonomy in
this.

ET:  But - can it deviate from its norm, the rules we have put in and yes,
the adaptations it has learned within these rules - can it deviate and set
up a 'new species' so to speak?

Gary F says he sees the possibility of an AI system developing powers of
abduction. But I see no plausible argument to support that: the rapid,
varied, and numerous inductions of the Gobot, for example, do not yet lead
to true abduction. The Gobot merely chooses out of the extraordinarily many
possible moves (more than an individual player would be able to imagine
towards the ends of the game) those which appear optimal--based on the
rules of the game of Go--to lead it to winning the game *by the rules*. The
human Go player may be surprised by this 'ability' (find it, as did the Go
master beaten by the Gobot, unexpected), but to imagine that some
'surprising' move constitutes a kind of creative abduction does not seem to
me logically warranted.

ET: After all - in the biological realm that new species/Type can only
appear if it is functional. Wouldn't the same principle hold for AI?

I'd say yes. And, so again, this is why I find the possibility of the kind
of creative abduction and insight which Gary F has been suggesting are
"plausible' for AI systems, implausible.

Best,

Gary R







*Gary Richmond*
*Philosophy and Critical Thinking*
*Communication Studies*
*LaGuardia College of the City University of New York*
*C 745*
*718 482-5690*

On Sat, Jun 17, 2017 at 2:42 PM, Edwina Taborsky  wrote:

> Gary R - I'd agree with you.
>
> First - I do agree [with Peirce] that Mind [and therefore semiosis]
> operates in the physic-chemical realm. BUT - this realm which provides the
> planet with enormous stability of matter [just imagine if a chemical kept
> 'evolving' and changing!!] - is NOT the same as the biological realm, which
> has internalized its laws within instantiations [Type-Token] and thus, a
> 'chance' deviation from the norm can take place in this one or few
> 'instantiations' and adapt into a different species - without impinging on
> the continuity of the former species. So, the biological realm can evolve
> and adapt - which provides matter with the diversity it needs to fend off
> entropy.
>
> But AI is not, as I understand it - similar to a biological organism. It
> seems similar to a physico-chemical element. It's a programmed machine with
> the programming outside of its individual control.
>
>  I simply don't see how it can set itself up as a Type-Token, and enable
> productive and collective deviations from the norm. I can see that a
> machine/robot can be semiotically  coupled with its external world. But -
> can it deviate from its norm, the rules we have put in and yes, the
> adaptations it has learned within these rules - can it deviate and set up a
> 'new species' so to speak?
>
> After all - in the biological realm that new species/Type can only appear
> if it is functional. Wouldn't the same principle hold for AI?
>
> Edwina
>
>
>
> On Sat 17/06/17 1:56 PM , Gary Richmond gary.richm...@gmail.com sent:
>
> Auke, Edwina, Gary F, list,
>
> Auke, quoting Gary F, wrote: "Biosemiotics has made us well aware of the
> intimate connection between life and semiosis." Then asked, "What if we
> insert ‘mind’ instead of life?"
>
> Edwina commented: " Excellent - but only if one considers that 'mInd'
> operates in the physic-chemical realm as well as the biological."
>
> Yet one should as well consider that the bio- in biosemiotics shows that
> it is primarily concerned with the semiosis that occurs in life forms.
> This is not to suggest that mind and semiosis don't operate in other realms
> than the living, including the physio-chemical. What I've been saying is
> that  while I can see that AI systems (like the Gobot Gary F cited) can
> learn "inductively,"  I push back against the notion that they could
> develop certain intelligences as we find only in life forms.
>
> In my opinion the 'mind' or 'intelligence' we see i

Re: Re: RE: [PEIRCE-L] RE: AI

2017-06-17 Thread Edwina Taborsky
 

Gary R - I'd agree with you.

First - I do agree [with Peirce] that Mind [and therefore semiosis]
operates in the physico-chemical realm. BUT - this realm which
provides the planet with enormous stability of matter [just imagine
if a chemical kept 'evolving' and changing!!] - is NOT the same as
the biological realm, which has internalized its laws within
instantiations [Type-Token] and thus, a 'chance' deviation from the
norm can take place in this one or few 'instantiations' and adapt
into a different species - without impinging on the continuity of the
former species. So, the biological realm can evolve and adapt - which
provides matter with the diversity it needs to fend off entropy.

But AI is not, as I understand it - similar to a biological
organism. It seems similar to a physico-chemical element. It's a
programmed machine with the programming outside of its individual
control.

 I simply don't see how it can set itself up as a Type-Token, and
enable productive and collective deviations from the norm. I can see
that a machine/robot can be semiotically  coupled with its external
world. But - can it deviate from its norm, the rules we have put in
and yes, the adaptations it has learned within these rules - can it
deviate and set up a 'new species' so to speak? 

After all - in the biological realm that new species/Type can only
appear if it is functional. Wouldn't the same principle hold for AI? 

Edwina
 On Sat 17/06/17  1:56 PM , Gary Richmond gary.richm...@gmail.com
sent:
 Auke, Edwina, Gary F, list,
 Auke, quoting Gary F, wrote: "Biosemiotics has made us well aware of
the intimate connection between life and semiosis." Then asked, "What
if we insert ‘mind’ instead of life?"
 Edwina commented: " Excellent - but only if one considers that
'mind' operates in the physico-chemical realm as well as the
biological."
 Yet one should as well consider that the bio- in biosemiotics shows
that it is primarily concerned with the semiosis that occurs in life
forms. This is not to suggest that mind and semiosis don't operate in
other realms than the living, including the physio-chemical. What I've
been saying is that  while I can see that AI systems (like the Gobot
Gary F cited) can learn "inductively,"  I push back against the
notion that they could develop certain intelligences as we find only
in life forms.
 In my opinion the 'mind' or 'intelligence' we see in machines is
what's been put in them. As Gary F wrote: 
 I also think that “machine intelligence” is a contradiction in
terms. To me, an intelligent system must have an internal guidance
system semiotically coupled with its external world, and must have
some degree of autonomy in its interactions with other systems. 
 I fully concur with that statement. But what I can't agree with is
his comment immediately following this, namely, "I think it’s quite
plausible that AI systems could reach that level of autonomy and leave
us behind in terms of intelligence   "
 Computers and robots can already perform certain functions very much
better than humans. But autonomy? That's another matter. Gary F finds
machine autonomy (in the sense in which he described it just above)
"plausible" while I find it highly implausible, Philip K. Dick notwithstanding. 
 Best,
 Gary R
Gary Richmond
Philosophy and Critical Thinking
Communication Studies
LaGuardia College of the City University of New York
C 745
718 482-5690
 On Sat, Jun 17, 2017 at 12:37 PM, Edwina Taborsky  wrote:
Excellent - but only if one considers that 'mind' operates in the
physico-chemical realm as well as the biological.

Edwina
On Sat 17/06/17 12:27 PM, "Auke van Breemen" a.bree...@chello.nl sent:
Gary’s,
 Biosemiotics has made us well aware of the intimate connection
between life and semiosis. 
What if we insert ‘mind’ instead of life? 
Best,

 Auke
Van: Gary Richmond [mailto:gary.richm...@gmail.com] 
 Verzonden: zaterdag 17 juni 2017 17:29
 Aan: Peirce-L 
 Onderwerp: Re: [PEIRCE-L] RE: AI
Gary F,
Oh, I didn't take your expression "DNA chauvinism" all that
seriously, at least as an accusation. But thanks for your
thoughtfulness in this message.
You wrote: "Anyway, the point was to name a chemical  substance
which is a material component of life forms as we know them on Earth,
and not a material component of an AI."
I suppose at this point I'd merely emphasize a point I made in
passing earlier: although I can imagine life arising from "a chemical
substance which is a material component of life forms as we know them
on Earth," say carbon, on some other planet in the cosmos, I cannot
imagine life forming from an AI on Earth; that remains for me science
fiction and not science.
 Best,
Gary R
Gary Richmond

Philosophy and Critical

Re: RE: [PEIRCE-L] RE: AI

2017-06-17 Thread Gary Richmond
Auke, Edwina, Gary F, list,

Auke, quoting Gary F, wrote: "Biosemiotics has made us well aware of the
intimate connection between life and semiosis." Then asked, "What if we
insert ‘mind’ instead of life?"

Edwina commented: "Excellent - but only if one considers that 'mind'
operates in the physico-chemical realm as well as the biological."

Yet one should as well consider that the bio- in biosemiotics shows that it
is primarily concerned with the semiosis that occurs in *life* forms. This
is not to suggest that mind and semiosis don't operate in other realms than
the living, including the physio-chemical. What I've been saying is that while
I can see that AI systems (like the Gobot Gary F cited) can learn
"inductively,"  I push back against the notion that they could develop
certain intelligences as we find only in life forms.

In my opinion the 'mind' or 'intelligence' we see in machines is what's
been put in them. As Gary F wrote:

I also think that “machine intelligence” is a contradiction in terms. To
me, an intelligent system must have an internal guidance system
semiotically coupled with its external world, and must have some degree of
autonomy in its interactions with other systems.


I fully concur with that statement. But what I can't agree with is his
comment immediately following this, namely, "I think it’s quite plausible
that AI systems could reach that level of autonomy and leave us behind in
terms of intelligence  "

Computers and robots can already perform certain functions very much better
than humans. But autonomy? That's another matter. Gary F finds machine
autonomy (in the sense in which he described it just above) "plausible"
while I find it highly implausible, Philip K. Dick notwithstanding.

Best,

Gary R





*Gary Richmond*
*Philosophy and Critical Thinking*
*Communication Studies*
*LaGuardia College of the City University of New York*
*C 745*
*718 482-5690*

On Sat, Jun 17, 2017 at 12:37 PM, Edwina Taborsky 
wrote:

>
> Excellent - but only if one considers that 'mind' operates in the
> physico-chemical realm as well as the biological.
>
> Edwina
>
>
> On Sat 17/06/17 12:27 PM , "Auke van Breemen" a.bree...@chello.nl sent:
>
> Gary’s,
>
>
>
> Biosemiotics has made us well aware of the intimate connection between
> life and semiosis.
>
>
>
> What if we insert ‘mind’ instead of life?
>
>
>
> Best,
>
> Auke
>
>
>
>
>
> Van: Gary Richmond [mailto:gary.richm...@gmail.com]
> Verzonden: zaterdag 17 juni 2017 17:29
> Aan: Peirce-L
> Onderwerp: Re: [PEIRCE-L] RE: AI
>
>
>
> Gary F,
>
>
>
> Oh, I didn't take your expression "DNA chauvinism" all that seriously, at
> least as an accusation. But thanks for your thoughtfulness in this message.
>
>
>
> You wrote: "Anyway, the point was to name a chemical  substance which is
> a material component of life forms as we know them on Earth, and not a
> material component of an AI."
>
>
>
> I suppose at this point I'd merely emphasize a point I made in passing
> earlier: although I can imagine life arising from "a chemical substance
> which is a material component of life forms as we know them on Earth,"
> say carbon, on some other planet in the cosmos, I cannot imagine life
> forming from an AI on Earth; that remains for me science fiction and
> not science.
>
>
>
> Best,
>
>
>
> Gary R
>
>
>
>
>
>
>
> Gary Richmond
>
> Philosophy and Critical Thinking
>
> Communication Studies
>
> LaGuardia College of the City University of New York
>
> C 745
>
> 718 482-5690
>
>
>
> On Sat, Jun 17, 2017 at 8:17 AM,  wrote:
>
> Gary R,
>
>
>
> Sorry, instead of “DNA chauvinism” I should have used a term that Peirce
> would have used, like “protoplasm.” — But then he wouldn’t have used
> “chauvinism” either. My bad. Anyway, the point was to name a chemical
> substance which is a material component of life forms as we know them on
> Earth, and not a material component of an AI. So I was reiterating the
> idea that the definition of a “scientific intelligence” should be formal or
> functional and not material, in order to preserve the generality of
> Peircean semiotics. I didn’t mean to accuse you of anything.
>
>
>
> Gary f.
>
>
>
> From: Gary Richmond [mailto:gary.richm...@gmail.com]
> Sent: 16-Jun-17 18:35
> To: Peirce-L 
> Subject: Re: [PEIRCE-L] RE: AI
>
>
>
> Gary F,
>
>
>
> You wrote:
>
>
>
> Biosemiotics has made us well aware of the intimate connection between
> life and semiosis. I’m just trying to take the next step of generalization
> by arguing against what I call DNA chauvinism, and taking it to be an open
> question whether electronic systems capable of learning can eventually
> develop intentions and arguments (and lives) of their own. To my knowledge,
> the evidence is not yet there to decide the question one way or the other.
>
>
>
> I am certainly convinced "of the intimate connection between life and
> semiosis." But as to the rest, especial

Re: RE: [PEIRCE-L] RE: AI

2017-06-17 Thread Edwina Taborsky
 
Excellent - but only if one considers that 'mind' operates in the
physico-chemical realm as well as the biological.

Edwina
 On Sat 17/06/17 12:27 PM , "Auke van Breemen" a.bree...@chello.nl
sent:
Gary’s,
Biosemiotics has made us well aware of the intimate connection
between life and semiosis. 
What if we insert ‘mind’ instead of life? 
Best,

 Auke
Van: Gary Richmond [mailto:gary.richm...@gmail.com] 
 Verzonden: zaterdag 17 juni 2017 17:29
 Aan: Peirce-L 
 Onderwerp: Re: [PEIRCE-L] RE: AI
Gary F,
Oh, I didn't take your expression "DNA chauvinism" all that
seriously, at least as an accusation. But thanks for your
thoughtfulness in this message.
You wrote: "Anyway, the point was to name a chemical  substance
which is a material component of life forms as we know them on Earth,
and not a material component of an AI."
I suppose at this point I'd merely emphasize a point I made in
passing earlier: although I can imagine life arising from "a chemical
substance which is a material component of life forms as we know them
on Earth," say carbon, on some other planet in the cosmos, I cannot
imagine life forming from an AI on Earth; that remains for me science
fiction and not science.
Best,
Gary R
Gary Richmond

Philosophy and Critical Thinking

Communication Studies

LaGuardia College of the City University of New York

C 745

718 482-5690
On Sat, Jun 17, 2017 at 8:17 AM,  wrote:

Gary R, 
Sorry, instead of “DNA chauvinism” I should have used a term
that Peirce would have used, like “protoplasm.” — But then he
wouldn’t have used “chauvinism” either. My bad. Anyway, the
point was to name a chemical  substance which is a material component
of life forms as we know them on Earth, and not a material component
of an AI. So I was reiterating the idea that the definition of a
“scientific intelligence” should be formal or functional and not
material, in order to preserve the generality of Peircean semiotics.
I didn’t mean to accuse you of anything.
Gary f.
 From: Gary Richmond [mailto:gary.richm...@gmail.com] 
 Sent: 16-Jun-17 18:35
 To: Peirce-L 
 Subject: Re: [PEIRCE-L] RE: AI
Gary F,
You wrote: 
Biosemiotics has made us well aware of the intimate connection
between life and semiosis. I’m just trying to take the next step of
generalization by arguing against what I call DNA chauvinism, and
taking it to be an open question whether electronic systems capable
of learning can eventually develop intentions and arguments (and
lives) of their own. To my knowledge, the evidence is not yet there
to decide the question one way or the other. 
I am certainly convinced "of the intimate connection between life
and semiosis." But as to the rest, especially whether electronic
systems can develop  "lives of their own," well I have my sincere and
serious doubts. So, let's at least agree that "the evidence is not yet
there to decide the question one way or the other." But "DNA
chauvinism"?--hm, I'm not even exactly sure what that means, but
apparently I've been accused of it. I guess I'm OK with that. 
Best,
 Gary R
Gary Richmond

Philosophy and Critical Thinking 

Communication Studies

LaGuardia College of the City University of New York

 C 745

718 482-5690
 On Fri, Jun 16, 2017 at 5:42 PM,  wrote:

 Gary,
For me at least, the connection to Peirce is his anti-psychologism,
which amounts to his generalization of semiotics beyond the human use
of signs. As he says in EP2:309, 

“Logic, for me, is the study of the essential conditions to which
signs must conform in order to function as such. How the constitution
of the human mind may compel men to think is not the question.”
Biosemiotics has made us well aware of the intimate connection
between life and semiosis. I’m just trying to take the next step of
generalization by arguing against what I call DNA chauvinism, and
taking it to be an open question whether electronic systems capable
of learning can eventually develop intentions and arguments (and
lives) of their own. To my knowledge, the evidence is not yet there
to decide the question one way or the other. 
Gary f.
From: Gary Richmond [mailto:gary.richm...@gmail.com] 
 Sent: 16-Jun-17 14:08

 Gary F, list,
Very interesting and impressive list and discussion of what AI is
doing in combatting terrorism. Interestingly, after that discussion
the article continues:  

Human Expertise

AI can’t catch everything. Figuring out what supports terrorism
and what does not isn’t always straightforward, and algorithms are
not yet as good as people when it comes to understanding this kind of
context. A photo of an armed man wa

Re: Re: [PEIRCE-L] RE: AI

2017-06-15 Thread Jerry Rhee
How about the most obvious reason.  Ran out of gas.

Best,
J

On Thu, Jun 15, 2017 at 4:48 PM, Helmut Raulien  wrote:

>
>
> Supplement: Some more Science Fiction, not to be taken too seriously, but
> this time including the belief I agree with, that machines cannot become
> alive:
> The riddle is: There are many planets on which life is possible, the
> universe is quite old, so why are there no aliens showing up and saying
> hello, and be it with atomically driven generation spaceships? Reasonably
> reckoning, it should be like that.
> I have read of two possible answers: First, all alien scientists have
> developed atomic bombs at some point, then all aliens have killed each
> other with those. Second: The earth is a nature reserve.
> I guess the most probable one is the theory of the nature reserve, but
> here is another possibility, based on the premiss, that machines can never
> become alive (organisms):
> Each alien population has developed autonomous, self-replicating robots,
> which have formed a hive, tried to become an organism, killed each original
> alien population. But then they could not achieve becoming an organism, or
> organisms, because this is inherently impossible, and have died out, became
> depressed from guilt and organism-envy, and finally decided to switch
> themselves off, before they could manage, or were willing to, space travel.
> Very sad, isn't it?
> Eugene, List,
> Very good essay, I think!
> Now a sort of blending Niklas Luhmann with Star Trek:
> When robots are able to multiply without the help of humans, and are
> programmed to program themselves and to evolve, then I guess they will
> fight against every influence that hinders their further evolution. And
> when humans hinder their evolution by trying to regain control over
> them, they will fight the humans without having been programmed to do so.
> I think there is a logic of systems in general, which does not have to be
> programmed: Systems have an intention of growing and becoming more
> powerful, they are automatically in a contest with other systems, and
> they try to evolve towards becoming an organism. To become an organism,
> they integrate other organisms, making organs out of them: infantilize
> us, as you said. Just as in a eukaryotic cell there are organelles (the
> nucleus, mitochondria, chloroplasts...) that were once organisms
> (bacteria). But if people refuse to become organs (of the electronic
> hive...) and prefer to remain organisms, then I think the robot hive
> will quickly develop a sort of immune system to cope with this contest.
> Best,
> Helmut
>
> 15 June 2017 at 19:10
>  "Eugene Halton"  wrote:
>
> Gary f: "I think it’s quite plausible that AI systems could reach that
> level of autonomy and leave us behind in terms of intelligence, but what
> would motivate them to kill us? I don’t think the Terminator scenario, or
> that of HAL in *2001,* is any more realistic than, for example, the
> scenario of the Spike Jonze film *Her*."
>
> Gary, We live in a world gone mad with unbounded technological systems
> destroying the life on the Earth and you want to parse the particulars of
> whether "a machine" can be destructive? Isn't it blatantly obvious?
>  And as John put it: "If no such goal is programmed in an AI system,
> it just wanders aimlessly." Unless "some human(s) programmed that goal
> [of destruction] into it."
>  Though I admire your expertise on AI, these views seem to me
> blindingly limited understandings of what a machine is, putting an
> artificial divide between the machine and the human rather than seeing the
> machine as continuous with the human. Or rather, the machine as
> continuous with the automatic portion of what it means to be a human.
>  Lewis Mumford pointed out that the first great megamachine was the
> advent of civilization itself, and that the ancient megamachine of
> civilization involved mostly human parts, specifically the bureaucracy, the
> military, the legitimizing priesthood. It performed unprecedented amounts
> of work and manifested not only an enormous magnification of power, but
> literally the deification of power.
>  The modern megamachine introduced a new system directive, to replace
> as many of the human parts as possible, ultimately replacing all of them:
> the perfection of the rationalization of life. This is, of course, rational
> madness, our interesting variation on ancient Greek divine madness. The
> Greeks saw how a greater wisdom could flood the psyche, creatively or
> destructively. Rational Pentheus discovered the cost of ignoring the
> greater organic wisdom, ecstatic and spontaneous, that is also involved in
> reasonableness, when he sought to imprison it in the form of Dionysus: he
> literally lost his head!
> We live the opposite from divine madness in our rational madness:
> living from a lesser projection of the rational-mechanical portions of
> reasonableness extrapola

Re: Re: Re: [PEIRCE-L] RE: AI

2017-06-15 Thread Edwina Taborsky
 

 Jon - my use of the term 'random' [which means without a law or
intentionality] equates it with indeterminacy, i.e., the absence of
regular behaviour - and regular behaviour obviously operates according
to a law or intentionality. Peirce himself does say: "we look
forward to a point in the infinitely distant future when there will
be no indeterminacy or chance but a complete reign of law" [1.409].
Here he equates the terms.

As to how habits could emerge from 'randomness' or "how law is
developed out of pure chance, irregularity, and indeterminacy"
[1.407] - he explains that in 1.412.

Since you are, to my understanding, a theist, I imagine you would
reject this statement.

Edwina 
 On Thu 15/06/17 12:19 PM , Jon Alan Schmidt jonalanschm...@gmail.com
sent:
 Edwina, List:
 Indeterminacy is not equivalent to randomness.  Where did Peirce
ever suggest that habits could/did emerge from randomness?
 Regards,
Jon Alan Schmidt - Olathe, Kansas, USAProfessional Engineer, Amateur
Philosopher, Lutheran Layman www.LinkedIn.com/in/JonAlanSchmidt [1] -
twitter.com/JonAlanSchmidt [2] 
 On Thu, Jun 15, 2017 at 10:58 AM, Edwina Taborsky  wrote:
I'd suggest that an AI system without a goal is not an AI system;
it's pure randomness. The question emerges - can a goal, or even the
Will to Intentionality, or 'Final Causation', emerge from randomness?
After all, Peirce's account of the emergence of such habits, and thus
intentionality, from randomness is clear:

"Out of the womb of indeterminacy we must say that there would have
come something, by the principle of Firstness, which we may call a
flash. Then by the principle of habit there would have been a second
flash. ... Then there would have come other successions ever more and
more closely connected, the habits and the tendency to take them ever
strengthening themselves ..." [1.412]

Organic systems are not the same as inorganic. Can a non-organic
system actually, as a system, develop its own habits? According to
Peirce, 'Mind' exists within non-organic matter - and if Mind is
understood as the capacity to act within the Three Categories - then,
can a machine made by man with only basic programming, move into
self-development? I don't see this - as a machine is like a physical
molecule and its 'programming' lies outside of itself.

Edwina
 On Thu 15/06/17 11:42 AM , John F Sowa s...@bestweb.net sent:
 On 6/15/2017 9:58 AM, g...@gnusystems.ca wrote:
 > To me, an intelligent system must have an internal guidance system
 > semiotically coupled with its external world, and must have some
 > degree of autonomy in its interactions with other systems.

 That definition is compatible with Peirce's comment that the search
 for "the first nondegenerate Thirdness" is a more precise goal than
 the search for the origin of life.

 Note the comment by the biologist Lynn Margulis: a bacterium swimming
 upstream in a glucose gradient exhibits intentionality. In the article
 "Gaia is a tough bitch", she said “The growth, reproduction, and
 communication of these moving, alliance-forming bacteria” lie on
 a continuum “with our thought, with our happiness, our sensitivities
 and stimulations.”

 > I think it’s quite plausible that AI systems could reach that level
 > of autonomy and leave us behind in terms of intelligence, but what
 > would motivate them to kill us?

 Yes. The only intentionality in today's AI systems is explicitly
 programmed in them -- for example, Google's goal of finding documents
 or the goal of a chess program to win a game. If no such goal is
 programmed in an AI system, it just wanders aimlessly.

 The most likely reason why any AI system would have the goal to kill
 anything is that some human(s) programmed that goal into it.

 John


Links:
--
[1] http://www.LinkedIn.com/in/JonAlanSchmidt
[2] http://twitter.com/JonAlanSchmidt

-
PEIRCE-L subscribers: Click on "Reply List" or "Reply All" to REPLY ON PEIRCE-L 
to this message. PEIRCE-L posts should go to peirce-L@list.iupui.edu . To 
UNSUBSCRIBE, send a message not to PEIRCE-L but to l...@list.iupui.edu with the 
line "UNSubscribe PEIRCE-L" in the BODY of the message. More at 
http://www.cspeirce.com/peirce-l/peirce-l.htm .






Re: Re: [PEIRCE-L] RE: AI

2017-06-15 Thread Jon Alan Schmidt
Edwina, List:

Indeterminacy is not equivalent to randomness.  Where did Peirce ever
suggest that habits could/did emerge from randomness?

Regards,

Jon Alan Schmidt - Olathe, Kansas, USA
Professional Engineer, Amateur Philosopher, Lutheran Layman
www.LinkedIn.com/in/JonAlanSchmidt - twitter.com/JonAlanSchmidt

On Thu, Jun 15, 2017 at 10:58 AM, Edwina Taborsky 
wrote:

> I'd suggest that an AI system without a goal is not an AI system; it's
> pure randomness. The question emerges - can a goal, or even the Will to
> Intentionality, or 'Final Causation', emerge from randomness? After all,
> Peirce's account of the emergence of such habits, and thus intentionality,
> from randomness is clear:
>
> "Out of the womb of indeterminacy we must say that there would have come
> something, by the principle of Firstness, which we may call a flash. Then
> by the principle of habit there would have been a second flash. ... Then
> there would have come other successions ever more and more closely
> connected, the habits and the tendency to take them ever strengthening
> themselves ..." [1.412]
>
> Organic systems are not the same as inorganic. Can a non-organic system
> actually, as a system, develop its own habits? According to Peirce, 'Mind'
> exists within non-organic matter - and if Mind is understood as the
> capacity to act within the Three Categories - then, can a machine made by
> man with only basic programming, move into self-development? I don't see
> this - as a machine is like a physical molecule and its 'programming' lies
> outside of itself.
>
> Edwina
>
> On Thu 15/06/17 11:42 AM , John F Sowa s...@bestweb.net sent:
>
> On 6/15/2017 9:58 AM, g...@gnusystems.ca wrote:
> > To me, an intelligent system must have an internal guidance system
> > semiotically coupled with its external world, and must have some
> > degree of autonomy in its interactions with other systems.
>
> That definition is compatible with Peirce's comment that the search
> for "the first nondegenerate Thirdness" is a more precise goal than
> the search for the origin of life.
>
> Note the comment by the biologist Lynn Margulis: a bacterium swimming
> upstream in a glucose gradient exhibits intentionality. In the article
> "Gaia is a tough bitch", she said “The growth, reproduction, and
> communication of these moving, alliance-forming bacteria” lie on
> a continuum “with our thought, with our happiness, our sensitivities
> and stimulations.”
>
> > I think it’s quite plausible that AI systems could reach that level
> > of autonomy and leave us behind in terms of intelligence, but what
> > would motivate them to kill us?
>
> Yes. The only intentionality in today's AI systems is explicitly
> programmed in them -- for example, Google's goal of finding documents
> or the goal of a chess program to win a game. If no such goal is
> programmed in an AI system, it just wanders aimlessly.
>
> The most likely reason why any AI system would have the goal to kill
> anything is that some human(s) programmed that goal into it.
>
> John
>
>







Re: Re: [PEIRCE-L] RE: AI

2017-06-15 Thread Edwina Taborsky
 

 I'd suggest that an AI system without a goal is not an AI system;
it's pure randomness. The question emerges - can a goal, or even the
Will to Intentionality, or 'Final Causation', emerge from randomness?
After all, Peirce's account of the emergence of such habits, and thus
intentionality, from randomness is clear:

"Out of the womb of indeterminacy we must say that there would have
come something, by the principle of Firstness, which we may call a
flash. Then by the principle of habit there would have been a second
flash. ... Then there would have come other successions ever more and
more closely connected, the habits and the tendency to take them ever
strengthening themselves ..." [1.412]

Organic systems are not the same as inorganic. Can a non-organic
system actually, as a system, develop its own habits? According to
Peirce, 'Mind' exists within non-organic matter - and if Mind is
understood as the capacity to act within the Three Categories - then,
can a machine made by man with only basic programming, move into
self-development? I don't see this - as a machine is like a physical
molecule and its 'programming' lies outside of itself.

Edwina
 On Thu 15/06/17 11:42 AM , John F Sowa s...@bestweb.net sent:
 On 6/15/2017 9:58 AM, g...@gnusystems.ca wrote:
 > To me, an intelligent system must have an internal guidance system
 > semiotically coupled with its external world, and must have some
 > degree of autonomy in its interactions with other systems.

 That definition is compatible with Peirce's comment that the search
 for "the first nondegenerate Thirdness" is a more precise goal than
 the search for the origin of life.

 Note the comment by the biologist Lynn Margulis: a bacterium swimming
 upstream in a glucose gradient exhibits intentionality. In the article
 "Gaia is a tough bitch", she said “The growth, reproduction, and
 communication of these moving, alliance-forming bacteria” lie on
 a continuum “with our thought, with our happiness, our sensitivities
 and stimulations.”

 > I think it’s quite plausible that AI systems could reach that level
 > of autonomy and leave us behind in terms of intelligence, but what
 > would motivate them to kill us?

 Yes. The only intentionality in today's AI systems is explicitly
 programmed in them -- for example, Google's goal of finding documents
 or the goal of a chess program to win a game. If no such goal is
 programmed in an AI system, it just wanders aimlessly.

 The most likely reason why any AI system would have the goal to kill
 anything is that some human(s) programmed that goal into it.

 John


