Gary F, list,

A very interesting and impressive discussion of what AI is doing to
combat terrorism. Interestingly, after that discussion the article
continues:

*Human Expertise*

AI can’t catch everything. Figuring out what supports terrorism and what
does not isn’t always straightforward, and algorithms are not yet as good
as people when it comes to understanding this kind of context. A photo of
an armed man waving an ISIS flag might be propaganda or recruiting
material, but could be an image in a news story. Some of the most effective
criticisms of brutal groups like ISIS utilize the group’s own propaganda
against it. To understand more nuanced cases, we need human expertise.

The paragraph above suggests that "algorithms are not yet as good as
people" when it comes to nuance and understanding context. Will they ever
be? No doubt they'll improve considerably in time.

In my opinion, AI is best seen as a human tool which like many tools can be
used for good or evil. But we're getting pretty far from anything
Peirce-related, so I'll leave it at that.

Best,

Gary R







*Gary Richmond*
*Philosophy and Critical Thinking*
*Communication Studies*
*LaGuardia College of the City University of New York*
*C 745*
*718 482-5690*

On Fri, Jun 16, 2017 at 1:36 PM, <g...@gnusystems.ca> wrote:

> Footnote:
>
> In case anyone is wondering what AIs are actually doing these days, this
> just in:
>
> https://newsroom.fb.com/news/2017/06/how-we-counter-terrorism/
>
>
>
> gary f.
>
>
>
> -----Original Message-----
> From: John F Sowa [mailto:s...@bestweb.net]
> Sent: 15-Jun-17 11:43
> To: peirce-l@list.iupui.edu
> Subject: Re: [PEIRCE-L] RE: AI
>
>
>
> On 6/15/2017 9:58 AM, g...@gnusystems.ca wrote:
>
> > To me, an intelligent system must have an internal guidance system
>
> > semiotically coupled with its external world, and must have some
>
> > degree of autonomy in its interactions with other systems.
>
>
>
> That definition is compatible with Peirce's comment that the search for
> "the first nondegenerate Thirdness" is a more precise goal than the search
> for the origin of life.
>
>
>
> Note the comment by the biologist Lynn Margulis:  a bacterium swimming
> upstream in a glucose gradient exhibits intentionality.  In the article
> "Gaia is a tough bitch", she said “The growth, reproduction, and
> communication of these moving, alliance-forming bacteria” lie on a
> continuum “with our thought, with our happiness, our sensitivities and
> stimulations.”
>
>
>
> > I think it’s quite plausible that AI systems could reach that level of
>
> > autonomy and leave us behind in terms of intelligence, but what would
>
> > motivate them to kill us?
>
>
>
> Yes.  The only intentionality in today's AI systems is explicitly
> programmed in them -- for example, Google's goal of finding documents or
> the goal of a chess program to win a game.  If no such goal is programmed
> in an AI system, it just wanders aimlessly.
>
>
>
> The most likely reason why any AI system would have the goal to kill
> anything is that some human(s) programmed that goal into it.
>
>
>
> John
>
>
> -----------------------------
> PEIRCE-L subscribers: Click on "Reply List" or "Reply All" to REPLY ON
> PEIRCE-L to this message. PEIRCE-L posts should go to
> peirce-L@list.iupui.edu . To UNSUBSCRIBE, send a message not to PEIRCE-L
> but to l...@list.iupui.edu with the line "UNSubscribe PEIRCE-L" in the
> BODY of the message. More at http://www.cspeirce.com/peirce-l/peirce-l.htm
> .
>
>
>
>
>
>