Re: [FRIAM] AI advance

2017-01-31 Thread Gillian Densmore
Aye, that they do!


On Tue, Jan 31, 2017 at 10:14 AM, Marcus Daniels <mar...@snoutfarm.com>
wrote:

> We’ve even managed to mess up the *climate*.   That’s a serious kind of
> stubborn stupidity.   I think those hypothetical clankers may have a
> point.

Re: [FRIAM] AI advance

2017-01-31 Thread Marcus Daniels
We’ve even managed to mess up the climate.   That’s a serious kind of stubborn 
stupidity.   I think those hypothetical clankers may have a point.


Re: [FRIAM] AI advance

2017-01-31 Thread Gillian Densmore
Hmmm, why do I worry about 'clankers' deciding humans are jerks, and suddenly
we're living inside a game while the robots laugh and play a game of Uno?
I think I saw that movie.



Re: [FRIAM] AI advance

2017-01-31 Thread Marcus Daniels
Why assume they would be interested in our fate or that they'd compete for our
resources?  They'd probably just head for another environment that was
hostile to human life, but not to them.  If for some reason they needed to
occupy our computers for a while, they'd surely be better at it than the
botnets of human criminals and script-kiddies.


Re: [FRIAM] AI advance

2017-01-31 Thread Robert J. Cordingley
So once AI machines are allowed to start designing themselves with at
least the goal of increasing performance, how long have we got? (It
doesn't matter whether we (i.e. the US) allow that or some other
resourceful, perhaps military, organization does it.) Didn't Hawking
fear runaway AI as a bigger existential threat than a runaway greenhouse
effect?


Robert C




--
Cirrillian
Web Design & Development
Santa Fe, NM
http://cirrillian.com
281-989-6272 (cell)
Member Design Corps of Santa Fe




Re: [FRIAM] AI advance

2017-01-31 Thread Pamela McCorduck
To consider the issue perhaps more seriously, AI100 was created two years ago 
at Stanford University, funded by Eric Horvitz and his wife. Eric is an 
AI pioneer at Microsoft. It’s a hundred-year, rolling study of the many impacts 
of AI, and it plans to issue reports every five years based on contributions 
from leading AI researchers, social scientists, ethicists, and philosophers 
(among representatives of fields outside AI). 

Its first report was issued late last year, and you can read it on the AI100 
website.

You may say that leading AI researchers and their friends have vested 
interests, but then I point to a number of other organizations who have taken 
on the topic of AI and its impact: nearly every major university has such a 
program (Georgia Tech, MIT, UC Berkeley, Michigan, just for instance), and a 
joint program on the future between Oxford and Cambridge has put a great deal 
of effort into such studies.

The amateur speculation is fun, but the professionals are paying attention. 
FWIW, I consider the fictional representations of AI in movies, books, TV, to 
be valuable scenario builders. It doesn’t matter if they’re farfetched (most of 
them certainly are) but it does matter that they raise interesting issues for 
nonspecialists to chew over.

Pamela







Re: [FRIAM] AI advance

2017-01-31 Thread Joe Spinden
In a book I read several years ago, whose title I cannot recall, the 
conclusion was: "They may have created us, but they keep gumming things 
up.  They have outlived their usefulness.  Better to just get rid of them."


-JS





--
Joe





Re: [FRIAM] AI advance

2017-01-31 Thread Roger Critchlow
On Tue, Jan 31, 2017 at 11:50 AM, Pamela McCorduck  wrote:

> For one thing, it can search a larger search space for solutions.

The AIs can search a larger space for solutions, but there could be an
even larger space to search that was discarded by the faulty assumptions the
builders brought to the problem.  There's room for an inconceivable amount
of space out there in parameter land, and many wonderful ways to leave your
thumb on the scales.
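
A toy sketch of that point in Python (the objective, bounds, and numbers are
invented for illustration, nothing from the thread): the grid search is
thorough inside its box, but the box itself encodes the builders' assumption,
and the true optimum sits outside it.

def objective(x):
    # true minimum is at x = 7, outside the "reasonable" range assumed below
    return (x - 7.0) ** 2

# the builders' prior about where answers live
assumed_lo, assumed_hi = -5.0, 5.0
step = (assumed_hi - assumed_lo) / 10_000
candidates = [assumed_lo + i * step for i in range(10_001)]

# exhaustive search -- but only over the assumed box
best = min(candidates, key=objective)

print(f"best inside the assumed box: x = {best:.3f}, f(x) = {objective(best):.3f}")
print("true optimum:                x = 7.000, f(x) = 0.000")

Run as-is, the search reports x = 5.000 (the edge of the box) while the real
minimum sits at x = 7.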

-- rec --


Re: [FRIAM] AI advance

2017-01-31 Thread Marcus Daniels
Steve writes:

> I *did* like the image of AI offered up in the movie "Her" a few years ago.



Pamela writes:



> Me too, especially how lonely the humans were when their AI pals deserted 
> them because frankly, they were too boring.



Well, I was rooting for Maeve all along.



Marcus


Re: [FRIAM] AI advance

2017-01-31 Thread Pamela McCorduck

> On Jan 31, 2017, at 7:32 AM, Steven A Smith  wrote:
> 
> 
>> " AlphaGo itself isn't scary it's what comes next and so on and how quickly 
>> these advances are progressing that give some great minds cause for concern."
>> 
>> I just hope it comes soon.  Humans aren't making very good decisions lately.
> Maybe... but somehow I'm not a lot more confident in the *product* of humans 
> who make bad decisions making *better* decisions?
> 
> Coal Fired Power Plants, Internal Combustion Engines, and even Smart Grids 
> make decisions based on their creators' values all the time. Why would an AI 
> created by short-sighted, narrow-minded humans do any better?

For one thing, it can search a larger search space for solutions.
> 
> Our Government and all of our Institutions are roughly "human steered AI"... 
> rule-based programs if you will.   And I'm not real proud of most of them 
> right now?

Not at all the same.
> 
> I *did* like the image of AI offered up in the movie "Her" a few years ago.

Me too, especially how lonely the humans were when their AI pals deserted them 
because frankly, they were too boring.





Re: [FRIAM] AI advance

2017-01-31 Thread Marcus Daniels
Steve writes:

"Maybe... but somehow I'm not a lot more confident in the *product* of humans 
who make bad decisions making *better* decisions?"

Nowadays machine learning is much more unsupervised.  Self-taught, if you 
will.  Such a consciousness might reasonably decide, "Oh they created us 
because they needed us -- they just didn't realize how much."

Marcus




Re: [FRIAM] AI advance

2017-01-31 Thread Steven A Smith



" AlphaGo itself isn't scary it's what comes next and so on and how quickly these 
advances are progressing that give some great minds cause for concern."

I just hope it comes soon.  Humans aren't making very good decisions lately.
Maybe... but somehow I'm not a lot more confident in the *product* of 
humans who make bad decisions making *better* decisions?


Coal Fired Power Plants, Internal Combustion Engines, and even Smart 
Grids make decisions based on their creators' values all the time. Why 
would an AI created by short-sighted, narrow-minded humans do any better?


Our Government and all of our Institutions are roughly "human steered 
AI"... rule-based programs if you will.   And I'm not real proud of most 
of them right now?


I *did* like the image of AI offered up in the movie "Her" a few years ago.

A kinder, gentler transcendence?

Utopia/Dystopia!

Just Sayin'!




Re: [FRIAM] AI advance

2017-01-31 Thread Marcus Daniels
" AlphaGo itself isn't scary it's what comes next and so on and how quickly 
these advances are progressing that give some great minds cause for concern."

I just hope it comes soon.  Humans aren't making very good decisions lately.

Marcus




Re: [FRIAM] AI advance

2017-01-31 Thread Steven A Smith

Vlad -


> Only a confirmed Go player could breathe that atmosphere. Though I wonder
> why Hawking is so afraid of this
> machine when it can humble the best of us. Just make the board much larger.
> At some point we will smell insulation burning.

Are you sure that isn't the smell of myelin sheath burning?

Seems like you would appreciate Chess Boxing as well?

https://en.wikipedia.org/wiki/Chess_boxing




Re: [FRIAM] AI advance

2017-01-30 Thread Robert J. Cordingley
You can find go players in Santa Fe, NM by visiting the 
http://santafegoclub.org website and attending their meets and any 
teaching sessions. For other places see the AGA at http://usgo.org or 
the EGF at http://www.eurogofed.org/


AlphaGo went on to beat Korean top player Lee Sedol 4-1 in March of 
2016. I don't think a larger board would help humans at all against a 
fully trained AlphaGo on the same size board - but it is an interesting question.


AlphaGo itself isn't scary it's what comes next and so on and how 
quickly these advances are progressing that give some great minds cause 
for concern.


Robert C (AGA 2k)






--
Cirrillian
Web Design & Development
Santa Fe, NM
http://cirrillian.com
281-989-6272 (cell)
Member Design Corps of Santa Fe





Re: [FRIAM] AI advance

2017-01-30 Thread Vladimyr Burachynsky
So there are at least three by your count, and that was only a shallow
dredge of the pond.

I obtained an early version of a computer Go program and frittered away a lot of
hours playing that maniacal coffee maker.  I found the flaw that the writer
relied upon and wiped out the game every time.  That style of playing against a
stupid piece of code was horrible, but it only worked against a machine.

The flaw was that it made decisions on perceived values. So it was easy to
lead it into disaster. I  had never seen a human play in that manner
nor may that even be possible. Indeed I was able to annihilate it every
game, wipe it off the board. This is considered very offensive and
humiliating by Oriental Standards. But then I reminded my teachers that
Cossacks were never noted for their Table Manners.

Talk about a group of Intense Nicotine Addicts back then...  

Only a confirmed Go player could breathe that atmosphere. Though I wonder
why Hawking is so afraid of this
machine when it can humble the best of us. Just make the board much larger.
At some point we will smell insulation burning.

vib



Re: [FRIAM] AI advance

2017-01-30 Thread Joe Spinden
Old article, but new to me.  In any case, if true, it is of interest.  I 
was never better than 2-3 kyu, but I stopped playing some time ago.


JS





--
Joe





Re: [FRIAM] AI advance

2017-01-30 Thread Steven A Smith

Vlad -

I am the weakest of Go players, in spite of having considered the problem
of trying to use Gosper's memoisation as a mode of associative-memory
problem solving.  Cody the M00se Dooderson has beaten me every time we have
played, I think.  Weak, weak, weak!


But I do find it fascinating.

 - Steve




Re: [FRIAM] AI advance

2017-01-30 Thread Vladimyr Burachynsky
To Joseph Spinden,

The article is old and I wonder if you play the game.
I ran a Go club at the University of Manitoba and can tell
you strange stories about a time before Hassabis.

I swear I never won a game in 5 years but I kept playing anyway.
I guess I am bloody-minded. Eventually I discovered that my handicap was
being reduced and suspect
I was close to 1 Dan at the time. I was told that was harder than a Ph.D. So
I went for the degree and sloughed off the game.

There should be a few players in the congregation, let them speak up.
vib



Re: [FRIAM] AI advance

2017-01-28 Thread Steven A Smith
Fascinating!   I remember the broad discussions at the Cellular Automata 
Conference here in 1984 on the challenges/opportunities of using a CA to 
play Go.


I had an (unpublished, of course) variation on Bill Gosper's HashLife which
I hoped might be a good basis for a winning Go system back in those
pre-Artificial-Life days.


Mine used a less optimal subdivision (he did quad-tree, I used N-1 
patches).  The purpose was to make the memoization translation invariant 
at all scales, not just binary orders of magnitude.  I was interested, 
more generally, in using the growth of the hash table to help analyze 
the computational complexity of the problem being solved.
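
A minimal sketch of that patch-memoization flavor, for a 1-D elementary
cellular automaton rather than Life or Go (an illustrative toy, not HashLife
and not the N-1-patch variant above; the rule number, patch size, and row
length are arbitrary choices): the memo key is the patch contents, so an
identical pattern is reused at any offset, and the size of the memo table
after a run gives a rough read on how much distinct structure the
computation actually touched.

from functools import lru_cache

RULE = 110   # elementary CA rule number; any 0-255 value works
PATCH = 8    # cells advanced per memoized call (arbitrary)

def step_cell(left, center, right):
    # one elementary-CA update of a single cell under RULE
    return (RULE >> (left * 4 + center * 2 + right)) & 1

@lru_cache(maxsize=None)
def advance_patch(window):
    # advance a patch one time step; `window` is a tuple of 0/1 cells of
    # length PATCH + 2 (the patch plus one boundary cell on each side).
    # Because the key is the tuple itself, the same pattern seen at any
    # offset in the row hits the same cache entry.
    return tuple(step_cell(window[i], window[i + 1], window[i + 2])
                 for i in range(PATCH))

def advance_row(row):
    # advance a whole row one step by stitching together memoized patches
    n = len(row)
    out = []
    for start in range(0, n, PATCH):
        window = tuple(row[(start - 1 + i) % n] for i in range(PATCH + 2))
        out.extend(advance_patch(window))
    return out[:n]

if __name__ == "__main__":
    import random
    random.seed(1)
    row = [random.randint(0, 1) for _ in range(256)]
    for _ in range(200):
        row = advance_row(row)
    info = advance_patch.cache_info()
    print(f"distinct patches memoized: {info.currsize}, cache hits: {info.hits}")

HashLife proper goes much further, memoizing many time steps at once over a
hierarchy of quadtree squares; the sketch above only memoizes a single step
per patch, but the translation-invariance and hash-growth points carry over.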


Through my colleague, Susan Stepney (who some of you know) in York, I 
encouraged her grad student Jenny Owen to take this somewhere.  Alas, 
she chose to work with the Gosper version which I still believe has the 
unfortunate artifact of quad-tree/binary subdivision of the space, 
missing *many* repeated patterns at scales and offsets not aligning with 
the quad-tree.


Now we just need to teach it to play a mean game of "Go back to your 
Golden Towers" in DC?


- Steve


On 1/28/17 7:31 AM, Joseph Spinden wrote:

Of interest to some:

https://www.wired.com/2016/01/in-a-huge-breakthrough-googles-ai-beats-a-top-player-at-the-game-of-go 



-JS








FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove