So once AI machines are allowed to start designing themselves, with at least the goal of increasing performance, how long have we got? (It doesn't matter whether we, i.e. the US, allow it or some other resourceful, perhaps military, organization does.) Didn't Hawking fear runaway AI as a bigger existential threat than a runaway greenhouse effect?

Robert C


On 1/31/17 10:34 AM, Pamela McCorduck wrote:
To consider the issue perhaps more seriously, AI100 was created two years ago 
at Stanford University, funded by Eric Horvitz and his wife. Eric is an AI 
pioneer at Microsoft. It’s a hundred-year rolling study of the many impacts 
of AI, and it plans to issue reports every five years based on contributions 
from leading AI researchers, social scientists, ethicists, and philosophers 
(among other representatives of fields outside AI).

Its first report was issued late last year, and you can read it on the AI100 
website.

You may say that leading AI researchers and their friends have vested 
interests, but then I point to a number of other organizations that have taken 
on the topic of AI and its impact: nearly every major university has such a 
program (Georgia Tech, MIT, UC Berkeley, Michigan, for instance), and a 
joint program on the future between Oxford and Cambridge has put a great deal 
of effort into such studies.

The amateur speculation is fun, but the professionals are paying attention. 
FWIW, I consider the fictional representations of AI in movies, books, and TV 
to be valuable scenario builders. It doesn’t matter if they’re farfetched (most 
of them certainly are), but it does matter that they raise interesting issues 
for nonspecialists to chew over.

Pamela



On Jan 31, 2017, at 8:18 AM, Joe Spinden <j...@qri.us> wrote:

In a book I read several years ago, whose title I cannot recall, the conclusion was: 
"They may have created us, but they keep gumming things up.  They have outlived 
their usefulness.  Better to just get rid of them."

-JS


On 1/31/17 7:41 AM, Marcus Daniels wrote:
Steve writes:

"Maybe... but somehow I'm not a lot more confident in the *product* of humans who 
make bad decisions making *better* decisions?"

Nowadays machine learning is much more unsupervised. Self-taught, if you will. Such 
a consciousness might reasonably decide, "Oh, they created us because they needed us 
-- they just didn't realize how much."

Marcus

--
Joe


--
Cirrillian
Web Design & Development
Santa Fe, NM
http://cirrillian.com
281-989-6272 (cell)
Member Design Corps of Santa Fe


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
