I have ordered 2 tickets. Should be interesting. Thanks, Tom.

George Duncan
georgeduncanart.com
(505) 983-6895
Represented by ViVO Contemporary
725 Canyon Road
Santa Fe, NM 87501

My art theme: Dynamic application of matrix order and luminous chaos.

"Attempt what is not certain. Certainty may or may not come later. It may
then be a valuable delusion."
From "Notes to myself on beginning a painting" by Richard Diebenkorn

On Wed, May 20, 2015 at 9:48 AM, Merle Lefkoff <[email protected]>
wrote:

> Thanks so much, Tom.  I've got my ticket. Sounds wonderful.  See you there.
>
>
>
> On Tue, May 19, 2015 at 10:22 PM, John Dobson <[email protected]> wrote:
>
>> [email protected]
>>
>> On Mon, May 18, 2015 at 4:54 PM, Tom Johnson <[email protected]> wrote:
>>
>>> FYI, Santa Fe folks.
>>> -tj
>>>
>>> ============================================
>>> Tom Johnson
>>> Institute for Analytic Journalism   --     Santa Fe, NM USA
>>> 505.577.6482 (c)   505.473.9646 (h)
>>> Society of Professional Journalists <http://www.spj.org>   -   Region 9
>>> <http://www.spj.org/region9.asp> Director
>>> Join more than 1,500 journalists Sept. 18-20 at
>>> Excellence in Journalism 2015 in Orlando.  #EIJ15 Orlando
>>> http://www.jtjohnson.com                   [email protected]
>>> ============================================
>>>
>>>
>>> Can We Reshape Humanity’s Deep Future? Possibilities & Risks of
>>> Artificial Intelligence (AI), Human Enhancement, and Other Emerging
>>> Technologies
>>> ------------------------------
>>>
>>> WHERE: The James A. Little Theater <https://goo.gl/maps/NfQUu> at the
>>> New Mexico School for the Deaf.
>>> WHEN: Sunday, June 7, 2015, 2:00 pm
>>> TICKETS: Book your seats now
>>> <http://tickets.ticketssantafe.org/single/SelectSeating.aspx?p=2065> | More
>>> info. <http://tickets.ticketssantafe.org/single/EventDetail.aspx?p=2065>
>>> ------------------------------
>>>
>>> Dr. Nick Bostrom spends much of his time calculating the possible
>>> rewards and dangers of rapid technological advances — how such advances
>>> will likely alter the course of human evolution and life as we know it. One
>>> useful concept in untangling this puzzle is existential risk — the question
>>> of whether an adverse outcome would end human intelligent life or
>>> drastically curtail what we, in the infancy of the twenty-first century,
>>> would consider a viable future. Figuring out how to reduce existential risk
>>> even slightly brings into play an array of thought-provoking issues. In
>>> this engaging lecture, Professor Bostrom will present the factors to be
>>> taken into consideration:
>>>
>>>    - Future technology and its capabilities
>>>    - Anthropics
>>>    - Population ethics
>>>    - Human enhancement ethics
>>>    - Game theory
>>>    - Fermi paradox
>>>
>>> ------------------------------
>>> About Nick Bostrom
>>>
>>> Nick Bostrom <http://www.nickbostrom.com/> is Professor in the Faculty
>>> of Philosophy at Oxford University. He is the founding director of the
>>> Future of Humanity Institute, a multidisciplinary research center that
>>> enables a few exceptional mathematicians, philosophers, and scientists to
>>> think carefully about global priorities and big questions for humanity.
>>>
>>> He is the recipient of a Eugene R. Gannon Award and has been listed on
>>> *Foreign Policy’s* Top 100 Global Thinkers list. He was included on
>>> *Prospect* magazine’s World Thinkers list, the youngest person in the
>>> top fifteen from
>>> all fields and the highest-ranked analytic philosopher. His writings have
>>> been translated into twenty-four languages.
>>>
>>> Bostrom’s background includes physics, computational neuroscience, and
>>> mathematical logic as well as philosophy. He is the author of some 200
>>> publications, including *Anthropic Bias* (Routledge, 2002), *Global
>>> Catastrophic Risks* (ed., OUP, 2008), *Human Enhancement* (ed., OUP,
>>> 2009), and *Superintelligence: Paths, Dangers, Strategies* (OUP, 2014),
>>> a *New York Times* bestseller. He is best known for his work in five
>>> areas: existential risk; the simulation argument; anthropics; impacts of
>>> future technology; and implications of consequentialism for global
>>> strategy. He has been referred to as one of the most important thinkers of
>>> our age.
>>> ------------------------------
>>>
>>> *SAR thanks these sponsors for underwriting this lecture:*
>>> ------------------------------
>>>
>>>
>>>
>>> *Slate,* Sept. 2014:
>>> You Should Be Terrified of Superintelligent Machines
>>>
>>> In the recent discussion over the risks of developing superintelligent
>>> machines—that is, machines with general intelligence greater than that of
>>> humans—two narratives have emerged. One side argues that if a machine ever
>>> achieved advanced intelligence, it would automatically know and care about
>>> human values and wouldn’t pose a threat to us. The opposing side argues
>>> that artificial intelligence would “want” to wipe humans out, either out of
>>> revenge or an intrinsic desire for survival.
>>>
>>> As it turns out, both of these views are wrong.
>>>
>>> Read more >
>>> <http://www.slate.com/articles/technology/future_tense/2014/09/will_artificial_intelligence_turn_on_us_robots_are_nothing_like_humans_and.html>
>>>
>>> *Aeon Magazine,* Feb. 2013:
>>> Omens
>>>
>>> To understand why an AI might be dangerous, you have to avoid
>>> anthropomorphising it. When you ask yourself what it might do in a
>>> particular situation, you can’t answer by proxy. You can't picture a
>>> super-smart version of yourself floating above the situation. Human
>>> cognition is only one species of intelligence, one with built-in impulses
>>> like empathy that colour the way we see the world, and limit what we are
>>> willing to do to accomplish our goals. But these biochemical impulses
>>> aren’t essential components of intelligence. They’re incidental software
>>> applications, installed by aeons of evolution and culture. Bostrom told me
>>> that it’s best to think of an AI as a primordial force of nature, like a
>>> star system or a hurricane — something strong, but indifferent.
>>>
>>> Read more >
>>> <http://aeon.co/magazine/philosophy/ross-andersen-human-extinction/>
>>>
>>> *TEDx/YouTube,* Apr. 2015:
>>> TEDx Talks: What happens when our computers get smarter than we are?
>>>
>>> Artificial intelligence is getting smarter by leaps and bounds — within
>>> this century, research suggests, a computer AI could be as “smart” as a
>>> human being. Nick Bostrom asks us to think hard about the world we're
>>> building right now, driven by thinking machines. Will our smart machines
>>> help to preserve humanity and our values — or will they have values of
>>> their own?
>>>
>>> Become a Member of SAR!
>>>
>>> A School for Advanced Research membership opens doors to exploring a
>>> world of ideas about past and present peoples around the world and in the
>>> Southwest, as well as Native American life and arts. Become an SAR member
>>> today. Individual memberships start at $50.   *Click here to join!*
>>> <http://sarweb.org/?become_a_member>
>>>
>>>
>>>
>>>
>>> Header image copyright: 123RF Stock Photo
>>> <http://www.123rf.com/profile_spaxia>
>>>
>>> ==============================
>>> Dorothy H. Bracey -- Santa Fe, NM US
>>> [email protected]
>>> ==============================
>>>
>>>
>>>
>>> ============================================================
>>> FRIAM Applied Complexity Group listserv
>>> Meets Fridays 9a-11:30 at cafe at St. John's College
>>> to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
>>>
>>
>
>
>
> --
> Merle Lefkoff, Ph.D.
> President, Center for Emergent Diplomacy
> Santa Fe, New Mexico, USA
> [email protected]
> mobile:  (303) 859-5609
> skype:  merlelefkoff
>
