Yes, I agree. Except for the part about checking up on us. As I mentioned before, indifference to us seems to me to be more the default than caring about us.

On 5/25/2015 5:03 PM, cogmission (David Ray) wrote:
Let me try and think this through. Only in the context of scarcity does the question of AGI **or** us come about. Where there is no scarcity, I think an AGI will just go about its business - peeking in from time to time to make sure we're doing ok. Why, in a universe where it can go anywhere it wants, produce infinite energy, and not be bound by our planet, would a super-super intelligent being even be obsessed with us, when it could merely go someplace else? I honestly think that is the way it will be. (and maybe is already!)

On Mon, May 25, 2015 at 3:56 PM, Matthew Lohbihler <m...@serotoninsoftware.com> wrote:

    Forgive me David, but these are very loose definitions, and I've
    lost track of how they relate back to what an AGI will think about
    humanity. But to use your terms - hopefully accurately - what if
    the AGI satisfies its sentient need for "others" by creating other
    AGIs, ones that it can love and appreciate? I doubt humans would
    ever be up to such a task, unless 1) as pets, or 2) with cybernetic
    improvements.

    On 5/25/2015 4:37 PM, David Ray wrote:
    Observation is the phenomenon of distinction, in the domain of
    language. The universe consists of two things, content and
    context. Content depends on its boundaries in order to exist. It
    depends on what it is not for its being. Context is the space
    for things to be, though it is not quite space, because space is
    yet another thing. It has no boundaries, and it cannot be arrived
    at by assembling all of its content.

    Ideas - love, hate, our sense of who we are, our histories, what
    we know to be true - all of those are content. Context is what
    allows for that stuff to be. And all of it lives in language,
    without which there would be nothing. There may well be a "drift",
    but we wouldn't know about it and we wouldn't be able to observe it.


    On May 25, 2015, at 3:26 PM, Matthew Lohbihler
    <m...@serotoninsoftware.com> wrote:

    You lost me. You seem to be working with definitions of
    "observation" and "space for thinking" that I'm unaware of.

    On 5/25/2015 4:14 PM, David Ray wrote:
    Matthew L.,

    It isn't a thought. It is there before observation or thoughts
    or thinking. It actually is the space for thinking to occur -
    it is the context that allows for thought. We bring it to the
    table - it is there before we are (ontologically speaking).
    ("It" being this sense of integrity/wholeness.)


    On May 25, 2015, at 2:59 PM, Matthew Lohbihler
    <m...@serotoninsoftware.com> wrote:

    Goodness. I thought we agreed that an AGI would not think like
    humans. And besides, "love" doesn't feel like something I want
    to depend on as obvious in a machine.


    On 5/25/2015 3:50 PM, David Ray wrote:
    If I may take this conversation in yet another direction.

    I think we've all been dancing around the question of what
    underlies the generation of morality, or how an AI will derive
    its sense of ethics. Of course, initially there will be those
    parameters that are programmed in - but eventually those
    will be gotten around.

    There has actually been a lot of research into this. Though
    it's not common knowledge, it is knowledge developed through
    the observation of millions of people.

    The universe and all beings along the gradient of sentience
    observe (albeit perhaps unconsciously) a sense of what I
    will call integrity or "wholeness". We'd like to think that
    mankind steered itself through the ages toward notions of
    gentility and societal sophistication, but it didn't, really.
    The idea that a group, or different groups, devised a grand
    plan to have it turn out this way is totally preposterous.

    What is more likely is that there is a natural order to
    things, and that order is motion toward what works for the
    whole. I can't prove any of this, but internally we all know
    when it's missing or when we are not in alignment with it.
    This ineffable sense is what love is - it's concern for the
    whole.

    So I say that any truly intelligent being, by virtue of
    existing in a substrate of integrity, will have this built in,
    and a superintelligent being will understand it: ultimately,
    the best chance for any single instance to survive is for the
    whole to survive.

    Yes, I know people will immediately want to cite all the
    aberrations, and of course there are aberrations, just as
    there are mutations - but those aberrations are reactions to
    how a person is shown love during their development.

    Like I said, I can't prove any of this, but eventually it
    will be borne out, and we will find it to be so.

    You can be skeptical if you want, but ask yourself some
    questions. Why is it that we all know when it's missing
    (fairness/justice/integrity)? Why is it that we develop open
    source software and free software? Why is it that, despite our
    greed and insecurity, society moves toward freedom and
    equality for everyone?

    One more question. Why is it that the most advanced
    philosophical traditions hold that where we are located, as a
    phenomenological event, is not in separate bodies?

    I know this kind of talk doesn't go over well in this crowd
    of concrete thinkers, but I know that there is some science
    somewhere that backs this up.


    On May 25, 2015, at 2:12 PM, vlab <thadd...@vlab.ca> wrote:

    Small point: even if they did decide that our diverse
    intelligence is worth keeping around (having not already
    mapped it into silicon), why would they need all of us?
    Surely 10% of the population would give them enough 'sample
    size' to get their diversity ration; heck, maybe 1/10 of 1%
    would be enough. They may find that we are wasting away
    the planet (oh, not maybe - we are), and that the planet would
    be more efficient and they could have more energy without most
    of us. (Unless we become 'copper tops' as in the Matrix
    movie.)

    On 5/25/2015 2:40 PM, Fergal Byrne wrote:
    Matthew,

    You touch upon the right point. Intelligence which can
    self-improve could only come about by having an
    appreciation for intelligence, so it's not going to be
    interested in destroying diverse sources of intelligence.
    We represent a crap kind of intelligence to such an AI in a
    certain sense, but one which it itself would rather
    communicate with than condemn its offspring to have to live
    like. If these things appear (which looks inevitable) and
    then they kill us, many of them will look back at us as a
    kind of "lost civilisation" which they'll struggle to
    reconstruct.

    The nice thing is that they'll always be able to rebuild us
    from the human genome. It's just a file of numbers after all.

    So, we have these huge threats to humanity. The AGI future
    is the only reversible one.

    Regards
    Fergal Byrne

    --

    Fergal Byrne, Brenter IT

    Author, Real Machine Intelligence with Clortex and NuPIC
    https://leanpub.com/realsmartmachines

    Speaking on Clortex and HTM/CLA at euroClojure Krakow, June
    2014: http://euroclojure.com/2014/
    and at LambdaJam Chicago, July 2014: http://www.lambdajam.com

    http://inbits.com - Better Living through Thoughtful Technology
    http://ie.linkedin.com/in/fergbyrne/ -
    https://github.com/fergalbyrne

    e: fergalbyrnedub...@gmail.com  t: +353 83 4214179
    Join the quest for Machine Intelligence at http://numenta.org
    Formerly of Adnet edi...@adnet.ie  http://www.adnet.ie


    On Mon, May 25, 2015 at 7:27 PM, Matthew Lohbihler
    <m...@serotoninsoftware.com> wrote:

        I think Jeff underplays a couple of points, the main
        one being the speed at which an AGI can learn. Yes,
        there is a natural limit to how much experimentation in
        the real world can be done in a given amount of time.
        But we humans are already going beyond this with, for
        example, protein folding simulations, which speed up
        the discovery of new drugs and such by many orders of
        magnitude. Any sufficiently detailed simulation could
        massively narrow down the amount of real-world
        verification necessary, such that new discoveries
        happen more and more quickly - possibly, at some point,
        faster than we can even tell the AGI is making them. An
        intelligence explosion is not a remote possibility. The
        major risk here is what Eliezer Yudkowsky pointed out:
        not that the AGI is evil or something, but that it is
        indifferent to humanity. No one yet goes out of their
        way to make any form of AI care about us (because we
        don't yet know how). What if an AI created
        self-replicating nanobots just to prove a hypothesis?
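
        To make the simulation point concrete, here's a toy sketch
        (the simulate_score and verify_in_lab functions below are
        purely hypothetical stand-ins, not any real pipeline) of how
        a cheap simulated screen can cut the number of expensive
        real-world experiments to a tiny fraction:

            # Hypothetical illustration: score many candidates in a fast
            # simulation, then verify only the most promising few for real.
            import random

            def simulate_score(candidate):
                # Stand-in for a detailed but cheap simulation
                # (think protein folding models).
                return random.random()

            def verify_in_lab(candidate):
                # Stand-in for slow, expensive real-world experimentation.
                return random.random() > 0.5

            candidates = list(range(100_000))
            # The simulation filters 100,000 candidates down to a shortlist...
            shortlist = sorted(candidates, key=simulate_score, reverse=True)[:100]
            # ...so only 0.1% of them ever need real-world verification.
            confirmed = [c for c in shortlist if verify_in_lab(c)]
            print(f"simulated {len(candidates)}, verified {len(shortlist)}, "
                  f"confirmed {len(confirmed)}")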

        I think Nick Bostrom's book is what got Stephen, Elon,
        and Bill all upset. I have to say it starts out merely
        interesting, but gets to a dark place pretty quickly.
        But he goes too far in the other direction: he readily
        accepts that superintelligences have all manner of
        cognitive skill, yet at the same time can't fathom how
        humans might not like the idea of having our brains'
        pleasure centers constantly poked, turning us all into
        smiling idiots (as I mentioned here:
        http://blog.serotoninsoftware.com/so-smart-its-stupid).



        On 5/25/2015 2:01 PM, Fergal Byrne wrote:
        Just one last idea on this. One thing that crops up
        every now and again in the Culture novels is the
        response of the Culture to Swarms, which are
        self-replicating viral machines or organisms. Once
        these things start consuming everything else, the AIs
        (mainly Ships and Hubs) respond by treating the swarms
        as a threat to the diversity of their Culture. They
        first try to negotiate, then they'll eradicate; if they
        can contain them, they'll do that instead.

        They do this even though they can themselves withdraw
        from real spacetime. They don't have to worry about
        their own survival. They do this simply because life
        is more interesting when it includes all the rest of us.

        Regards

        Fergal Byrne



        On Mon, May 25, 2015 at 5:04 PM, cogmission (David
        Ray) <cognitionmiss...@gmail.com> wrote:

            This was someone's response to Jeff's interview
            (see here:
            https://www.facebook.com/fareedzakaria/posts/10152703985901330)


            Please read and comment if you feel the need...

            Cheers,
            David



--
With kind regards,
David Ray
Java Solutions Architect
Cortical.io <http://cortical.io/>
Sponsor of: HTM.java <https://github.com/numenta/htm.java>
d....@cortical.io
http://cortical.io
