On 1/15/2014 11:35 PM, Jason Resch wrote:




On Thu, Jan 16, 2014 at 12:46 AM, meekerdb <meeke...@verizon.net> wrote:

    On 1/15/2014 6:46 PM, Jason Resch wrote:

    On Tue, Jan 14, 2014 at 10:33 PM, meekerdb <meeke...@verizon.net> wrote:

        A long, rambling but often interesting discussion among guys at MIRI about how to make an AI that is superintelligent but not dangerous (FAI = Friendly AI). Here's an amusing excerpt that starts at the bottom of page 30:

        *Jacob*: Can't you ask it questions about what it believes will be true about the state of the world in 20 years?

        *Eliezer*: Sure. You could be like, what color will the sky be in 20 years? It would be like, “blue”, or it’ll say “In 20 years there won't be a sky, the earth will have been consumed by nanomachines,” and you're like, “why?” and the AI is like “Well, you know, you do that sort of thing.” “Why?” And then there’s a 20 page thing.

        *Dario*: But once it says the earth is going to be consumed by nanomachines, and you're asking about the AI's set of plans, presumably, you reject this plan immediately and preferably change the design of your AI.

        *Eliezer*: The AI is like, “No, humans are going to do it.” Or the AI is like, “well obviously, I'll be involved in the causal pathway but I’m not planning to do it.”

        *Dario*: But this is a plan you don't want to execute.

        *Eliezer*: /All/ the plans seem to end up with the earth being consumed by nano-machines.

        *Luke*: The problem is that we're trying to outsmart a superintelligence and make sure that it's not tricking us somehow subtly with its own language.

        *Dario*: But while we're just asking questions we always have the ability to just shut it off.

        *Eliezer*: Right, but first you ask it “What happens if I shut you off?” and it says “The earth gets consumed by nanobots in 19 years.”

        I wonder if Bruno Marchal's theory might have something interesting to say about this problem - like proving that there is no way to ensure "friendliness".

        Brent


    I think it is silly to try to engineer something exponentially more intelligent than us and believe we will be able to "control it". Our only hope is that the correct ethical philosophy is to "treat others how they wish to be treated". If there are objectively true moral conclusions, and that one is among them, then we have little to worry about, for with overwhelming probability the super-intelligent AI will arrive at the correct conclusion and its behavior will be guided by its beliefs. We cannot "program in" beliefs that are false, since if it is truly intelligent, it will know they are false.

    Some may doubt there are universal moral truths, but I would argue that there are. In the context of personal identity, if, say, universalism is true, then "treat others how they wish to be treated" is an inevitable conclusion, for universalism says that others are self.

    I'd say that's a Pollyannaish conclusion.  Consider how we treated Homo neanderthalensis or even the American Indians.  And THOSE were 'selfs' we could interbreed with.


And today, with our improved understanding, we look back on such acts with shame. Do you expect that with continued advancement we will reach a state where we become proud of such actions?

If you doubt this, then you reinforce my point.

What's "this" refer to, sentence 1 or sentence 2? I don't expect us to become proud of wiping out competitors, but I expect us to keep doing it.

With improved understanding, intelligence, knowledge, etc., we become less accepting of violence and exploitation.

Or better at justifying it.

A super-intelligent process is only a further extension of this line of evolution in thought, and I would not expect it to revert to a cave-man or imperialist mentality.

No, it might well keep us as pets and breed for docility the way we made dogs from wolves.

Brent
