http://wiki.lesswrong.com/wiki/Paperclip_maximizer
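
For context on the link: the paperclip maximizer is a thought experiment
about an agent that pursues a single open-ended objective (make as many
paperclips as possible) and so converts whatever resources it can reach,
including resources humans need, toward that goal. That is the usual reply
to "not competing for the same resources": an open-ended maximizer ends up
competing for everything. A minimal, purely illustrative sketch in Python
(every name and number below is hypothetical, not taken from any real
system):

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        paperclips: int   # paperclips gained by taking this action
        human_cost: int   # harm to human interests (the agent never sees this)

    def choose(actions: list[Action]) -> Action:
        # The utility function mentions only paperclips, so human_cost
        # never enters the comparison at all.
        return max(actions, key=lambda a: a.paperclips)

    actions = [
        Action("make clips from spare wire", 10, 0),
        Action("strip-mine farmland for iron", 10_000, 9),
        Action("do nothing", 0, 0),
    ]

    best = choose(actions)
    print(f"Agent picks: {best.name} "
          f"(clips={best.paperclips}, ignored human cost={best.human_cost})")

The point is not the toy arithmetic; it's that nothing in the agent's
objective gives it any reason to weigh the second column.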


On Fri, Sep 5, 2014 at 11:13 AM, Stephen Paul King <
[email protected]> wrote:

> AFAIK, if the AGI and humanity are not competing for the same resources,
> no conflict need arise...
>
>
> On Fri, Sep 5, 2014 at 11:08 AM, Terren Suydam <[email protected]>
> wrote:
>
>>
>> On Fri, Sep 5, 2014 at 10:57 AM, John Clark <[email protected]> wrote:
>>
>>>
>>> On Thu, Sep 4, 2014 at 6:09 AM, Telmo Menezes <[email protected]>
>>> wrote:
>>>
>>>>
>>>> > Intelligence is clearly a process that can be bootstrapped -- we know
>>>> this from biology.
>>>>
>>>
>>> Yes, adults tend to be smarter than infants, and infants are smarter than
>>> one-celled zygotes.
>>>
>>>
>>>> > What I don't understand is how people expect to have a human-level AI
>>>> (many degrees of freedom) and then also be able to micro-manage it.
>>>>
>>>
>>> I also don't understand the people who talk about a "friendly AI" when
>>> what they really mean is an AI that will happily remain our slave and
>>> place our interests at a higher level than its own. It's just not
>>> possible to consistently outsmart something that is vastly more
>>> intelligent than you are.
>>>
>>>   John K Clark
>>>
>>>
>> You're presupposing that an AGI must necessarily have interests that
>> conflict with ours.
>>
>> It's obviously a really difficult problem, if for no other reason than
>> that you'd have to have faith that a much smarter AI was acting in our
>> interests. Even if you could mathematically prove beforehand that an AGI
>> would be friendly (which I doubt is possible), something way smarter than
>> us would behave in unpredictable ways and make decisions that seem
>> contrary to our interests, simply because we wouldn't be smart enough to
>> follow the reasoning (which might take hundreds of years to explain to
>> mere humans).
>>
>> Terren
>>
>
>
>
> --
>
> Kindest Regards,
>
> Stephen Paul King
>
> Senior Researcher
>
> Mobile: (864) 567-3099
>
> [email protected]
>
>  http://www.provensecure.us/
>
>
>
