I agree with Kyle. That's what I was saying all along: we will design
around such problems if and when we encounter them. The will of such a
program will be engineered every bit as much as its intellect. And like
anything we engineer, these systems will become, in effect, extensions of
our own will. Engineered systems that don't meet this criterion get
re-engineered until they do. That's the very basis for the existence of
technology in the first place.

> Why are we always implying that something vastly more developed than us
> will try to harm anyone?

Because the first time we build a system of this sort, it won't necessarily
function as we intended it to. Bugs happen. The truth is, the first few
versions of this technology are going to suck -- until we improve it. This
happens with every new technology.

> While the AGI system develops it will inevitably become more loving and
> empathic...

What is your basis for this claim? Isn't this just another assumption, one
that contradicts the others being made? The only way this is going to
happen is if we shape the technology that way. AGIs won't know,
understand, or (especially) care about human notions of right and wrong,
good and evil, unless we design them to. You can't expect something to
turn out good just by virtue of being intelligent, any more than you can
expect it to turn out bad on that same basis. Intelligence is simply an
amplifier of the will, not a generator of it.


On Sun, May 11, 2014 at 11:25 AM, just camel via AGI <[email protected]> wrote:

> A superintelligent (above human intelligence) machine will question its
> belief systems just like any intelligent and empathic person will do. It
> seems that we prefer to talk about super ignorant machines instead of super
> intelligent ones?
>
> Also, the concept of having one AGI safeguard you against other AGIs is
> horribly anthropomorphic in so many ways. Why are we always implying that
> something vastly more developed than us will try to harm anyone? That must
> be some primordial fear? While the AGI system develops it will inevitably
> become more loving and empathic and you will most likely not be able to
> hardcode any "human level" belief traps into that AGI which would make the
> system become that destructive omnipotent Roman emperor demigod so many
> people seem to have in mind ...
>
>
> On 05/11/2014 02:06 PM, Kyle Kidd via AGI wrote:
>
>> Just because a machine has human intelligence doesn't mean it has human
>> desires.  If machines do evil it is because people have put their own
>> desires into the machine.  Obviously there will be systems that are put in
>> place that safeguard against this since more people will seek to protect
>> their property rather than destroy property of others.
>>