Travis,

The AGI world seems to be cleanly divided into two groups:

1.  People (like Ben) who feel as you do, and aren't at all interested in or
willing to look at the really serious lapses in logic that underlie this
approach. Note that there is a similar belief in Buddhism, akin to the
"prisoner's dilemma", that if everyone just decides to respect everyone
else, the world will be a really nice place. The problem is, it doesn't
work, and it can't work (see the sketch after this list), for some sound
logical reasons that were unknown thousands of years ago when those beliefs
were first advanced, and are STILL unknown to most of the present-day
population, and...

2.  People (like me) who see that this is a really insane, dangerous, and
delusional belief system, as it encourages activities that are every bit as
dangerous as DIY thermonuclear weapons. Sure, you aren't likely to build a
"successful" H-bomb in your basement with heavy water separated using old
automobile batteries, but should we encourage anyone to even try?
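
To make the "sound logical reasons" in point 1 concrete, here is a minimal
sketch of the standard prisoner's dilemma, using the usual textbook payoff
values (the numbers and the Python framing are my own illustration, nothing
from Ben's work): whatever the other player does, each player is strictly
better off defecting, so "everyone just agrees to cooperate" is not a
stable outcome.

# Standard prisoner's dilemma payoffs as (my payoff, opponent's payoff).
# The values 5 > 3 > 1 > 0 are the usual textbook choice (assumed here);
# any ordering T > R > P > S gives the same conclusion.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move):
    # Pick the move that maximizes my own payoff against a fixed opponent move.
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

for opp in ("cooperate", "defect"):
    print(f"If the other player will {opp}, my best move is: {best_response(opp)}")

# Both lines print "defect": mutual defection is the only equilibrium, even
# though mutual cooperation (3, 3) would leave both players better off.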

Unfortunately, there is ~zero useful communication between these two groups.
For example, Ben explains that he has heard all of the horror scenarios for
AGIs, and I believe that he has, yet he continues in this direction for
reasons that he "is too busy" to explain in detail. I have viewed some of
his presentations, e.g. at the 2009 Singularity conference. There, he offers
no glimmer of a reason why his approach isn't predictably suicidal if/when
an AGI ever comes into existence, beyond what you outlined: imperfect
protective mechanisms that would only become their own points of contention
between future AGIs. What if some accident disables an AGI's protective
mechanisms? Would there be some major contention between Ben's AGI and
Osama bin Laden's AGI? How about those nasty little areas where our present
social rules enforce species-destroying dysgenic activity? Ultimately, why
should AGIs give a damn about us?

Steve
=============
On Fri, Jun 25, 2010 at 1:25 PM, Travis Lenting <travlent...@gmail.com> wrote:

> I hope I don't misrepresent him, but I agree with Ben (at least my
> interpretation) when he said, "We can ask it questions like, 'how
> can we make a better A(G)I that can serve us in more different ways without
> becoming dangerous'...It can help guide us along the path to a
> positive singularity." I'm pretty sure he was also saying that, at first, it
> should just be a question-answering machine with a reliable goal system, and
> that development should be stopped if it has an unstable one before it gets
> too smart. I like the idea that we should create an automated
> cross-disciplinary scientist and engineer (if you even separate the two),
> and that NLP not modeled after the human brain is the best proposal for
> a benevolent and resourceful superintelligence that enables a positive
> singularity and all its unforeseen perks.
> On Wed, Jun 23, 2010 at 11:04 PM, The Wizard <key.unive...@gmail.com> wrote:
>
>
>> If you could ask an AGI anything, what would you ask it?
>> --
>> Carlos A Mejia
>>
>> Taking life one singularity at a time.
>> www.Transalchemy.com
>>
>


