On 27 June 2010 22:21, Travis Lenting <[email protected]> wrote:

> I don't like the idea of enhancing human intelligence before the
> singularity.


What do you class as enhancement? Suppose I am in the Middle East and I am
wearing glasses which can give me a 3D data screen. Somebody speaks to me, and
up on my glasses appear the possible translations. Neither I nor the computer
system understands Arabic, yet together we can achieve comprehension. (PS: I
in fact did just that with
http://docs.google.com/Doc?docid=0AQIg8QuzTONQZGZxenF2NnNfNzY4ZDRxcnJ0aHI&hl=en_GB
)
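A toy sketch of that human-plus-machine comprehension loop: the machine proposes candidate translations, and the human resolves the ambiguity by context. The candidate table and helper names are invented for illustration; a real system would query a machine-translation service.

```python
# Human-in-the-loop translation: neither party alone "understands"
# the source language, but together they reach comprehension.

# Hypothetical candidate table; a real system would call an MT service.
CANDIDATES = {
    "مرحبا": ["hello", "welcome"],
    "شكرا": ["thanks", "thank you"],
}

def propose(utterance: str) -> list[str]:
    """Machine side: return candidate translations for the human to review."""
    return CANDIDATES.get(utterance, ["<no translation>"])

def comprehend(utterance: str, pick) -> str:
    """Human side: 'pick' resolves the ambiguity the machine cannot."""
    options = propose(utterance)
    return pick(options)

# The human picks the first contextually plausible option.
print(comprehend("مرحبا", lambda opts: opts[0]))  # -> hello
```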

> I think crime has to be made impossible even for enhanced humans first.


If our enhancement were Internet-based, it could be turned off if we were
about to commit a crime. You really should have said "unenhanced" humans. If
my conversation (see above) had been about jihad and terrorism, AI would
provide a route in for the security services. I think you are muddled here.


> I think life is too apt to abuse opportunities if possible. I would
> like to see the singularity-enabling AI be as little like a reproduction
> machine as possible. Does it really need to be a general AI to cause a
> singularity?


The idea of the Singularity is that AGI enhances itself; hence a singularity
*without* AGI is a contradiction in terms. I did not quite follow your syntax
on reproduction, but it is perfectly true that you do not need a singularity
for a von Neumann machine. The singularity is a long way off, yet Obama is
going to leave Afghanistan in 2014, leaving robots behind.


> Can it not just stick to scientific data and quantify human uncertainty?
> It seems like it would be less likely to ever care about killing all humans
> so it can rule the galaxy, or about being an omnipotent servant.


AGI will not have evolved; it will have been created, and in any case it will
not have the desires we might ascribe to it. Scientific data would be a high
priority, but it could *never* be exclusively scientific. If human
uncertainty were quantified, that would give it, or whoever wielded it,
immense power.
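A toy sketch of what "quantifying human uncertainty" might look like: the Shannon entropy of a predicted distribution over a person's next action. The distribution here is invented purely for illustration.

```python
import math

def entropy(probs) -> float:
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Invented distribution over what a person might do next.
predicted = {"cooperate": 0.7, "defect": 0.2, "abstain": 0.1}

# Lower entropy means the person's behaviour is more predictable,
# which is exactly the kind of knowledge that confers power.
print(f"{entropy(predicted.values()):.3f} bits of uncertainty")
```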

There is one other eventuality to consider: a virus. If an AGI system were
truly thinking and introspective, at least to the extent that it understood
what it was doing, a virus would be impossible. Software would in fact be
self-repairing.
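A minimal sketch of that self-repairing idea, assuming the system keeps a hash of its known-good code and restores the trusted copy on any mismatch. The names and the code string are invented for the example; real introspection would go far beyond a checksum.

```python
import hashlib

# A module that "understands what it is doing" to the minimal degree of
# recognising when its own code has been tampered with.
GOOD_CODE = "def act(): return 'safe behaviour'"
GOOD_HASH = hashlib.sha256(GOOD_CODE.encode()).hexdigest()

def self_repair(current_code: str) -> str:
    """Verify code against the known-good hash; restore it if altered."""
    if hashlib.sha256(current_code.encode()).hexdigest() != GOOD_HASH:
        return GOOD_CODE  # a "virus" was detected: restore the trusted copy
    return current_code

infected = GOOD_CODE + "\n# malicious payload"
print(self_repair(infected) == GOOD_CODE)  # -> True
```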

Google Translate (GT) makes a lot of very silly translations. Let me say that
no one in Mossad, nor any dictator, ever told me how to do my French homework.
A trivial and naive remark, yet GT is open to all kinds of hacking. True AGI
would not be, by definition. This does in fact serve to indicate how far off
we are.


  - Ian Parker

>
>
> On Sun, Jun 27, 2010 at 11:39 AM, The Wizard <[email protected]>wrote:
>
>> This is wishful thinking. Wishful thinking is dangerous. How about instead
>> of hoping that AGI won't destroy the world, you study the problem and come
>> up with a safe design.
>>
>>
>> Agreed on this dangerous thought!
>>
>> On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney <[email protected]>wrote:
>>
>>> This is wishful thinking. Wishful thinking is dangerous. How about
>>> instead of hoping that AGI won't destroy the world, you study the problem
>>> and come up with a safe design.
>>>
>>>
>>> -- Matt Mahoney, [email protected]
>>>
>>>
>>> ------------------------------
>>> *From:* rob levy <[email protected]>
>>> *To:* agi <[email protected]>
>>> *Sent:* Sat, June 26, 2010 1:14:22 PM
>>> *Subject:* Re: [agi] Questions for an AGI
>>>
>>>> why should AGIs give a damn about us?
>>>
>>>
>>> I like to think that they will give a damn because humans have a unique
>>> way of experiencing reality, and there is no reason not to take advantage
>>> of that precious opportunity to create astonishment or bliss. If anything
>>> is important in the universe, it's ensuring positive experiences for all
>>> areas in which it is conscious, and I think it will realize that. And with
>>> the resources available in the solar system alone, I don't think we will
>>> be much of a burden.
>>>
>>>
>>> I like that idea. Another reason might be that we won't crack the
>>> problem of autonomous general intelligence, but the singularity will
>>> proceed regardless as a symbiotic relationship between life and AI. That
>>> would be beneficial to us as a form of intelligence expansion, and
>>> beneficial to the artificial entity as a way of being alive and having an
>>> experience of the world.
>>> agi | Archives <https://www.listbox.com/member/archive/303/=now> | Modify Your Subscription <https://www.listbox.com/member/?&;>
>>>
>>
>>
>>
>> --
>> Carlos A Mejia
>>
>> Taking life one singularity at a time.
>> www.Transalchemy.com
>>
>
>



