http://video.google.fr/videoplay?docid=351403092360854662


----- Original Message ----
From: Kaj Sotala <[EMAIL PROTECTED]>
To: [email protected]
Sent: Saturday, 29 December 2007, 23:50:04
Subject: Re: [singularity] Requested: objections to SIAI, AGI, the Singularity 
and Friendliness

On 12/29/07, Charles D Hixson <[EMAIL PROTECTED]> wrote:
> Well, it could be that in any environment there is an optimal level of
> intelligence, and that possessing more doesn't yield dramatically
> improved results, but does yield higher costs.  This is, of course,
> presuming that intelligence is a unitary kind of thing, which I doubt,
> but a more sophisticated argument along the same lines could argue that
> there is an optimum in each dimension of intelligence.

We have an objection of basically that type, but the wording could be
improved. Thanks.

By the way, http://www.acceleratingfuture.com/tom/?p=83 has a more
recent list of the objections so far.



-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=80295484-3ee460
