Stefan,
 
I plan to read your roughly 112-page book 'Jame5' this weekend, while sitting in 
taxis / on planes etc. to Ireland.  PS: I really like the graphics on the 
cover; interesting that the 'fire ball' is the one that is about to hit the 
rest of the balls!  PPS: Why the name Jame5?
 
Candice


Date: Fri, 26 Oct 2007 11:13:17 +0800
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: [singularity] 14 objections against AI/Friendly AI/The Singularity answered

Great write-up. My special interest is AI friendliness, so I would like to comment on 11.

CEV is a concept that avoids answering the question of what friendliness is by 
letting an advanced AI figure out what good might be. Doing so makes endowing 
an AI implementation with friendliness infeasible. CEV is circular. See the 
following core sentence, for example: "...if we knew more, thought faster, were 
more the people we wished we were, had grown up farther together; where the 
extrapolation converges rather than diverges, where our wishes cohere rather 
than interfere; extrapolated as we wish that extrapolated, interpreted as we 
wish that interpreted..."

Simplified: "If we were better people, we would be better people." True, but 
not adding value, as key concepts such as 'friendliness', 'good', 'better' and 
'benevolence' remain undefined. In my recent book (see www.Jame5.com) I take 
the definition of friendliness further by grounding key terms such as 'good' 
and 'friendly'.

If you would rather not read my complete 45,000-word book, I suggest focusing 
on the end of chapter 9 through chapter 12. Those sum up the key concepts. 
Further, I will post a 7-page paper (hopefully today) that further condenses 
the core ideas of what benevolence means and how hard goals for a friendly AI 
can be derived from those ideas.

Kind regards,
Stefan
On 10/26/07, Kaj Sotala <[EMAIL PROTECTED]> wrote:

Can be found at http://www.saunalahti.fi/~tspro1/objections.html . Answers the 
following objections:

1: There are limits to everything. You can't get infinite growth.
2: Extrapolation of graphs doesn't prove anything. It doesn't show that we'll 
have AI in the future.
3: A superintelligence could rewrite itself to remove human tampering. 
Therefore we cannot build Friendly AI.
4: What reason would a super-intelligent AI have to care about us?
5: The idea of a hostile AI is anthropomorphic.
6: Intelligence is not linear.
7: There is no such thing as a human-equivalent AI.
8: Intelligence isn't everything. An AI still wouldn't have the resources of 
humanity.
9: It's too early to start thinking about Friendly AI.
10: Development towards AI will be gradual. Methods will pop up to deal with it.
11: "Friendliness" is too vaguely defined.
12: What if the AI misinterprets its goals?
13: Couldn't AIs be built as pure advisors, so they wouldn't do anything 
themselves? That way, we wouldn't need to worry about Friendly AI.
14: Machines will never be placed in positions of power.

Constructive criticism welcome, as always.

-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/
Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&

-- 
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA
Mobil: +86 1391 009 1931
Skype: Stefan.Pernar


