Re: [agi] Write a doctoral dissertation, trigger a Singularity
The signal/noise ratio on this list is starting to get pretty bad. And why do people always quote the entire prior message in their responses? Is it so hard to highlight and delete?

--
Michael Anissimov
Lifeboat Foundation
http://lifeboat.com
http://acceleratingfuture.com/michael/blog
[agi] Re: Singularity Flash Report! [2006 May 4]
A. T. Murray wrote:

> is a WIRED Magazine Blog by Bruce Sterling, who came upon
> http://www.blogcharm.com/Singularity/25603/Timetable.html and reported
> it in the WIRED Blog, causing hundreds of hits.

Congratulations, you've been mocked in public. Bruce Sterling's blog is so boring, so thrown-together-looking, and so practically content-free that I really wonder why it's there sometimes. His mood is *always* "accident prone" or "don't ask". How can he be accident prone if he barely leaves his desk? And is the mood "don't ask" a way of portraying his sassy cyberpunk side? Please.
RE: [agi] AGI morality
> to ask the AGI's advice on this one, too. Some of these issues will
> come up in relation to humans as they face the possibilities of
> individual and collective transformation via genetic engineering, body
> modification and cyborgisation.

Mind modification, yep, if any of that stuff goes down before the Singularity itself. The ethics and procedures of such an endeavor would be so complex, so tangled, that I wouldn't feel safe unless the individual undertaking self-modification first created an independent and devoted sensory modality for spotting the peaks and valleys in the morality landscape at a safe distance.

Michael Anissimov - http://eo.yifan.net
RE: [agi] AGI morality
Ben Goertzel writes:

> This is a key aspect of Eliezer Yudkowsky's Friendly Goal Architecture

Yeah; too bad there isn't really anyone else to cite on this one. It will be interesting to see what other AGI pursuers have to say about the hierarchical goal system issue, once they write up their thoughts.

> The Novamente design does not lend itself naturally to a hierarchical
> goal structure in which all the AI's actions flow from a single
> supergoal.

Doesn't it depend pretty heavily on how you look at it? If the supergoal is abstract enough and generates a diversity of subgoals, then many people wouldn't call it a supergoal at all. I guess it ultimately boils down to how the AI designer looks at it.

> GoalNodes are simply PredicateNodes that are specially labeled as
> GoalNodes; the special labeling indicates to other MindAgents that they
> are used to drive schema (procedure) learning.

Okay; got it.

>> Letting the AI grow up with whichever goals look immediately useful
>> (regularly check and optimize chunk of code X, win this training game,
>> etc.) and then trying to weave in ethics ...
>
> That was not my suggestion at all, though. The ethical goals can be
> there from the beginning. It's just that a purely hierarchical goal
> structure is highly unlikely to emerge as a goal map, i.e. an attractor,
> of Novamente's self-organizing goal-creating dynamics.

Right, that statement was directed towards Philip Sutton's mail, but I appreciate your stepping in to clarify. Of course, whether AIs with substantially prehuman (low) intelligence can have goals that deserve being called ethical or unethical is a matter of word choice and definitions.

Michael Anissimov - http://eo.yifan.net
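For concreteness, here is a minimal Python sketch of the labeling idea Ben describes above. This is not Novamente's actual code; the names PredicateNode, the GOAL label, make_goal, and SchemaLearningAgent are all illustrative assumptions. It just shows a GoalNode as a PredicateNode carrying a special label that a schema-learning agent checks, with one abstract supergoal fanning out into diverse subgoals.

# Minimal sketch under assumed names -- not Novamente's real code.

class PredicateNode:
    """A node evaluating some predicate over the system's state."""
    def __init__(self, name, labels=None, children=None):
        self.name = name
        self.labels = set(labels or [])
        self.children = list(children or [])  # subgoals, on a hierarchical reading

    def is_goal(self):
        # The "special labeling": a GoalNode is just a PredicateNode tagged GOAL.
        return "GOAL" in self.labels


def make_goal(name, subgoals=()):
    return PredicateNode(name, labels={"GOAL"}, children=subgoals)


class SchemaLearningAgent:
    """Stand-in for a MindAgent that drives procedure learning from goal nodes."""
    def step(self, node):
        if node.is_goal():
            print("learning schemata to satisfy:", node.name)
        for child in node.children:
            self.step(child)


# One abstract supergoal generating a diversity of subgoals -- the
# "purely hierarchical" reading discussed above.
supergoal = make_goal("act ethically", subgoals=[
    make_goal("regularly check and optimize chunk of code X"),
    make_goal("win this training game"),
])
SchemaLearningAgent().step(supergoal)

On this toy reading, the "single supergoal" question is just whether every goal is reachable by walking children from one root node; the goal labeling itself is orthogonal to whether the structure is hierarchical.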