Re: [singularity] Defining the Singularity

2006-10-27 Thread Richard Loosemore
Matt, This is a textbook example of the way that all discussions of the consequences of a singularity tend to go. What you have done here is to repeat the same song heard over and over again from people who criticise the singularity on the grounds that one or another nightmare will

Re: [singularity] Motivational Systems that are stable

2006-10-27 Thread Richard Loosemore
Curious. A couple of days ago, I responded to demands that I produce arguments to justify the conclusion that there were ways to build a friendly AI that was extremely stable and trustworthy, but without having to give a mathematical proof of its friendliness. Now, granted, the text was

Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-27 Thread Anna Taylor
Josh Cowan wrote: Issues associated with animal rights are better known than the coming Singularity. Issues associated with animal rights are easy to understand; they make you feel good when you help. The general public can pick up a phone, donate money and feel rewarded that it is helping a

Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-27 Thread BillK
On 10/22/06, Anna Taylor [EMAIL PROTECTED] wrote: On 10/22/06, Bill K wrote: But I agree that huge military R&D expenditure (which already supports many, many research groups) is the place most likely to produce singularity-level events. I am aware that the military is the most likely place to

Re: [singularity] Defining the Singularity

2006-10-27 Thread Matt Mahoney
I am not describing a nightmare scenario where a SAI forces its will upon us. People will *want* these things. If you were dying and we had the technology to upload your mind, wouldn't you? Orwell's 1984 predicted a world where a totalitarian government watched your every move. What he

[singularity] Animal rights

2006-10-27 Thread Matt Mahoney
I think an animal rights analogy can help us answer important questions about AGI design. 1. Should a superhuman AI (SAI) decide what is best for us? Or should we decide? In the case of humans and animals, humans are smarter, and humans decide. We keep hamsters in a cage because setting

[singularity] Re: [agi] Motivational Systems that are stable

2006-10-27 Thread Ben Goertzel
Richard, As I see it, in this long message you have given a conceptual sketch of an AI design including a motivational subsystem and a cognitive subsystem, connected via a complex network of continually adapting connections. You've discussed the way such a system can potentially build up a

Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-27 Thread Anna Taylor
On 10/28/06, Bill K wrote: I've just seen a news article that is relevant. http://technology.guardian.co.uk/weekly/story/0,,1930960,00.html I'm aware that robot fighters of some sort are being built by the military; it would be ridiculous to believe that with technology as advanced as it is,