Re: [singularity] Animal rights

2006-10-30 Thread Lúcio de Souza Coelho
On 10/27/06, Matt Mahoney [EMAIL PROTECTED] wrote: (...) 2. What is human? - If you make an exact copy of a human and kill the original, is it murder? - What if you copy only the brain and put it in a different body? - What if you put the copy in a robot body? - What if you copy only the

Re: Re: [singularity] Defining the Singularity

2006-10-30 Thread Lúcio de Souza Coelho
On 10/27/06, Matt Mahoney [EMAIL PROTECTED] wrote: (...) Orwell's 1984 predicted a world where a totalitarian government watched your every move. What he failed to predict is that it would happen in a democracy. People want surveillance. You want cameras in businesses for better security.

Re: [singularity] Motivational Systems that are stable

2006-10-30 Thread Richard Loosemore
Mitchell Porter wrote: Richard Loosemore: In fact, if it knew all about its own design (and it would, eventually), it would check to see just how possible it might be for it to accidentally convince itself to disobey its prime directive. But it doesn't have a prime directive, does it? It

Re: [agi] Re: [singularity] Motivational Systems that are stable

2006-10-30 Thread Richard Loosemore
Ben, I guess the issue I have with your critique is that you say that I have given no details, no rigorous argument, just handwaving, etc. But you are being contradictory: on the one hand you say that the proposal is vague/underspecified/does not give any arguments, but then having

Re: Re: [agi] Re: [singularity] Motivational Systems that are stable

2006-10-30 Thread Ben Goertzel
Hi Richard, Let me go back to the start of this dialogue... Ben Goertzel wrote: Loosemore wrote: The motivational system of some types of AI (the types you would classify as tainted by complexity) can be made so reliable that the likelihood of them becoming unfriendly would be similar to the