Re: [singularity] Motivational Systems that are stable

2006-10-30 Thread Richard Loosemore
Mitchell Porter wrote: Richard Loosemore: In fact, if it knew all about its own design (and it would, eventually), it would check to see just how possible it might be for it to accidentally convince itself to disobey its prime directive... But it doesn't have a prime directive, does it? It

Re: [agi] Re: [singularity] Motivational Systems that are stable

2006-10-30 Thread Richard Loosemore
Ben, I guess the issue I have with your critique is that you say that I have given no details, no rigorous argument, just handwaving, etc. But you are being contradictory: on the one hand you say that the proposal is vague/underspecified/does not give any arguments, but then having

Re: Re: [agi] Re: [singularity] Motivational Systems that are stable

2006-10-30 Thread Ben Goertzel
Hi Richard, Let me go back to the start of this dialogue... Ben Goertzel wrote: Loosemore wrote: The motivational system of some types of AI (the types you would classify as tainted by complexity) can be made so reliable that the likelihood of them becoming unfriendly would be similar to the

RE: [singularity] Motivational Systems that are stable

2006-10-29 Thread Mitchell Porter
Richard Loosemore: In fact, if it knew all about its own design (and it would, eventually), it would check to see just how possible it might be for it to accidentally convince itself to disobey its prime directive... But it doesn't have a prime directive, does it? It has large numbers of

Re: [agi] Re: [singularity] Motivational Systems that are stable

2006-10-29 Thread Ben Goertzel
Hi, There is something about the gist of your response that seems strange to me, but I think I have put my finger on it: I am proposing a general *class* of architectures for an AI-with-motivational-system. I am not saying that this is a specific instance (with all the details nailed down)

Re: [singularity] Motivational Systems that are stable

2006-10-27 Thread Richard Loosemore
Curious. A couple of days ago, I responded to demands that I produce arguments to justify the conclusion that there were ways to build a friendly AI that was extremely stable and trustworthy, but without having to give a mathematical proof of its friendliness. Now, granted, the text was

Re: [singularity] Motivational Systems that are stable

2006-10-25 Thread Anna Taylor
The last I heard, computers are spied upon because of the language they are generating. Why would the government care about the guy who picks up garbage? Richard Loosemore wrote, Wed, Oct 25, 2006: The word trapdoor is a reference to trapdoor algorithms that allow computers to be spied
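
For readers unfamiliar with the term: a trapdoor function is easy to compute in one direction but hard to invert without secret knowledge. A minimal sketch in Python, using textbook RSA-style toy numbers (this illustration is not from the thread itself, and the numbers are far too small to be secure):

    # Minimal sketch of a trapdoor function: easy forward, hard to invert
    # without the secret. Toy RSA-style numbers -- illustrative, not secure.
    p, q = 61, 53              # secret primes: knowing these is the "trapdoor"
    n = p * q                  # public modulus (3233)
    e = 17                     # public exponent
    phi = (p - 1) * (q - 1)    # 3120, computable only from p and q
    d = pow(e, -1, phi)        # private exponent (2753); requires Python 3.8+
    m = 42                     # a message
    c = pow(m, e, n)           # forward direction: anyone can compute this
    assert pow(c, d, n) == m   # inverting requires the trapdoor value d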