Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore
Hank Conn wrote: Yes, now the point being that if you have an AGI and you aren't in a sufficiently fast RSI loop, there is a good chance that if someone else were to launch an AGI with a faster RSI loop, your AGI would lose control to the other AGI where the goals of the other AGI differed

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore
James Ratcliff wrote: You could start a smaller AI with a simple hardcoded desire or reward mechanism to learn new things, or to increase the size of its knowledge. That would be a simple way to insert it programmatically. That, along with a seed AI, must be put in there from the beginning.
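Ratcliff's hardcoded "reward for growing knowledge" could be as simple as the following minimal Python sketch. This is an illustration only; the names (KnowledgeBase, reward) and the design are invented here, not taken from the post.

# Hypothetical sketch of a hardcoded knowledge-growth reward.
class KnowledgeBase:
    def __init__(self):
        self.facts = set()

    def add(self, fact):
        self.facts.add(fact)

    def size(self):
        return len(self.facts)

def reward(size_before, size_after):
    """Hardcoded reward: positive whenever the knowledge base grows."""
    return float(size_after - size_before)

kb = KnowledgeBase()
before = kb.size()
kb.add("water is wet")
print(reward(before, kb.size()))  # 1.0 -- learning something new pays off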

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Samantha Atkins
On Nov 30, 2006, at 12:21 PM, Richard Loosemore wrote: Recursive Self Improvement? The answer is yes, but with some qualifications. In general RSI would be useful to the system IF it were done in such a way as to preserve its existing motivational priorities. How could the system

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Samantha Atkins
On Nov 30, 2006, at 10:15 PM, Hank Conn wrote: Yes, now the point being that if you have an AGI and you aren't in a sufficiently fast RSI loop, there is a good chance that if someone else were to launch an AGI with a faster RSI loop, your AGI would lose control to the other AGI where the

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Hank Conn
On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote: --- Hank Conn [EMAIL PROTECTED] wrote: The further the actual target goal state of that particular AI is away from the actual target goal state of humanity, the worse. The goal of ... humanity... is that the AGI implemented that will have

Re: [agi] A question on the symbol-system hypothesis

2006-12-01 Thread Philip Goetz
On 11/30/06, Mark Waser [EMAIL PROTECTED] wrote: With many SVD systems, however, the representation is more vector-like and *not* conducive to easy translation to human terms. I have two answers to these cases. Answer 1 is that it is still easy for a human to look at the closest matches to
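Goetz's Answer 1 ("look at the closest matches") can be made concrete with a small sketch: derive latent term vectors from an SVD, then rank terms by cosine similarity to the opaque vector you want to interpret. The data, dimensions, and names below are toy values invented for illustration, not anything from the thread.

import numpy as np

terms = ["cat", "dog", "pet", "car", "engine"]
# Toy term-document count matrix (5 terms x 4 documents).
X = np.array([
    [2, 1, 0, 0],
    [1, 2, 0, 0],
    [2, 2, 0, 0],
    [0, 0, 2, 1],
    [0, 0, 1, 2],
], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
term_vecs = U[:, :2] * s[:2]  # 2-dimensional latent term vectors

def closest(vec, k=3):
    """Rank terms by cosine similarity to an opaque latent vector."""
    sims = term_vecs @ vec / (np.linalg.norm(term_vecs, axis=1)
                              * np.linalg.norm(vec) + 1e-12)
    return [terms[i] for i in np.argsort(-sims)[:k]]

print(closest(term_vecs[2]))  # nearest neighbors of "pet" (incl. itself):
                              # a human-readable handle on the vector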

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Hank Conn
This seems rather circular and ill-defined. - samantha Yeah I don't really know what I'm talking about at all.

Re: [agi] A question on the symbol-system hypothesis

2006-12-01 Thread Matt Mahoney
--- Philip Goetz [EMAIL PROTECTED] wrote: On 11/30/06, James Ratcliff [EMAIL PROTECTED] wrote: One good one: Consciousness is a quality of the mind generally regarded to comprise qualities such as subjectivity, self-awareness, sentience, sapience, and the ability to perceive the

Re: [agi] RSI - What is it and how fast?

2006-12-01 Thread Philip Goetz
On 12/1/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: I don't think so. The singularitarians tend to have this mental model of a superintelligence that is essentially an analogy of the difference between an animal and a human. My model is different. I think there's a level of universality,

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Matt Mahoney
--- Hank Conn [EMAIL PROTECTED] wrote: On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote: The goal of humanity, like that of all other species, was determined by evolution. It is to propagate the species. That's not the goal of humanity. That's the goal of the evolution of humanity, which

Re: [agi] RSI - What is it and how fast?

2006-12-01 Thread J. Storrs Hall, PhD.
On Friday 01 December 2006 20:06, Philip Goetz wrote: Thus, I don't think my ability to follow rules written on paper to implement a Turing machine proves that the operations powering my consciousness are Turing-complete. Actually, I think it does prove it, since your simulation of a Turing
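Hall's point is that faithfully following a Turing machine's rule table is itself a universal computation. For concreteness, here is a standard textbook-style interpreter for such a rule table; the machine shown (a binary incrementer) is a stock example, not one from the thread.

from collections import defaultdict

def run_tm(rules, tape, state="start", pos=0, max_steps=1000):
    """rules: (state, symbol) -> (new_state, write_symbol, move in {-1, +1})."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # blank cells are "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        state, cells[pos], move = rules[(state, cells[pos])]
        pos += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Increment a binary number written least-significant-bit first.
rules = {
    ("start", "0"): ("halt", "1", 1),   # flip first 0 to 1, done
    ("start", "1"): ("start", "0", 1),  # carry: flip 1 to 0, move right
    ("start", "_"): ("halt", "1", 1),   # carried past the end: append 1
}
print(run_tm(rules, "111"))  # "0001": 7 + 1 = 8, LSB first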

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore
Samantha Atkins wrote: On Nov 30, 2006, at 12:21 PM, Richard Loosemore wrote: Recursive Self Improvement? The answer is yes, but with some qualifications. In general RSI would be useful to the system IF it were done in such a way as to preserve its existing motivational priorities.

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore
Matt Mahoney wrote: I guess we are arguing terminology. I mean that the part of the brain which generates the reward/punishment signal for operant conditioning is not trainable. It is programmed only through evolution. There is no such thing. This is the kind of psychology that died out at

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore
Matt Mahoney wrote: --- Hank Conn [EMAIL PROTECTED] wrote: The further the actual target goal state of that particular AI is away from the actual target goal state of humanity, the worse. The goal of ... humanity... is that the AGI implemented that will have the strongest RSI curve also will

Re: [agi] A question on the symbol-system hypothesis

2006-12-01 Thread Kashif Shah
A little late on the draw here - I am a new member to the list and was checking out the archives. I had an insight into this debate over understanding. James Ratcliff wrote: Understanding is a dum-dum word, it must be specifically defined as a concept or not used. Understanding art is a