Ben,

Let me try to be mathematical and behavioral, too.
Assume we finally agree on a way to measure a system's problem-solving capability (over a wide variety of complex goals) with a numerical function F(t), where t is the time of the measurement. The system's resource cost is likewise measured by a numerical function C(t). You and Shane believe that the value of F(t) is also a measurement of "intelligence". Furthermore, you suggest "efficient intelligence" to be F(t)/C(t), arguing that it is more realistic and relevant than "raw intelligence". You also think my definition of intelligence is roughly the same. But to me, in this situation "intelligence" is better measured by F'(t), that is, the derivative of the capability: how much the capability of the system can change (usually increase) under a constant resource supply. I believe this is also close to what Mark said.

All three measurements make sense and are related to the everyday meaning of the word "intelligence", though they are very different. For a system without the ability to adapt, both F(t) and F(t)/C(t) can be large, but F'(t) is zero --- this is a conventional computer system, in my mind. On the other hand, a system with a large F'(t) has great potential, though initially it may not have much problem-solving capability --- this is an AI system, according to my definition.

For practical applications, we surely want systems with both a large F(t)/C(t) and a large F'(t), and a system with a huge F(t) at the cost of a huge C(t), like AIXI, is unrealistic --- we all agree here, including Shane, so that is not the issue. The issue is: F(t)/C(t) and F'(t) are different (though not opposites of each other).

Pei
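[Editor's note: to make the contrast concrete, here is a minimal numerical sketch of the three measures Pei distinguishes. The specific growth curves, constants, and function names are assumptions of this sketch, chosen only to illustrate the distinction; they are not anything specified in the thread.]

```python
# Toy sketch of the three candidate measures of "intelligence":
#   F(t)       raw capability
#   F(t)/C(t)  efficient intelligence (Ben/Shane)
#   F'(t)      rate of capability change under constant resources (Pei)
# All curves below are illustrative assumptions.

def capability_conventional(t):
    """F(t) for a non-adaptive system: high but flat, so F'(t) = 0."""
    return 100.0

def capability_adaptive(t):
    """F(t) for an adaptive system: initially low, but growing."""
    return 5.0 + 2.0 * t

def cost(t):
    """C(t): a constant resource supply, as Pei stipulates."""
    return 10.0

def derivative(f, t, h=1e-6):
    """Central-difference estimate of F'(t)."""
    return (f(t + h) - f(t - h)) / (2.0 * h)

for t in (0.0, 10.0, 50.0):
    for name, f in [("conventional", capability_conventional),
                    ("adaptive", capability_adaptive)]:
        print(f"t={t:5.1f}  {name:12s}  F={f(t):7.1f}  "
              f"F/C={f(t) / cost(t):6.2f}  F'={derivative(f, t):5.2f}")
```

Running it shows the conventional system scoring high on both F(t) and F(t)/C(t) but zero on F'(t), while the adaptive system starts low on both ratios yet overtakes the conventional one in raw capability as t grows, which is exactly the difference between the two measures that Pei is pointing at.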
On 5/20/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:

> Sure, that's fine... I mean: I have given a mathematical definition before, so all these verbal paraphrases should be viewed as rough approximations anyway...
>
> On 5/20/07, Mark Waser <[EMAIL PROTECTED]> wrote:
> > Allow me to paraphrase . . . .
> >
> > Something is intelligent if it is functional over a wide variety of complex goals.
> >
> > Is that a reasonable shot at your definition?
> >
> > ----- Original Message -----
> > From: Benjamin Goertzel
> > To: [email protected]
> > Sent: Sunday, May 20, 2007 2:41 PM
> > Subject: Re: [agi] Relationship btw consciousness and intelligence
> >
> > > Intelligence, to me, is the ability to achieve complex goals...
> > >
> > > This is one way of being functional... a paperclip, though, is very functional yet not very intelligent...
> > >
> > > ben g
> > >
> > > On 5/20/07, Mark Waser <[EMAIL PROTECTED]> wrote:
> > > > > Sure... I prefer to define intelligence in terms of behavioral functionality rather than internal properties, but you are free to define it differently ;-)
> > > >
> > > > I wouldn't call learning/adaptability an internal(-only) property . . . .
> > > >
> > > > > I note that if the Chinese language changes over time, then the {Searle + rulebook} system will rapidly become less intelligent in this context !!!!
> > > >
> > > > See, now this indicates the funkiness of your definition . . . . Replace "intelligent" with "functional" and it makes a lot more sense.
> > > >
> > > > Actually, that raises a good question -- what is the difference between your "intelligent" and your "functional"?
> > > >
> > > > ----- Original Message -----
> > > > From: Benjamin Goertzel
> > > > To: [email protected]
> > > > Sent: Sunday, May 20, 2007 2:11 PM
> > > > Subject: Re: [agi] Relationship btw consciousness and intelligence
> > > >
> > > > > Sure... I prefer to define intelligence in terms of behavioral functionality rather than internal properties, but you are free to define it differently ;-)
> > > > >
> > > > > I note that if the Chinese language changes over time, then the {Searle + rulebook} system will rapidly become less intelligent in this context !!!!
> > > > >
> > > > > ben g
> > > > >
> > > > > On 5/20/07, Mark Waser <[EMAIL PROTECTED]> wrote:
> > > > > > I liked most of your points, but . . . .
> > > > > >
> > > > > > > However, Searle's example is pathological in the sense that it posits a system with a high degree of intelligence associated with a functionality that is NOT associated with any intensity-of-consciousness. But I suggest that this pathology is due to the unrealistically large amount of computing resources that the rulebook requires.
> > > > > >
> > > > > > Not by my definition of intelligence (which requires learning/adaptation).
> > > > > >
> > > > > > ----- Original Message -----
> > > > > > From: Benjamin Goertzel
> > > > > > To: [email protected]
> > > > > > Sent: Sunday, May 20, 2007 1:24 PM
> > > > > > Subject: [agi] Relationship btw consciousness and intelligence
> > > > > >
> > > > > > > Hi all,
> > > > > > >
> > > > > > > Someone emailed me recently about Searle's Chinese Room argument,
> > > > > > >
> > > > > > > http://en.wikipedia.org/wiki/Chinese_room
> > > > > > >
> > > > > > > a topic that normally bores me to tears, but it occurred to me that part of my reply might be of interest to some on this list, because it pertains to the more general issue of the relationship btw consciousness and intelligence.
> > > > > > >
> > > > > > > It also ties in with the importance of thinking about "efficient intelligence" rather than just raw intelligence, as discussed in the recent thread on definitions of intelligence.
> > > > > > > Here is the relevant part of my reply about Searle:
> > > > > > >
> > > > > > > ****
> > > > > > > However, a key point is: the scenario Searle describes is likely not physically possible, due to the unrealistically large size of the rulebook. The structures that we associate with intelligence (will, focused awareness, etc.) in a human context all come out of the need to do intelligent processing within modest space and time requirements.
> > > > > > >
> > > > > > > So when we say we feel like the {Searle + rulebook} system isn't really understanding Chinese, what we mean is: it isn't understanding Chinese according to the methods we are used to, which are methods adapted to deal with modest space and time resources.
> > > > > > >
> > > > > > > This ties in with the relationship btw intensity-of-consciousness and degree-of-intelligence. In real life, these often seem to be tied together, because the cognitive structures that correlate with intensity of consciousness are useful ones for achieving intelligent behaviors.
> > > > > > >
> > > > > > > However, Searle's example is pathological in the sense that it posits a system with a high degree of intelligence associated with a functionality that is NOT associated with any intensity-of-consciousness. But I suggest that this pathology is due to the unrealistically large amount of computing resources that the rulebook requires.
> > > > > > >
> > > > > > > I.e., it is the finitude of resources that causes intelligence and intensity-of-consciousness to be correlated. The fact that this correlation breaks down in a pathological, physically impossible case that requires dramatically more resources doesn't mean too much...
> > > > > > > ****
> > > > > > >
> > > > > > > Note that I write about intensity of consciousness rather than presence of consciousness. I tend toward panpsychism, but I do accept that "while all animals are conscious, some animals are more conscious than others" (to pervert Orwell). I have elaborated on this perspective considerably in The Hidden Pattern.
> > > > > > >
> > > > > > > -- Ben G
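[Editor's note: Ben's claim that the rulebook is unrealistically large can be made quantitative with a back-of-envelope estimate. The sketch below treats the rulebook as a lookup table keyed on bounded conversation histories; the vocabulary size and history length are illustrative assumptions of this note, not figures from Searle or from the thread.]

```python
import math

# Rough size of a lookup-table "rulebook" for the Chinese Room.
# Assumptions (illustrative, not from the thread): ~3000 common
# Chinese characters, and each response keyed on the last 100
# characters of conversation history.
vocabulary = 3000
history_length = 100

# Distinct histories the table must cover: vocabulary ** history_length.
log10_entries = history_length * math.log10(vocabulary)

print(f"rulebook entries ~ 10^{log10_entries:.0f}")  # ~ 10^348
print("atoms in the observable universe ~ 10^80")    # standard rough figure
```

Even under these modest assumptions the table exceeds any physically realizable store by hundreds of orders of magnitude, which is the finitude-of-resources point the reply turns on.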
