Re: [agi] rule-based NL system

2007-04-27 Thread YKY (Yan King Yin)
I'll answer you point by point; others who find this tedious can just scroll down to "conclusions" at the bottom. On 4/27/07, Mark Waser <[EMAIL PROTECTED]> wrote: I am NOT suggesting a rule-based system at this level. First I figure out a good representation for the minimal Basic English grammar …

Re: [agi] Circular definitions of intelligence

2007-04-27 Thread J. Storrs Hall, PhD.
On Friday 27 April 2007 17:44, John G. Rose wrote: > So then we decide OK let's try to measure the temperature of a black hole > so we jettison the X1117 into the black hole and right before it passes the > event horizon it converts itself into radially emitted neutrinos by > sacrificing itself and …

[agi] unsubscribe

2007-04-27 Thread Li, Jiang (NIH/CC/DRD) [F]
Please unsubscribe me from this list. Thank you. Jiang - This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to: http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936

RE: [agi] Circular definitions of intelligence

2007-04-27 Thread John G. Rose
All this talk about stupid thermometers... Some thermometers are smarter than others. Why? When it comes to telling temperature they know what they are talking about. Some have higher margins of error; they may be off by 1/2 degree F. Others are right on. But then put one in an environment that …

Re: [agi] Circular definitions of intelligence - who cares?

2007-04-27 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: > Benjamin Goertzel wrote: > > > Has anyone noticed that people who have studied AGI all their lives, like Kurzweil and Minsky, aren't trying to build one? > > > -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] Circular definitions of intelligence - who cares?

2007-04-27 Thread Richard Loosemore
Benjamin Goertzel wrote: Has anyone noticed that people who have studied AGI all their lives, like Kurzweil and Minsky, aren't trying to build one? -- Matt Mahoney, [EMAIL PROTECTED] I don't get it ... some of us who have studied AGI all our lives …

Re: [agi] Circular definitions of intelligence - who cares?

2007-04-27 Thread Mike Tintner
pray tell which errors [The Emotion Machine BTW contains a v. rough proposal for an AGI, human-like system which was formally put together in a separate paper] - Original Message - From: Benjamin Goertzel To: agi@v2.listbox.com Sent: Friday, April 27, 2007 8:42 PM Subject: …

Re: [agi] Sony's QRIO robot

2007-04-27 Thread Bob Mottram
It's a shame that Sony discontinued their robotics division. I suspect in this case this is just anthropomorphisation of some quirk of the robot's control system. What's being described here is the psychological principle of reciprocation: I give something of value to you, you give something of value …

Re: [agi] Circular definitions of intelligence - who cares?

2007-04-27 Thread Benjamin Goertzel
So far as I know, Kurzweil never made a serious effort to create AGI. Minsky did, but he made some serious conceptual errors, in spite of his obvious brilliance... -- Ben G On 4/27/07, Matt Mahoney <[EMAIL PROTECTED]> wrote: --- Benjamin Goertzel <[EMAIL PROTECTED]> wrote: …

Re: [agi] Circular definitions of intelligence - who cares?

2007-04-27 Thread Matt Mahoney
--- Benjamin Goertzel <[EMAIL PROTECTED]> wrote: > > Has anyone noticed that people who have studied AGI all their lives, like Kurzweil and Minsky, aren't trying to build one? > > -- Matt Mahoney, [EMAIL PROTECTED] > I don't get it ... some of us who have studied …

Re: [agi] Circular definitions of intelligence

2007-04-27 Thread Mike Tintner
Interesting but, if I've understood the Universal Intelligence paper, there are 3 major flaws. 1) It seems to assume that intelligence is based on a rational, deterministic program - is that right? Adaptive intelligence, I would argue, definitely isn't. There isn't a rational, right way to approach …

Re: [agi] Circular definitions of intelligence

2007-04-27 Thread Mark Waser
The world can always be described by an arbitrarily large logical predicate. The world as it exists now is one world-state. If one clause in the predicate were changed, that would be another world-state. More complex world states need more clauses in the predicate to describe them (i.e. …
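Waser's picture of a world-state as a large conjunction of clauses can be sketched concretely. The clause representation and the clause-count complexity measure below are my own illustrative assumptions, not anything specified in the thread:

```python
# A world-state modeled as a conjunction of ground clauses (a set of
# variable/value pairs). Changing any single clause yields a distinct
# world-state, and "more complex" states simply carry more clauses.
# Representation and complexity measure are illustrative assumptions.

world_a = frozenset({("door", "open"), ("light", "on")})
world_b = frozenset({("door", "closed"), ("light", "on")})  # one clause changed

distinct = world_a != world_b   # changing one clause gives another world-state
complexity = len(world_a)       # crude complexity: number of clauses
```

On this reading, the "size" of a world-state reduces to counting (or weighting) the clauses needed to describe it.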

Re: [agi] Circular definitions of intelligence

2007-04-27 Thread Kaj Sotala
On 4/27/07, Richard Loosemore <[EMAIL PROTECTED]> wrote: If all definitions of intelligence end up with a judgment call, we might as well throw away all of them and just say "Intelligence is what a human would consider intelligent". That is useless, of course, but no less useless than longer definitions …

Re: [agi] Circular definitions of intelligence

2007-04-27 Thread Mark Waser
Entity in question is any particle in the universe. OK. Said particle either has a fixed velocity and direction or is stationary. States that it is trying to avoid: collision with its antiparticle. No, the particle is not "trying" to do anything. Size of space containing all world-states …

Re: [agi] Circular definitions of intelligence - who cares?

2007-04-27 Thread Benjamin Goertzel
Has anyone noticed that people who have studied AGI all their lives, like Kurzweil and Minsky, aren't trying to build one? -- Matt Mahoney, [EMAIL PROTECTED] I don't get it ... some of us who have studied AGI all our lives ARE trying to build one...

[agi] Circular definitions of intelligence - who cares?

2007-04-27 Thread Matt Mahoney
Maybe you want to build an intelligent system to get a better understanding of how people think. Or maybe you want to make money. Or maybe it would be cool to launch a singularity. None of these projects is going to result in something indistinguishable from a human brain. They will be machines …

Re: [agi] Circular definitions of intelligence

2007-04-27 Thread Benjamin Goertzel
What is a waste of time, however, is for someone to think that they have made some progress by going from a partial description that is 8 words long to a partial description that is 21 words long. 210,000 words, maybe (if it were coherent enough). But in the jump from 8 to 21 I see nothing but …

Re: [agi] Circular definitions of intelligence

2007-04-27 Thread Benjamin Goertzel
I guess I don't really understand your formalism, i.e. -- how you define a "world-state" ... do you mean a set of elementary world-states described by some logical predicate? -- how do you define the size of a world-state ... would it be the complexity of that predicate? thx ben On 4/27/07, Ma…

Re: [agi] Circular definitions of intelligence

2007-04-27 Thread Shane Legg
Kaj, (Disclaimer: I do not claim to know the sort of maths that Ben and Hutter and others have used in defining intelligence. I'm fully aware that I'm dabbling in areas that I have little education in, and might be making a complete fool of myself. Nonetheless...) I'm currently writing my PhD …

Re: [agi] Circular definitions of intelligence

2007-04-27 Thread Mark Waser
>> I don't like this so much, because two sets of world-states with equal >> measure (size) may have very different complexity... I don't believe so because "complex" world states are by definition larger since they have more variables to vary (and thus more points/states/variables). It is true …

Re: [agi] Circular definitions of intelligence

2007-04-27 Thread Richard Loosemore
Benjamin Goertzel wrote: If all definitions of intelligence end up with a judgment call, we might as well throw away all of them and just say "Intelligence is what a human would consider intelligent". But, also, then "Sexy is what a human would consider sexy" Yet, intelligence …

Re: [agi] Circular definitions of intelligence

2007-04-27 Thread Richard Loosemore
Mark Waser wrote: On 4/26/07, Richard Loosemore <[EMAIL PROTECTED]> wrote: Can you point to an objective definition that is clear about which things are more intelligent than others, and which does not accidentally include things that manifestly conflict with the commonsense definition (by false negatives or false positives) …

Re: [agi] Circular definitions of intelligence

2007-04-27 Thread Benjamin Goertzel
Yet, intelligence and sexiness are not the same thing... Though fortunately for us geeks, some people do find them to be correlated ;-D

Re: [agi] Circular definitions of intelligence

2007-04-27 Thread Benjamin Goertzel
If all definitions of intelligence end up with a judgment call, we might as well throw away all of them and just say "Intelligence is what a human would consider intelligent". But, also, then "Sexy is what a human would consider sexy" Yet, intelligence and sexiness are not the same thing...

Re: [agi] Circular definitions of intelligence

2007-04-27 Thread Richard Loosemore
Kaj Sotala wrote: On 4/26/07, Richard Loosemore <[EMAIL PROTECTED]> wrote: Can you point to an objective definition that is clear about which things are more intelligent than others, and which does not accidentally include things that manifestly conflict with the commonsense definition (by false negatives or false positives) …

Re: [agi] Circular definitions of intelligence

2007-04-27 Thread Benjamin Goertzel
On 4/27/07, Mark Waser <[EMAIL PROTECTED]> wrote: >> On 4/26/07, Richard Loosemore <[EMAIL PROTECTED]> wrote: >>> Can you point to an objective definition that is clear about which >>> things are more intelligent than others, and which does not accidentally >>> include things that manifestly conflict …

Re: [agi] Sony's QRIO robot

2007-04-27 Thread J. Storrs Hall, PhD.
On the face of it, this isn't anything more than Parry did. If you have a substrate that can interpret actions and situations into emotional inputs ("I just got insulted") the overall emotional control mechanism can be modelled by an embarrassingly simple system of differential equations. Josh
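The "embarrassingly simple system of differential equations" Hall alludes to can be sketched as a linear decay model. The emotion variables, decay rate, and stimulus values below are illustrative assumptions, not anything from QRIO or PARRY:

```python
# Toy emotional-control model: each emotion variable decays toward a
# neutral baseline and is driven by interpreted events (e.g. "I just
# got insulted" -> a spike on the anger input). One Euler step of
# d(e)/dt = -decay * e + stimulus. All constants are illustrative.

def step(state, stimulus, dt=0.1, decay=0.5):
    return {k: v + dt * (-decay * v + stimulus.get(k, 0.0))
            for k, v in state.items()}

state = {"anger": 0.0, "joy": 0.0}
state = step(state, {"anger": 5.0})   # an insult spikes anger
peak = state["anger"]
for _ in range(50):                   # with no further input, anger decays
    state = step(state, {})
```

Colby's PARRY was similar in spirit: a handful of scalar affect variables (fear, anger, mistrust) pushed around by conversational events.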

Re: [agi] Sony's QRIO robot

2007-04-27 Thread Mike Tintner
Now this, I would imagine, is where evolutionary thinking has to be v. helpful if not essential ... if one traces the evolution of cooperative activity in animals, it should be a useful guide to the stages needed to build up cooperative activity among any agents. Can anyone think of any work done …

Re: [agi] Circular definitions of intelligence

2007-04-27 Thread Mark Waser
On 4/26/07, Richard Loosemore <[EMAIL PROTECTED]> wrote: Can you point to an objective definition that is clear about which things are more intelligent than others, and which does not accidentally include things that manifestly conflict with the commonsense definition (by false negatives or false positives) …

Re: [agi] Circular definitions of intelligence

2007-04-27 Thread Mike Tintner
Yes, all intelligent systems (i.e. living creatures) have a psychoeconomy of goals - important point. But in solving any particular problem, they may be dealing with only one or two goals at a time. Have the measures of intelligence mentioned included: 1) the depth of the problem - the number …

Re: [agi] Circular definitions of intelligence

2007-04-27 Thread Kaj Sotala
On 4/26/07, Richard Loosemore <[EMAIL PROTECTED]> wrote: Can you point to an objective definition that is clear about which things are more intelligent than others, and which does not accidentally include things that manifestly conflict with the commonsense definition (by false negatives or false positives) …

Re: [agi] Sony's QRIO robot

2007-04-27 Thread Benjamin Goertzel
Hi Mike, Yeah, this is a well known requirement ;-) In studying the MMOG application area for early-stage AGIs, the observation has been made that, in current MMOGs, one of the big weaknesses of NPCs (non-player characters) is their inability to work together (with each other or with humans) as a team …

Re: [agi] Sony's QRIO robot

2007-04-27 Thread Mike Tintner
Er, this is a v. important dimension so far, I think, left out of the informal list of basic requirements for AGI: the social dimension of a machine's activities - and particularly its social exchanges. In fact, if an AGI machine is to undertake and adapt to problematic activities, it will probably …

Re: [agi] The University of Phoenix Test [was: Why do you think your AGI design will work?]

2007-04-27 Thread James Ratcliff
This is more along the lines of what I have been trying to say as well: how, and can, we enumerate the type of stepping-stone tasks we need an AGI to perform, rate their complexity, and then look through and see which ones we really want to concentrate on as the more important matters, with the fi…

RE: [agi] Circular definitions of intelligence

2007-04-27 Thread John G. Rose
> From: J. Storrs Hall, PhD. [mailto:[EMAIL PROTECTED] > > (With a wink towards Ben) > > Cheney goes into the Oval Office with the latest war report. > > "Three Brazilian soldiers died in the latest bombing." > > Bush gasps and turns ash-gray. > > "Oh my God!" he says. His brow wrinkles in thought …

[agi] Sony's QRIO robot

2007-04-27 Thread Eric B. Ramsay
In reading about Sony's QRIO robot I came across the following. What would this behaviour be categorized as in the continuum from thermostat to human (following a previous thread)?: "Interestingly, when they're doing demonstrations, they have found that the AI in QRIO is so strong that if …