Re: [agi] More public awareness that AGI is coming fast

2007-10-18 Thread Bob Mottram
On 17/10/2007, David Orban [EMAIL PROTECTED] wrote: There are now Department of Labor predictions of 50%-80% unemployment rates due to automation of white-collar jobs. This, in my opinion, is not a small matter either. On the unemployment question I remain optimistic. If you go back a few

Re: [agi] More public awareness that AGI is coming fast

2007-10-18 Thread Bob Mottram
Despite these arguments there are good reasons for caution. When you look at the history of AI research one thing tends to stand out - some people never seem to learn of the dangers of hype. Having been around for a while I've heard many individuals make a “ten years to SAI” type of prediction,

Re: [agi] symbol grounding Q&A

2007-10-18 Thread J Storrs Hall, PhD
Remember that Eliezer is using holonic to describe *conflict resolution* in the interpretation process. The reason it fits Koestler's usage is that it uses *both* information about the parts that make up a possible entity and the larger entities it might be part of. Suppose we see the
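
A minimal sketch of the conflict-resolution idea described here, blending bottom-up evidence from a candidate's parts with top-down support from the wholes it might belong to; the scoring function, weights, and data are illustrative assumptions, not Hall's or Koestler's actual formulation:

    # Illustrative sketch (not any published algorithm): score a candidate
    # interpretation using both its parts (bottom-up) and the larger
    # entities it might be part of (top-down).
    def holonic_score(entity, part_evidence, whole_context, w_up=0.6, w_down=0.4):
        bottom_up = sum(part_evidence.get(p, 0.0) for p in entity["parts"])
        top_down = max((whole_context.get(w, 0.0) for w in entity["wholes"]), default=0.0)
        return w_up * bottom_up + w_down * top_down

    # Two conflicting readings of the same image region, identical parts.
    candidates = [
        {"name": "face", "parts": ["eye", "eye", "mouth"], "wholes": ["person"]},
        {"name": "mask", "parts": ["eye", "eye", "mouth"], "wholes": ["costume"]},
    ]
    part_evidence = {"eye": 0.9, "mouth": 0.7}
    whole_context = {"person": 0.8, "costume": 0.1}  # scene-level priors

    best = max(candidates, key=lambda c: holonic_score(c, part_evidence, whole_context))
    print(best["name"])  # "face": the parts tie, so the larger whole resolves the conflict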

[agi] Poll

2007-10-18 Thread J Storrs Hall, PhD
I'd be interested in everyone's take on the following: 1. What is the single biggest technical gap between current AI and AGI? (e.g. we need a way to do X or we just need more development of Y or we have the ideas, just need hardware, etc) 2. Do you have an idea as to what should be

RE: [agi] symbol grounding Q&A

2007-10-18 Thread Edward W. Porter
Josh, According to that font of undisputed truth, Wikipedia, the general definition of a holon is: “A holon is a system (http://en.wikipedia.org/wiki/System) or phenomenon (http://en.wikipedia.org/wiki/Phenomenon) that is a whole in itself as well as a part of a larger system. It can be

Re: [agi] Poll

2007-10-18 Thread Benjamin Goertzel
On 10/18/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote: I'd be interested in everyone's take on the following: 1. What is the single biggest technical gap between current AI and AGI? (e.g. we need a way to do X or we just need more development of Y or we have the ideas, just need

Re: [agi] Poll

2007-10-18 Thread Russell Wallace
On 10/18/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote: 1. What is the single biggest technical gap between current AI and AGI? (e.g. we need a way to do X or we just need more development of Y or we have the ideas, just need hardware, etc) Procedural knowledge. Data in relational databases
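
To make the declarative/procedural contrast concrete, a toy illustration (the egg fact and stove interface are invented here, not drawn from the post): a fact stores cleanly as a relational row, while procedural knowledge is a skill that only means anything when executed.

    # Toy contrast, with an invented stove interface: declarative knowledge
    # fits a relational row; procedural knowledge must be run, not just stored.
    declarative = {"entity": "egg", "boiling_time_min": 7}  # easy: a fact

    def boil_egg(egg, stove):  # hard: a skill, only meaningful when executed
        stove.turn_on()
        stove.place(egg, minutes=declarative["boiling_time_min"])
        stove.turn_off()
        return egg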

Re: [agi] symbol grounding Q&A

2007-10-18 Thread J Storrs Hall, PhD
On Thursday 18 October 2007 09:28:04 am, Edward W. Porter wrote: Josh, According to that font of undisputed truth, Wikipedia, the general definition of a holon is: ... “Since a holon is embedded in larger wholes, it is influenced by and influences these larger wholes. And since a holon

Re: [agi] An AGI Test/Prize

2007-10-18 Thread Benjamin Goertzel
I guess, off the top of my head, the conversational equivalent might be a Story Challenge - asking your AGI to tell some explanatory story about a problem that had occurred to it recently (designated by the tester), and then perhaps asking it to devise a solution. Just my first thought -

Re: [agi] Poll

2007-10-18 Thread Benjamin Goertzel
1. What is the single biggest technical gap between current AI and AGI? (e.g. we need a way to do X or we just need more development of Y or we have the ideas, just need hardware, etc) The biggest gap is the design of a system that can absorb information generated by other

Re: [agi] Poll

2007-10-18 Thread William Pearson
On 18/10/2007, J Storrs Hall, PhD [EMAIL PROTECTED] wrote: I'd be interested in everyone's take on the following: 1. What is the single biggest technical gap between current AI and AGI? (e.g. we need a way to do X or we just need more development of Y or we have the ideas, just need hardware,

[agi] An AGI Test/Prize

2007-10-18 Thread Mike Tintner
There certainly should be an AGI Test/Prize. Ben suggested a Toddler Interview. In fact, in the linked interviews I could see only one Q&A that to me demonstrated any higher adaptive intelligence - with a child possibly freely adapting materials to fit the problem - and then it wasn't the Q

Re: [agi] An AGI Test/Prize

2007-10-18 Thread Russell Wallace
On 10/18/07, Benjamin Goertzel [EMAIL PROTECTED] wrote: Hmmm... the storytelling direction is interesting. E.g., you could tell the first half of a story to the test-taker, and ask them to finish it... Or better, draw an animation of (both halves of) it.

RE: [agi] More public awareness that AGI is coming fast

2007-10-18 Thread Edward W. Porter
In response to Bob Mottram’s Thu 10/18/2007 3:38 AM post. With regard to the fact that many people who promised to produce AI in the past have failed -- I repeat what I have said on this list many times -- you can’t do the type of computation the human brain does without at least something within

RE: [agi] Poll

2007-10-18 Thread Derek Zahn
1. What is the single biggest technical gap between current AI and AGI? I think hardware is a limitation because it biases our thinking to focus on simplistic models of intelligence. However, even if we had more computational power at our disposal, we do not yet know what to do with it, and

Re: [agi] An AGI Test/Prize

2007-10-18 Thread Vladimir Nesov
I think an AGI test should fundamentally be a test of learning ability. When there's a specified domain in which the system should demonstrate its competency (like 'chatting' or 'playing Go'), it's likely easier to write a narrow solution. If the system is not an RSI AI already, the resulting competency depends on
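
A sketch of how such a learning-ability test might be scored, with system/task interfaces invented purely for illustration (not an existing benchmark API): the task family is withheld until test time, and the score rewards improvement across trials rather than raw competence, which penalizes hard-coded narrow solutions.

    # Sketch of a learning-ability test protocol; the system/task interfaces
    # are invented for illustration, not an existing benchmark API.
    def learning_score(system, task_family, trials=10):
        scores = []
        for _ in range(trials):
            task = task_family.sample()           # unseen instance each trial
            result = system.attempt(task)
            system.receive_feedback(task, result)
            scores.append(task.score(result))
        # Reward improvement from first to last trial rather than the raw
        # final score: a hard-coded narrow solution improves little.
        return scores[-1] - scores[0]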

Re: [agi] Poll

2007-10-18 Thread Cenny Wenner
Please find below the commentary of a naive neat, which does not quite agree with the approaches of the seasoned users on this list. Comments and pointers are most welcome. On 10/18/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote: I'd be interested in everyone's take on the following: 1. What is the

RE: [agi] Poll

2007-10-18 Thread John G. Rose
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED] I'd be interested in everyone's take on the following: 1. What is the single biggest technical gap between current AI and AGI? (e.g. we need a way to do X or we just need more development of Y or we have the ideas, just need hardware,

RE: [agi] symbol grounding Q&A

2007-10-18 Thread Edward W. Porter
Josh, There actually is some downward information flow, apparently largely for inhibiting losing patterns. I forget how learning was done, but in many hierarchical learning systems downward influences are used to determine the relative importance of lower-level patterns in their competition for
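
A toy sketch of the downward inhibition Porter describes, assuming a simple two-layer recognizer; the pattern names and gain factors are illustrative, not taken from any specific hierarchical learning system:

    # Toy two-layer hierarchy (illustrative, not any published system): the
    # winning high-level pattern reinforces its own parts and inhibits the
    # losing lower-level patterns.
    low = {"edge_a": 0.6, "edge_b": 0.55, "edge_c": 0.2}   # bottom-up activations
    high = {"square": ["edge_a", "edge_b"], "blob": ["edge_c"]}

    # Upward pass: pick the high-level winner from summed bottom-up support.
    winner = max(high, key=lambda h: sum(low[p] for p in high[h]))

    # Downward pass: excite the winner's parts, inhibit everything else.
    for p in low:
        low[p] *= 1.2 if p in high[winner] else 0.5

    print(winner, low)  # competing low-level patterns are damped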

Re: Semantics [WAS Re: [agi] symbol grounding Q&A]

2007-10-18 Thread Richard Loosemore
Linas Vepstas wrote: On Wed, Oct 17, 2007 at 10:25:18AM -0400, Richard Loosemore wrote: One way this group have tried to pursue their agenda is through an idea due to Montague and others, in which meanings of terms are related to something called possible worlds. They imagine infinite numbers

Re: [agi] Poll

2007-10-18 Thread Richard Loosemore
J Storrs Hall, PhD wrote: I'd be interested in everyone's take on the following: 1. What is the single biggest technical gap between current AI and AGI? (e.g. we need a way to do X or we just need more development of Y or we have the ideas, just need hardware, etc) The gap is a matter of

Re: [agi] Poll

2007-10-18 Thread Mike Dougherty
On 10/18/07, Derek Zahn [EMAIL PROTECTED] wrote: Because neither of these things can be done at present, we can barely even talk to each other about things like goals, semantics, grounding, intelligence, and so forth... the process of taking these unknown and perhaps inherently complex things

Re: [agi] Poll

2007-10-18 Thread Benjamin Goertzel
That's where I think narrow Assistive Intelligence could add the sender's assumed context to a neutral exchange format that the receiver's agent could properly display in an unencumbered way. The only way I see for that to happen is if the agents are trained on/around the unique core

Re: [agi] More public awareness that AGI is coming fast

2007-10-18 Thread J. Andrew Rogers
On Oct 18, 2007, at 10:40 PM, John G. Rose wrote: Well after living in Seattle during the dot com craze the hype was just absolutely out of control. Yet people did get funded. Was it all worth it? Hell yeah but the hangover was pretty bad :) AGI IS hypeable but people have to make a

RE: [agi] Poll

2007-10-18 Thread Edward W. Porter
Matt Mahoney’s Thu 10/18/2007 9:15 PM post states: MAHONEY There is possibly a 6-order-of-magnitude gap between the size of a cognitive model of human memory (10^9 bits) and the number of synapses in the brain (10^15), and precious little research to resolve this discrepancy. In fact, these
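
The six-orders figure is just the log-ratio of the two estimates quoted in the post; a quick check using those numbers:

    import math
    cognitive_model_bits = 1e9   # Mahoney's quoted estimate of human memory content
    synapses = 1e15              # rough synapse count quoted for the brain
    print(math.log10(synapses / cognitive_model_bits))  # 6.0 orders of magnitude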

RE: Semantics [WAS Re: [agi] symbol grounding Q&A]

2007-10-18 Thread Edward W. Porter
Re: Richard Loosemore's post copied below. LOOSEMORE Overall, I believe that possible-worlds semantics serves no purpose in AI except to justify the idea that statements like “It is the case that all cups are drinking vessels that possess a handle” can have something like a truth value that is
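
For context on the formalism under criticism, a textbook-style sketch of possible-worlds evaluation (the cup worlds here are invented for illustration, not anyone's actual system): a statement is necessarily true only if it holds in every world under consideration, which is exactly where a contingent statement like the cup example fails to get a fixed truth value.

    # Minimal illustration of possible-worlds semantics: a proposition is
    # "necessarily true" iff it holds in every possible world, and
    # "possibly true" iff it holds in at least one.
    worlds = [
        {"cups": [{"handle": True}, {"handle": True}]},
        {"cups": [{"handle": True}, {"handle": False}]},  # a world with a handleless cup
    ]

    def all_cups_have_handles(world):
        return all(cup["handle"] for cup in world["cups"])

    necessarily = all(all_cups_have_handles(w) for w in worlds)
    possibly = any(all_cups_have_handles(w) for w in worlds)
    print(necessarily, possibly)  # False True: the statement is contingent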

RE: [agi] More public awareness that AGI is coming fast

2007-10-18 Thread John G. Rose
From: Bob Mottram [mailto:[EMAIL PROTECTED] Subject: Re: [agi] More public awarenesss that AGI is coming fast Despite these arguments there are good reasons for caution. When you look at the history of AI research one thing tends to stand out - some people never seem to learn of the

Re: [agi] Poll

2007-10-18 Thread Matt Mahoney
--- J Storrs Hall, PhD [EMAIL PROTECTED] wrote: I'd be interested in everyone's take on the following: 1. What is the single biggest technical gap between current AI and AGI? In hindsight we can say that we did not have enough hardware. However there has been no point in time since the