Re: [agi] RE: P2P and/or communal AGI development [WAS Hacker intelligence level...]

2007-12-04 Thread Russell Wallace
On Dec 3, 2007 7:19 PM, Ed Porter [EMAIL PROTECTED] wrote: Perhaps one aspect of the AGI-at-home project would be to develop a good generalized architecture for wedding various classes of narrow AI and AGI in such a learning environment. Yes, I think this is the key aspect, the meta-problem
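
A minimal sketch of the kind of "wedding" architecture Porter gestures at, in the spirit of a classic blackboard design: narrow-AI modules register against shared state and a simple loop polls them. This is an illustration only; every class and method name here is hypothetical, not from any actual project.

# Hypothetical sketch of a blackboard-style architecture for
# combining narrow-AI components. Names are illustrative.
from abc import ABC, abstractmethod

class NarrowModule(ABC):
    """One narrow-AI component (parser, vision, planner, ...)."""
    @abstractmethod
    def propose(self, state: dict) -> dict:
        """Return this module's proposed updates to shared state."""

class EchoModule(NarrowModule):
    def propose(self, state: dict) -> dict:
        return {"keys_seen": len(state)}

class Blackboard:
    """Shared state that registered modules read and write."""
    def __init__(self):
        self.state: dict = {}
        self.modules: list[NarrowModule] = []

    def register(self, module: NarrowModule) -> None:
        self.modules.append(module)

    def step(self) -> None:
        # A real system would arbitrate among conflicting proposals
        # (and learn which modules to trust) rather than merge blindly.
        for m in self.modules:
            self.state.update(m.propose(self.state))

bb = Blackboard()
bb.register(EchoModule())
bb.step()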

Re: [agi] AGI first mention on NPR!

2007-12-04 Thread Joshua Fox
I actually thought that that was one of the more positive pieces I've found. Listeners may come out with a bad (mis-)impression, but NPR did nothing to abet that. Joshua 2007/12/3, Bob Mottram [EMAIL PROTECTED]: Perhaps a good word of warning is that it will be really easy to

Re: [agi] RE: P2P and/or communal AGI development [WAS Hacker intelligence level...]

2007-12-04 Thread Mike Dougherty
On Dec 3, 2007 11:03 PM, Bryan Bishop [EMAIL PROTECTED] wrote: On Monday 03 December 2007, Mike Dougherty wrote: Another method of doing search agents, in the meantime, might be to take neural tissue samples (or simple scanning of the brain) and try to simulate a patch of neurons via
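
For what the cheapest version of "simulate a patch of neurons" might look like, here is a toy leaky integrate-and-fire sketch with random sparse connectivity. All parameters are illustrative, not fitted to any tissue or scan data.

# Toy sketch: a patch of leaky integrate-and-fire neurons with
# random sparse connectivity. Parameters are illustrative only.
import numpy as np

N, T, dt = 100, 1000, 1.0                      # neurons, steps, ms/step
tau, v_rest, v_thresh, v_reset = 20.0, -65.0, -50.0, -65.0
rng = np.random.default_rng(0)
W = (rng.random((N, N)) < 0.1) * rng.normal(0.5, 0.1, (N, N))
v = np.full(N, v_rest)
spike_count = 0

for t in range(T):
    spikes = v >= v_thresh                     # who fired this step
    spike_count += int(spikes.sum())
    v[spikes] = v_reset                        # reset fired neurons
    i_syn = W @ spikes                         # input from local spikes
    i_ext = rng.normal(1.5, 0.5, N)            # noisy external drive
    v += dt / tau * (v_rest - v) + i_ext + i_syn

print(f"{spike_count} spikes in {T * dt:.0f} ms")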

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Ed Porter
Bryan, The name grub sounds familiar. That is probably it. Ed -Original Message- From: Bryan Bishop [mailto:[EMAIL PROTECTED] Sent: Monday, December 03, 2007 10:47 PM To: agi@v2.listbox.com Subject: Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research] On Thursday 29

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Ed Porter
RICHARD LOOSEMORE= You have no idea of the context in which I made that sweeping dismissal. If you have enough experience of research in this area you will know that it is filled with bandwagons, hype and publicity-seeking. Trivial models are presented as if they are fabulous

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Ed Porter
John, I am sure there is interesting stuff that can be done. It would be interesting just to see what sort of an AGI could be made on a PC. I would be interested in your ideas for how to make a powerful AGI without a vast amount of interconnect. The major schemes I know about for reducing

Re: [agi] AGI first mention on NPR!

2007-12-04 Thread Richard Loosemore
Joshua Fox wrote: I actually thought that that was one of the more positive pieces I've found. Listeners may come out with a bad (mis-)impression, but NPR did nothing to abet that. Agreed. It is just that the baseline is so low that I suppose we feel gratified when they only miss the point

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Richard Loosemore
Ed Porter wrote: RICHARD LOOSEMORE= You have no idea of the context in which I made that sweeping dismissal. If you have enough experience of research in this area you will know that it is filled with bandwagons, hype and publicity-seeking. Trivial models are presented as if they are

[agi] A question for J Storrs Hall re SIGMA's

2007-12-04 Thread Mike Tintner
Josh, A pen-pal - an AI/robotics guy - has been waxing enthusiastic about your book. For him: the basic idea in his book is to devise what is essentially the basic computational unit - BCU [this is my term, btw] - that can be extended indefinitely horizontally [in modules], and vertically

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Ed Porter
Richard, It is not clear how valuable your 25 years of hard-won learning is if it causes you to dismiss, as trivial exercises in public relations, valuable scientific work that seems to have eclipsed the importance of anything you or I have published, without giving any reason whatsoever for the

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread John G. Rose
From: Ed Porter [mailto:[EMAIL PROTECTED] John, I am sure there is interesting stuff that can be done. It would be interesting just to see what sort of an AGI could be made on a PC. Yes it would be interesting to see what could be done on a small cluster of modern server-grade computers. I

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Ed Porter
John, As you say, the hardware is just going to get better and better. In five years the PCs of most of the people on this list will probably have at least 8 cores and 16 GB of RAM. But even with a current 32-bit PC with, say, 4 GB of RAM you should be able to build an AGI that would be a

[agi] None of you seem to be able ...

2007-12-04 Thread Dennis Gorelik
Mike, Matt: The whole point of using massive parallel computation is to do the hard part of the problem. The whole idea of massive parallel computation here surely has to be wrong. And yet none of you seem able to face this, to my mind, obvious truth. Whom do you mean by "you" in this

[agi] Solution to Grounding problem

2007-12-04 Thread Dennis Gorelik
Richard, 1) Grounding Problem (the *real* one, not the cheap substitute that everyone usually thinks of as the symbol grounding problem). Could you describe what the *real* grounding problem is? It would be nice to consider an example. Say we are trying to build an AGI for the purpose of running

[agi] How to represent things problem

2007-12-04 Thread Dennis Gorelik
Richard, 3) A way to represent things - and in particular, uncertainty - without getting buried up to the eyeballs in (e.g.) temporal logics that nobody believes in. Conceptually, the way of representing things is described very well. It's a neural network: a set of nodes (concepts), where every
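
A minimal sketch of the representation being described: concepts as nodes, weighted links between them, and one step of spreading activation. Illustrative code only; the names and numbers are invented, with link weights doubling as crude confidence values.

# Minimal sketch: concepts as nodes, weighted links, spreading
# activation. Names and numbers are illustrative only.
from collections import defaultdict

class ConceptNet:
    def __init__(self):
        # weight[a][b]: association strength from concept a to b in
        # [0, 1], doubling as a crude measure of uncertainty
        self.weight = defaultdict(dict)

    def link(self, a: str, b: str, w: float) -> None:
        self.weight[a][b] = w

    def activate(self, seed: str, decay: float = 0.5) -> dict:
        """One step of spreading activation from a seed concept."""
        act = {seed: 1.0}
        for b, w in self.weight[seed].items():
            act[b] = act.get(b, 0.0) + decay * w
        return act

net = ConceptNet()
net.link("cat", "animal", 0.9)
net.link("cat", "pet", 0.8)
print(net.activate("cat"))  # {'cat': 1.0, 'animal': 0.45, 'pet': 0.4}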

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread John G. Rose
From: Ed Porter [mailto:[EMAIL PROTECTED] But even with a current 32-bit PC with, say, 4 GB of RAM you should be able to build an AGI that would be a meaningful proof of concept. Let's say 3 GB is for representation, at say 60 bytes per atom (less than my usual 100 bytes/atom because using
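
A back-of-envelope check of those figures, taking the 3 GB and the bytes-per-atom numbers at face value:

# 3 GB of a 4 GB 32-bit machine devoted to representation:
ram_for_rep = 3 * 2**30                        # bytes
print(ram_for_rep // 60)    # ~53.7 million atoms at 60 bytes/atom
print(ram_for_rep // 100)   # ~32.2 million atoms at 100 bytes/atom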

Re: [agi] Solution to Grounding problem

2007-12-04 Thread Mike Tintner
Dennis: 1) Grounding Problem (the *real* one, not the cheap substitute that everyone usually thinks of as the symbol grounding problem). Say we are trying to build an AGI for the purpose of running an intelligent chat-bot. What would be the grounding problem in this case? Example:

[agi] RE: A global approach to AI in virtual, artificial and real worlds

2007-12-04 Thread Ed Porter
Ken, Wow. I was going to say, this is one of the most interesting posts I have read on the AGI list in a while, until I realized it wasn't on the AGI list. Too bad. I have copied this response and your original email (below) to the AGI list to share the inspiration. In the following I have

Re: [agi] None of you seem to be able ...

2007-12-04 Thread Matt Mahoney
--- Dennis Gorelik [EMAIL PROTECTED] wrote: For example, I disagree with Matt's claim that AGI research needs special hardware with massive computational capabilities. I don't claim you need special hardware. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] None of you seem to be able ...

2007-12-04 Thread Mike Tintner
Dennis: MT: none of you seem able to face this, to my mind, obvious truth. Whom do you mean by "you" in this context? Do you think that everyone here agrees with Matt on everything? Quite the opposite is true -- almost every AI researcher has his own unique set of beliefs. I'm delighted to be

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Matt Mahoney
--- Ed Porter [EMAIL PROTECTED] wrote: Matt, In my Mon 12/3/2007 8:17 PM post to John Rose, from which you are probably quoting below, I discussed the bandwidth issues. I am assuming nodes directly talk to each other, which is probably overly optimistic, but still are limited by the fact

Re: [agi] None of you seem to be able ...

2007-12-04 Thread Benjamin Goertzel
More generally, I don't perceive any readiness to recognize that the brain has the answers to all the many unsolved problems of AGI - Obviously the brain contains answers to many of the unsolved problems of AGI (not all -- e.g. not the problem of how to create a stable goal system under

Re: [agi] Solution to Grounding problem

2007-12-04 Thread Richard Loosemore
Dennis Gorelik wrote: Richard, 1) Grounding Problem (the *real* one, not the cheap substitute that everyone usually thinks of as the symbol grounding problem). Could you describe what the *real* grounding problem is? It would be nice to consider an example. Say we are trying to build an AGI for

Re: [agi] None of you seem to be able ...

2007-12-04 Thread Richard Loosemore
Benjamin Goertzel wrote: [snip] And neither you nor anyone else has ever made a cogent argument that emulating the brain is the ONLY route to creating powerful AGI. The closest thing to such an argument that I've seen was given by Eric Baum in his book What Is Thought?, and I note that Eric has

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Ed Porter
MATT MAHONEY= My design would use most of the Internet (10^9 P2P nodes). ED PORTER= That's ambitious. Easier said than done unless you have a Google, Microsoft, or mass popular movement backing you. ED PORTER= I mean, what would motivate the average American, or even the average

Re: [agi] None of you seem to be able ...

2007-12-04 Thread Benjamin Goertzel
On Dec 4, 2007 8:38 PM, Richard Loosemore [EMAIL PROTECTED] wrote: Benjamin Goertzel wrote: [snip] And neither you nor anyone else has ever made a cogent argument that emulating the brain is the ONLY route to creating powerful AGI. The closest thing to such an argument that I've seen

Re: [agi] How to represent things problem

2007-12-04 Thread Richard Loosemore
Dennis Gorelik wrote: Richard, 3) A way to represent things - and in particular, uncertainty - without getting buried up to the eyeballs in (e.g.) temporal logics that nobody believes in. Conceptually, the way of representing things is described very well. It's a neural network: a set of nodes

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Richard Loosemore
Ed Porter wrote: Richard, It is not clear how valuable your 25 years of hard-won learning is if it causes you to dismiss, as trivial exercises in public relations, valuable scientific work that seems to have eclipsed the importance of anything you or I have published, without giving any reason

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Benjamin Goertzel
Thus: building an NL parser, no matter how good it is, is of no use whatsoever unless it can be shown to emerge from (or at least fit with) a learning mechanism that allows the system itself to generate its own understanding (or, at least, acquisition) of grammar IN THE CONTEXT OF A MECHANISM

Re: [agi] None of you seem to be able ...

2007-12-04 Thread Richard Loosemore
Benjamin Goertzel wrote: On Dec 4, 2007 8:38 PM, Richard Loosemore [EMAIL PROTECTED] wrote: Benjamin Goertzel wrote: [snip] And neither you nor anyone else has ever made a cogent argument that emulating the brain is the ONLY route to creating powerful AGI. The closest thing to such an

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Matt Mahoney
--- Ed Porter [EMAIL PROTECTED] wrote: MATT MAHONEY= My design would use most of the Internet (10^9 P2P nodes). ED PORTER= That's ambitious. Easier said than done unless you have a Google, Microsoft, or mass popular movement backing you. It would take some free software that people
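
As a toy illustration only (explicitly not Mahoney's actual design), such free software might run a node that routes each incoming message to the peer whose past traffic looks most similar. Here similarity is crude Jaccard word overlap, and every name is invented.

# Toy sketch of a content-based P2P router; not any real design.
def similarity(a: str, b: str) -> float:
    """Jaccard overlap of word sets, a crude relevance score."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

class Node:
    def __init__(self, name: str):
        self.name = name
        self.seen: list[str] = []      # messages this node has handled

    def receive(self, msg: str) -> None:
        self.seen.append(msg)

def route(msg: str, peers: list["Node"]) -> "Node":
    # Forward to the peer whose history best matches the message.
    return max(peers, key=lambda p: max(
        (similarity(msg, s) for s in p.seen), default=0.0))

a, b = Node("a"), Node("b")
a.receive("the weather in dublin")
b.receive("agi architectures and parsers")
print(route("what is the weather today", [a, b]).name)  # a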

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Richard Loosemore
Benjamin Goertzel wrote: Thus: building an NL parser, no matter how good it is, is of no use whatsoever unless it can be shown to emerge from (or at least fit with) a learning mechanism that allows the system itself to generate its own understanding (or, at least, acquisition) of grammar IN THE

Re: [agi] None of you seem to be able ...

2007-12-04 Thread Benjamin Goertzel
Richard, Well, I'm really sorry to have offended you so much, but you seem to be a mighty easy guy to offend! I know I can be pretty offensive at times; but this time, I wasn't even trying ;-) The argument I presented was not a conjectural assertion, it made the following coherent case:

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Ed Porter
The particular NL parser paper in question, Collins's Convolution Kernels for Natural Language (http://l2r.cs.uiuc.edu/~danr/Teaching/CS598-05/Papers/Collins-kernels.pdf) is actually saying something quite important that extends way beyond parsers and is highly applicable to AGI in general. It
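
For readers who haven't opened the paper: the Collins-Duffy kernel K(T1, T2) counts the subtrees two parse trees share, weighted by a decay factor lam, and the double sum over node pairs is computed by dynamic programming in O(|N1| * |N2|). A compact sketch follows; the (label, [children]) tree encoding is my own illustrative choice, not the paper's notation.

# Sketch of the Collins-Duffy convolution (tree) kernel:
# K(T1, T2) = sum over node pairs of C(n1, n2), the decay-weighted
# count of common subtrees rooted at n1 and n2.

def label(n):
    return n if isinstance(n, str) else n[0]

def kids(n):
    return [] if isinstance(n, str) else n[1]

def production(n):
    return (label(n), tuple(label(c) for c in kids(n)))

def _nodes(t):
    """All internal (non-leaf) nodes of a tree."""
    if isinstance(t, str):
        return []
    out = [t]
    for c in kids(t):
        out.extend(_nodes(c))
    return out

def tree_kernel(t1, t2, lam=0.5):
    memo = {}
    def C(a, b):
        key = (id(a), id(b))
        if key not in memo:
            if production(a) != production(b):
                memo[key] = 0.0
            elif all(isinstance(c, str) for c in kids(a)):
                memo[key] = lam            # matching preterminals
            else:
                p = lam
                for ca, cb in zip(kids(a), kids(b)):
                    if not isinstance(ca, str):
                        p *= 1.0 + C(ca, cb)
                memo[key] = p
        return memo[key]
    return sum(C(a, b) for a in _nodes(t1) for b in _nodes(t2))

t = ("S", [("NP", ["John"]), ("VP", [("V", ["eats"])])])
print(tree_kernel(t, t))   # 3.0625 with lam=0.5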

RE: [agi] None of you seem to be able ...

2007-12-04 Thread Ed Porter
RICHARD LOOSEMORE There is a high prima facie *risk* that intelligence involves a significant amount of irreducibility (some of the most crucial characteristics of a complete intelligence would, in any other system, cause the behavior to show a global-local disconnect), ED PORTER=

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Ed Porter
Matt, Perhaps you are right. But one problem is that big Google-like computing complexes in the next five to ten years will be powerful enough to do AGI, and they will be much more efficient for AGI search because the physical closeness of their machines will make it possible for them to perform the
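
Rough latency numbers (my own illustrative figures, not Porter's) make the physical-closeness point concrete: a chain of sequential lookups that finishes in a fraction of a second inside one data center takes over a minute across the wide-area Internet.

# Illustrative round-trip times; real values vary widely.
rtt_datacenter_ms = 0.2     # typical intra-datacenter round trip
rtt_internet_ms = 80.0      # typical wide-area round trip
hops = 1000                 # sequential lookups in one inference episode

print(f"datacenter: {hops * rtt_datacenter_ms / 1000:.1f} s")  # 0.2 s
print(f"internet:   {hops * rtt_internet_ms / 1000:.1f} s")    # 80.0 s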

RE: [agi] None of you seem to be able ...

2007-12-04 Thread John G. Rose
From: Benjamin Goertzel [mailto:[EMAIL PROTECTED] As an example of a creative leap (that is speculative and may be wrong, but is certainly creative), check out my hypothesis of emergent social- psychological intelligence as related to mirror neurons and octonion algebras:

[agi] Re: A global approach to AI in virtual, artificial and real worlds

2007-12-04 Thread Benjamin Goertzel
What makes anyone think OpenCog will be different? Is it more understandable? Will there be long-term aficionados who write books on how to build systems in OpenCog? Will the developers have experience, or just adolescent enthusiasm? I'm watching the experiment to find out. Well, OpenCog

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Benjamin Goertzel
OK, understood... On Dec 4, 2007 9:32 PM, Richard Loosemore [EMAIL PROTECTED] wrote: Benjamin Goertzel wrote: Thus: building a NL parser, no matter how good it is, is of no use whatsoever unless it can be shown to emerge from (or at least fit with) a learning mechanism that allows the