Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Vladimir Nesov
On Tue, Oct 14, 2008 at 8:36 AM, Matt Mahoney [EMAIL PROTECTED] wrote: Ben, If you want to argue that recursive self improvement is a special case of learning, then I have no disagreement with the rest of your argument. But is this really a useful approach to solving AGI? A group of humans

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Ben Goertzel
Hi, My main impression of the AGI-08 forum was one of over-dominance by singularity-obsessed and COMP thinking, which must have freaked me out a bit. This again is completely off-base ;-) COMP, yes ... Singularity, no. The Singularity was not a theme of AGI-08 and the vast majority of

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Terren Suydam
Hi Colin, Are there other forums or email lists associated with some of the other AI communities you mention?  I've looked briefly but in vain ... would appreciate any helpful pointers. Thanks, Terren --- On Tue, 10/14/08, Colin Hales [EMAIL PROTECTED] wrote: From: Colin Hales [EMAIL

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread William Pearson
2008/10/14 Terren Suydam [EMAIL PROTECTED]: --- On Tue, 10/14/08, Matt Mahoney [EMAIL PROTECTED] wrote: An AI that is twice as smart as a human can make no more progress than 2 humans. Spoken like someone who has never worked with engineers. A genius engineer can outproduce 20 ordinary

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Terren Suydam
Hi Will, I think humans provide ample evidence that intelligence is not necessarily correlated with processing power. The genius engineer in my example solves a given problem with *much less* overall processing than the ordinary engineer, so in this case intelligence is correlated with some

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Matt Mahoney
--- On Tue, 10/14/08, Terren Suydam [EMAIL PROTECTED] wrote: --- On Tue, 10/14/08, Matt Mahoney [EMAIL PROTECTED] wrote: An AI that is twice as smart as a human can make no more progress than 2 humans. Spoken like someone who has never worked with engineers. A genius engineer can

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Terren Suydam
--- On Tue, 10/14/08, Matt Mahoney [EMAIL PROTECTED] wrote: An AI that is twice as smart as a human can make no more progress than 2 humans. Spoken like someone who has never worked with engineers. A genius engineer can outproduce 20 ordinary engineers in the same timeframe. Do you really

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Terren Suydam
Matt, Your measure of intelligence seems to be based on not much more than storage capacity, processing power, I/O, and accumulated knowledge. This has the advantage of being easily formalizable, but has the disadvantage of missing a necessary aspect of intelligence. I have yet to see from

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread William Pearson
Hi Terren, I think humans provide ample evidence that intelligence is not necessarily correlated with processing power. The genius engineer in my example solves a given problem with *much less* overall processing than the ordinary engineer, so in this case intelligence is correlated with

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Matt Mahoney
--- On Tue, 10/14/08, Ben Goertzel [EMAIL PROTECTED] wrote: Here is how I see this exchange... You proposed a so-called *mathematical* debunking of RSI. I presented some detailed arguments against this so-called debunking, pointing out that its mathematical assumptions and its
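(For readers without the paper at hand, here is a sketch of the genre of argument in dispute -- the algorithmic-information framing; the exact formalization is in Matt's proposal, not this sketch. If a program p runs closed, with no external input, then any successor q it writes is computable from p, so

    K(q) \le K(p) + c

for some constant c, where K denotes Kolmogorov complexity. On that premise a closed system can never raise its own algorithmic complexity by more than a constant, so any genuine improvement must come from outside input, i.e. learning. Ben's objection targets the premises: that intelligence tracks algorithmic complexity, and that a self-improving system is closed.)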

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Mike Tintner
Colin: others such as Hynna and Boahen at Stanford, who have an unusual hardware neural architecture... (Hynna, K. M. and Boahen, K., 'Thermodynamically equivalent silicon models of voltage-dependent ion channels', Neural Computation, vol. 19, no. 2, 2007, pp. 327-350.) ...and others ... then things

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Charles Hixson
If you want to argue this way (reasonable), then you need a specific definition of intelligence, one that allows it to be accurately measured (and not just in principle). IQ definitely won't serve. Neither will g, the psychometric general factor. Neither will GPA (if you're discussing a student). Because of this, while I

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Eric Burton
An AI that is twice as smart as a human can make no more progress than 2 humans. Actually I'll argue that we can't make predictions about what a greater-than-human intelligence would do. Maybe the summed intelligence of 2 humans would be sufficient to do the work of a dozen. Maybe

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Terren Suydam
Charles, I'm not sure it's possible to nail down a measure of intelligence that's going to satisfy everyone. Presumably, it would be some measure of performance in problem solving across a wide variety of novel domains in complex (i.e. not toy) environments. Obviously among potential agents,
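(One existing formalization in the spirit Terren describes -- performance weighted across many environments -- is Legg and Hutter's universal intelligence, cited here only as an example of the genre, not as anyone's position in this thread:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} V_\mu^\pi

where E is the set of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_\mu^\pi is the expected reward agent \pi earns in \mu. Simpler environments get more weight, and the sum is uncomputable -- which is exactly Charles's complaint: measurable in principle, not in practice.)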

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Ben Goertzel
Again, when you say that these neuroscience theories have squashed the computational theories of mind, it is not clear to me what you mean by the computational theories of mind. Do you have a more precise definition of what you mean? ben g On Tue, Oct 14, 2008 at 11:26 AM, Mike Tintner [EMAIL

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Terren Suydam
--- On Tue, 10/14/08, William Pearson [EMAIL PROTECTED] wrote: There are things you can't model given limits of processing power/memory, which restricts your ability to solve them. Processing power, storage capacity, and so forth are all important in the realization of an AI, but I don't see

COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-14 Thread Matt Mahoney
--- On Tue, 10/14/08, Colin Hales [EMAIL PROTECTED] wrote: The only reason for not connecting consciousness with AGI is a situation where one can see no mechanism or role for it. That inability is no proof there is none ... and I have both, to the point of having a patent in progress. Yes, I

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales
Ben Goertzel wrote: Hi, My main impression of the AGI-08 forum was one of over-dominance by singularity-obsessed and COMP thinking, which must have freaked me out a bit. This again is completely off-base ;-) I also found my feeling about -08 as slightly coloured by first

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread BillK
On Tue, Oct 14, 2008 at 2:41 PM, Matt Mahoney wrote: But no matter. Whichever definition you accept, RSI is not a viable path to AGI. An AI that is twice as smart as a human can make no more progress than 2 humans. I can't say I've noticed two dogs being smarter than one dog. Admittedly, a

Re: [agi] open or closed source for AGI project?

2008-10-14 Thread Stephen Reed
Hi YKY, If your code will be open source lisp, then I have a few points learned from my experience at Cycorp. (1) Franz has a very good Common Lisp (Allegro) IDE for Windows and Linux, but it is closed source. (2) Steel Bank Common Lisp is open source, derived from CMU Common Lisp. Recent SBCL
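(A minimal Common Lisp sketch, not from Steve's post -- the function name is invented for illustration -- showing how standard feature expressions let one source tree target both the implementations he mentions:

    ;; Report which Common Lisp implementation is running.
    ;; #+sbcl / #+allegro are standard reader conditionals; forms
    ;; guarded by a feature the running Lisp lacks are skipped at
    ;; read time.
    (defun report-implementation ()
      (format t "~a ~a~%"
              (lisp-implementation-type)
              (lisp-implementation-version))
      #+sbcl (format t "open-source SBCL detected~%")
      #+allegro (format t "Franz Allegro CL detected~%")
      #-(or sbcl allegro) (format t "another implementation~%"))

The same trick isolates compiler- or IDE-specific code when moving between Allegro and SBCL.)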

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Matt Mahoney
--- On Tue, 10/14/08, Vladimir Nesov [EMAIL PROTECTED] wrote: On Tue, Oct 14, 2008 at 8:36 AM, Matt Mahoney [EMAIL PROTECTED] wrote: Ben, If you want to argue that recursive self improvement is a special case of learning, then I have no disagreement with the rest of your argument.

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Ben Goertzel
OK, but you have not yet explained what your theory of consciousness is, nor what the physical mechanism nor role for consciousness that you propose is ... you've just alluded obscurely to these things. So it's hard to react except with raised eyebrows and skepticism!! ben g On Tue, Oct 14,

RE: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Derek Zahn
I am reminded of this: http://www.serve.com/bonzai/monty/classics/MissAnneElk Date: Tue, 14 Oct 2008 17:14:39 -0400 From: [EMAIL PROTECTED] To: [EMAIL PROTECTED] Subject: Re: [agi] Advocacy Is no Excuse for Exaggeration OK, but you have not yet explained what your theory of consciousness is, nor what

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Ben Goertzel
Matt, But no matter. Whichever definition you accept, RSI is not a viable path to AGI. An AI that is twice as smart as a human can make no more progress than 2 humans. You don't have automatic self improvement until you have AI that is billions of times smarter. A team of a few people isn't

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Mike Tintner
Will: There is a reason why lots of the planet's biomass has stayed as bacteria. It does perfectly well like that. It survives. Too much processing power is a bad thing; it means less for self-preservation and affecting the world. Balancing them is a tricky proposition indeed. Interesting thought.

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales
Ben Goertzel wrote: OK, but you have not yet explained what your theory of consciousness is, nor what the physical mechanism nor role for consciousness that you propose is ... you've just alluded obscurely to these things. So it's hard to react except with raised eyebrows and skepticism!!

Re: [agi] Advocacy Is no Excuse for Consciousness

2008-10-14 Thread John LaMuth
Colin, Consc. by nature is subjective ... Can never prove this in a machine -- or other human beings, for that matter. We are underutilizing about 4 billion+ human Cons's on the earth today. What goal -- besides vanity -- is there to simulate this mechanically?? We need to simulate

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales
Hi Terren, They are not 'communities' in the sense that you mean. They are labs in various institutions that work on M/C-consciousness (or pretend to be doing cog sci, whilst actually doing it :-). All I can do is point you at the various references in the paper and get you to keep an eye on

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales
Replies below. Mike Tintner wrote: Colin: others such as Hynna and Boahen at Stanford, who have an unusual hardware neural architecture... (Hynna, K. M. and Boahen, K., 'Thermodynamically equivalent silicon models of voltage-dependent ion channels', /Neural Computation/ vol. 19, no.

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Ben Goertzel
About self: you don't like Metzinger's neurophilosophy I presume? (Being No One is a masterwork in my view) I agree that integrative biology is the way to go for understanding brain function ... and I was talking to Walter Freeman about his work in the early 90's when we both showed up at the

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Matt Mahoney
--- On Tue, 10/14/08, Terren Suydam [EMAIL PROTECTED] wrote: Matt, Your measure of intelligence seems to be based on not much more than storage capacity, processing power, I/O, and accumulated knowledge. This has the advantage of being easily formalizable, but has the disadvantage of

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales
Ben Goertzel wrote: Again, when you say that these neuroscience theories have squashed the computational theories of mind, it is not clear to me what you mean by the computational theories of mind. Do you have a more precise definition of what you mean? I suppose it's a bit ambiguous.

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Ben Goertzel
Sure, I know Pylyshyn's work ... and I know very few contemporary AI scientists who adopt a strong symbol-manipulation-focused view of cognition like Fodor, Pylyshyn and so forth. That perspective is rather dated by now... But when you say Where computation is meant in the sense of abstract

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales
Ben Goertzel wrote: Sure, I know Pylyshyn's work ... and I know very few contemporary AI scientists who adopt a strong symbol-manipulation-focused view of cognition like Fodor, Pylyshyn and so forth. That perspective is rather dated by now... But when you say Where computation is meant

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Ben Goertzel
I still don't really get it, sorry... ;-( Are you saying A) that a conscious, human-level AI **can** be implemented on an ordinary Turing machine, hooked up to a robot body or B) A is false ??? If you could clarify this point, I might have an easier time interpreting your other thoughts? I

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales
Ben Goertzel wrote: About self: you don't like Metzinger's neurophilosophy I presume? (Being No One is a masterwork in my view) I agree that integrative biology is the way to go for understanding brain function ... and I was talking to Walter Freeman about his work in the early 90's when

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales
Ben Goertzel wrote: I still don't really get it, sorry... ;-( Are you saying A) that a conscious, human-level AI **can** be implemented on an ordinary Turing machine, hooked up to a robot body or B) A is false B) Yeah that about does it. Specifically: It will never produce an

Re: COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-14 Thread Colin Hales
Matt Mahoney wrote: --- On Tue, 10/14/08, Colin Hales [EMAIL PROTECTED] wrote: The only reason for not connecting consciousness with AGI is a situation where one can see no mechanism or role for it. That inability is no proof there is none ... and I have both to the point of having a