Re: [agi] Open-Source AGI

2007-05-11 Thread Mike Tintner
Thanks! That's the trouble with wikipedia - you think you have half an idea (although I certainly wouldn't rate the thought below of mine as an idea) and some bugger has already had it. About a year ago, I was getting into the open source movement and realising how huge its effects would be,

Re: [agi] Open-Source AGI

2007-05-11 Thread Bob Mottram
The open source idea sounds great and in general I agree with this approach. One of the main benefits in my view is ensuring that powerful new technology does not fall into the hands of any single individual, company or nation who could then monopolise its use, potentially with unfriendly

Re: [agi] Open-Source AGI...P.S.

2007-05-11 Thread Mike Tintner
I should add that part of the creative challenge of developing an integrational structure for AGI is to develop one that will allow CREATIVE minds to work together - and not just hacks a la Wikip. - and enable them to integrate whole sets of major new inventions and innovations. And that too,

Re: [agi] Open-Source AGI...P.S.

2007-05-11 Thread Benjamin Goertzel
On 5/11/07, Mike Tintner [EMAIL PROTECTED] wrote: I should add that part of the creative challenge of developing an integrational structure for AGI is to develop one that will allow CREATIVE minds to work together - and not just hacks a la Wikip. - and enable them to integrate whole sets of

Re: [agi] Open-Source AGI

2007-05-11 Thread J Storrs Hall, PhD
On Friday 11 May 2007 05:16:44 am Bob Mottram wrote: ... But in practice it's difficult to do AI in an open source way, because I've found that at least up until the present there have been very few people who actually know anything about the algorithms involved and can make a useful

Re: [agi] Open-Source AGI

2007-05-11 Thread A. T. Murray
Mike Tintner wrote: Thanks! [...] So, ATM, is anyone following up on your ideas and sourceforge framework? http://AIMind-I.com is where Mr. Frank J. Russo (FJR) has created his own website for his version of my http://mind.sourceforge.net/mind4th.html AI in Forth. On another note, Ben

RE: [agi] Tommy

2007-05-11 Thread Derek Zahn
J. Storrs Hall writes: Tommy, the scientific experiment and engineering project, is almost all about concept formation. Great project! While I'm not quite sure about the 'meaning in the concept of price-theoretical market equilibria' thing, I really like your idea, and it's similar in broad

Re: [agi] Tommy

2007-05-11 Thread Bob Mottram
In order to differentiate this from the rest of the robotics crowd you need to avoid building a specialised pinball playing robot. If the machine can learn and form concepts based upon its experiences it should be able to do so with any kind of game, provided that suitable actuators are

RE: [agi] Tommy

2007-05-11 Thread Derek Zahn
Bob Mottram writes: In order to differentiate this from the rest of the robotics crowd you need to avoid building a specialised pinball playing robot. I can't speak for JoSH, but I got the impression that playing pinball or anything similar was not the object; the object was to provide real

Re: [agi] Tommy

2007-05-11 Thread Mike Tintner
Josh: Thus Tommy. My robotics project discards a major component of robotics that is apparently dear to the embodiment crowd: Tommy is stationary and not autonomous. As Daniel Wolpert will tell you, the sea squirt devours its brain as soon as it stops moving. In the final and the first analysis,

Re: [agi] Tommy

2007-05-11 Thread Shane Legg
Josh, Interesting work, and I like the nature of your approach. We have essentially a kind of pinball machine at IDSIA, and some of the guys were going to work on watching this and trying to learn simple concepts from the observations. I don't work on it, so I'm not sure what the current state

Re: [agi] Tommy

2007-05-11 Thread J Storrs Hall, PhD
On Friday 11 May 2007 02:01:09 pm Mike Tintner wrote: ... As Daniel Wolpert will tell you, the sea squirt devours its brain as soon as it stops moving. As Dan Dennett has pointed out, this resembles what happens when one gets tenure... In the final and the first analysis, the brain is a

Re: [agi] Tommy

2007-05-11 Thread Vladimir Nesov
Friday, May 11, 2007, J Storrs Hall, PhD wrote: JSHP 2. The hard part is learning: the AI has to build its own world model. And for this it requires a complex enough world to model. Information about the world can be given by a static description (which also includes action-reaction

Re: [agi] Tommy

2007-05-11 Thread J Storrs Hall, PhD
Right. The key issue is autogeny in the mental architecture. Learning will be unsupervised to start, with internal feedback from how well the system is expecting what it sees next. Then we move into a mode where imitation is the key, with the system trying to do what a person just did (e.g.
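(A minimal sketch, not from JoSH's project, of the learning loop described above: the system predicts its next observation and uses the prediction error as its only internal feedback. The names SimplePredictor and noisy_world are invented for illustration.)

import random

class SimplePredictor:
    """Keeps a running expectation of the next observation."""

    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate
        self.expectation = 0.0

    def update(self, observation):
        # Surprise = how far the observation departed from what was expected.
        error = observation - self.expectation
        self.expectation += self.learning_rate * error
        return abs(error)

def noisy_world(t):
    """Toy stand-in for the sensory stream: a slowly drifting signal plus noise."""
    return 0.01 * t + random.gauss(0.0, 0.1)

if __name__ == "__main__":
    predictor = SimplePredictor()
    for t in range(1001):
        surprise = predictor.update(noisy_world(t))
        if t % 200 == 0:
            print(f"t={t:4d}  expectation={predictor.expectation:.3f}  surprise={surprise:.3f}")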

Re: [agi] Tommy

2007-05-11 Thread Kingma, D.P.
Yes, thank you, a meaningful and very interesting project. I discussed this kind of system with a friend of mine half an hour ago. On 5/11/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote: 2. The hard part is learning: the AI has to build its own world model. My instinct and experience

Re: [agi] Determinism

2007-05-11 Thread Vladimir Nesov
Saturday, May 12, 2007, Matt Mahoney wrote: MM Now suppose you wanted to simulate A on A. (You may suspect a program has a virus and want to see what it would do without actually running it.) Now you have the same problem. You need an array to represent your own memory, and it would
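(A toy illustration, not Mahoney's code, of the problem he describes: a host that simulates another machine must hold a full image of the simulated machine's memory plus its own bookkeeping, so it cannot fully simulate a machine as large as itself. The cell counts are arbitrary.)

def cells_needed_to_simulate(target_cells, interpreter_overhead=16):
    """Memory the host must devote: an image of the target plus interpreter state."""
    return target_cells + interpreter_overhead

HOST_CELLS = 1024

# Simulating a smaller machine fits.
print(cells_needed_to_simulate(256) <= HOST_CELLS)         # True

# Simulating a machine the host's own size (i.e. itself) does not fit.
print(cells_needed_to_simulate(HOST_CELLS) <= HOST_CELLS)  # False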

Re: [agi] Tommy

2007-05-11 Thread William Pearson
On 11/05/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote: Tommy, the scientific experiment and engineering project, is almost all about concept formation. He gets a voluminous input stream but is required to parse it into coherent concepts (e.g. objects, positions, velocities, etc). None of
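(A hypothetical sketch of the kind of parsing described above: reducing a raw frame stream to a few coherent quantities such as an object's position and velocity. The single bright blob and the toy frames are assumptions made for this example.)

def centroid(frame, threshold=0.5):
    """Return the (row, col) centroid of cells above threshold, or None if empty."""
    cells = [(r, c) for r, row in enumerate(frame)
                    for c, v in enumerate(row) if v > threshold]
    if not cells:
        return None
    n = len(cells)
    return (sum(r for r, _ in cells) / n, sum(c for _, c in cells) / n)

def track(frames, dt=1.0):
    """Yield (position, velocity) estimates from successive frames."""
    prev = None
    for frame in frames:
        pos = centroid(frame)
        vel = None
        if pos is not None and prev is not None:
            vel = ((pos[0] - prev[0]) / dt, (pos[1] - prev[1]) / dt)
        yield pos, vel
        prev = pos

if __name__ == "__main__":
    # A 3x5 "ball" that moves one column to the right each frame.
    frames = [[[1.0 if c == t else 0.0 for c in range(5)] for _ in range(3)]
              for t in range(5)]
    for pos, vel in track(frames):
        print(pos, vel)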

Re: [agi] Tommy

2007-05-11 Thread Pei Wang
Josh, This is an interesting idea that deserves detailed discussion. Since the 90s there has been a strand in AI research that claims that robotics is necessary to the enterprise, based on the notion that having a body is necessary to intelligence. Symbols, it is said, must be grounded in

Re: [agi] Tommy

2007-05-11 Thread Mike Tintner
Josh, Since the 90s there has been a strand in AI research that claims that robotics is necessary to the enterprise, based on the notion that having a body is necessary to intelligence. Symbols, it is said, must be grounded in physical experience to have meaning. Without such grounding AI

Re: [agi] Tommy

2007-05-11 Thread Mike Tintner
Josh, I'm not quite sure what your angle is here, but I don't seem to be communicating (please correct me). If BTW you and/or others aren't interested in this whole cultural history area, please ignore. I'm saying the last 400 years have been framed by Descartes' and science's mind VERSUS

Re: [agi] Tommy

2007-05-11 Thread Mike Tintner
Josh, [ignore previous truncated version] I'm not quite sure what your angle is here, but I don't seem to be communicating (please correct me). If BTW you and/or others aren't interested in this whole cultural history area, please ignore. I'm saying the last 400 years have been framed by

Re: [agi] Tommy

2007-05-11 Thread Benjamin Goertzel
Computational AI/AGI vs. robotics; symbolic AI vs. situated, embodied, evolutionary robotics. What has been happening over the last decade or so is that all these dichotomies have been dissolving. It's arguably a