RE: [agi] EllaZ systems

2002-12-09 Thread Kevin Copple
Hey Ben, It seems that recent college IT grads here hope to earn about 3,000 RMB (375 USD) a month, but often must settle for less. This is based on my rather limited knowledge. Hopefully I will know more in the near future, since I have been getting the word out and have a local headhunter

RE: [agi] Hello from Kevin Copple

2002-12-09 Thread Ben Goertzel
Gary Miller wrote: I also agree that the AGI approach of modeling and creating a self-learning system is a valid bottom-up approach to AGI. But it is much harder for me, with my limited mathematical and conceptual knowledge of the research, to grasp how and when these systems will be able

Re: [agi] AI on TV

2002-12-09 Thread maitri
Ben, I just read the Bio. You gave a lot more play to his ideas than the show did. You probably know this, but Starlab has folded and I think he was off to the States... The show seemed to indicate that nothing of note ever came out of the project. In fact, it appeared to not generate one

RE: [agi] AI on TV

2002-12-09 Thread Gary Miller
On Dec. 9 Kevin said: "It seems to me that building a strictly 'black box' AGI that only uses text or graphical input/output can have tremendous implications for our society, even without arms and eyes and ears, etc. Almost anything can be designed or contemplated within a

RE: [agi] AI on TV

2002-12-09 Thread Ben Goertzel
I was at Starlab one week after it folded. Hugo was the only one left there -- he was living in an apartment in the building. It was a huge, beautiful, ancient building, formerly the Czech Embassy to Brussels. I saw the CAM-Brain machine (CBM) there, disabled by Korkin (the maker) due

Re: [agi] AI on TV

2002-12-09 Thread Shane Legg
maitri wrote: The second guy was from either England or the States, not sure. He was working out of his garage with his wife. He was trying to develop robot AI including vision, speech, hearing and movement. This one's a bit more difficult. Steve Grand, perhaps?

Re: [agi] AI on TV

2002-12-09 Thread Shane Legg
Gary Miller wrote: On Dec. 9 Kevin said: It seems to me that building a strictly black box AGI that only uses text or graphical input/output can have tremendous implications for our society, even without arms and eyes and ears, etc. Almost anything can be designed or contemplated within a

Re: [agi] AI on TV

2002-12-09 Thread maitri
That's him... - Original Message - From: Shane Legg, Sent: Monday, December 09, 2002 3:43 PM, Subject: Re: [agi] AI on TV. maitri wrote: The second guy was from either England or the States, not sure. He was working out of his garage with

Re: [agi] AI on TV

2002-12-09 Thread maitri
I don't want to underestimate the value of embodiment for an AI system, especially for the development of consciousness. But this is just my opinion... As far as a very useful AGI is concerned, I don't see the necessity of a body or sensory inputs beyond textual input. Almost any form can be represented as

Re: [agi] AI on TV

2002-12-09 Thread Pei Wang
I have a paper (http://www.cogsci.indiana.edu/farg/peiwang/PUBLICATION/#semantics) on this topic, which is mostly in agreement with what Kevin said. For an intelligent system, it is important for its concepts and beliefs to be grounded on the system's experience, but such experience can be

Re: [agi] AI on TV

2002-12-09 Thread Alan Grimes
Ben Goertzel wrote: This is not a matter of principle, it's a matter of pragmatics. I think that a perceptual-motor domain in which a variety of cognitively simple patterns are simply expressed will make world-grounded early language learning much easier... If anyone has the software

Re: [agi] AI on TV

2002-12-09 Thread Shane Legg
I think my position is similar to Ben's; it's not really what you ground things in, but rather that you don't expose your limited little computer brain to an environment that is too complex -- at least not to start with. Language, even reasonably simple context-free languages, could well be too
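To make the complexity point concrete, here is a minimal Python sketch (illustrative only; the grammar and function names are hypothetical, not anything from the thread). Even a toy context-free grammar with a handful of rules already generates hundreds of distinct sentences, which hints at why language may be a harder starting environment than simple perceptual patterns.

import random

# A toy context-free grammar (hypothetical, for illustration only).
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["the", "N"], ["the", "ADJ", "N"]],
    "VP":  [["V", "NP"], ["V"]],
    "N":   [["robot"], ["ball"], ["box"]],
    "ADJ": [["red"], ["small"]],
    "V":   [["sees"], ["moves"], ["pushes"]],
}

def sample(symbol="S"):
    """Expand a symbol into one random sentence of the grammar."""
    if symbol not in GRAMMAR:                 # terminal word
        return [symbol]
    words = []
    for sym in random.choice(GRAMMAR[symbol]):
        words.extend(sample(sym))
    return words

def count_sentences(symbol="S"):
    """Count the distinct sentences the grammar can generate."""
    if symbol not in GRAMMAR:
        return 1
    total = 0
    for production in GRAMMAR[symbol]:
        combos = 1
        for sym in production:
            combos *= count_sentences(sym)
        total += combos
    return total

if __name__ == "__main__":
    print(" ".join(sample()))
    print("distinct sentences:", count_sentences())   # 270 even for this tiny grammar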

Re: [agi] AI on TV

2002-12-09 Thread Pei Wang
On this issue, we can distinguish 4 approaches: (1) let symbols get their meaning through interpretation (provided in another language) --- this is the approach used in traditional symbolic AI. (2) let symbols get their meaning by grounding on textual experience --- this is what I and Kevin
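The snippet above is cut off after approach (2), but the contrast between the first two approaches can be made concrete with a small, purely illustrative Python sketch (all names and example data are hypothetical): under (1) a symbol's meaning is a hand-authored definition in another language, while under (2) it is whatever summary of textual contexts the system has actually experienced.

from collections import Counter

# Approach (1): meaning via interpretation -- a fixed, hand-authored
# mapping from symbols to definitions in another (meta-)language.
interpretation = {
    "cat": "a small domesticated feline",
    "dog": "a domesticated canine",
}

# Approach (2): meaning via textual experience -- a symbol's meaning is
# summarized by the contexts in which the system has encountered it.
class ExperienceGroundedSymbol:
    def __init__(self, name):
        self.name = name
        self.contexts = Counter()      # co-occurring words and their counts

    def observe(self, sentence):
        words = sentence.lower().split()
        if self.name in words:
            for w in words:
                if w != self.name:
                    self.contexts[w] += 1

    def meaning(self, top_n=5):
        # Here "meaning" is just the most strongly associated contexts.
        return self.contexts.most_common(top_n)

cat = ExperienceGroundedSymbol("cat")
cat.observe("the cat chased the mouse")
cat.observe("the cat sat on the mat")
print(interpretation["cat"])    # externally assigned, never changes
print(cat.meaning())            # accumulated from the system's experience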

RE: [agi] Tony's 2d World

2002-12-09 Thread Ben Goertzel
Tony's 2D training world is a lot simpler than A2I2's, for now. [He is quite free to share details with you or this list, though.] For one thing, his initial shape-world is perception only, involving no action! The simple stuff that we're going to test with it right now does not involve
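For readers trying to picture the kind of environment being described, here is a minimal Python sketch of a perception-only 2D shape-world (illustrative only; the grid size and names are assumptions, not Novamente's or A2I2's actual design): the environment emits simple visual patterns and there is no action channel at all.

import random

GRID = 8   # side length of the square visual field (hypothetical size)

def blank_grid():
    return [[0] * GRID for _ in range(GRID)]

def draw_square(grid, x, y, size):
    for dy in range(size):
        for dx in range(size):
            grid[y + dy][x + dx] = 1
    return grid

def random_percept():
    """Emit one percept: a grid containing a single simple shape.
    Perception only -- the learner never sends any action back."""
    size = random.randint(2, 4)
    x = random.randint(0, GRID - size)
    y = random.randint(0, GRID - size)
    return draw_square(blank_grid(), x, y, size)

if __name__ == "__main__":
    for row in random_percept():
        print("".join("#" if cell else "." for cell in row))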