Re: [agi] Ben vs. the AI academics...

2004-10-24 Thread Bill Hibbard
Yeah, it was fun to watch you stir them up, Ben. But they did take you seriously in the discussions, for example when they included your provocative quote in the plenary summary. A lot of the systems had impressive behavior, but most were dead end approaches, in my opinion, because they made

p.s., Re: [agi] Ben vs. the AI academics...

2004-10-24 Thread Bill Hibbard
My talk is available at: http://www.ssec.wisc.edu/~billh/g/FS104HibbardB.pdf There was a really interesting talk by the neuroscientist Richard Granger, with some publications available at: http://www.brainengineering.com/publications.html Cheers, Bill

Re: [agi] Unlimited intelligence.

2004-10-24 Thread Brad Wyble
On Thu, 21 Oct 2004, deering wrote: True intelligence must be aware of the widest possible context and derive super-goals based on direct observation of that context, and then generate subgoals for subcontexts. Anything with preprogrammed goals is limited intelligence. You have pre-programmed

RE: [agi] Ben vs. the AI academics...

2004-10-24 Thread Ben Goertzel
A lot of the systems had impressive behavior, but most were dead-end approaches, in my opinion, because they made logical reasoning fundamental with learning as an add-on. The most impressive talk from the mainstream AI community was by Deb Roy, who is achieving interesting

RE: [agi] Ben vs. the AI academics...

2004-10-24 Thread Brad Wyble
On Sun, 24 Oct 2004, Ben Goertzel wrote: One idea proposed by Minsky at that conference is something I disagree with pretty radically. He says that until we understand human-level intelligence, we should make our theories of mind as complex as possible, rather than simplifying them -- for fear of

RE: [agi] Ben vs. the AI academics...

2004-10-24 Thread Ben Goertzel
Hi Brad, Of course I understand that to get the academic community (or anyone else) really excited about Novamente as an AGI system, we'll need splashy demos. They will come in time, don't worry ;-) We have specifically chosen to develop Novamente in accordance with a solid long-term

RE: [agi] Ben vs. the AI academics...

2004-10-24 Thread Ben Goertzel
Now, I understand well that the human brain is a mess with a lot of complexity, a lot of different parts doing diverse things. However, what I think Minsky's architecture does is to explicitly embed, in his AI design, a diversity of phenomena that are better thought of as being emergent. My

RE: [agi] Ben vs. the AI academics...

2004-10-24 Thread Brad Wyble
> Hi Brad, [...] really excited about Novamente as an AGI system, we'll need splashy demos. They will come in time, don't worry ;-) We have specifically chosen to
Looking forward to it as ever :) I can understand your frustration with this state of affairs. Getting people to buy into your

RE: [agi] Ben vs. the AI academics...

2004-10-24 Thread Ben Goertzel
Hi,
> Looking forward to it as ever :) I can understand your frustration with this state of affairs. Getting people to buy into your theoretical framework requires a major time investment on their part. This is why my own work stays within the bounds of conventional experimental and

[agi] convergence problem in Novamente

2004-10-24 Thread Yan King Yin
> I just had a somewhat funny experience with the traditional AI research community. Moshe Looks and I gave a talk Friday at the AAAI Symposium on Achieving Human-Level Intelligence Through Integrated Systems and Research. Our talk was an overview of Novamente; if you're curious our

RE: [agi] Ben vs. the AI academics...

2004-10-24 Thread Brad Wyble
So much for getting work done today :) I noticed at this conference that different researchers were using basic words like "knowledge" and "representation" and "learning" and "evolution" in very different ways -- which makes communication tricky! Don't get me started on Working Memory. In an AI context,

RE: [agi] convergence problem in Novamente

2004-10-24 Thread Ben Goertzel
Hi YKY,
> I agree that your algorithmic approach to AI is worth exploring, but I think one serious problem is that when you use the combinator logic to match various patterns, there is no guarantee that the process will converge.
I don't think mind is about guarantees! It's nondeterministic
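YKY's worry about non-convergence is easy to make concrete. The following is a toy sketch only -- it is not Novamente's or YKY's actual pattern-matching machinery -- showing a leftmost-outermost SKI-combinator reducer with a step cap: some terms reach normal form quickly, while others (the classic `(S I I)(S I I)`) rewrite forever, so any process built on unrestricted combinator reduction needs an external bound rather than a convergence guarantee.

```python
# Toy SKI-combinator reducer (illustrative only). Terms are combinator
# names ("S", "K", "I") or applications encoded as ("app", f, x).

def A(f, x):
    """Apply term f to term x."""
    return ("app", f, x)

def step(t):
    """One leftmost-outermost reduction step; returns (term, changed)."""
    if isinstance(t, str):
        return t, False
    _, f, x = t
    if f == "I":                                   # I x  ->  x
        return x, True
    if isinstance(f, tuple):
        _, g, y = f
        if g == "K":                               # K y x  ->  y
            return y, True
        if isinstance(g, tuple) and g[1] == "S":   # S z y x  ->  (z x)(y x)
            z = g[2]
            return A(A(z, x), A(y, x)), True
    nf, changed = step(f)                          # otherwise reduce inside
    if changed:
        return A(nf, x), True
    nx, changed = step(x)
    return A(f, nx), changed

def normalize(t, cap=1000):
    """Reduce until normal form or the step cap; returns (term, converged)."""
    for _ in range(cap):
        t, changed = step(t)
        if not changed:
            return t, True
    return t, False

# S K K a reduces to a: convergent.
term, ok = normalize(A(A(A("S", "K"), "K"), "a"))
print(term, ok)                 # a True

# (S I I)(S I I) rewrites forever: only the step cap stops it.
sii = A(A("S", "I"), "I")
term, ok = normalize(A(sii, sii), cap=200)
print(ok)                       # False
```

The step cap stands in for whatever resource bound a real system would impose (attention, time-slicing, etc.); the point is only that termination must be imposed from outside, not derived from the rewrite rules.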

Re: [agi] Ben vs. the AI academics...

2004-10-24 Thread Pei Wang
I just got home and have no time to write long emails --- I type much slower than Ben does. ;-) I'm very glad to meet Ben again, and Bill and Moshe for the first time (as well as some other people who are not on this list). The Symposium description and schedule can be found at

Re: [agi] Ben vs. the AI academics...

2004-10-24 Thread Pei Wang
> One idea proposed by Minsky at that conference is something I disagree with pretty radically. He says that until we understand human-level intelligence, we should make our theories of mind as complex as possible, rather than simplifying them -- for fear of leaving something out!
This reminds me

[agi] Model simplification and the kitchen sink

2004-10-24 Thread J . Andrew Rogers
On Oct 24, 2004, at 7:05 AM, Ben Goertzel wrote: One idea proposed by Minsky at that conference is something I disagree with pretty radically. He says that until we understand human-level intelligence, we should make our theories of mind as complex as possible, rather than simplifying them --

Re: [agi] Model simplification and the kitchen sink

2004-10-24 Thread justin corwin
James, I have to say, this is very interesting, and unless I'm very much mistaken, I'm not alone in flipping through my entry-level chemistry works looking for bibliographic references to chemical engineering texts to beg/borrow/steal. But before I run out and start reading, I want to ask your

Re: [agi] Model simplification and the kitchen sink

2004-10-24 Thread Brad Wyble
Another point to this discussion is that the problems of AI and cognitive science are unsolvable by a single person. 1 brain can't understand itself, but perhaps 10,000 brains can understand or design 1 brain. Therefore, these sciences depend on the interaction of communities of scientists in

RE: [agi] Model simplification and the kitchen sink

2004-10-24 Thread Ben Goertzel
I think that, in approaching AI, one should try to find a theory that accounts for all cognitive phenomena observed in humans (and potentially for other cognitive phenomena not observed in humans, that one wants to see in one's AI). However, I think that oftentimes a relatively compact *theory*

Re: [agi] Model simplification and the kitchen sink

2004-10-24 Thread J . Andrew Rogers
I don't want to spend too much time on this, so I'll sum up a few things. The major difference between computer science and chemical engineering as a system model is that chemical engineering has no real axioms. Consequently, you get some inconsistencies that have to be resolved that don't

[agi] Good joke...

2004-10-24 Thread Ben Goertzel
http://www.1729.com/consciousness/math-journal.html --- To unsubscribe, change your address, or temporarily deactivate your subscription, please go to http://v2.listbox.com/member/[EMAIL PROTECTED]

Re: [agi] Model simplification and the kitchen sink

2004-10-24 Thread J . Andrew Rogers
On Oct 24, 2004, at 2:14 PM, Brad Wyble wrote: Another point to this discussion is that the problems of AI and cognitive science are unsolvable by a single person. 1 brain can't understand itself, but perhaps 10,000 brains can understand or design 1 brain. This does not follow. You can build

Re: [agi] Model simplification and the kitchen sink

2004-10-24 Thread Eugen Leitl
On Sun, Oct 24, 2004 at 05:14:46PM -0400, Brad Wyble wrote: Another point to this discussion is that the problems of AI and cognitive science are unsolvable by a single person. 1 brain can't understand itself, but perhaps 10,000 brains can understand or design 1 brain. Intelligence is not