> From: Richard Loosemore [mailto:[EMAIL PROTECTED]
>
> I think this is a very important issue in AGI, which is why I felt
> compelled to say something.
>
> As you know, I keep trying to get meaningful debate to happen on the
> subject of *methodology* in AGI. That is what my claims about the
> complex systems problem are all about: the very serious possibility
> that the existing AGI/AI methodology is so seriously broken that
> virtually everything going on right now will be written up by future
> historians as a complete waste of effort.
I don't think that will happen. Sometimes a lot of energy has to be expended just to move ahead an inch. There is also some spinning of wheels going on while other technologies mature, which is happening quite well, by the way. And an awful lot of directly applicable and related theoretical work has been accomplished and disseminated over the last few decades.

> In that context - where there is something of an agreement about what
> the big unsolved problems are, and where I have raised questions about
> the very foundations of today's AGI methodology - it is truly
> astonishing to hear people talking about issues being more or less
> solved, bar the shouting.

Excuse my ignorance - what are the top 3 unsolved problems? NLP, and what else?

And then, from what I have gathered on this email list, you favor a complex-systems emergent approach, but you somehow don't agree with mathematical models. That's an immediate turn-off for implementationalists, so it's hard to gain acceptance. Could you give a one-liner (or longer) description of your theory again, if you don't mind, or a URL? My interest is somewhat captivated.

John

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=71597210-e47d1c
