John G. Rose wrote:
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
I think this is a very important issue in AGI, which is why I felt
compelled to say something.

As you know, I keep trying to get meaningful debate to happen on the
subject of *methodology* in AGI.  That is what my claims about the
complex systems problem are all about:  the very serious possibility
that the existing AGI/AI methodology is so seriously broken that
virtually everything going on right now will be written up by future
historians as a complete waste of effort.

I don't think that will happen; sometimes a lot of energy has to be expended just to move ahead an inch. There is also some spinning of wheels going on while other technologies mature, which is happening quite well, by the way. And an awful lot of directly applicable and related theoretical work has been accomplished and disseminated over the last few decades.

In that context - where there is something of an agreement about what
the big unsolved problems are, and where I have raised questions about
the very foundations of today's AGI methodology - it is truly
astonishing to hear people talking about issues being more or less
solved, bar the shouting.

Excuse my ignorance - what are the top 3 unsolved problems? NLP, and what else? And then, from what I have gathered on this email list, you favor a complex systems emergent approach? But you somehow don't agree with mathematical models. That's an immediate turn-off for implementationalists, so it's hard to gain acceptance. Could you give a one-line (or longer) description of your theory again, if you don't mind, or a URL? My interest is somewhat captivated.

Top three?  I don't know if anyone ranks them.

Try:

1) Grounding Problem (the *real* one, not the cheap substitute that everyone usually thinks of as the symbol grounding problem).

2) The problem of designing an inference control engine whose behavior is predictable, governable, etc.

3) A way to represent things - and in particular, uncertainty - without getting buried up to the eyeballs in (e.g.) temporal logics that nobody believes in.

Take this with a pinch of salt: I am sure there are plenty of others. But if you came up with a *principled* solution to these issues, I'd be impressed.

A one-line description of my theory?  I'll think about it.


Richard Loosemore

-----
This list is sponsored by AGIRI: http://www.agiri.org/email