Yan King Yin wrote:
I'm not sure exactly what your ideas are for the mechanisms of "model" and "constraints", but in an AGI I think we can simply use predicate logic (or, equivalently, conceptual graphs) to represent thoughts. I'd even go further and say that the brain actually uses /symbolic/ representations similar to these. There is no need for numerical constraints to converge iteratively, because the production rules are relatively simple when expressed symbolically (allowing for fuzziness). Why make a problem harder when there is a simple solution?
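[The email does not specify a mechanism, so the following is only a hypothetical sketch of what "simple production rules expressed symbolically, allowing for fuzziness" might look like: a forward-chaining rule applied to predicate-logic-style facts, each carrying a degree of truth, with fuzzy AND taken as the minimum. All names (tweety, can_fly, etc.) are invented for illustration.]

```python
# Facts: (predicate, argument) -> degree of truth in [0, 1]
facts = {
    ("bird", "tweety"): 1.0,
    ("injured", "tweety"): 0.3,
}

def rule_can_fly(facts):
    """Production rule: if X is a bird and X is not injured, X can fly."""
    derived = {}
    for (pred, arg), degree in facts.items():
        if pred == "bird":
            not_injured = 1.0 - facts.get(("injured", arg), 0.0)
            # fuzzy AND = min of the antecedent degrees
            derived[("can_fly", arg)] = min(degree, not_injured)
    return derived

def forward_chain(facts, rules):
    """Apply rules until no fact's degree increases (a fixpoint),
    with no iterative numerical relaxation involved."""
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for key, val in rule(dict(facts)).items():
                if val > facts.get(key, 0.0):
                    facts[key] = val
                    changed = True
    return facts

forward_chain(facts, [rule_can_fly])
print(facts[("can_fly", "tweety")])  # 0.7
```

The point of the sketch is that the "convergence" step is just a discrete fixpoint over symbolic facts, not a numerical optimization.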

This is the one thing I am not so sure of: I think the idea of "predicate logic" carries a lot of baggage with it, which is likely to give systems built that way convergence problems that we are only just beginning to understand: decreasing stability and intelligence as the system's scope is expanded to include a higher proportion of grounded, autonomously acquired concepts, and as the period of standalone functioning is increased.

My feeling is that something similar to predicate logic is required, which will look like PL under some circumstances, or when viewed from a distance, but which when looked at closely will not be PL at all.

I'm working on it (as hard as I can, though not by any means full time, alas).


Richard Loosemore


