Derek Zahn wrote:
Oh, one other thing I forgot to mention. To reach my cheerful conclusion about your paper, I have to be willing to accept your model of cognition. I'm pretty "easy" on that premise-granting, by which I mean that I'm normally willing to go along with architectural suggestions to see where they lead. But I will be curious to see whether others are also willing to go along with you on your generic cognitive system model.


That's an interesting point.

In fact, the argument doesn't change much if we move to other models of cognition; it just looks different ... and more complicated, which is partly why I wanted to stick with my own formalism.

The crucial requirement is that there be a very powerful mechanism that lets the system analyze its own concepts: it has to be able to reflect on its own knowledge in a deeply recursive way. I think that Novamente, OpenCog, and other systems will eventually have that sort of capability, because it is such a crucial part of the "general" in "artificial general intelligence".
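
To make that idea slightly more concrete, here is a minimal sketch of the kind of mechanism I mean. It is purely illustrative: the names Concept and reflect are made up, and this is not any actual Novamente or OpenCog API. The point is just that concepts are ordinary data the system can inspect, and the result of inspecting a concept is itself a concept, so the analysis can recurse.

class Concept:
    def __init__(self, name, parts=()):
        self.name = name
        self.parts = list(parts)   # sub-concepts this concept is built from

def reflect(concept, depth=0, max_depth=3):
    """Build a new concept that describes the given concept.

    The description is itself a Concept, so the system can reflect
    on its own reflections -- the recursive step that matters here.
    """
    if depth >= max_depth:
        return Concept("opaque(%s)" % concept.name)
    described = [reflect(p, depth + 1, max_depth) for p in concept.parts]
    return Concept("analysis-of(%s)" % concept.name, described)

# Example: the system examines its own concept of "causation" ...
causation = Concept("causation", [Concept("event"), Concept("dependency")])
meta = reflect(causation)       # a concept about a concept
meta_meta = reflect(meta)       # a concept about THAT concept, and so on

Any architecture that can do something equivalent to that, whatever its internal representation, is one in which the argument goes through.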

Once a system has that mechanism, I can use it to run the same line of argument I took in the paper.

Also, the generic model of cognition was useful to me in the later part of the paper, where I analyze semantics. Other AGI architectures (logical ones, for example) implicitly stick with very strict kinds of semantics (possible-worlds semantics, for instance) that I actually think cannot be made to work for all of cognition.

Anyhow, thanks for your positive comments.



Richard Loosemore

