My contention is that the incremental approach will take unacceptably long to generate the compound terms needed to solve nontrivial practical problems.
We don't have to start from atoms --- most of the compounds in our minds are acquired through interaction with other people; we just build upon them. I don't think it is realistic to expect an AI system to generate compound terms of arbitrary complexity on its own.
To me, the important thing here is not one or two great ideas, but the average quality of the compounds.
That's just not true -- for instance, suppose you and I each generate 10 designs for an AGI system, and yours are all of quality 5, whereas nine of mine are of quality 0 and one is of quality 10. Clearly, in that case, I've done a better job of thinking about AGI than you! Assuming, that is, there is a selection algorithm in play that can adaptively pick which ideas to use...
You assume that only the best result matters, and that the bad ones don't hurt. That is not always the case. If I'm going to make decisions about my life, I'd rather have ten average ideas than one perfect one and nine terribly bad ones.
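The disagreement here can be made concrete with a toy calculation. The numbers below are taken from the example above (ten designs of quality 5 versus nine of quality 0 plus one of quality 10); the two evaluation policies -- "pick the single best" versus "every idea gets acted on" -- are my own illustrative framing, not anything either system actually implements:

```python
# Two hypothetical portfolios of idea "qualities" from the example above.
yours = [5] * 10          # ten uniformly average designs
mine = [0] * 9 + [10]     # nine failures and one standout

# Policy 1: a reliable selection algorithm picks only the best idea.
# Here the spiky portfolio wins.
assert max(mine) > max(yours)          # 10 > 5

# Policy 2: every idea gets acted on (e.g., life decisions), so the
# bad ones hurt. Here the uniform portfolio wins on average.
assert sum(yours) / len(yours) > sum(mine) / len(mine)   # 5.0 > 1.0
```

Which policy applies depends on whether a trustworthy selector exists -- which is exactly the point under dispute.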
I think this aspect of NARS is like a kind of stochastic hill-climbing, since I don't think NARS will keep (P AND Q) in its active memory for long if it isn't judged "good" by the system. Is that right? How does NARS, in its current conception, decide what to keep around and what to discard? What am I misunderstanding here?
You are close --- under resource restrictions, the system has to omit certain possibilities, which is the case for Novamente, too. The difference is just in how it decides what to omit. In NARS, the decision is experience-based. No doubt many good opportunities are lost, but unless we assume we somehow know the future, I don't think we can avoid that.
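The experience-based forgetting described above can be sketched as a bounded store in which each item carries a priority that is raised when experience judges it useful and decays otherwise, with the lowest-priority item evicted when capacity is exceeded. This is only a minimal illustration of the general idea -- the class name, methods, and parameters below are my own, not the actual NARS memory structure:

```python
class BoundedMemory:
    """Toy resource-bounded store: items compete on priority,
    and the least promising possibility is forgotten first.
    (A sketch of the general idea, not the real NARS design.)"""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}  # item -> priority in [0, 1]

    def insert(self, item, priority):
        # Keep the higher priority if the item is already present.
        self.items[item] = max(priority, self.items.get(item, 0.0))
        if len(self.items) > self.capacity:
            # Omit the lowest-priority possibility.
            worst = min(self.items, key=self.items.get)
            del self.items[worst]

    def reward(self, item, boost=0.2):
        # Experience judged this item useful: raise its priority.
        if item in self.items:
            self.items[item] = min(1.0, self.items[item] + boost)

    def decay(self, rate=0.9):
        # Unused items gradually lose priority and eventually get evicted.
        for item in self.items:
            self.items[item] *= rate


memory = BoundedMemory(capacity=2)
memory.insert("(P AND Q)", 0.5)
memory.insert("(P OR Q)", 0.4)
memory.insert("(NOT P)", 0.3)   # over capacity: lowest priority is forgotten
```

On this sketch, a compound like (P AND Q) survives only as long as experience keeps its priority above that of competing items -- good opportunities can indeed be lost, exactly as conceded above.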
Pei
