Ben, I have no problem with any of the points you made in the following.
However, the axioms of probability theory, and the interpretations of probability (frequentist, logical, subjective), all take a consistent probability distribution as a precondition. Therefore, this assumption is and will be behind any "proof" that AGI systems must be based on probability theory to be optimal. If such consistency can never be achieved by any concrete AGI system, I don't see the value of such a proof. It cannot even be taken as a useful upper bound or approximation in the design process, because accepting it and rejecting it lead to very different designs. It is just like designing an AGI under the assumption of infinite resources while saying that resource restrictions can be introduced gradually later --- it will never work, unless the old design is almost completely discarded.

Pei

On 2/4/07, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Again, to take consistency as an ultimate goal (which is never fully
> achievable) and as a precondition (even an approximate one) are two
> very different positions. I hope you are not suggesting the latter ---
> at least your posting makes me feel that way.

Hi,

In the Novamente system, consistency is just one among many goals that are balanced internally by the system as it decides how to allocate its attention and how to prune its knowledge base.

I happened to be thinking about consistency from a theoretic point of view lately, but not because I think it's the sole key to intelligence or anything like that... It happens to be easier to think about mathematically than many of the other important properties of intelligence, however ;-)

ben

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
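The "consistent probability distribution" precondition Pei refers to can be made concrete with a minimal sketch of a Kolmogorov-style consistency check. The function name and the belief values are purely illustrative assumptions, not taken from any system discussed in the thread; the point is only that the check itself presupposes a complete, enumerable event space with coherent values --- the very precondition at issue.

```python
# Illustrative sketch (all names hypothetical): checking whether a set of
# mutually exclusive, exhaustive belief assignments forms a valid probability
# distribution in the Kolmogorov sense. Note that even this trivial check
# assumes the agent can enumerate a closed event space, which a
# resource-bounded system may never be able to guarantee.

def is_consistent(atom_probs, tol=1e-9):
    """True iff every value lies in [0, 1] and the values sum to 1."""
    vals = list(atom_probs.values())
    in_range = all(0.0 <= p <= 1.0 for p in vals)
    sums_to_one = abs(sum(vals) - 1.0) <= tol
    return in_range and sums_to_one

print(is_consistent({"rain": 0.3, "no_rain": 0.7}))  # True
print(is_consistent({"rain": 0.6, "no_rain": 0.7}))  # False: sums to 1.3
```

The second call illustrates the kind of overcommitted belief state a concrete system can drift into; whether such states are then repaired toward consistency, or tolerated by design, is exactly where the two positions in the thread diverge.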
