On 2/4/07, gts <[EMAIL PROTECTED]> wrote:
A mathematical test for objectivity/subjectivity might be whether Novamente (or any AGI) could allow, in principle, for the possibility that reasoners with the same evidence arrive at different posterior probabilities under Bayes' rule, as can happen under subjectivism. My thought is that a programmer is essentially forced, for practical reasons, to disallow that sort of inconsistency -- that he must implement an objective interpretation.
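The divergence gts describes is straightforward to sketch. In the subjectivist picture, two agents can agree on the likelihoods yet hold different priors, so the same evidence yields different posteriors. A minimal illustration (all numbers here are invented for the example, not from the thread):

```python
def posterior(prior_h: float, lik_e_given_h: float, lik_e_given_not_h: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H)P(H) / P(E)."""
    p_e = lik_e_given_h * prior_h + lik_e_given_not_h * (1.0 - prior_h)
    return lik_e_given_h * prior_h / p_e

# Both agents agree on the likelihoods of the evidence...
lik_h, lik_not_h = 0.9, 0.2

# ...but start from different subjective priors for the hypothesis H.
agent_a = posterior(0.5, lik_h, lik_not_h)
agent_b = posterior(0.1, lik_h, lik_not_h)

print(agent_a)  # roughly 0.818
print(agent_b)  # roughly 0.333 -- same evidence, different posterior
```

An "objective" system in the sense gts suggests would force both agents to the same prior (e.g. by a maximum-entropy or symmetry argument), collapsing the two answers into one.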
When programming my AGI system, I'm forced to allow inconsistency. :)
The definition of 'probabilistic consistency' that I was using comes from E. T. Jaynes' book _Probability Theory: The Logic of Science_, page 114. These are Jaynes' three 'consistency desiderata' for a probabilistic robot:

1. If a conclusion can be reasoned out in more than one way, then every possible way must lead to the same result.
2. The robot takes into account all information relevant to the question.
3. The robot always represents equivalent states of information with equivalent plausibility assignments.
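Desideratum 1 (path-independence) can be made concrete with a toy joint distribution: the product rule lets you reach P(A, B) either as P(A)P(B|A) or as P(B)P(A|B), and consistency demands both routes agree. A small sketch with an invented probability table:

```python
import math

# Illustrative joint distribution P(A=a, B=b) over two binary variables.
joint = {
    (True, True): 0.30, (True, False): 0.20,
    (False, True): 0.10, (False, False): 0.40,
}

def marginal_a(a: bool) -> float:
    return sum(p for (x, _), p in joint.items() if x == a)

def marginal_b(b: bool) -> float:
    return sum(p for (_, y), p in joint.items() if y == b)

# Two reasoning paths to the same conclusion P(A=T, B=T):
path1 = marginal_a(True) * (joint[(True, True)] / marginal_a(True))  # P(A) P(B|A)
path2 = marginal_b(True) * (joint[(True, True)] / marginal_b(True))  # P(B) P(A|B)

print(math.isclose(path1, path2))  # True -- both routes give the same result
```

In a consistent probability model this agreement is automatic; Pei's point below is that a real intelligent system, with bounded resources and fragmentary knowledge, cannot maintain a single globally coherent joint distribution, so the two paths may in practice disagree.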
I don't think any intelligent system (human or machine) can achieve any of the three desiderata, except in trivial cases.

Pei
Seems to me that strict enforcement of these desiderata (especially #3) would make the robot an objective Bayesian as opposed to a subjective Bayesian in the de Finetti sense.

-gts

----- This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to: http://v2.listbox.com/member/?list_id=303
