On Thu, 08 Feb 2007 09:26:28 -0500, Pei Wang <[EMAIL PROTECTED]> wrote:
In simple cases like the above one, an AGI should achieve coherence with little difficulty. What an AGI cannot do is guarantee coherence in all situations, which is impossible for human beings, either --- think of situations where the incoherence of a betting arrangement takes many steps of inference, as well as the necessary domain knowledge, to reveal.
Yes, but as I wrote to Ben yesterday, it is not possible to make a Dutch book against an AGI that does not pretend to have knowledge it does not have.
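For readers new to the term, here is a minimal sketch of the classic Dutch book being discussed (a hypothetical illustration, not anything proposed in the thread): an agent that prices a $1 ticket on an event at its own credence, and whose credences for an event and its negation sum to more than 1, can be sold both tickets at a guaranteed loss.

```python
def dutch_book_profit(p_a, p_not_a):
    """Bookie's sure profit from selling the agent both $1 tickets.

    The agent pays p_a for a ticket worth $1 if A occurs, and p_not_a
    for a ticket worth $1 if A does not occur. Exactly one ticket pays
    out, so the bookie collects p_a + p_not_a and pays out 1 -- the
    same net result in every state of the world.
    """
    stakes = p_a + p_not_a   # total the agent pays up front
    payout = 1.0             # exactly one ticket wins, whatever happens
    return stakes - payout   # positive => guaranteed loss for the agent

# Incoherent agent: credences sum to 1.2, so the bookie nets about 0.20
# no matter whether A occurs.
incoherent = dutch_book_profit(0.6, 0.6)

# Coherent agent: credences sum to 1, so no sure profit exists.
coherent = dutch_book_profit(0.6, 0.4)
```

The point of the sketch is the one made above: the sure loss depends on the agent committing to definite prices; an agent that declines to post odds on questions beyond its knowledge offers the bookie nothing to exploit.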
So an AGI can be perfectly coherent with respect to *some* bounded body of knowledge, provided it knows its own limits. Such a modest AGI would certainly be more trustworthy, especially if it were employed in fields such as national defense, where incoherent reasoning could lead to disaster.
-gts

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
