On Sat, 10 Feb 2007 13:59:27 -0500, Jef Allbright <[EMAIL PROTECTED]> wrote:

> gts wrote:
>
>> I'm not expecting "essentially perfect" coherency in AGI.
>> I understand perfection is out of reach.

Thanks for quoting me here, Jef. I think Ben may have thought I believed something different.

I understand and agree with everyone here that perfect coherency is not feasible in AGI.

> My question to you was whether, as a professed C++ developer, you are
> familiar with the well-known impracticality of certifying a non-trivial
> software product to be essentially free of unexpected failure modes, and
> if so, do you see a similarity to your question of coherent reasoning by
> machine intelligence?

Sure, an analogy can be made.

> In a similar vein, do you think you understand Ben's comment about the
> problem being NP-hard?

Sure...

...our differences here seem to be a matter of degree....

I am less optimistic about the possibility of developing a smart, accurate, probabilistic AGI than I am about developing one that totally *smokes* humanity in measures of probabilistic (De Finetti) coherency.

> By the way, De Finetti used the word "coherent" in the very standard
> sense meaning that all the pieces must fit together from all possible
> points of view (within all possible contexts).

I was explaining that here yesterday.

> This same concept of coherence is the basis of the axioms of probability...

Yes.
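To make that connection concrete, here is a minimal sketch of De Finetti's Dutch-book criterion (my own illustration, not from this thread; the outcome space, numbers, and function name are all invented). Betting quotients that violate the additivity axiom admit stakes that guarantee the bettor a sure loss in every outcome; quotients obeying the axioms do not.

```python
def worst_case_gain(quotients, events, outcomes, stakes):
    """Bettor buys stakes[i] units of a ticket on events[i]; each unit
    costs quotients[i] and pays 1 if the event occurs.  Returns the
    bettor's minimum net gain over all outcomes."""
    gains = []
    for w in outcomes:
        gain = sum(s * ((1 if w in ev else 0) - q)
                   for s, q, ev in zip(stakes, quotients, events))
        gains.append(gain)
    return min(gains)

outcomes = {"rain", "dry"}
events = [{"rain"}, {"dry"}]  # A and not-A

# Incoherent quotients: they sum to 1.1, violating additivity.  Buying one
# unit of each ticket costs 1.1 but pays exactly 1 whatever happens, so
# the bettor is guaranteed to lose 0.1 -- a Dutch book.
incoherent = [0.6, 0.5]
print(round(worst_case_gain(incoherent, events, outcomes, [1.0, 1.0]), 10))  # -0.1

# Coherent quotients sum to 1; the same stakes break exactly even.
coherent = [0.6, 0.4]
print(round(worst_case_gain(coherent, events, outcomes, [1.0, 1.0]), 10))    # 0.0
```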

> ... and the principle of indifference.

No.

> Understand this underlying concept and you may understand the others.

I understand it, Jef. But do you? The principle of indifference is not derived from or implied in any way by De Finetti coherency. De Finetti had no use for the idea. Neither do I.
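For what it's worth, the point can be shown mechanically. A minimal sketch (my own example; the function name and the numbers are invented): over an exhaustive partition, De Finetti coherence demands only non-negative quotients summing to one, so a heavily skewed assignment is exactly as coherent as the uniform one the principle of indifference would dictate.

```python
def coherent_on_partition(quotients, tol=1e-9):
    """Coherence check for betting quotients on an exhaustive partition
    of mutually exclusive outcomes: they avoid a Dutch book iff they are
    non-negative and sum to 1."""
    return all(q >= 0 for q in quotients) and abs(sum(quotients) - 1) < tol

uniform = [1/6] * 6                          # what indifference dictates for a die
skewed  = [0.5, 0.3, 0.1, 0.05, 0.03, 0.02]  # no indifference at all

print(coherent_on_partition(uniform))  # True
print(coherent_on_partition(skewed))   # True -- coherence does not force uniformity
```

Both assignments pass; coherence alone never singles out the uniform one.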

-gts

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
