Hi,

Well, Jaynes showed that the PI can be derived from another assumption, right? Namely, that equivalent states of information should yield equivalent probabilities...

This also seems to be dealt with at the end of Cox's book "The Algebra of Probable Inference", where he derives the standard entropy formula for measuring information from some fundamental axiomatic assumptions... One can define information axiomatically in this manner, then assume maximum-entropy priors, and the PI follows from that...
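
To spell out that last step (just a minimal sketch in my own notation, not Cox's): with no constraint other than normalization, maximizing the entropy H(p) = -sum_i p_i log p_i over n outcomes forces every p_i to equal 1/n, which is exactly the PI assignment. Setting the derivative of the Lagrangian -sum_i p_i log p_i - lambda (sum_i p_i - 1) to zero gives log p_i = constant for every i, so p_i = 1/n. A quick numerical spot check in Python:

import numpy as np

n = 6  # e.g. a die about which we know nothing

def entropy(p):
    # Shannon entropy H(p) = -sum_i p_i log p_i (natural log)
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p))

uniform = np.full(n, 1.0 / n)
print(entropy(uniform))  # log(6) ~ 1.7918, the maximum possible for n = 6

# Any other normalized distribution has strictly lower entropy:
for _ in range(3):
    q = np.random.dirichlet(np.ones(n))
    print(entropy(q), "<=", entropy(uniform))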

But so far as I can tell this still requires an additional assumption beyond those needed to derive probability theory itself...

-- Ben

On Jan 28, 2007, at 1:10 PM, gts wrote:

Hi Ben,

On Extropy-chat, you and I and others were discussing the foundations of probability theory, in particular the philosophical controversy surrounding the so-called Principle of Indifference. Probability theory is of course relevant to AGI because of its bearing on decision theory (I assume that's why you invited me here. :)

As you know, the Principle of Indifference (PI) states that if no reason exists to prefer any of n possibilities, then each possibility should be assigned a probability equal to 1/n. The PI is also known as the Principle of Insufficient Reason, the name given to it by the classical probabilists who followed Laplace, who took it for granted as a self-evident principle of logic. (It was John Maynard Keynes who later renamed it the Principle of Indifference.)
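
(A trivial worked case, just my own illustration: for a die about which we know nothing at all, n = 6, so the PI assigns each face probability 1/6.)

n = 6                     # faces of a die we know nothing about
pi_prior = [1.0 / n] * n  # PI assignment: each face gets 1/6 ~ 0.1667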

I found a discussion of the Principle of Insufficient Reason in this book about decision theory:

Choices: An Introduction to Decision Theory, by Michael D. Resnik
http://books.google.com/books?vid=ISBN0816614407&id=4genrKNUkKcC&pg=RA2-PA35&lpg=RA2-PA35&ots=wE4Uxk7bqE&dq=principle+of+insufficient+reason&sig=PsMUy3fqcMgFha8Kyx2HLaC-EA8

This author criticizes the PI in two ways. His first is mainly philosophical: if there is no reason for assigning one set of probabilities rather than another, then there is no reason for assuming the states are equiprobable either. This is pretty much the same argument I was trying to make on ExI.

His second objection is one we had not discussed: though the PI seems like a common-sense way to proceed under conditions of uncertainty, invoking it can sometimes lead to disastrous consequences for the decision-maker. I would add that while the PI might be useful in some situations as a heuristic device in programming AGI, perhaps some accounting should be made for the extra risk it entails.
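
To make that second objection concrete (this is my own toy example, not Resnik's): suppose an agent with no information about which of two states obtains fills the gap with the PI and then maximizes expected utility. If the true probabilities are far from uniform, the PI-based ranking can select the act with the catastrophic downside.

# Toy illustration (mine, not Resnik's): two acts, two states.
payoffs = {
    "risky": {"s1": 100.0, "s2": -50.0},   # big gain in s1, big loss in s2
    "safe":  {"s1":   1.0, "s2":   1.0},   # small sure gain
}

def expected_utility(act, probs):
    return sum(probs[s] * u for s, u in payoffs[act].items())

pi_probs   = {"s1": 0.50, "s2": 0.50}   # PI: no reason to prefer either state
true_probs = {"s1": 0.05, "s2": 0.95}   # the actual world, unknown to the agent

for act in payoffs:
    print(act,
          "EU under PI =", expected_utility(act, pi_probs),
          "| EU under true probs =", expected_utility(act, true_probs))

# Under the PI the agent prefers "risky" (EU = 25 vs 1), but against the
# true probabilities that choice has EU = -42.5: the "disastrous
# consequences" the author warns about.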

-gts


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
