The trouble is that what is usually called the "conjunction fallacy" can be further divided into different cases. Some of them will usually be avoided by an AGI (gts' example), some are hard to avoid, and some are not really a fallacy at all (the classical "Linda the bank teller" example) --- the traditional conclusions all rest on the implicit assumption that "probability" is defined extensionally, so that P(A&B) <= P(A). In situations where a measurement M is defined intensionally, we can have M(A&B) > M(A). The last case directly relates to the "representativeness heuristic", which I discussed in http://www.cogsci.indiana.edu/pub/wang.heuristic.ps
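The extensional/intensional distinction can be illustrated with a minimal Python sketch. The prototype features and the similarity measure below are hypothetical choices for illustration only, not taken from the paper above:

```python
import random

# Extensional: probability as a measure over a sample space.  Draw
# random "worlds"; every world satisfying A & B also satisfies A,
# so the estimate for P(A & B) can never exceed that for P(A).
random.seed(0)
worlds = [(random.random() < 0.6, random.random() < 0.5)
          for _ in range(100_000)]  # (does A hold?, does B hold?)
p_a = sum(a for a, b in worlds) / len(worlds)
p_ab = sum(a and b for a, b in worlds) / len(worlds)
assert p_ab <= p_a

# Intensional: a "representativeness" measure M that scores a
# description by feature overlap with a prototype.  Adding a feature
# that matches the prototype RAISES the score, so M(A & B) > M(A)
# is possible -- not a fallacy under this reading.
prototype = {"outspoken", "philosophy_major", "feminist"}  # hypothetical

def representativeness(description: set) -> float:
    # fraction of the description's features shared with the prototype
    return len(description & prototype) / max(len(description), 1)

m_a = representativeness({"bank_teller"})               # 0.0
m_ab = representativeness({"bank_teller", "feminist"})  # 0.5
assert m_ab > m_a
```

Under the extensional reading the conjunction rule always holds; under the intensional reading the "Linda" answer tracks a different, internally consistent measure.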
I'm not sure if Kolmogorov complexity plays a role in this or not.

Pei

On 2/8/07, Eliezer S. Yudkowsky <[EMAIL PROTECTED]> wrote:
Pei Wang wrote:
> On 2/8/07, gts <[EMAIL PROTECTED]> wrote:
>>
>> I gave an example of a Dutch book in a post to Russell in which an
>> incoherent thinker assigns a higher probability to intelligent life on
>> Mars than to mere life on Mars. Since the first hypothesis can be true
>> only if the second is true, it is incoherent to assign a higher
>> probability to the first than to the second.
>>
>> Coherence is basically just common sense applied to probabilistic
>> reasoning. I'm dismayed to learn from Ben that coherence is so difficult
>> to achieve in AGI.
>
> In simple cases like the above one, an AGI should achieve coherence
> with little difficulty. What an AGI cannot do is guarantee
> coherence in all situations, which is impossible for human beings
> too --- think of situations where the incoherence of a bet
> setting takes many steps of inference, as well as the necessary domain
> knowledge, to reveal.

Actually, the conjunction fallacy is probably going to be one of the most difficult of all biases to eliminate; it may even be provably impossible for entities using any complexity-based variant of Occam's Razor, such as Kolmogorov complexity. If you ask for P(A) at time T and then P(A&B) at time T+1, you will get a higher answer for P(A&B) whenever A is a complex set of variable values insufficiently supported by direct evidence, and B is a non-obvious compact explanation for A. Seeing B reduces the apparent Kolmogorov complexity of A, raising A's prior. You cannot always see B directly from A, because that would amount to always being able to find the most compact explanation, which amounts to finding the shortest Turing machine that reproduces the data --- unsolvable, by reduction from the halting problem. I have sometimes thought that Levin search might yield provably consistent probabilities - after all, a supposed explanation doesn't do you any good if you can't derive the data from it or prove that it halts.
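The complexity-prior argument above can be put into a toy sketch. The encoding and the 20-bit length of the rule B are assumed numbers for illustration; the point is only that a newly-found shorter description raises the complexity-based prior:

```python
# Toy model: a reasoner's prior for a hypothesis is 2**(-K_est),
# where K_est is the length in bits of the SHORTEST description it
# has found so far -- only an upper bound on Kolmogorov complexity,
# since the true shortest program is uncomputable.

A = "01" * 32  # a 64-bit observation with a hidden pattern

# At time T the reasoner only has the literal description of A.
k_literal = len(A)            # 64 bits: spell the string out
prior_T = 2.0 ** -k_literal

# At time T+1 it is told B, a compact rule that generates A.
B = 'repeat "01" 32 times'
k_with_B = 20                 # assumed code length of the rule, in bits

prior_T1 = 2.0 ** -k_with_B

# Seeing B lowers the apparent complexity of A, so the prior reported
# for (A & B) at T+1 exceeds the one reported for A alone at T --
# the conjunction-fallacy pattern, from a reasoner that is doing its
# honest best with the descriptions it has found.
assert prior_T1 > prior_T
```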
Even so, seeing B directly from A might require an exponential search too costly to perform. Thus the conjunction fallacy - cases where being told about the hypothesis B raises the subjective probability P(A&B) above the value you previously gave to P(A) - is probably with us to stay, even unto the furthest stars. It may greatly diminish but not be utterly defeated.

--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
