At 10:17 AM -0700 7/15/03, Lotfi A. Zadeh wrote:
> Here is a concrete example. Let X be a real-valued random variable.
> What we know about the probability distribution, P, is that its mean is
> approximately a and its variance is approximately b, where
> "approximately a" and "approximately b" are fuzzy numbers defined by
> their membership functions. The question is: What is the
> entropy-maximizing P? In a more general version, what we know are
> approximate values of the first n moments of P. Can anyone point to a
> discussion of this issue in the literature?
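For reference, the classical exact-moment version of the problem has a standard answer: subject to fixed values of the first n moments, the entropy-maximizing density (when a maximizer exists) has the exponential-family form

    p^*(x) \propto \exp\Big( \sum_{k=1}^{n} \lambda_k x^k \Big),

with the Lagrange multipliers \lambda_k chosen so that the prescribed moments hold; for n = 2 (exact mean a and variance b) this is the Gaussian N(a, b). Your question is what becomes of this when the moments themselves are only approximately known.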
Please let us first consider the special, simpler case in which a and b (or the higher moments) are given by intervals, i.e. their membership functions take only the values 0 and 1. Then our knowledge of P is captured by a (crisp) set S of probability distributions, and the question becomes: which member of S maximizes entropy? (A small sketch of this case follows below.) This points to George J. Klir's Generalized Information Theory (see for example http://www.fuzzy.org.tw/download/IJFS_%AD%5E%A4%E5%B4%C1%A5Z/1(1)/01.pdf), which indeed makes explicit the multidimensional nature of uncertainty measures.
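To make the interval case concrete, here is a minimal sketch (the function name and the example intervals are mine, not taken from Klir's paper). It relies only on the classical facts that, for fixed mean and variance, the Gaussian maximizes differential entropy, and that this entropy, 0.5 ln(2*pi*e*v), is increasing in the variance v and independent of the mean:

    import math

    def max_entropy_interval_case(mean_iv, var_iv):
        """Entropy-maximizing P when the mean lies in mean_iv and the
        variance in var_iv (crisp intervals, i.e. membership 0 or 1).

        For fixed mean m and variance v, the entropy-maximizing
        distribution on the reals is the Gaussian N(m, v), whose
        differential entropy 0.5 * ln(2*pi*e*v) grows with v and does
        not depend on m.  Hence the maximum over the crisp set S is
        attained by a Gaussian with the largest admissible variance;
        any mean in its interval will do.
        """
        b_lo, b_hi = var_iv
        v_star = b_hi                                # entropy is increasing in v
        h_star = 0.5 * math.log(2 * math.pi * math.e * v_star)
        return {"mean_in": mean_iv, "variance": v_star, "entropy_nats": h_star}

    # Example: mean in [0.9, 1.1], variance in [1.8, 2.2]
    print(max_entropy_interval_case((0.9, 1.1), (1.8, 2.2)))

For genuinely fuzzy a and b, one natural route (though certainly not the only one) is to apply the same argument cut by cut: each alpha-cut of the fuzzy moments yields intervals, hence a crisp set S_alpha and a maximal entropy value, and these values assemble into a fuzzy number describing the maximum entropy.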
Best wishes from the International Symposium on Imprecise Probabilities and Their Applications in Lugano,
Minh