In a recent message, I stated that the maximum entropy 
principle is not applicable when the side-conditions are imprecise. 
Here is a concrete example. Let X be a real-valued random variable. What we know about the probability distribution P is that its mean is approximately a and its variance is approximately b, where "approximately a" and "approximately b" are fuzzy numbers defined by their membership functions. The question is: What is the entropy-maximizing P? In a more general version, what we know are approximate values of the first n moments of P. Can anyone point to a discussion of this issue in the literature?
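A minimal sketch of one possible reading of the question, under assumptions not stated in the post: interpret each fuzzy side-condition through an α-cut, which turns "approximately a" and "approximately b" into intervals. For a crisp mean and variance the entropy maximizer is the Gaussian N(m, v), whose differential entropy 0.5 ln(2πe v) depends only on the variance, so under interval constraints the maximizer is Gaussian with the largest admissible variance (any mean in its α-cut is equally good). The triangular membership shape and the function names below are illustrative assumptions, not anything from the post.

```python
import math

def alpha_cut(peak, spread, alpha):
    # α-cut of a symmetric triangular fuzzy number centred at `peak`
    # with support [peak - spread, peak + spread] (assumed shape).
    half = spread * (1.0 - alpha)
    return (peak - half, peak + half)

def gaussian_entropy(var):
    # Differential entropy of N(m, var): 0.5 * ln(2*pi*e*var);
    # note it does not depend on the mean m.
    return 0.5 * math.log(2.0 * math.pi * math.e * var)

def max_entropy_at_alpha(a, a_spread, b, b_spread, alpha):
    # Among all distributions with given variance, the Gaussian has
    # maximal entropy, and that entropy grows with the variance, so
    # under α-cut (interval) constraints we pick the largest
    # admissible variance; the mean may be anywhere in its α-cut.
    mean_lo, mean_hi = alpha_cut(a, a_spread, alpha)
    _, var_hi = alpha_cut(b, b_spread, alpha)
    return (mean_lo, mean_hi), var_hi, gaussian_entropy(var_hi)

# Example: mean "approximately 0" (spread 1), variance "approximately 1"
# (spread 0.5), at confidence level alpha = 0.5.
mean_range, var_star, h_star = max_entropy_at_alpha(0.0, 1.0, 1.0, 0.5, 0.5)
```

This α-cut reading gives one answer per α; a full fuzzy-constraint formulation would instead trade entropy against the membership grades themselves, which is presumably the open question being asked.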

-- 
Lotfi A. Zadeh
Professor in the Graduate School, Computer Science Division
Department of Electrical Engineering and Computer Sciences
University of California
Berkeley, CA 94720-1776
Director, Berkeley Initiative in Soft Computing (BISC) 

