I'd like to make a comment about Lotfi Zadeh's maxent question:

From: "Lotfi A. Zadeh" <[EMAIL PROTECTED]>
>  Here is a concrete example.  Let X be a real-valued random variable.
>  What we know about the probability distribution P is that its mean is
> approximately a and its variance is approximately b, where
> "approximately a" and "approximately b" are fuzzy numbers defined by
> their membership functions.  The question is:  What is the
> entropy-maximizing P?

Prof. Zadeh argues that "approximately a" means this:
>    Concretization in terms of membership functions is simplest and most
> natural. If u is a real number, then the grade of membership of u in the
> fuzzy set "approximately a" is simply the subjective degree to which u
> fits your intended meaning of "approximately a."

Kathryn Blackmond Laskey <[EMAIL PROTECTED]> takes this line:
> My first instinct on this problem would be to model P with a
> higher-order distribution.  For example, we might use a Beta
> distribution with parameters alpha and beta, where we are uncertain
> about the values of alpha and beta.  The statement that the mean is
> "approximately a" would provide evidence that
>      alpha/(alpha+beta)
> is near a.  In other words, I would use the Bayesian network

             S
            ^^
           /  \
          /    \
        alpha beta
          \    /
           \  /
            vv
             p

> where S denotes an assertion, made in a given context, that the mean
> of the distribution is approximately a.  Then we would have to
> assess, conditional on different values of alpha and beta, in the
> context in which the assertion was made, the relative probabilities
> that such an assertion would be made given different values of alpha
> and beta.  I grant you that this seems on the surface to be more
> cumbersome than just writing down a fuzzy membership function, but it
> has a clearly defined semantics grounded in probability theory and
> rational evidential reasoning.
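To make the higher-order construction concrete, here is a rough sketch in Python. It is only an illustration of the kind of computation Kathryn describes, not her actual model: the grid of (alpha, beta) values, the uniform prior, the asserted value a = 0.3, and the Gaussian-penalty likelihood for the assertion S are all assumptions of mine.

```python
import math

# Hypothetical setup: the assertion S is "the mean is approximately a".
a = 0.3

# Grid approximation over the uncertain Beta parameters (alpha, beta).
grid = [(al, be) for al in range(1, 21) for be in range(1, 21)]

def lik_S(al, be, tol=0.05):
    """P(S | alpha, beta): how probable the assertion S is, given that
    the Beta mean is alpha/(alpha+beta).  Modelled here (my assumption)
    as a Gaussian penalty on the distance of that mean from a."""
    mean = al / (al + be)
    return math.exp(-0.5 * ((mean - a) / tol) ** 2)

# Uniform prior over the grid, updated by Bayes on the assertion S.
prior = 1.0 / len(grid)
post = {ab: prior * lik_S(*ab) for ab in grid}
z = sum(post.values())
post = {ab: w / z for ab, w in post.items()}

# Posterior mass now concentrates on (alpha, beta) pairs whose mean
# alpha/(alpha+beta) lies near a, e.g. (3, 7) with mean exactly 0.3.
```

Note how the cumbersomeness Kathryn concedes shows up: one must commit to a prior over (alpha, beta) and to a likelihood model for the making of the assertion, and both choices are themselves subjective.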

I suggest this: why not take "approximately a" to be just a, but treat this
value defeasibly?
So, if we know the mean of X is approximately a, we maximise entropy subject
to the constraint that the mean of X is a; this yields a unique probability
function. But probability, on the maxent approach, is rational degree of
belief and is relative to background knowledge, so if we later learn a
better approximation a' then we ought to revise this probability function to
accommodate the new knowledge (by maximising entropy again, or by cross
entropy updating).
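To see how simple this is in practice, here is a sketch in Python with made-up numbers. It relies on the standard maxent result that, for a real-valued variable constrained to have mean a and variance b, the entropy-maximising distribution is the normal N(a, b); the particular values a = 5.0, b = 2.0 and the revised a' = 5.3 are invented for illustration.

```python
import math

def maxent_density(a, b):
    """Entropy-maximising density for a real-valued X constrained to
    have mean a and variance b: the normal N(a, b)."""
    def p(x):
        return math.exp(-(x - a) ** 2 / (2 * b)) / math.sqrt(2 * math.pi * b)
    return p

# Take the approximate measurements at face value:
p = maxent_density(a=5.0, b=2.0)

# On later learning a better approximation a' = 5.3, simply re-maximise
# entropy subject to the revised constraint (defeasible revision):
p_revised = maxent_density(a=5.3, b=2.0)
```

No membership function and no higher-order prior is needed; revision is just a re-run of the same maximisation under the new constraint.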

This approach has the following advantages, as far as I can see:
1. It is the simplest.
2. It is very easy to implement.
3. It yields a unique probability function.
4. It is well justified: all our measurements of continuous quantities are
approximate, yet we are always happy to treat our best measurement
(defeasibly) as the value of the measured quantity. When Galileo measured
the positions of the planets he didn't grade a range of alternative values
as members of a measurement set, nor did he quantify higher-order degrees
of belief that each of a range of values represented the true value; he
took his measurements at face value, but with a pinch of salt, developing
better and better telescopes to improve them.
5. It is better than the fuzzy approach in the following respect: while it
simplifies "approximately a" to a, this latter value is not taken out of
thin air - it is the approximate measurement. The fuzzy approach, on the
other hand, has to come up with a membership function for the fuzzy set
"approximately a" - at best a subjective process, at worst one subject to
infinite regress (realistically, the values of the membership function
themselves take the form "approximately c"...).
6. It is better than the higher-order degree-of-belief approach for a
similar reason: quantifying a prior over a range of values is at best
subjective, at worst subject to infinite regress.

All the best,
Jon
-------------------
Jon Williamson
Department of Philosophy, King's College, Strand, London, WC2R 2LS, UK
http://www.kcl.ac.uk/jw
