I have a function X, and two different approximator functions A and B

(A-X) is Gaussian (or at least appears to be, judging from mean,
variance, skewness and kurtosis calculations) with zero mean and
variance Va.
(B-X) is Gaussian (again judging from mean, variance, skewness and
kurtosis calculations) with zero mean and variance Vb.

Va is less than Vb, i.e. A tends to give a better estimate of X than B
does.

My question relates to the distribution of A: I would like to be able
to express it in terms of B (and/or its distribution) and X.

As a first guess I used just the distribution of A and the value of X,
but quickly realised that this doesn't take into account the
correlation between A and B: if A underestimates X, then more often
than not B underestimates X as well.
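For concreteness, here is a minimal sketch of the setup just described. The values of X, Va, Vb and the error correlation rho are all hypothetical (the original post gives no numbers); the errors (A-X) and (B-X) are drawn jointly Gaussian so that an underestimate by A tends to coincide with an underestimate by B:

```python
import numpy as np

rng = np.random.default_rng(0)

X = 10.0            # true value (hypothetical)
Va, Vb = 1.0, 4.0   # error variances, Va < Vb (hypothetical)
rho = 0.8           # assumed correlation between the errors A-X and B-X

# Covariance matrix of the joint error vector (A-X, B-X)
cov = np.array([[Va, rho * np.sqrt(Va * Vb)],
                [rho * np.sqrt(Va * Vb), Vb]])

errors = rng.multivariate_normal([0.0, 0.0], cov, size=100_000)
A = X + errors[:, 0]
B = X + errors[:, 1]

# Fraction of draws where A and B both underestimate X.
# Under independence this would be about 0.25; with positive
# correlation it is noticeably higher.
both_under = np.mean((A < X) & (B < X))
print(both_under)
```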

My next suggestion would be to use E(B-A) and X: generate a
particular value of A (let us call it 'ax') from its distribution
about X, average this value with b, and add E(B-A), i.e.

a = [ax + b]/2 + E(B-A)

Is this a reasonable thing to do, or is there a better way?
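The suggested combination can be sketched as follows. Again all numeric values are hypothetical, and note (as a side observation, not part of the original post) that under the stated zero-mean assumptions E(B-A) = E(B-X) - E(A-X) = 0:

```python
import numpy as np

rng = np.random.default_rng(1)

X = 10.0            # true value (hypothetical)
Va, Vb = 1.0, 4.0   # error variances (hypothetical)

b = X + rng.normal(0.0, np.sqrt(Vb))    # one observed value of B
ax = X + rng.normal(0.0, np.sqrt(Va))   # 'ax' drawn from A's distribution about X

# With both A-X and B-X zero-mean as stated, E(B-A) works out to 0
E_B_minus_A = 0.0

a = (ax + b) / 2 + E_B_minus_A
print(a)
```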


=================================================================
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at
                  http://jse.stat.ncsu.edu/
=================================================================
