I'm trying to better understand a statistical method which is vaguely
outlined in a paper I'm reading and am hoping a kind soul here can help.

The method is described as "variance reduction". The author uses it to
decide whether an economic indicator truly has forecasting ability over
and above a simple "naive" forecast. The method consists of ordering the
observations by the value of the indicator (the independent variable),
each paired with its associated historical future return (the dependent
variable), and then grouping the observations into "cells". Each cell
might contain, for example, 200 observations of the independent variable
and the associated dependent variable. The values of the dependent
variable in each cell are averaged; these averages are the conditional
forecasts, i.e. values of the dependent variable conditional on the
value of the indicator. An average is also calculated over the entire
set of dependent-variable observations and is used to define a "naive
forecast".
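In case it helps, here is how I picture that cell-averaging step,
written out as a small Python/NumPy sketch. The data are simulated and
the variable names are mine, not the paper's:

# Rough sketch of the cell-averaging step as I understand it.
# The indicator series and returns below are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
indicator = rng.normal(size=2000)                         # independent variable
future_return = 0.1 * indicator + rng.normal(size=2000)   # dependent variable

cell_size = 200
order = np.argsort(indicator)          # order observations by the indicator
sorted_returns = future_return[order]

# Group the ordered observations into cells of 200 and average the
# dependent variable within each cell.
cells = sorted_returns.reshape(-1, cell_size)
conditional_forecasts = cells.mean(axis=1)   # one conditional forecast per cell
naive_forecast = future_return.mean()        # grand mean = "naive forecast"

print(conditional_forecasts)
print(naive_forecast)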

The conditional forecasts are then compared to the naive forecast as a
first step in determining whether the indicator truly has predictive
significance. However, as with any comparison of two means from noisy
data, there's always a chance that the predictive value you've
calculated is a random result.
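My best guess at what the "variance reduction" comparison amounts to,
continuing the snippet above (this is only my reading; the paper doesn't
spell it out), is to ask how much smaller the forecast-error variance is
when you use the cell means instead of the grand mean:

# Continues from the previous sketch (sorted_returns, conditional_forecasts,
# naive_forecast, cell_size are defined there).
sse_naive = np.sum((sorted_returns - naive_forecast) ** 2)

# Expand each cell mean back over its 200 observations so the residuals line up.
cell_means = np.repeat(conditional_forecasts, cell_size)
sse_conditional = np.sum((sorted_returns - cell_means) ** 2)

variance_reduction = 1.0 - sse_conditional / sse_naive
print(variance_reduction)   # 0 would mean no improvement over the naive forecast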

And that's my question. How does one use "variance reduction" to
determine whether two means are statistically different?

Any help or references to this technique as I've described it are most
welcome.

Best Regards,
Bill Vedder
