* Gary Denton ([EMAIL PROTECTED]) wrote:

> To repeat it in words even a half-educated engineer can understand
> the 200+ year GDP is flawed because it assumes they can accurately
> measure GDP that far back - that historical GDP has its own set of
> assumptions.

How would you decide which data is accurate? What year does the
measurement of GDP begin to meet your standards of accuracy?

There is another useful data set that economic statisticians frequently
refer to. It was compiled by Dimson, Marsh, and Staunton in _The Triumph
of the Optimists_ and it covers all of the twentieth century for 16
different countries.

For Ireland, Switzerland, Canada, UK, US, Australia, S. Africa, and
Sweden (countries that did not have their capital stock destroyed
during the World Wars), the average growth in real GDP per capita
over the twentieth century for those 8 countries was 1.77%. So, going
out-of-sample from the data I referenced before to include 7 other
countries, and limiting the time period to the twentieth century (to
address your objection about the accuracy of older data), we have a
1.64% to 1.77%. Not a big difference. Looks like the SS trustees figures
for productivity growth are reasonable.

>  Anyone who does a long-term future projection and uses only a half
> of a standard deviation for their upper and lower bounds doesn't know
> economics or statistics unless they have some agenda.

We were discussing the SS Trustees multi-factor model, of which
productivity growth is only one factor. On the reference I gave, they
list 8 other key factors in their model. They don't give the relative
importance of each factor.

Anyone familiar with variability analysis in such a model knows that
when you have a multi-factor model and each factor has a random
variation, you should not simultaneously assign large variations in the
same direction to all the parameters. Otherwise you end up with tiny
probabilities very quickly.

This is easy to see by checking some numbers. Consider a multi-factor
model with only 3 parameters (much fewer than the SS model, so this
example favors using even larger variations in the individual
parameters than the SS model would).

In a model with 3 parameters, each having a standard normal distribution
uncorrelated with the others, assume that we choose all three parameters
to have a value one-half a standard deviation above the mean. For a
single normally-distributed parameter, the probability of it being found
at greater than one-half standard deviation above the mean is 30.9%. The
probability that all 3 parameters are at least one-half standard
deviation above the mean is (30.9%)^3 = 2.9%. So, in this example, the
"extreme" case is worse than all but 2.9% of the possibilities. If we
added a lower extreme case, that would also be 2.9%, so there would be
a 100% - 2 x 2.9% = 94.1% chance that the parameters would fall within
the range of our two extreme cases.
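The arithmetic above is easy to verify with a short script. This is just an illustrative sketch of the same calculation, using the standard normal CDF built from the error function (no external libraries):

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF expressed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Probability a single standard-normal parameter exceeds +0.5 sigma
p_single = 1.0 - norm_cdf(0.5)

# Probability that all 3 independent parameters exceed +0.5 sigma
p_all_three = p_single ** 3

# Coverage of the range between the upper and lower extreme cases
coverage = 1.0 - 2.0 * p_all_three

print(f"P(one param > 0.5 sigma)   = {p_single:.1%}")    # 30.9%
print(f"P(all three > 0.5 sigma)   = {p_all_three:.1%}") # 2.9%
print(f"coverage between extremes  = {coverage:.1%}")    # 94.1%
```

The cube in the second step is where independence comes in: each additional uncorrelated parameter multiplies the tail probability by another factor of 0.309, which is why stacking half-sigma shifts across many parameters quickly becomes an extreme scenario.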

A typical rule of thumb in statistical hypothesis testing is to
use 2-sigma confidence intervals. For a normal distribution, a 2-sigma
interval covers 95.4% of the probability. So a simple 3-factor
model with half standard deviation variations in the parameters gives us
nearly 2-sigma confidence. Now the SS model is a lot more complicated
-- it has many more than 3 factors, and there will likely be some
correlations, but it is clear that we are in the ballpark of something
that is reasonable and not unjustified by mainstream statistical
analysis.
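For comparison, the 2-sigma coverage figure can be computed the same way (again a sketch using only the standard library):

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF expressed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Two-sided coverage of a +/- 2 sigma interval for a standard normal
two_sigma_coverage = norm_cdf(2.0) - norm_cdf(-2.0)
print(f"2-sigma coverage = {two_sigma_coverage:.1%}")  # 95.4%
```

So the 94.1% coverage from half-sigma shifts in a 3-parameter model is indeed close to the conventional 95.4% benchmark.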


--
Erik Reuter   http://www.erikreuter.net/
_______________________________________________
http://www.mccmedia.com/mailman/listinfo/brin-l
