What is the source of the data that are used in the preparation of the 
input-output tables?
Is it done by surveys?
________________________________________
From: [email protected] [[email protected]] On 
Behalf Of Jurriaan Bendien [[email protected]]
Sent: Wednesday, March 09, 2011 12:14 PM
To: Progressive Economics
Subject: Re: [Pen-l] Marginalism wrong or not even wrong

Thanks for the comment Paul. I always admire people who take the trouble to
research the data, rather than just mouth concepts. But I think there are
also hazards in this enterprise.

In the 1980s I proved that S/V differs between industries using data on
labour hours, wages, output and gross profits. Despite broadly similar wage
distributions, profit volumes could also differ markedly (for an
international comparison see e.g. Alice Amsden 1981, "An International
Comparison of the Rate of Surplus Value in Manufacturing Industry").
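As a rough illustration of the kind of per-industry calculation involved (the figures below are entirely hypothetical, and gross profits over the wage bill is only a crude proxy for S/V):

```python
# Crude proxy for the rate of surplus value: S/V ~= gross profits / wage bill.
# All figures are hypothetical, purely to illustrate the calculation.
industries = {
    # industry: (wage bill, gross profits), in billions
    "manufacturing": (120.0, 90.0),
    "retail":        (80.0,  36.0),
    "finance":       (60.0,  84.0),
}

for name, (wages, profits) in industries.items():
    sv = profits / wages
    print(f"{name}: S/V ~= {sv:.2f}")
```

Even with broadly similar wage bills, the implied ratios can diverge widely between industries, which is the point at issue.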

But not only that: I also performed hundreds of alternative calculations, in
order to get a sense of what the available data really entitle me to
conclude.

You get questions like, "why does the profit rate trend the same way, no
matter how exactly we calculate it?". (Cf. on this also: "The Tendency of
the Rate of Profit to Fall in the United States Part 1," by Dumenil, Glick
and Rangel, Contemporary Marxism No. 9 (1984), "Part 2", 1985, Contemporary
Marxism, No. 11, pp. 138-152).
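To illustrate what "calculating it differently" means in practice (hypothetical figures again), one can compute the profit rate under two alternative definitions, say gross profits over fixed capital versus over total capital advanced (fixed capital plus wage bill), and compare the trends:

```python
# Hypothetical time series, in billions: (gross profits, fixed capital, wage bill).
years = [1970, 1975, 1980, 1985]
data = {
    1970: (50.0, 200.0, 100.0),
    1975: (55.0, 250.0, 120.0),
    1980: (58.0, 310.0, 140.0),
    1985: (60.0, 380.0, 160.0),
}

# Two alternative profit-rate measures.
rate_fixed = {y: p / k for y, (p, k, w) in data.items()}        # P / K
rate_total = {y: p / (k + w) for y, (p, k, w) in data.items()}  # P / (K + V)

for y in years:
    print(f"{y}: P/K = {rate_fixed[y]:.3f}, P/(K+V) = {rate_total[y]:.3f}")
```

With these (made-up) figures both measures decline, whatever the denominator, which is the sort of robustness question the Dumenil, Glick and Rangel articles examine with real data.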

Output measures used in national accounts are derived from business
expenditure and revenue data, but to arrive at a consistent measure of
national gross product, a special system of grossing and netting is applied,
which means that the income, expenditure & output measures used by
statisticians are not the same as the real income, expenditure & output
values of business. Moreover, because the derived statistical measures
diverge from reality, and because of data gaps, all kinds of imputations and
interpolations are made.

To cut a long story short, national accounts aggregates are themselves
"theoretical entities", i.e. composites whose meaning depends on a large
number of statistical assumptions. As standardized measures, they may
indicate a broad trend across time, but beyond that they often aren't
really suitable for detailed disaggregated analysis. As soon as you
actually attempt one, you realize that the large aggregates cannot be
reconciled with the directly related base data; it "just doesn't add up".

In order to obtain a better data set, you have to rebuild it as much as
possible from scratch, with better concepts, i.e. from base data on
population, labour, wages, capital, incomes, expenditures and so forth.
These days modeling techniques exist which can yield far better estimates of
domestic income than the official "politicized" data provide.

Official economic data quality these days is, I suspect, increasingly poor.
That is because statistical agencies try to get "as many results as possible
for the least survey effort", to reduce costs. Effectively, that means they
use mathematical models based on the law of averages to extrapolate current
data from "indicative" and past data. Data discrepancies disappear,
fluctuations are smoothed out, missing values are interpolated, etc.
Naturally this approach lends itself to politically convenient "creative
statistics".
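The kind of gap-filling and smoothing described above can be sketched minimally as follows. This is a generic illustration (linear interpolation plus a centred moving average), not any statistical agency's actual procedure:

```python
# Linear interpolation of missing values, then moving-average smoothing.
# A generic sketch of how gaps vanish and fluctuations get damped; it is
# not claimed to resemble any agency's actual method.

def interpolate(series):
    """Fill None gaps by linear interpolation between known neighbours."""
    out = list(series)
    known = [i for i, v in enumerate(out) if v is not None]
    for a, b in zip(known, known[1:]):
        step = (out[b] - out[a]) / (b - a)
        for i in range(a + 1, b):
            out[i] = out[a] + step * (i - a)
    return out

def smooth(series, window=3):
    """Centred moving average; endpoints average whatever neighbours exist."""
    n, half = len(series), window // 2
    return [
        sum(series[max(0, i - half):min(n, i + half + 1)])
        / len(series[max(0, i - half):min(n, i + half + 1)])
        for i in range(n)
    ]

raw = [100.0, None, 104.0, 119.0, None, 108.0, 110.0]
filled = interpolate(raw)  # the gaps disappear
print(smooth(filled))      # the spike at index 3 is damped
```

After these two passes the published series looks complete and well-behaved, whatever the underlying survey returns actually looked like.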

Once you do research on the revision of national accounts data across, say,
three previous decades, you realise that the magnitude of an aggregate or
its components can change retrospectively by 2%, 5%, 10% or even 15%,
because of methodological/definitional changes, valuation changes or new
survey data. Jochen Hartwig provided some evidence for instance to show that
"the divergence in growth rates [of real GDP] between the U.S. and the EU
since 1997 can be explained almost entirely in terms of changes to deflation
methods that have been introduced in the U.S. after 1997, but not - or only
to a very limited extent - in Europe" (Jochen Hartwig, "On Misusing National
Accounts Data for Governance Purposes". Working Papers, Swiss Institute for
Business Cycle Research & Swiss Federal Institute of Technology, No. 101,
March 2005).

The moral of the story: best not to perform operations on the data if the
data quality is such that such operations cannot yield reliable results.

Jurriaan


_______________________________________________
pen-l mailing list
[email protected]
https://lists.csuchico.edu/mailman/listinfo/pen-l
