In a 2005 paper, Paul Cockshott colloquially explains the input/output technique for obtaining labour-values:
"If we divide the directly utilised labour by the dollar value of the industry's output, we get an initial figure for the amount of [direct] labour in each dollar of the output. For industry A we see that 0.32 units of labour go directly into each dollar of output. Since we already know the number of dollars worth of A's output used by every other industry, we can use this to work out the amount of indirect labour used in each industry when it spends a dollar on the output of industry A. This gives a second estimate for the labour used in each industry, which in turn gives us a better estimate for the number of units of labour per dollar output of all industries. We can repeat this process many times and as we do so, our estimates will converge on the true value." www.dcs.gla.ac.uk/~wpc/reports/rethinking.pdf As I noted however in 2008 http://ricardo.ecn.wfu.edu/~cottrell/ope/archive/0807/0135.html one problem of this iteration procedure is that it relies on the methodological assumption of a fixed ratio between labour time worked, paid labour time, and the value of gross output produced. It is assumed, that the magnitude of the indirect labour contained in each part of the output sold and transferred as an input by each sector {A} to other sectors {B,C,D...} will be accurately determined by applying the same labour-output ratio established for sector A's total gross output. Most likely this assumption is arbitrary (think of joint production, and qualitatively different outputs transferred by one sector to other, different sectors) and it introduces a margin of error, but this error is not corrected by additional iterations, nor can we establish what the magnitude of error is. 
The aim of the whole exercise is to demonstrate a strong correlation between labour-inputs and output values, but in reality the labour-inputs are derived from output and input magnitudes which are themselves estimated using numerous statistical assumptions (including the law of averages, categorical assumptions, valuation adjustments, and imputations for missing data). Paul Cockshott does not deny the methodological problem or the problem of data accuracy, but he claims that "what is interesting is that despite all these difficulties, the actual correlations between sectoral prices and values remains so strong." http://ricardo.ecn.wfu.edu/~cottrell/ope/archive/0807/0139.html "The bottom line Jurrian, is that despite all of these possible sources of error in the data we work with the results are still very good." http://ricardo.ecn.wfu.edu/~cottrell/ope/archive/0807/0171.html

Scientifically, however, this is not really satisfactory (some would say it's crap, or propaganda), because what we specifically require is clear proof that the strong correlation obtained is not simply attributable to the chosen methodology itself (an artifact of research design and data constructs), and that the strong correlation obtained is superior to any alternative positive or negative correlations which might also be obtained. For such a proof, it would be useful if all the data assumptions and methodological assumptions implied in the calculation procedure were spelled out, and their likely margins of error estimated, but to my knowledge this has never been done, since the data sets are simply accepted as given. The "science" conveniently stops at the point where a result is obtained which appears to clinch the case being made.

Jurriaan
_______________________________________________
pen-l mailing list
[email protected]
https://lists.csuchico.edu/mailman/listinfo/pen-l
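One way to see the worry about method-driven correlation is with a toy simulation (entirely hypothetical numbers, not a claim about Cockshott's actual data). If sectoral "prices" and "values" are both aggregates of the form size × per-unit magnitude, and industry sizes vary over orders of magnitude, the aggregates can correlate strongly even when the per-unit magnitudes are statistically unrelated:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # hypothetical number of sectors

# Industry sizes spanning orders of magnitude (heavy-tailed, as real sectors do).
size = rng.lognormal(mean=0.0, sigma=1.5, size=n)

# Per-unit price and per-unit labour value drawn independently:
# by construction there is NO relationship between them.
unit_price = rng.uniform(0.5, 1.5, size=n)
unit_value = rng.uniform(0.5, 1.5, size=n)

# Sectoral aggregates, as used in price-value correlation studies:
agg_price = size * unit_price
agg_value = size * unit_value

# The correlation is driven almost entirely by the common size factor.
r = np.corrcoef(agg_price, agg_value)[0, 1]
print(round(r, 2))
```

This does not show that Cockshott's correlations are spurious; it only illustrates why, as argued above, a strong correlation is not by itself proof, and why the contribution of the research design and data constructs would need to be explicitly ruled out.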
