Dear friends in causality research,

In this brief greeting I would like to first
call attention to an approaching deadline and then
discuss a couple of recent articles.

1. 
Causality in Statistics Education Award - March 1, 2017

We are informed that the deadline for submitting a
nomination for the ASA Causality in Statistics 
Education Award is March 1, 2017. 
For the award's purpose, criteria, and further information, please see 
http://www.amstat.org/education/causalityprize/ .

2. 
The next issue of the Journal of Causal Inference
(JCI) is scheduled to appear in March 2017.
See https://www.degruyter.com/view/j/jci 

My contribution to this issue includes a tutorial paper
entitled: "A Linear 'Microscope' for Interventions and
Counterfactuals". An advance copy can be viewed here:
http://ftp.cs.ucla.edu/pub/stat_ser/r459.pdf
Enjoy!

3.
Overturning Econometrics Education
(or, do we need a "causal interpretation"?)

My attention was called to a recent paper by
Josh Angrist and Jorn-Steffen Pischke titled
"Undergraduate econometrics instruction"
(an NBER working paper):
http://www.nber.org/papers/w23144
 

This paper advocates a pedagogical paradigm shift
that has methodological ramifications beyond econometrics instruction.
As I understand it, the shift stands contrary to the traditional
teachings of causal inference, as defined by Sewall Wright (1920),
Haavelmo (1943), Marschak (1950), Wold (1960), and other
founding fathers of econometrics methodology.

In a nutshell, Angrist and Pischke start with a set of favorite
statistical routines (IV, regression, differences-in-differences,
among others) and then search for "a set of control variables needed
to insure that the regression-estimated effect of the variable of
interest has a causal interpretation."
Traditional causal inference (including in economics)
teaches us that asking whether the output of a statistical routine
"has a causal interpretation" is the wrong question
to ask, for it reverses the proper direction of the analysis.
Instead, one should start with the target
causal parameter itself and ask whether it is
ESTIMABLE (and, if so, how), be it by IV, regression,
differences-in-differences, or perhaps by some new routine
that is yet to be discovered and ordained by name.
Clearly, no "causal interpretation" is needed for parameters
that are intrinsically causal, for example, "causal effect,"
"path coefficient," "direct effect," "effect of
treatment on the treated," or "probability of causation."
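
To illustrate with a toy example of my own (not taken from Angrist and
Pischke's paper): suppose the target parameter is the average causal
effect of a treatment X on an outcome Y, and suppose our model assumes
that a measured covariate Z is the only common cause of X and Y. The
backdoor criterion then certifies that the parameter is estimable and
delivers the estimand

   P(y | do(x)) = sum_z P(y | x, z) P(z),

which can be computed by whatever routine one likes, regression
included; the causal claim rests on the model, not on the routine.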

In practical terms, the difference between the two
paradigms is that estimability requires a substantive model,
while interpretability appears to be model-free.
A model exposes its assumptions explicitly, while
statistical routines give the deceptive impression that
they run assumption-free (hence their popular appeal).
The former lends itself to judgmental and statistical tests;
the latter escapes such scrutiny.
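
Continuing the toy example above (again mine, not the authors'): the
model behind the adjustment formula commits us to the explicit,
contestable assumption that Z blocks every backdoor path from X to Y,
which in counterfactual notation reads

   Y_x _||_ X | Z,   for all x.

A reader can judge or dispute that assumption on substantive grounds;
the printout of "regress Y on X and Z" carries no such commitment and
therefore invites no such scrutiny.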

In conclusion, if an educator needs to choose between
the "interpretability" and "estimability" paradigms, I would go
for the latter. If traditional econometrics education
is tailored to support the estimability track, I do not
believe a paradigm shift toward an "interpretation-seeking"
paradigm, such as the one proposed by Angrist and Pischke,
is warranted.

I would gladly open this blog for additional discussion on 
this topic. 

I tried to post a comment on the NBER (National
Bureau of Economic Research) site, but was rejected
for not being an approved "NBER family member."
If any of our readers is an "NBER family member,"
feel free to post the above.
Note: "NBER working papers are circulated for discussion and
comment purposes" (page 1).

Judea

