Hello.
In principle Achim is right: by default vcovHC.plm does things the "Arellano" 
way, clustering by group and therefore giving SEs that are robust to general 
heteroskedasticity and serial correlation. The problem with your data, though, 
is that this estimator is consistent as N grows large, so it is inappropriate 
for your small-N, large-T setting. Conversely, cluster="time" would yield a 
T-consistent estimator, robust to cross-sectional correlation: there is no 
escape, because the "big" dimension is always used to get robustness along the 
"small" one.

Therefore the way to get robustness along the "big" dimension is some sort of 
nonparametric kernel truncation, i.e. a HAC-type estimator. So:

** 1st (possible) solution **

In my opinion, what you would actually need is a panel implementation of 
Newey-West, which is not available in 'plm' yet. It might well be feasible to 
get one by applying vcovHAC from 'sandwich' to the time-demeaned data, but I'm 
not sure; in that case, vcovHAC would be applied like this (here on the famous 
Munnell data, see example(plm)):


> library(plm)
> fm <- log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp
> data(Produc)
> ## estimate the FE ("within") model
> femod <- plm(fm, data = Produc)
> ## extract the time-demeaned data
> demy <- pmodel.response(femod, model = "within")
> demX <- model.matrix(femod, model = "within")
> ## estimate an lm model on the demeaned data
> ## (equivalent to FE, but produces an 'lm' object)
> demod <- lm(demy ~ demX - 1)
> library(sandwich)
> library(lmtest)
> ## apply the HAC covariance, e.g., to t-tests
> coeftest(demod, vcov = vcovHAC)

t test of coefficients:

                Estimate Std. Error t value  Pr(>|t|)    
demXlog(pcap) -0.0261497  0.0485168 -0.5390   0.59005    
demXlog(pc)    0.2920069  0.0496912  5.8764 6.116e-09 ***
demXlog(emp)   0.7681595  0.0677258 11.3422 < 2.2e-16 ***
demXunemp     -0.0052977  0.0018648 -2.8410   0.00461 ** 
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 

> ## same goes for waldtest(), lht() etc.
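
For instance, the same HAC covariance can be handed to lht() from the 'car' 
package; a purely hypothetical single restriction, shown only for the 
mechanics (output omitted):

  library(car)
  ## test H0: the unemp coefficient is zero, using the HAC covariance from above
  lht(demod, "demXunemp = 0", vcov. = vcovHAC(demod))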

But beware: things are probably complicated by the serial correlation induced 
by the demeaning itself; see the references in the serial correlation tests 
section of the package vignette. Caveat emptor.

** 2nd solution **
Another possible strategy is to screen for serial correlation first: again, see 
?pbgtest and ?pdwtest, and be aware of all the caveats, detailed in the 
above-mentioned section of the vignette, regarding their use on FE models.
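
For instance, on the femod object estimated above (default settings of both 
tests assumed):

  ## panel Breusch-Godfrey test for serial correlation in the idiosyncratic errors
  pbgtest(femod)
  ## panel Durbin-Watson test for serial correlation
  pdwtest(femod)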

** 3rd solution **
Another thing you could do (Hendry and friends would say "should" do!) to get 
rid of the serial correlation is to estimate a dynamic FE panel, i.e. include a 
lag of the dependent variable: the Nickell bias is of order 1/T and so might 
well be negligible in your large-T case.
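
One way such a dynamic FE regression could be set up, again on the Munnell 
data (pProduc and gsp_1 are just illustrative names, and I'm assuming plm 
drops the NA rows created by lagging):

  ## within-state lag of the dependent variable
  pProduc <- pdata.frame(Produc, index = c("state", "year"))
  pProduc$gsp_1 <- lag(pProduc$gsp)   # lag.pseries: lags within each state
  ## dynamic FE: include the lagged dependent variable in the within model
  dynmod <- plm(log(gsp) ~ log(gsp_1) + log(pcap) + log(pc) + log(emp) + unemp,
                data = pProduc, model = "within")
  summary(dynmod)
  ## then re-run the serial correlation diagnostics on dynmod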

Anyway, thanks for motivating me: I thought we'd provided robust covariances 
all over the place, but there was one direction left ;^)
Giovanni

-----Original message-----
From: Achim Zeileis [mailto:achim.zeil...@uibk.ac.at] 
Sent: Wednesday, 13 October 2010 12:06
To: Max Brown
Cc: r-h...@stat.math.ethz.ch; yves.croiss...@univ-reunion.fr; Millo Giovanni
Subject: Re: [R] robust standard errors for panel data

On Wed, 13 Oct 2010, Max Brown wrote:

> Hi,
>
> I would like to estimate a panel model (small N large T, fixed 
> effects), but would need "robust" standard errors for that. In 
> particular, I am worried about potential serial correlation for a 
> given individual (not so much about correlation in the cross section).
>
> From the documentation, it looks as if the vcovHC that comes with plm
> does not seem to do autocorrelation,

My understanding is that it does, in fact. The details say

      Observations may be clustered by '"group"' ('"time"') to account
      for serial (cross-sectional) correlation.

Thus, the default appears to account for serial correlation anyway. 
But I'm not an expert in the panel versions of these robust covariances; Yves 
and Giovanni might be able to say more.

> and the NeweyWest in the sandwich
> package says that it expects a fitted model of type "lm" or "glm" (it 
> says nothing about "plm").

That information in the "sandwich" package is outdated; prompted by your email, 
I've just fixed the manual page in the development version.

In principle, everything in "sandwich" is object-oriented now, see
   vignette("sandwich-OOP", package = "sandwich")

However, the methods within "sandwich" are only sensible for cross-sectional 
data (vcovHC, sandwich, ...) or time series data (vcovHAC, NeweyWest, kernHAC, 
...). There is not yet explicit support for panel data.

hth,
Z

> How can I estimate the model and get robust standard errors?
>
> Thanks for your help.
>
> Max
>

 