On 3/22/06, Niels A. Sommer <[EMAIL PROTECTED]> wrote:
> The following lme model runs fine in general under R.2.1.1, but only for 9 out
> of my 11 response variables under R.2.2.0.
>
> Model for one of my response variables:
> lme(Yresp ~ F1fix,
>     random = list(const = pdBlocked(list(~ F2mix - 1,
>                                          ~ Ass:F1fix - 1,
>                                          ~ F3mix - 1,
>                                          ~ F1fix:F3mix - 1,
>                                          ~ F2mix:F3mix - 1),
>                                     pdClass = "pdIdent")))
>
> Yresp is my response variable, F1fix is a fixed-effect factor, whereas
> F2mix and F3mix are random-effect factors.
> const is set to rep(1, dim(Ycont)[1]).
>
> The strange thing is that if the intercept is omitted (F1fix - 1), R.2.2.0
> also runs 100 % of the time. It's the same model, just with another
> parameterization??????
The first thing to do in such a case is to request verbose output from the optimizer by adding control = list(msVerbose = TRUE) to your call to lme. One thing that changed for lme between R-2.1.1 and R-2.2.0 is that the default optimizer is now nlminb; previously it was optim. With a model of that complexity you may well find that you are not getting convergence in either R-2.1.1 or R-2.2.0. It is just that you are learning about it in R-2.2.0.
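A minimal sketch of that suggestion, reusing the formula from the original post (Yresp, F1fix, F2mix, F3mix, Ass, and const are assumed to exist in your workspace as you described them):

library(nlme)

## Refit the same model, but ask the optimizer to print its progress at
## each iteration so a failure to converge is visible rather than silent.
fit <- lme(Yresp ~ F1fix,
           random = list(const = pdBlocked(list(~ F2mix - 1,
                                                ~ Ass:F1fix - 1,
                                                ~ F3mix - 1,
                                                ~ F1fix:F3mix - 1,
                                                ~ F2mix:F3mix - 1),
                                           pdClass = "pdIdent")),
           control = list(msVerbose = TRUE))

If the iteration trace suggests the change of optimizer is the issue, recent versions of nlme should also let you switch back to the old one with control = list(opt = "optim", msVerbose = TRUE), which can help you compare the two fits directly.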
