Strange, this normally works, but in a recent run I have a data set in xts
format that has a lot of NAs in the leading and trailing positions of a few
of the variables, due to some lagging calculations. Before running an
analysis, I use

newdata <- na.omit(originaldata)

and normally a

dim(newdata)

shows fewer rows. Now, for some reason, I run this operation and see that
hundreds of rows SHOULD be removed (I can plainly see the NAs in there), and
I even test is.na(originaldata$variable) and get a clear "TRUE", but the rows
still remain after the "na.omit" operation. Yes, I'm spelling it right.
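
For reference, here is a tiny made-up example (toy data, not my actual set)
of the behavior I normally see:

library(xts)

# toy xts object with leading NAs in one column, like my lagged variables
x <- xts(cbind(a = 1:5, b = c(NA, NA, 3, 4, 5)),
         order.by = as.Date("2024-01-01") + 0:4)
dim(x)          # 5 2
dim(na.omit(x)) # 3 2 -- the two rows containing NAs are dropped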

I'm doing this with many data sets, and it works great for all of them
except this one...

Any idea if there are limits on when this function works, or more
importantly, if there is a "manual" way to do it as a workaround?
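
By "manual" I mean something along these lines (just a sketch I have not yet
tried on the problem set; originaldata is the xts object from above):

keep <- rowSums(is.na(originaldata)) == 0   # rows with no NA in any column
newdata <- originaldata[keep, ]

# or, equivalently, using complete.cases() on the underlying matrix:
newdata <- originaldata[complete.cases(coredata(originaldata)), ]

dim(newdata)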

