Hello everyone,

I'm running the following for loop to generate random variables in chunks of
l = 60 at a time, where h is on the order of millions (could be 5 to 6
million). Note that generating all of the variables at once could have an
impact on the final results.

for (j in 1:h) {
  idx <- seq(0, g1, by = l)[j] + 1:l    # indices of the j-th chunk of l values
  dat$t.o[idx] <- dat$mu[idx] + rnorm(l, 0, dat$g.var[idx])
}
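
For completeness, here is a dummy setup that makes the loop above
self-contained; the sizes and distributions are placeholders only, not my
real data (in reality h is in the millions):

## Placeholder setup only, so the loop above can be run as-is
l   <- 60                       # chunk size
h   <- 1000                     # number of chunks (millions in my real problem)
g1  <- l * (h - 1)              # largest offset, so seq(0, g1, l) has h elements
n   <- l * h
dat <- data.frame(mu    = rnorm(n),          # dummy means
                  g.var = runif(n, 0.5, 2),  # dummy sd values
                  t.o   = NA_real_)          # column to be filled in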

Is there any way I can improve on this loop while preserving my objective of
generating the variables 60 (l) at a time? What about calling C from R: would
that be a lot faster? Is this a typical situation that parallel computing is
designed for?
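
For context, the "all at once" alternative I was referring to above is
roughly the sketch below (using the same dat as in the dummy setup); since
rnorm() accepts a vector of sd values, it needs no loop at all, but I'm not
sure it respects the 60-at-a-time requirement, which is part of what I'm
asking about:

## All-at-once version: rnorm() recycles sd element-wise, so no loop is needed
dat$t.o <- dat$mu + rnorm(length(dat$mu), mean = 0, sd = dat$g.var)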

My knowledge of looping in R is very weak, but please note that my interest
is not just in a piece of code that solves the problem; if you have a link to
a reference that discusses some of these issues, please let me know. I'm
currently reading The R Inferno, but any other references will be appreciated.

Thanks

-- 
-Tony
