I have read in a file (call it myData). The actual data are about 3,000 rows by 30,000 columns, and object.size() says myData takes:
> 737910472/(1024^2)
[1] 703.7263

Unfortunately, my program ends up using 40 GB (as indicated by maxvmem on our Unix system), which causes my department's cluster to stop working. Perhaps some copying is going on that I cannot find. I have created an example below that mimics my program. Could someone help me find my error? I am also confused about how to use Rprofmem to study this problem.

Thanks for your time.

Regards,
Juliet

#begin example
response <- rnorm(50)
x1 <- sample(c(1, 2), 50, replace = TRUE)
age <- sample(seq(20, 80), 50, replace = TRUE)
id <- rep(1:25, each = 2)
var1 <- rnorm(50)
var2 <- rnorm(50)
var3 <- rnorm(50)
myData <- data.frame(response, x1, age, id, var1, var2, var3)

numVars <- ncol(myData) - 4
pvalues <- rep(-1, numVars)
names(pvalues) <- colnames(myData)[5:ncol(myData)]

library(yags)
for (Var_num in 1:numVars) {
  fit.yags <- yags(myData$response ~ myData$age + myData$x1 * myData[, (Var_num + 4)],
                   id = myData$id, family = gaussian,
                   corstr = "exchangeable", alphainit = 0.05)
  z.gee <- fit.yags@coefficients[5] / sqrt(fit.yags@robust.parmvar[5, 5])
  pval <- 2 * pnorm(abs(z.gee), lower.tail = FALSE)
  pvalues[Var_num] <- signif(pval, 3)
}
#end example
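[Editor's note] On the Rprofmem question: a minimal sketch of how it can be wrapped around the loop. Rprofmem() is in base R (package utils), but it only records anything if R was compiled with --enable-memory-profiling; the filename and the 1 MB threshold below are arbitrary choices for illustration.

```r
## Sketch: log every allocation larger than ~1 MB made while the loop runs.
## Requires R built with --enable-memory-profiling; on other builds the
## call errors or the output file stays empty.  "alloc.log" is just an
## example filename.
Rprofmem("alloc.log", threshold = 1024^2)  # start logging allocations
## ... run the yags() loop here ...
Rprofmem(NULL)                             # stop logging

## Each line of alloc.log shows an allocation size in bytes followed by
## the call stack that triggered it, which points at the copying code.
head(readLines("alloc.log"), 10)
```

On such builds, tracemem(myData) is also useful: it prints a message each time the internal code duplicates that particular object.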