[R] error: subscript out of bounds?

2010-06-14 Thread Yesha Patel
Hi all,

I want to get results from Cox proportional hazards models on SNP data. I'm
trying to get HRs, CIs, and p-values for each individual SNP, which is
represented by cov[,i]. When I run my code, I get the following error:
subscript out of bounds. I don't know why I am getting this error.

I have looked through previous R-help posts, but nothing that has been
suggested there has helped so far. What am I doing wrong?

DS3 = time variable, S3 = censor/event variable, cov = my dataset, which is
a data.frame:

CODE:

library(survival)   # for coxph() and Surv()

test <- NULL
bb   <- names(cov)
col  <- dim(cov)[2]
for (i in 19:col) {
  ## Cox model: the i-th SNP plus age and sex as covariates
  mod <- coxph(Surv(cov$DS3, cov$S3) ~ cov[, i] + AGE + sex, data = cov)
  aa  <- summary(mod)
  ## SNP name, coefficient, SE, p-value, HR with 95% CI, covariate p-values
  test <- rbind(test, c(bb[i], aa$coef[1, 1], aa$coef[1, 3], aa$coef[1, 5],
                        aa$conf.int[1, 1], aa$conf.int[1, 3],
                        aa$conf.int[1, 4], aa$coef[2, 5], aa$coef[3, 5],
                        aa$coef[4, 5]))
}
write.table(test, "result.txt", quote = FALSE)
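For reference, a defensive rewrite of the loop is sketched below (a sketch,
not a diagnosis). A common cause of "subscript out of bounds" in per-SNP
scans is that summary(mod)$coef for some SNP has fewer rows than the fixed
indices assume (for example when a monomorphic SNP makes the fit
degenerate, so rows 2-4 don't exist). The tryCatch wrapper and the column
labels are illustrative, and only the SNP row is kept here:

library(survival)

results <- NULL
for (i in 19:ncol(cov)) {
  row <- tryCatch({
    mod <- coxph(Surv(cov$DS3, cov$S3) ~ cov[, i] + AGE + sex, data = cov)
    aa  <- summary(mod)
    ## Refuse to index into an empty coefficient matrix.
    if (is.null(aa$coef) || nrow(aa$coef) < 1) stop("no coefficient rows")
    c(snp  = names(cov)[i],
      coef = aa$coef[1, 1], se = aa$coef[1, 3], p = aa$coef[1, 5],
      HR   = aa$conf.int[1, 1],
      lo95 = aa$conf.int[1, 3], hi95 = aa$conf.int[1, 4])
  }, error = function(e) NULL)   # a failed fit contributes no row
  results <- rbind(results, row)
}
write.table(results, "result.txt", quote = FALSE, row.names = FALSE)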


Thanks!
-CC


______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] ERROR: cannot allocate vector of size?

2010-05-21 Thread Yesha Patel
Hi!
Thanks for your reply! After running the command below, I am certain I am
using 64-bit R. I am running R on a Linux cluster where R is globally
available to all users. I have asked the system administrators to update
their version of R, but they are not receptive to making the change. If I
must, I will try to install an updated version in my local directory.

Before I do that, though, I want to make sure there are no other underlying
issues I should consider. Could there be something else I need to look
into?

> .Machine$sizeof.pointer
[1] 8
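A couple of other base-R probes report the same information (nothing here
beyond base R; the expected output follows from the pointer size above and
the platform described in the original post):

> 8 * .Machine$sizeof.pointer   # pointer width in bits; 64 on a 64-bit build
[1] 64
> R.version$arch                # architecture the running R was built for
[1] "x86_64"
> sessionInfo()                 # also prints the full platform string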

Thank you!




2010/5/21 Uwe Ligges lig...@statistik.tu-dortmund.de

 At first, I'd try an R version from 2010 rather than one from 2007.
 Next, I'd make sure you really have a 64-bit version of R rather than a
 32-bit one, which is what I suspect.

 Best,
 Uwe Ligges



 On 20.05.2010 20:10, Yesha Patel wrote:

 I've looked through all of the posts about this issue (and there are
 plenty!) but I am still unable to solve the error: cannot allocate
 vector of size 455 Mb.

 I am using R 2.6.2 - x86_64 on a Linux x86_64 Red Hat cluster system.
 When I log in, based on the specs I provide [qsub -I -X -l arch=x86_64],
 I am randomly assigned to an x86_64 node.

 [rest of the original post snipped; it appears in full below]





[R] ERROR: cannot allocate vector of size?

2010-05-20 Thread Yesha Patel
I've looked through all of the posts about this issue (and there are
plenty!) but I am still unable to solve the error: cannot allocate
vector of size 455 Mb.

I am using R 2.6.2 - x86_64 on a Linux x86_64 Red Hat cluster system.
When I log in, based on the specs I provide [qsub -I -X -l arch=x86_64],
I am randomly assigned to an x86_64 node.

I am using package GenABEL. My data (~ 650,000 SNPs, 3,000 people) loads
in okay, and I am able to look at it with basic commands [nids, nsnps,
names(phdata)].

The problem occurs when I try to run the extended analysis:

xs <- mlreg(GASurv(age, dm2) ~ sex, dta)

**

1) I have looked at R's memory limits (both NA, i.e. no limit is set):

mem.limits()
nsize vsize
   NA    NA

2) Garbage-collection statistics:

gc()
            used  (Mb) gc trigger   (Mb) max used  (Mb)
Ncells    961605  51.4    1710298  91.4   1021138  54.6
Vcells  64524082 492.3  248794678 1898.2 68885474 525.6

gc(reset=TRUE)
            used  (Mb) gc trigger   (Mb) max used  (Mb)
Ncells    961119  51.4    1710298  91.4    961119  51.4
Vcells  64523417 492.3  199035742 1518.6 64523417 492.3

3) Linux memory limits. Note: max memory size, virtual memory, and stack
size are all unlimited:

bash-3.2$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 31743
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32768
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) 31743
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

4) free -mt

                    total   used   free  shared  buffers  cached
Mem:                 3901     99   3802       0        1      24
-/+ buffers/cache:             73   3827
Swap:                1027     37    990
Total:               4929    136   4792


5) ps -u

USER    PID  %CPU %MEM    VSZ   RSS TTY    STAT START  TIME COMMAND
xx    22352   0.0  0.0  65136   956 pts/0  S    03:09  0:00 -tcsh
xx    22354   0.0  0.0  13496  1792 pts/0  S    03:09  0:00 /usr/sbin/pbs_m
xx    22355   0.0  0.0   6232    60 pts/0  S    03:09  0:00 pbs_demux
xx    29872   0.0  0.0  63736   920 pts/0  R+   09:45  0:00 ps -u

**

Any solutions? Thank you!
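If upgrading R takes a while to arrange, one stop-gap sketch is to free
everything you don't need and run the scan in pieces, so no single
allocation has to be 455 Mb. The snpsubset argument is assumed here to
behave as in other GenABEL scan functions (please check ?mlreg in your
version), and the 50,000-SNP chunk size is arbitrary:

library(GenABEL)

## Keep only the gwaa.data object, then force a collection so the scan
## starts with as much free memory as possible.
rm(list = setdiff(ls(), "dta"))
gc()

## Hypothetical chunked scan: 50,000 SNPs at a time instead of all ~650,000.
idx    <- seq_len(nsnps(dta))
chunks <- split(idx, ceiling(idx / 50000))
scans  <- lapply(chunks, function(s)
    mlreg(GASurv(age, dm2) ~ sex, dta, snpsubset = s))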



[R] GenABEL - problems with load.gwaa.data

2010-02-23 Thread Yesha Patel
Hi all! I am using GenABEL in R for GWAS analysis. I am having a couple of
issues.

First, I am having a problem reading files (.map & .ped, size 900 Mb, using
32-bit Windows) into R with the convert.snp.ped statement. I think this is
likely due to the large size of the files; my version of R is not able to
handle them, since I can read in smaller files.

Second, and the more pressing issue, is with the load.gwaa.data statement.
It keeps giving me the error: more columns than column names. I have tried
reading in the .dat file (and, alternatively, the .csv file) without the
header, but that does not work. I have checked my data set and there aren't
any discernible columns missing. Here's my code:


mix <- load.gwaa.data(phe = "Z:/.../CCF Pheno.csv", gen = "pedmap-0.raw",
                      force = TRUE)
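The "more columns than column names" message typically comes from the
read.table call underneath load.gwaa.data: some data row in the phenotype
file has more fields than the header declares. A base-R sketch to locate
such rows, assuming the phenotype file is comma-separated (adjust sep if it
is the tab-delimited .dat file; the path is as given above):

pheno  <- "Z:/.../CCF Pheno.csv"
header <- strsplit(readLines(pheno, n = 1), ",")[[1]]
nfield <- count.fields(pheno, sep = ",")

length(header)                   # number of column names declared
table(nfield)                    # field counts per data row
which(nfield > length(header))   # rows with extra fields (stray commas?)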

Any help will be appreciated. Thanks!

Euphie
