Re: [R] Linker problem in installing 64-bit R

2006-04-21 Thread Prof Brian Ripley
Note:

 configure:4041: checking whether the C compiler works
 configure:4059: error: cannot run C compiled programs.

so your compiler installation is broken, seriously enough that configure 
can make no progress.

Also, gcc 3.4.2 is old and has a known bug that prevents some R packages 
which use Fortran from working correctly, so please avoid it.

Please seek local help with your OS.


On Thu, 20 Apr 2006, Min Shao wrote:

 Hi,

 I am trying to compile R-2.2.1 on Solaris 2.9 with a 64-bit build. Following
 the instructions in R Installation and Administration, I changed the

Note, those are example settings that have been tested, but not 
`instructions'.

 following settings in config.site:
 CC=gcc -m64
 F77=g77 -64
 CXX=g++ -m64
 LDFLAGS=-L/usr/local/lib/sparcv9 -L/usr/local/lib

 But I got the following error messages:
 configure:3987: gcc -m64 -I/usr/local/include -L/usr/local/lib/sparcv9
 -L/usr/local/lib conftest.c >&5
 /usr/ccs/bin/ld: skipping incompatible /work/net-local-b/sparc-sun-solaris2.9/bin/../lib/gcc/sparc-sun-solaris2.9/3.4.2/sparcv9/libgcc.a when searching for -lgcc
 /usr/ccs/bin/ld: skipping incompatible /work/net-local-b/sparc-sun-solaris2.9/bin/../lib/gcc/sparc-sun-solaris2.9/3.4.2/sparcv9/libgcc_eh.a when searching for -lgcc_eh
 /usr/ccs/bin/ld: skipping incompatible /lib/sparcv9/libc.so when searching for -lc
 /usr/ccs/bin/ld: skipping incompatible /usr/lib/sparcv9/libc.so when searching for -lc
 /usr/ccs/bin/ld: skipping incompatible /usr/lib/sparcv9/libc.so when searching for -lc
 /usr/ccs/bin/ld: skipping incompatible /work/net-local-b/sparc-sun-solaris2.9/bin/../lib/gcc/sparc-sun-solaris2.9/3.4.2/sparcv9/libgcc.a when searching for -lgcc
 /usr/ccs/bin/ld: skipping incompatible /work/net-local-b/sparc-sun-solaris2.9/bin/../lib/gcc/sparc-sun-solaris2.9/3.4.2/sparcv9/libgcc_eh.a when searching for -lgcc_eh
 /usr/ccs/bin/ld: skipping incompatible /lib/sparcv9/libc.so when searching for -lc
 /usr/ccs/bin/ld: skipping incompatible /usr/lib/sparcv9/libc.so when searching for -lc
 /usr/ccs/bin/ld: skipping incompatible /usr/lib/sparcv9/libc.so when searching for -lc
 /usr/ccs/bin/ld: warning: sparc:v9 architecture of input file `/work/net-local-b/sparc-sun-solaris2.9/bin/../lib/gcc/sparc-sun-solaris2.9/3.4.2/sparcv9/crt1.o' is incompatible with sparc output
 /usr/ccs/bin/ld: warning: sparc:v9 architecture of input file `/work/net-local-b/sparc-sun-solaris2.9/bin/../lib/gcc/sparc-sun-solaris2.9/3.4.2/sparcv9/crti.o' is incompatible with sparc output
 /usr/ccs/bin/ld: warning: sparc:v9 architecture of input file `/usr/ccs/lib/sparcv9/values-Xa.o' is incompatible with sparc output
 /usr/ccs/bin/ld: warning: sparc:v9 architecture of input file `/work/net-local-b/sparc-sun-solaris2.9/bin/../lib/gcc/sparc-sun-solaris2.9/3.4.2/sparcv9/crtbegin.o' is incompatible with sparc output
 /usr/ccs/bin/ld: warning: sparc:v9 architecture of input file `/var/tmp//cckWp3Sb.o' is incompatible with sparc output
 /usr/ccs/bin/ld: warning: sparc:v9 architecture of input file `/work/net-local-b/sparc-sun-solaris2.9/bin/../lib/gcc/sparc-sun-solaris2.9/3.4.2/sparcv9/crtend.o' is incompatible with sparc output
 /usr/ccs/bin/ld: warning: sparc:v9 architecture of input file `/work/net-local-b/sparc-sun-solaris2.9/bin/../lib/gcc/sparc-sun-solaris2.9/3.4.2/sparcv9/crtn.o' is incompatible with sparc output
 configure:3990: $? = 0
 configure:4036: result: a.out
 configure:4041: checking whether the C compiler works
 configure:4047: ./a.out
 configure: line 1: 25817 Bus Error   (core dumped) ./$ac_file
 configure:4050: $? = 138
 configure:4059: error: cannot run C compiled programs.

 I would appreciate any help with resolving the problem.

 Thanks,

 Min

   [[alternative HTML version deleted]]

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] second try; writing user-defined GLM link function

2006-04-21 Thread Prof Brian Ripley
 glm.24.pred <- predict(glm.24, newdata=nestday, type="response", SE.fit=T)

What is SE.fit?  The help says se.fit.  That _may_ be a problem.

However, I think the real problem is that the link function argument 
includes a reference to vc.apfa$days that is appropriate for fitting, not 
prediction. One way out might be (untested)

attach(vc.apfa)
glm.24 <- glm(formula = Success ~ NestHtZ + MeanAge + I(MeanAge^2) + 
I(MeanAge^3), family = logexposure(ExposureDays = days), data = vc.apfa)
detach()
attach(nestday)
glm.24.pred <- predict(glm.24, newdata=nestday, type="response", se.fit=TRUE)
detach()

so that 'days' refers to the appropriate dataset both when fitting and 
predicting.

(This is bending glm() to do something it was not designed to do, so some 
hoop-jumping is needed.)


On Thu, 20 Apr 2006, Jessi Brown wrote:

 An update for all:

 Using the combined contributions from Mark and Dr. Ripley, I've been
 (apparently) successfully  formulating both GLM's and GLMM's (using
 the MASS function glmmPQL) analyzing my nest success data. The beta
 parameter estimates look reasonable and the top models resemble those
 from earlier analyses using a different nest survival analysis
 approach.

 However, I've now run into problems when trying to predict the daily
 survival rates from fitted models. For example, for a model
 considering nest height (NestHtZ) and nest age effects (MeanAge and
 related terms; there is an overall cubic time trend in this model), I
 tried to  predict the daily survival rate for each day out of a 67 day
 nest cycle (so MeanAge is a vector of 1 to 67) with mean nest height
 (also a vector 67 rows in length; both comprise the matrix nestday).
 Here's what happens:

 summary(glm.24)

 Call:
 glm(formula = Success ~ NestHtZ + MeanAge + I(MeanAge^2) + I(MeanAge^3),
family = logexposure(ExposureDays = vc.apfa$days), data = vc.apfa)

 Deviance Residuals:
Min   1Q   Median   3Q  Max
 -3.3264  -1.2341   0.6712   0.8905   1.5569

 Coefficients:
    Estimate Std. Error z value Pr(>|z|)
 (Intercept)   6.5742015  1.7767487   3.700 0.000215 ***
 NestHtZ   0.6205444  0.2484583   2.498 0.012504 *
 MeanAge  -0.6018978  0.2983656  -2.017 0.043662 *
 I(MeanAge^2)  0.0380521  0.0152053   2.503 0.012330 *
 I(MeanAge^3) -0.0006349  0.0002358  -2.693 0.007091 **
 ---
 Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

 (Dispersion parameter for binomial family taken to be 1)

Null deviance: 174.86  on 136  degrees of freedom
 Residual deviance: 233.82  on 132  degrees of freedom
 AIC: 243.82

 Number of Fisher Scoring iterations: 13

 glm.24.pred <- predict(glm.24, newdata=nestday, type="response", SE.fit=T)
 Warning message:
 longer object length
is not a multiple of shorter object length in: plogis(eta)^days


 Can anyone tell me what I'm doing wrong?

 cheers, Jessi Brown

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] PCA biplot question

2006-04-21 Thread bady
Hi, hi all,

 Hi everyone,
 I'd like to project two pcas onto one device window.
 I plot my first PCA:
 biplot(prcomp(t(cerebdevmat)), var.axes=FALSE, cex=c(1,.1),
 pc.biplot=TRUE)
 Now I'd like to project the features of another PCA onto this graph.
 Any suggestions?

You can use co-inertia analysis (coinertia in package ade4) or perhaps
Procrustes analysis (packages vegan, ade4, shapes); these analyses are
suited to the simultaneous analysis of two tables.
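For anyone wanting to try the co-inertia route, here is a minimal sketch with
made-up tables (tab1 and tab2 are hypothetical; not from the original post):

library(ade4)
## two tables describing the same 30 rows (e.g. sites) by different variables
set.seed(1)
tab1 <- as.data.frame(matrix(rnorm(30 * 5), nrow = 30))
tab2 <- as.data.frame(matrix(rnorm(30 * 4), nrow = 30))
## separate PCAs; scannf = FALSE suppresses the interactive scree-plot prompt
pca1 <- dudi.pca(tab1, scannf = FALSE, nf = 2)
pca2 <- dudi.pca(tab2, scannf = FALSE, nf = 2)
## co-inertia analysis couples the two ordinations in a single display
coia <- coinertia(pca1, pca2, scannf = FALSE, nf = 2)
plot(coia)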

cheers,


Pierre



--
This message was sent from the IMP webmail (Internet Messaging Program)

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] rcorrp.cens

2006-04-21 Thread Stefano Mazzuco
Hi R-users,

I'm having some problems in using the Hmisc package.

I'm estimating a Cox PH model and want to test whether the drop in the
concordance index due to omitting one covariate is significant. I think (but
I'm not sure) there are two ways to do that:

1) fit the two Cox models (the full model and the model without the covariate
of interest), estimate the concordance index (i.e. the area under the ROC
curve) with rcorr.cens for both models, then compute the difference;

2) fit the two Cox models and estimate the difference between the two
c-indices directly using rcorrp.cens. But it seems that rcorrp.cens gives
me the drop in the Dxy index instead.

Do you have any hint?

Thanks
Stefano

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Mutually Orthogonal Latin Squares

2006-04-21 Thread Robin Hankin
Hi Jinsong

Finding even a pair of mutually orthogonal Latin squares of arbitrary
order is a difficult problem.  For example, Euler conjectured that no
orthogonal Latin squares exist for order 4n+2; this was only disproved
in 1960 (in 1900 it was proved that there are none of order 6) ...
evidently a complicated research topic!

Now, this doesn't quite answer your question, but functions
panmagic.4(), panmagic.8() and magic.8() of the magic package
use Latin squares of sizes 4 and 8 for their construction.

HTH

Robin




On 20 Apr 2006, at 16:36, Jinsong Zhao wrote:

 Hi all,

 The package crossdes can construct complete sets of mutually orthogonal
 Latin squares.  The construction works for prime powers only.

 I would like to know whether there is a way to construct mutually
 orthogonal Latin squares for 10 or other orders that are not prime
 powers.

 Thanks for any suggestions.

 Best wishes,
 Jinsong Zhao

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Histogram to compare two datasets

2006-04-21 Thread Johan van Niekerk
Dear All,

I am trying to create a histogram-like plot for comparing two datasets: 
for each interval, I want it to draw two bars, one per dataset, showing 
the number of values from that dataset falling in the interval.

I have looked at the help for the hist() function and also for the 
histogram() function that is part of the lattice package, and neither 
seem to support multiple datasets.

Could someone please point me in the right direction?

Kind regards,
Johan van Niekerk

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Histogram to compare two datasets

2006-04-21 Thread Romain Francois
On 21.04.2006 10:48, Johan van Niekerk wrote:
 Dear All,

 I am trying to create a histogram-like plot for comparing two datasets - 
 For each interval, I want it to draw 2 bars - one for representing the 
 number of values in each dataset for that interval.

 I have looked at the help for the hist() function and also for the 
 histogram() function that is part of the lattice package, and neither 
 seem to support multiple datasets.

 Could someone please point me in the right direction?

 Kind regards,
 Johan van Niekerk
   
Look there :
http://addictedtor.free.fr/graphiques/search.php?q=histogram
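For a base-graphics approach, here is a rough sketch with made-up data (not
one of the gallery examples): bin both datasets on common breaks and hand
the counts to barplot(..., beside = TRUE) so each interval gets two bars.

set.seed(1)
x1 <- rnorm(200)
x2 <- rnorm(150, mean = 0.5)
brks <- hist(c(x1, x2), plot = FALSE)$breaks     # shared break points
h1 <- hist(x1, breaks = brks, plot = FALSE)$counts
h2 <- hist(x2, breaks = brks, plot = FALSE)$counts
barplot(rbind(h1, h2), beside = TRUE,
        names.arg = format(brks[-length(brks)], digits = 2),
        legend.text = c("dataset 1", "dataset 2"))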


-- 
visit the R Graph Gallery : http://addictedtor.free.fr/graphiques
mixmod 1.7 is released : http://www-math.univ-fcomte.fr/mixmod/index.php
+---+
| Romain FRANCOIS - http://francoisromain.free.fr   |
| Doctorant INRIA Futurs / EDF  |
+---+

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] No Discounts for Springer books (e.g. S Programming)?

2006-04-21 Thread Hans-Peter
Hi,

I want to buy the books "Modern Applied Statistics with S" and "S
Programming". Both are Springer books and, according to the discount
info on the "Books related to R" page
(http://www.r-project.org/doc/bib/R-books.html), subject to a 20%
discount.

Unfortunately the promotion code doesn't work. I already contacted
Springer but didn't get an answer.

Does someone know what happened with these discounts? I didn't find
anything related in the archives.

Thanks and best regards,
Hans-Peter

--
PS: if someone has a used or redundant copy for sale, don't hesitate
to contact me...  ;-)

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] online tutorials

2006-04-21 Thread Maxon, Matthew
I work for an investment group with a very extensive training program and
we are having our new hires take a statistics course at the University of
Chicago where they have to complete some assignments with R.  I was
wondering if there are any online tutorials we could use to get our
participants comfortable with R before the class itself?  I appreciate any
help at all.
 
Thanks,
Matt Maxon
Learning & Development
 
Matt Maxon
Citadel Investment Group, L.L.C.
312.395.2517 - office

 

 

-
-

CONFIDENTIALITY AND SECURITY NOTICE 

The contents of this message and any attachments may be privileged,
confidential and proprietary and also may be covered by the Electronic
Communications Privacy Act. If you are not an intended recipient, please
inform the sender of the transmission error and delete this message
immediately without reading, disseminating, distributing or copying the
contents. Citadel makes no assurances that this e-mail and any
attachments are free of viruses and other harmful code. 



[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] creating empty cells with table()

2006-04-21 Thread Jim Lemon
Owen, Jason wrote:
 
Hello,

Suppose I simulate 20 observations from the Poisson distribution
with lambda = 4.  I can summarize these values using table() and
then feed that to barplot() for a graph.

Problem: if there are empty categories (e.g. 0) or empty categories
within the range of the data (e.g. observations for 6, 7, 9), barplot()
does not include the empty cells in the x-axis of the plot.  Is there
any way to specify table() to have specific categories (in the above
example, probably 0:12) so that zeroes are included?

The integer.frequency function in the plotrix package (of which there 
will be a new version shortly) handles this, as I couldn't tame either 
tabulate or table to do what I wanted for the freq function. Note that 
it is _not_ a highly optimized function.
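For completeness, a base-R sketch (not part of the plotrix approach): give
table() a factor with explicit levels, so that empty categories are kept.

set.seed(1)
x <- rpois(20, lambda = 4)
tab <- table(factor(x, levels = 0:12))   # unobserved values get count 0
barplot(tab)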

Jim

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] online tutorials

2006-04-21 Thread Peter Dalgaard
Maxon, Matthew [EMAIL PROTECTED] writes:

 I work for a Investment group with a very extensive training program and
 we are having our new hires take a statistics course at University of
 Chicago where they have to complete some assignments with R.  I was
 wondering if there are any online tutorials that exist where we could
 get our participants comfortable with R before the class itself?  I
 appreciate any help at all.

There are three major sources:

(1) Manuals that ship with R, notably An Introduction to R

(2) Online materials contributed to CRAN
  http://cran.r-project.org/other-docs.html

(3) Books...


Re. (2), among the shorter documents, The R Guide by Jason Owen
looks quite attractive for people at a very basic level of statistical
knowledge (matrix calculus is assumed in some sections, though).

However, these documents cover a wide range of target audiences, so
you should look at all of them and calibrate against your group.

-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Considering port of SAS application to R

2006-04-21 Thread Werner Wernersen
Hi there!

I am considering porting a SAS application to R and I would like to hear your 
opinion on whether this is possible and worthwhile. SAS is mainly used to do 
data management and then to do some aggregations and simple computations on the 
data and to output a modified data set. The main problem I see is the size of 
the data file. As I have no access to SAS yet I cannot give real details, but 
the SAS data file is about 7 gigabytes large. (It's only the basic SAS system 
without any additional modules.)

What do you think, would a port to R be possible with reasonable effort? Is R 
able to handle that size of data? Or is R prepared to work together with some 
database system?

Thanks for your thoughts!

Best regards,
  Werner


-

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Considering port of SAS application to R

2006-04-21 Thread Philippe Grosjean
Please, read the R Data Import/Export manual provided with any version 
of R, and come back with more specific questions.

In general, R cannot deal with datasets as large as those handled by 
SAS. But this is true only when you use standard R functions, like 
read.table(), which are not written to save memory when loading very 
large datasets (other aspects are optimized).

I would advise putting your data in a database and then accessing it 
piece-by-piece using SQL queries. There are very few cases where you 
actually need the whole dataset in memory at once. A simple database 
system, if you just need to access those data (no complex database 
operations required), is SQLite. There is an R package to connect to 
such a database with no extra software needed, which is very convenient.
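A minimal sketch of that workflow (the file name and the 'sales' table are
made up here; assumes the RSQLite package is installed):

library(RSQLite)                 # DBI interface to SQLite, no server needed
con <- dbConnect(dbDriver("SQLite"), dbname = "bigdata.sqlite")
## pull only the aggregated piece you need, not the whole table
agg <- dbGetQuery(con,
  "SELECT region, COUNT(*) AS n, SUM(amount) AS total
   FROM sales GROUP BY region")
dbDisconnect(con)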

Best,

Philippe Grosjean

..°}))
  ) ) ) ) )
( ( ( ( (Prof. Philippe Grosjean
  ) ) ) ) )
( ( ( ( (Numerical Ecology of Aquatic Systems
  ) ) ) ) )   Mons-Hainaut University, Belgium
( ( ( ( (
..

Werner Wernersen wrote:
 Hi there!
 
 I am considering to port a SAS application to R and I would like to hear your 
 opinion if you think this is possible and worthwhile. SAS is mainly used to 
 do data management and then to do some aggregations and simple computations 
 on the data and to output a modified data set. The main problem I see is the 
 size of the data file. As I have no access to SAS yet I cannot give real 
 details but the SAS data file is about 7 gigabytes large. (It's only the 
 basic SAS system without any additional modules)
 
 What do you think, would a port to R be possible with reasonable effort? Is R 
 able to handle that size of data? Or is R prepared to work together with some 
 database system?
 
 Thanks for your thoughts!
 
 Best regards,
   Werner
 
   
 -
 
   [[alternative HTML version deleted]]
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Considering port of SAS application to R

2006-04-21 Thread Gabor Grothendieck
R supports a number of databases and if you only need to work with a small
amount of data at once it should be readily do-able; however, R keeps objects
in memory and if you need large amounts at once then you could run into
problems.  Note that S-Plus keeps objects on disk and has other
features aimed at large data and might be an alternative if R cannot handle
the size and you want something based on the S language.

Since SAS was developed many years ago when optimizing computer
resources was more important than it is now it might be difficult to find
an alternative that matches it for performance with large data sets.

You probably want to quickly develop the core of your app in such a way
that it has the main performance characteristics of the full app so you
can get an idea of whether it will work prior to spending the time on the
full code.

Also note that R typically processes matrices faster than data frames
and, in general, how you write your application may affect its performance.
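A quick illustration of that last point (a sketch; timings are machine-dependent):

m <- matrix(rnorm(1e6), ncol = 10)
d <- as.data.frame(m)
## repeated row extraction is noticeably cheaper from the matrix
system.time(for (i in 1:1000) m[i, ])
system.time(for (i in 1:1000) d[i, ])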

On 4/21/06, Werner Wernersen [EMAIL PROTECTED] wrote:
 Hi there!

 I am considering to port a SAS application to R and I would like to hear your 
 opinion if you think this is possible and worthwhile. SAS is mainly used to 
 do data management and then to do some aggregations and simple computations 
 on the data and to output a modified data set. The main problem I see is the 
 size of the data file. As I have no access to SAS yet I cannot give real 
 details but the SAS data file is about 7 gigabytes large. (It's only the 
 basic SAS system without any additional modules)

 What do you think, would a port to R be possible with reasonable effort? Is R 
 able to handle that size of data? Or is R prepared to work together with some 
 database system?

 Thanks for your thoughts!

 Best regards,
  Werner


 -

[[alternative HTML version deleted]]

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] online tutorials

2006-04-21 Thread louis homer
I got started with the book by Venables and Ripley 'Modern Applied Statistics 
with S-Plus'. If you will be working with the Windows version (unbelievably 
easy to install) you will also find helpful material in the help file.

On Thursday 20 April 2006 09:24, Maxon, Matthew wrote:
 I work for a Investment group with a very extensive training program and
 we are having our new hires take a statistics course at University of
 Chicago where they have to complete some assignments with R.  I was
 wondering if there are any online tutorials that exist where we could
 get our participants comfortable with R before the class itself?  I
 appreciate any help at all.

 Thanks,
 Matt Maxon
 Learning & Development

 Matt Maxon
 Citadel Investment Group, L.L.C.
 312.395.2517 - office




 
 -
 -

 CONFIDENTIALITY AND SECURITY NOTICE

 The contents of this message and any attachments may be privileged,
 confidential and proprietary and also may be covered by the Electronic
 Communications Privacy Act. If you are not an intended recipient, please
 inform the sender of the transmission error and delete this message
 immediately without reading, disseminating, distributing or copying the
 contents. Citadel makes no assurances that this e-mail and any
 attachments are free of viruses and other harmful code.



   [[alternative HTML version deleted]]

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide!
 http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] rcorrp.cens

2006-04-21 Thread Frank E Harrell Jr
Stefano Mazzuco wrote:
 Hi R-users,
 
 I'm having some problems in using the Hmisc package.
 
 I'm estimating a cox ph model and want to test whether the drop in
 concordance index due to omitting one covariate is significant. I think (but
 I'm not sure) here are two ways to do that:
 
 1) predict two cox model (the full model and model without the covariate of
 interest) and estimate the concordance index (i.e. area under the ROC curve)
 with rcorr.cens for both models, then compute the difference
 
 2) predict the two cox models and estimate directly the difference between
 the two c-indices using rcorrp.cens. But it seems that the rcorrp.cens gives
 me the drop of Dxy index.
 
 Do you have any hint?
 
 Thanks
 Stefano

First of all, any method based on comparing rank concordances loses 
powers and is discouraged.  Likelihood ratio tests (e.g., by embedding a 
smaller model in a bigger one) are much more powerful.  If you must base 
comparisons on rank concordance (e.g., ROC area=C, Dxy) then rcorrp.cens 
can work if the sample size is large enough so that uncertainty about 
regression coefficient estimates may be ignored.  rcorrp.cens doesn't 
give the drop in C; it gives the probability that one model is more 
concordant with the outcome than another, among pairs of paired 
predictions.

The bootcov function in the Design package has a new version that will 
output bootstrap replicates of C for a model, and its help file tells 
you how to use that to compare C for two models.  This should only be 
done to show how low a power such a procedure has.  rcporrp is likely to 
be more powerful than that, but likelihood ratio is what you want.  You 
will find many cases where one model increases C by only 0.02 but it has 
many more useful (more extreme) predictions.
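A minimal sketch of that likelihood-ratio comparison (simulated stand-in
data; x2 plays the role of the covariate whose contribution is being tested):

library(survival)
set.seed(1)
d <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
d$time   <- rexp(200, rate = exp(0.5 * d$x1 + 0.3 * d$x2))
d$status <- rbinom(200, 1, 0.8)                  # some censoring
small <- coxph(Surv(time, status) ~ x1,      data = d)
full  <- coxph(Surv(time, status) ~ x1 + x2, data = d)
anova(small, full)   # likelihood-ratio test for the added covariate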

-- 
Frank E Harrell Jr   Professor and Chair   School of Medicine
  Department of Biostatistics   Vanderbilt University

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] online tutorials

2006-04-21 Thread falissard
If you are not afraid of exoticism, this web site
http://www.kb.u-psud.fr/acces-etudiant/cours/biostat/Biostat.htm
is a French-language multimedia online tutorial for R...
Best regards,
Bruno


Bruno Falissard
INSERM U669, PSIGIAM
Paris Sud Innovation Group in Adolescent Mental Health
Maison de Solenn
97 Boulevard de Port Royal
75679 Paris cedex 14, France
tel : (+33) 6 81 82 70 76
fax : (+33) 1 45 59 34 18
web site : http://perso.wanadoo.fr/bruno.falissard/

 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Maxon, Matthew
Sent: Thursday, 20 April 2006 18:25
To: R-help@stat.math.ethz.ch
Subject: [R] online tutorials

I work for a Investment group with a very extensive training program and
we are having our new hires take a statistics course at University of
Chicago where they have to complete some assignments with R.  I was
wondering if there are any online tutorials that exist where we could
get our participants comfortable with R before the class itself?  I
appreciate any help at all.
 
Thanks,
Matt Maxon
Learning & Development
 
Matt Maxon
Citadel Investment Group, L.L.C.
312.395.2517 - office

 

 

-
-

CONFIDENTIALITY AND SECURITY NOTICE 

The contents of this message and any attachments may be privileged,
confidential and proprietary and also may be covered by the Electronic
Communications Privacy Act. If you are not an intended recipient, please
inform the sender of the transmission error and delete this message
immediately without reading, disseminating, distributing or copying the
contents. Citadel makes no assurances that this e-mail and any
attachments are free of viruses and other harmful code. 



[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] No Discounts for Springer books (e.g. S Programming)?

2006-04-21 Thread Hans-Peter
Dear Augustin,

 What you have to do is to register in that web page or in Springer Alert,
 and when you order the books, the discount is applied to your order
 automatically.

Thanks a lot, that worked. One has to register and then the discount
token is applied. For the record: Amazon was quite a lot cheaper, so in
the end it was only a "would have worked"...

Thanks again and best regards,
Hans-Peter

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Considering port of SAS application to R

2006-04-21 Thread bogdan romocea
Forget about R for now and port the application to MySQL/PostgreSQL
etc, it is possible and worthwhile. In case you happen to use (and
really need) some SAS DATA STEP looping features you might be forced
to look into SQL cursors, otherwise the port should be (very)
straightforward.


 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Werner
 Wernersen
 Sent: Friday, April 21, 2006 7:09 AM
 To: r-help@stat.math.ethz.ch
 Subject: [R] Considering port of SAS application to R

 Hi there!

 I am considering to port a SAS application to R and I would
 like to hear your opinion if you think this is possible and
 worthwhile. SAS is mainly used to do data management and then
 to do some aggregations and simple computations on the data
 and to output a modified data set. The main problem I see is
 the size of the data file. As I have no access to SAS yet I
 cannot give real details but the SAS data file is about 7
 gigabytes large. (It's only the basic SAS system without any
 additional modules)

 What do you think, would a port to R be possible with
 reasonable effort? Is R able to handle that size of data? Or
 is R prepared to work together with some database system?

 Thanks for your thoughts!

 Best regards,
   Werner

   
 -

   [[alternative HTML version deleted]]

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide!
 http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Considering port of SAS application to R

2006-04-21 Thread Steve Miller
Good suggestion. Multiple gigabytes is stretching it with R. Use PostgreSQL,
Python, and Python DBI database connectivity to replace your SAS data step,
then use the RODBC package to import data into R convenience stores as
appropriate.

Steve Miller

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of bogdan romocea
Sent: Friday, April 21, 2006 7:59 AM
To: [EMAIL PROTECTED]
Cc: r-help
Subject: Re: [R] Considering port of SAS application to R

Forget about R for now and port the application to MySQL/PostgreSQL
etc, it is possible and worthwhile. In case you happen to use (and
really need) some SAS DATA STEP looping features you might be forced
to look into SQL cursors, otherwise the port should be (very)
straightforward.


 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Werner
 Wernersen
 Sent: Friday, April 21, 2006 7:09 AM
 To: r-help@stat.math.ethz.ch
 Subject: [R] Considering port of SAS application to R

 Hi there!

 I am considering to port a SAS application to R and I would
 like to hear your opinion if you think this is possible and
 worthwhile. SAS is mainly used to do data management and then
 to do some aggregations and simple computations on the data
 and to output a modified data set. The main problem I see is
 the size of the data file. As I have no access to SAS yet I
 cannot give real details but the SAS data file is about 7
 gigabytes large. (It's only the basic SAS system without any
 additional modules)

 What do you think, would a port to R be possible with
 reasonable effort? Is R able to handle that size of data? Or
 is R prepared to work together with some database system?

 Thanks for your thoughts!

 Best regards,
   Werner

   
 -

   [[alternative HTML version deleted]]

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide!
 http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] message posting

2006-04-21 Thread Steven Lacey
Hi, 
 
I sent two emails to the R help list and got no reply. While this may be my
question, it is unlike the users of this list not to reply (thankfully!). I
checked for my message using the archive for April 2006 and found the
following where the text of message should have been:
 
An embedded and charset-unspecified text was scrubbed...
 
Does this message mean that my message wasn't posted or couldn't be viewed?
If so, what did I do wrong? The only thing that comes to mind is that I set
the font in Outlook to Courier New so that the columns in a table would
line up.
 
Thanks for any advice,
Steve

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Need R code

2006-04-21 Thread bogdan romocea
Here's an example.

lst <- list()
for (i in 1:5) {
   lst[[i]] <- data.frame(v=sample(1:20,10), sample(1:5,10,replace=TRUE))
   colnames(lst[[i]])[2] <- paste("x", i, sep="")
   }
dfr <- lst[[1]]
for (i in 2:length(lst)) dfr <- merge(dfr, lst[[i]], all=TRUE)
dfr <- dfr[order(dfr[,1]),]
print(dfr)


 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of stat stat
 Sent: Thursday, April 20, 2006 1:15 AM
 To: r-help@stat.math.ethz.ch
 Subject: [R] Need R code

 Dear r-users,

 Suppose I have three datasets:

 Dataset-1:
 Date          x     y
 Jan-1,2005    120   230
 Jan-2,2005    123   -125
 Jan-3,2005    -110  300
 Jan-4,2005    114   -21
 Jan-7,2005    112   99
 Mar-5,2005    200   311

 Dataset-2:
 Date          x     y
 Jan-2,2005    123   -125
 Jan-3,2005    -110  300
 Jan-4,2005    114   -21
 Jan-5,2005    112   99
 Jan-6,2005    -23   12
 Mar-5,2005    200   311

 Dataset-3:
 Date          x     y
 Jan-3,2005    -110  300
 Jan-4,2005    114   -21
 Jan-5,2005    112   99
 Mar-5,2005    200   311
 Apr-23,2005   123   200

 Now I want to get the common dates along with x and y from the above three
 datasets, keeping the same order in the date variable as it is.

 For example, I want to get:

 Date          x     y      x     y      x     y
               (from dataset-1)   (from dataset-2)   (from dataset-3)
 Jan-3,2005    -110  300    -110  300    -110  300
 Jan-4,2005    114   -21    114   -21    114   -21
 Mar-5,2005    200   311    200   311    200   311

 Can anyone give me any R code to implement this for any number of
 datasets?
   Thanks and regards



 thanks in advance
   
 -


   [[alternative HTML version deleted]]

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide!
 http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] message posting

2006-04-21 Thread Gavin Simpson
On Fri, 2006-04-21 at 09:22 -0400, Steven Lacey wrote:
 Hi, 
  
 I sent two emails to the R help list and got no reply. While this may be my
 question, it is unlike the users of this list not to reply (thankfully!). I
 checked for my message using the archive for April 2006 and found the
 following where the text of message should have been:
  
 An embedded and charset-unspecified text was scrubbed...
  
 Does this message mean that my message wasn't posted or couldn't be viewed?
 If so, what did I do wrong? The only thing that comes to mind is that I set
 the font in outlook to courrier new so that the columns in a table would
 line up.
  
 Thanks for any advice,
 Steve

Hi Steve

   [[alternative HTML version deleted]]
  
No idea if this has anything to do with it or not, but you are asked to
configure your emailer to *not* send html mail. Configure Outlook to
send plain text only (generally) or set it up to send plain text only to
r-help if you really want to send html-mail to others. It's been a long
while since I used Outlook/Windows, but IIRC you can set this up
somewhere amongst the myriad of prefs in Outlook.

G

-- 
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
*  Note new Address, Telephone  Fax numbers from 6th April 2006  *
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Gavin Simpson 
ECRC  ENSIS  [t] +44 (0)20 7679 0522
UCL Department of Geography   [f] +44 (0)20 7679 0565
Pearson Building  [e] gavin.simpsonATNOSPAMucl.ac.uk
Gower Street  [w] http://www.ucl.ac.uk/~ucfagls/cv/
London, UK.   [w] http://www.ucl.ac.uk/~ucfagls/
WC1E 6BT.
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] message posting

2006-04-21 Thread mike waters
This can be checked/set from the toolbar at the top via the drop-down box
next to the "Compose in this mail format" section, displayed using
Tools -> Options -> Mail Format.

HTH

Regards

Mike

A picture may be worth a thousand words, but HTML adds far more than a
thousand bytes to an email.

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Gavin Simpson
Sent: 21 April 2006 14:32
To: Steven Lacey
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] message posting

On Fri, 2006-04-21 at 09:22 -0400, Steven Lacey wrote:
 Hi,
  
 I sent two emails to the R help list and got no reply. While this may 
 be my question, it is unlike the users of this list not to reply 
 (thankfully!). I checked for my message using the archive for April 
 2006 and found the following where the text of message should have been:
  
 An embedded and charset-unspecified text was scrubbed...
  
 Does this message mean that my message wasn't posted or couldn't be
viewed?
 If so, what did I do wrong? The only thing that comes to mind is that 
 I set the font in outlook to courrier new so that the columns in a 
 table would line up.
  
 Thanks for any advice,
 Steve

Hi Steve

   [[alternative HTML version deleted]]
  
No idea if this has anything to do with it or not, but you are asked to
configure your emailer to *not* send html mail. Configure Outlook to send
plain text only (generally) or set it up to send plain text only to r-help
if you really want to send html-mail to others. It's been a long while since
I used Outlook/Windows, but IIRC you can set this up somewhere amongst the
myriad of prefs in Outlook.

G

--
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
*  Note new Address, Telephone  Fax numbers from 6th April 2006  *
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Gavin Simpson 
ECRC  ENSIS  [t] +44 (0)20 7679 0522
UCL Department of Geography   [f] +44 (0)20 7679 0565
Pearson Building  [e] gavin.simpsonATNOSPAMucl.ac.uk
Gower Street  [w] http://www.ucl.ac.uk/~ucfagls/cv/
London, UK.   [w] http://www.ucl.ac.uk/~ucfagls/
WC1E 6BT.
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] aov contrasts residual error calculation

2006-04-21 Thread Steven Lacey
Hi, 
 
I am using aov with an Error component to model some repeated measures data.
By repeated measures I mean the data look something like this...
 
subj    A    B    C
1       4    11   15
2       3    12   17
3       5    9    14
4       6    10   18
 
For each subject I have 3 observations, one in each of three conditions (A,
B, C). I want to test the following contrast (1, 0, -1). One solution is to
apply the contrast weights at the subject level explicitly and then call
t.test on the difference scores. However, I am looking for a more robust
solution as my actual design has more within-subjects factors and one or
more between subjects factors.
 
A better solution is to specify the contrast in an argument to aov. The
estimated difference of the contrast is the same as that in the paired
t-test, but the residual df are double. While not what I expected, it
follows from the documentation, which explicitly states that these contrasts
are not to be used for any error term. Even though I specify 1 contrast,
there are 2 df for a 3 level factor, and I suspect internally the error term
is calculated by pooling across multiple contrasts. 
 
While very useful, I am wondering if there is way to get aov to calculate
the residual error term only based on the specified contrasts (i.e., not
assume homogeneity of variance and sphericity) for that strata?
 
If not, I could calculate them directly using model.matrix, but I've never
done that. If that is the preferred solution, I'd also appreciate coding
suggestions to do it efficiently. 


How would I do the same thing with a two factor anova where one factor is
within-subjects and one is between... 
                  Condition
Mapping  Subject  A    B    C
1        1        4    11   15
1        2
1        3
1        4
1        5
1        6
1        7
1        8
2        9
2        10
 
Mapping is a between-subject factor. Condition is a within-subject factor.
There are 5 levels of mapping, 8 subjects nested in each level of mapping.
For each of the 40 combinations of mapping and subject there are 3
observations, one in each level of the condition factor. 
 
I want to estimate the pooled error associated with the following set of 4
orthogonal contrasts:
 
condition.L:mapping.L
condition.L:mapping.Q
condition.L:mapping.C
condition.L:mapping^4
 
What is the best way to do this? One way is to estimate the linear contrast
for condition for each subject, creating a 40-row data frame in which the
measure for each combination of mapping and subject is that linear contrast
on condition (see the sketch below). If I pass this data frame to aov, the
MSE it returns is the value I am looking for.
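Here is a rough sketch of that collapsing approach with simulated stand-in
data (rows are assumed to be ordered A, B, C within each subject):

set.seed(1)
dat <- data.frame(subject   = factor(rep(1:40, each = 3)),
                  mapping   = factor(rep(1:5, each = 24)),
                  condition = factor(rep(c("A", "B", "C"), 40)),
                  y         = rnorm(120))
w   <- contr.poly(3)[, 1]                 # linear contrast weights
lin <- sapply(split(dat$y, dat$subject), function(z) sum(w * z))
map <- sapply(split(as.character(dat$mapping), dat$subject), "[", 1)
summary(aov(lin ~ factor(map)))           # residual MS pooled over subjects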
 
If possible, I would like to obtain the estimate without collapsing the
dataframe, but am not sure how to proceed. Suggestions?

Thanks,
Steve

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Need R-help

2006-04-21 Thread stat stat
Dear r users,
   
  I was trying to fit a garch(1,1) model to my dataset, but while executing I 
got the warning message "NaNs produced in: sqrt(pred$e)" and got the estimated 
sd's along with five NAs. To the best of my knowledge I should get only one 
NA, i.e. corresponding to the first observation. If anyone can tell me why I 
got this message it would be a great help.
   
  With regards,



thanks in advance

-


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] where is the fitted value of autoregressive AR function in R?

2006-04-21 Thread Michael
 myAR3=ar(mytimeseries, FALSE, 3);
 print(myAR3);

Call:
ar(x = mytimeseries, aic = FALSE, order.max = 3)

Coefficients:
  123
 0.9220   0.0039  -0.1314



names(myAR)
 [1] "order"        "ar"           "var.pred"     "x.mean"       "aic"
 [6] "n.used"       "order.max"    "partialacf"   "resid"        "method"
[11] "series"       "frequency"    "call"         "asy.var.coef"




But where are the fitted values? And how can I construct the AR values from
those terms in myAR?

Thanks a lot!
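(One way to get them, sketched with a built-in series since the original data
aren't available: ar() stores residuals, and the fitted values are just the
series minus those residuals.)

fit3 <- ar(lh, aic = FALSE, order.max = 3)   # same call pattern as above
fitted.vals <- lh - fit3$resid               # first 3 values are NA
## equivalently: xhat[t] = x.mean + sum_j ar[j] * (x[t-j] - x.mean)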

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] forcing apply() to return data frame

2006-04-21 Thread Federico Calboli
Hi All,

I am (almost) successfully using apply() to apply a function column-wise 
on a data matrix. The function in question is as.genotype() from the 
library 'genetics':

apply(subset(chr1, names$breed == 'lab'), 2, as.genotype, sep = "")

Unfortunately apply() puts its results into a matrix object rather than a 
data frame, transforming my factors into numerics and making the results 
useless.

Is there a way of forcing apply() to return a data frame rather than a 
matrix?

Cheers,

Federico


-- 
Federico C. F. Calboli
Department of Epidemiology and Public Health
Imperial College, St Mary's Campus
Norfolk Place, London W2 1PG

Tel  +44 (0)20 7594 1602 Fax (+44) 020 7594 3193

f.calboli [.a.t] imperial.ac.uk
f.calboli [.a.t] gmail.com

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] there is no xls reader in R?

2006-04-21 Thread Johnsons
I have used the following successfully:

xls = odbcConnectExcel(fname)
Rawdata.temp = sqlQuery(xls, "select * from [sheet1$]", max=2800)
close(xls)

Presuming your data is in the Excel tab sheet1.

Of course, this assumes that column headers are in the first row.

Greg Johnson

 

 -Original Message-
 From: roger bos [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, April 20, 2006 6:14 AM
 To: Ko-Kang Kevin Wang
 Cc: R-help@stat.math.ethz.ch
 Subject: Re: [R] there is no xls reader in R?
 
 I like to use the RODBC package for doing this.  Here is my 
 code sample:
 
  xls <- odbcConnectExcel(fname)
  rawdata.temp <- sqlFetch(xls, "rawdata", max=2800)
  close(xls)
 
 fname is the full path to the file and rawdata is the name 
 of the excel sheet I want to import.  I tried one other 
 approach (I think it was read.xls()) and that approach used 
 perl scripts to read in the xls file and was very slow.  
 RODBC is very fast and has always worked great for me.  
 Haven't tried any of the other ways mentioned.
 
 
 
 On 4/20/06, Ko-Kang Kevin Wang [EMAIL PROTECTED] wrote:
 
  Have a look at the read.xls() in gdata package.
 
  HTH,
 
  Kevin
 
  Michael wrote:
   Currently I have to convert all my xls into csv before I can 
   read it
  in
   and process the excel data in R...
  
   Is there a way to directly read in xls data?
  
   Thanks a lot!
  
 [[alternative HTML version deleted]]
  
   __
   R-help@stat.math.ethz.ch mailing list 
   https://stat.ethz.ch/mailman/listinfo/r-help
   PLEASE do read the posting guide!
  http://www.R-project.org/posting-guide.html
 
  --
  Ko-Kang Kevin Wang
  Homepage: http://wwwmaths.anu.edu.au/~wangk/
  Ph (W): +61-2-6125-2431
  Ph (H): +61-2-6125-7471
  Ph (M): +61-40-451-8301
 
  __
  R-help@stat.math.ethz.ch mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide!
  http://www.R-project.org/posting-guide.html
 
 
   [[alternative HTML version deleted]]
 
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R and ViM

2006-04-21 Thread Bill West
Yes,  my  r.vim  ftplugin file is a windows only solution.  It is not yet at
the point, however, where it may be called a solution. :)  It currently
only handles single lines of code.  I posted it before only as a proof of
concept.  I should have been clearer in my earier post.  

For my Linux computer I have been successfully using the R.vim script by
Johannes Ranke found in the vim scripts: 

http://www.vim.org/scripts/script.php?script_id=1048


This also uses Perl, but depends on IO::Pty, which (although I am no expert)
I do not believe is available for Windows.

--Bill







 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jose Quesada
Sent: Thursday, April 20, 2006 9:15 PM
To: r-help@stat.math.ethz.ch
Subject: [R] R and ViM

To: Martin Maechler [EMAIL PROTECTED]

 Indeed. Please do check the archives.

Yep. Post is there.

 Now back to the subject:  Jose, I think your main contribution is 
 based on autoHotKeys  and that only works on Windoze, right?
 Michael explicitly mentioned he's working in Mac OS X.

 Martin

That's correct, autoHotKeys is a win-only solution.
Bill West's solution uses Win32::OLE, so I guess that means it's a
win-only solution.

I think François Pinard's is the easiest since it uses vim and R only (no
3rd application/language required), and that would probably work on all
platforms (not sure if the GNU readline interface is implemented in all
builds of R, though).

I find autoHotKey very useful (e.g., it fires vim to fill textboxes like
this one -gmail editing window-, offering syntax highlighting
etc) so I use it all the time and don't mind. I can understand those who
don't want to install it just to communicate R and an editor. In that case,
the other two solutions are better. I'll refer to them in my Rvim page
(archives).

However, the new R.vim syntax file has 3-level coloring, and I'm working to
get improved indentation, TODO, DEBUG, etc highlighting, and other
improvements. The code templates using tSkeleton (not ready
yet) may be an added advantage. Those two things you can use even if you
don't use autohotkeys of course.

--
Cheers,
-Jose

PS: it seems that google mail (as an email reader) doesn't let you see your
own messages when posting to a list you are subscribed to.

--
Jose Quesada, PhD.

[EMAIL PROTECTED]   Dept. of Psychology
http://www.andrew.cmu.edu/~jquesada Sussex University
Brighton, UK



--
Cheers,
-Jose
--
Jose Quesada, PhD.

[EMAIL PROTECTED]   Dept. of Psychology
http://www.andrew.cmu.edu/~jquesada Sussex University
Brighton, UK

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] programming advice?

2006-04-21 Thread Charles Annis, P.E.
Dear R-helpers:

I am doing some exploratory programming and am considering a routine that
has several other routines defined within it, so that I can avoid a large
and messy global re-programming to avoid naming conflicts.  

My question is this:  Because it is interpreted, does R have to re-build
these internal routines every time the new routine is called?  I'm not
overly worried right now about speed, but since my test cases currently run
for several minutes (which is a long time to me) I don't want to learn a
lesson the hard way that you, kind readers, might help me avoid.

Thanks.

Charles Annis, P.E.

[EMAIL PROTECTED]
phone: 561-352-9699
eFax:  614-455-3265
http://www.StatisticalEngineering.com

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] forcing apply() to return data frame

2006-04-21 Thread Thomas Lumley
On Fri, 21 Apr 2006, Federico Calboli wrote:

 Hi All,

 I am (almost) successfully using apply() to apply a function recursively
 on a data matrix. The function is question is as.genotype() from the
 library 'genetics'

 apply(subset(chr1, names$breed == 'lab'), 2, as.genotype, sep = "")

 Unfortuantely apply puts it's results into a matrix object rather than a
 data frame, tranforming my factors into numerics and making the results
 useless.

 Is there a way of forcing apply() to return a data frame rather than a
 matrix?


The conversion to a matrix happens on the way in to apply, not on the way 
out, so no.

-thomas

Thomas Lumley   Assoc. Professor, Biostatistics
[EMAIL PROTECTED]   University of Washington, Seattle

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] programming advice?

2006-04-21 Thread Duncan Murdoch
On 4/21/2006 10:45 AM, Charles Annis, P.E. wrote:
 Dear R-helpers:
 
 I am doing some exploratory programming and am considering a routine that
 has several other routines defined within it, so that I can avoid a large
 and messy global re-programming to avoid naming conflicts.  
 
 My question is this:  Because it is interpreted, does R have to re-build
 these internal routines every time the new routine is called?  I'm not
 overly worried right now about speed, but since my test cases currently run
 for several minutes (which is a long time to me) I don't want to learn a
 lesson the hard way that you, kind readers, might help me avoid.

I don't know the exact breakdown of the timing, but much of the work is 
done only once, when the source is parsed.  At that point the outer 
function will be created in a structure parts of which are more or less 
identical to the structure of the functions that it creates within it. 
Then when it executes, copies need to be made and wrapped up as objects 
of their own.  Only the latter work needs to be repeated on every call.

I'd say avoiding naming conflicts is probably not a good enough reason 
on its own to use nested functions.  You're better off putting your code 
in a package with a NAMESPACE if that's your goal.  The main reason to 
use nested functions is for clarity:  if some part of the work of a big 
function is neatly encapsulated in a small nested function, then do it. 
  Those nested functions have access to the evaluation environment of 
their enclosing function.

There are other reasons for nested functions (e.g. to create functions 
with static storage), but they are less common than doing it just to 
make the code clear.

Duncan Murdoch

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] programming advice?

2006-04-21 Thread Thomas Lumley
On Fri, 21 Apr 2006, Charles Annis, P.E. wrote:

 Dear R-helpers:

 I am doing some exploratory programming and am considering a routine that
 has several other routines defined within it, so that I can avoid a large
 and messy global re-programming to avoid naming conflicts.

 My question is this:  Because it is interpreted, does R have to re-build
 these internal routines every time the new routine is called?  I'm not
 overly worried right now about speed, but since my test cases currently run
 for several minutes (which is a long time to me) I don't want to learn a
 lesson the hard way that you, kind readers, might help me avoid.

Yes and no.  The code will be parsed once, but the functions will be 
created each time the routine is called.  This results in some extra 
memory turnover, but is almost certainly not significant in the grand 
scheme of things.

Another approach to avoiding name conflicts is to put your code in a 
package with a namespace and export only the functions you want to be 
visible externally.

You might want to run your code under the profiler (?Rprof) to find out 
where it is really spending all its time.

-thomas

Thomas Lumley   Assoc. Professor, Biostatistics
[EMAIL PROTECTED]   University of Washington, Seattle

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] programming advice?

2006-04-21 Thread Seth Falcon
Charles Annis, P.E. [EMAIL PROTECTED]
writes:

 Dear R-helpers:

 I am doing some exploratory programming and am considering a routine that
 has several other routines defined within it, so that I can avoid a large
 and messy global re-programming to avoid naming conflicts.  

 My question is this:  Because it is interpreted, does R have to re-build
 these internal routines every time the new routine is called?  

If you mean:

   f <- function(x) {
       f1 <- function(y) {...}
       f2 <- function(y) {...}
       f3 <- function(y) {...}
       f1(x) + f2(x) + f3(x)
   }

Then, yes, as I understand it, each call to f() will include the
overhead of defining functions f1, f2, and f3.  In most cases, I would
expect this overhead to be quite small in relation to the actual
computations you are doing.  You can probably use Rprof() to confirm
this.

You can also look at local() which I think provides a similar local
namespace, but would not require redefinition of the helper functions.
Here is a small example:

f <- local({
    f1 <- function(y) 2*y
    f2 <- function(y) y + 10
    function(x) {
        f1(x) + f2(x)
    }
})


+ seth

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] forcing apply() to return data frame

2006-04-21 Thread Marc Schwartz (via MN)
On Fri, 2006-04-21 at 07:37 -0700, Thomas Lumley wrote:
 On Fri, 21 Apr 2006, Federico Calboli wrote:
 
  Hi All,
 
  I am (almost) successfully using apply() to apply a function recursively
  on a data matrix. The function in question is as.genotype() from the
  library 'genetics'
 
  apply(subset(chr1, names$breed == 'lab'), 2, as.genotype, sep = "")
 
  Unfortunately apply puts its results into a matrix object rather than a
  data frame, transforming my factors into numerics and making the results
  useless.
 
  Is there a way of forcing apply() to return a data frame rather than a
  matrix?
 
 
 The conversion to a matrix happens on the way in to apply, not on the way 
 out, so no.

This may be a naive example, as I don't work in this domain, but based
upon reviewing the online help at:

  http://finzi.psych.upenn.edu/R/library/genetics/html/genotype.html

and presuming that the intent of the code above is referenced by the
first bullet in the Details section of the function, would the following
work?

This presumes that 'chr1' is a data frame or can be coerced to one as
in:

  chr1 - as.data.frame(chr1)

Thus:

  data.frame(lapply(subset(chr1, names$breed == 'lab'),
                    as.genotype, sep = ""))
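
A toy illustration of the same pattern with made-up data (no genetics package
needed): lapply() works column-by-column on the data frame, so each column
keeps its own class instead of being coerced into a character matrix.

df  <- data.frame(g1 = factor(c("A/A", "A/B")), g2 = factor(c("B/B", "A/B")))
out <- data.frame(lapply(df, function(col) factor(as.character(col))))
str(out)   # both columns are still factors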

HTH,

Marc Schwartz

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] How does ccf() really work?

2006-04-21 Thread Spencer Graves
  The standard estimate of cross correlation uses the same denominator 
for all lags AND ignores the reduction in the number of observations. 
Consider the following:

a. <- a - mean(a)
b. <- b - mean(b)
SSa <- sum(a.^2)
SSb <- sum(b.^2)
SaSb <- sqrt(SSa*SSb)

sum(a.*b.)/SaSb
# 0.618 = cor(a, b)
sum(a.[-1]*b.[-5])/SaSb
# 0.568 = cc lag 1

sum(a.[1]*b.[5])/SaSb
# -0.056
sum(a.[1:2]*b.[4:5])/SaSb
# -0.289

  These numbers match the results you reported below.  If I'm not 
mistaken, this also matches the definition of the cross correlation 
function in the original Box and Jenkins book [or the more recent Box, 
Jenkins, Reinsel], Time Series Analysis, Forecasting and Control.  The 
rationale, as I recall, is to reduce the false alarm rate by biasing 
estimates with larger lags toward zero, thereby compensating slightly 
for their increased random variability.

  hope this helps.
  spencer graves
p.s.  Thanks for including such a simple, self-contained example.  Posts 
that don't include examples like this are typically much more difficult 
to understand, which in turn increases the chances that a response will 
not help the questioner.

Robert Lundqvist wrote:

 I can't understand the results from cross-correlation function ccf() 
 even though it should be simple. 
 Here's my short example:
 *
 a <- rnorm(5); b <- rnorm(5)
 a;b
 [1]  1.4429135  0.8470067  1.2263730 -1.8159190 -0.6997260
 [1] -0.4227674  0.8602645 -0.6810602 -1.4858726 -0.7008563
 
 cc <- ccf(a, b, lag.max=4, type="correlation")
 
 cc
 Autocorrelations of series 'X', by lag
 
 -4 -3 -2 -1  0  1  2  3  4 
 -0.056 -0.289 -0.232  0.199  0.618  0.568 -0.517 -0.280 -0.012 
 **
 With lag 4 and vectors of length 5 there should as far as I can see 
 only be 2 pairs of observations. The correlation would then be 1. 
 Guess I am missing something really simple here. Anyone who could 
 explain what is happening?
 
 Robert
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] R graph strip in Greek Letters

2006-04-21 Thread zj yang
Hi all,

I have one question on xyplot() function.

I am trying to produce a graph by this code.

tmp <- data.frame(a=rnorm(20),
                  b=rnorm(20),
                  rho=factor(rep(1:2, c(10,10))),
                  k=factor(rep(1:5,4)))

tmp.lattice <-
    xyplot(a ~ b | rho*k, data=tmp,
           par.strip.text=list(font=5),
           strip=function(...) strip.default(..., strip.names=c(TRUE,TRUE)))

names(tmp.lattice$condlevels) <- c("k", "r")
tmp.lattice

You can see that on the strip, I have both k and rho in greek letters. I
ONLY want rho to be a greek letter and keep k as an ordinary lower case 'k'.
The other question is: can I change the ':' into '=' in the strip? Thanks a
lot.

Zijiang

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] forcing apply() to return data frame

2006-04-21 Thread Fredrik Karlsson
Hi,

There is a frameApply function in the library gdata from the gregmisc package.
The manual says it is like the function 'by', but returns a data.frame.

Maybe this is something for you?

/Fredrik



2006/4/21, Federico Calboli [EMAIL PROTECTED]:
 Hi All,

 I am (almost) successfully using apply() to apply a function recursively
 on a data matrix. The function is question is as.genotype() from the
 library 'genetics'

 apply(subset(chr1, names$breed == 'lab'),2,as.genotype,sep =)

 Unfortuantely apply puts it's results into a matrix object rather than a
 data frame, tranforming my factors into numerics and making the results
 useless.

 Is there a way of forcing apply() to return a data frame rather than a
 matrix?

 Cheers,

 Federico


 --
 Federico C. F. Calboli
 Department of Epidemiology and Public Health
 Imperial College, St Mary's Campus
 Norfolk Place, London W2 1PG

 Tel  +44 (0)20 7594 1602 Fax (+44) 020 7594 3193

 f.calboli [.a.t] imperial.ac.uk
 f.calboli [.a.t] gmail.com

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] aov contrasts residual error calculation

2006-04-21 Thread Jacques Veslot

why not use lme() ?

first, you need to transform the data:
dat2 <- as.data.frame(lapply(subset(dat, sel=-c(A,B,C)), rep, 3))
dat2$y <- unlist(subset(dat, sel=c(A,B,C)), F, F)
dat2$cond <- factor(rep(c("A","B","C"), each=nrow(dat)))

dat2$inter <- factor(dat2$map):factor(dat2$cond)

lme1 <- lme(fixed = y ~ mapping + cond + inter + other fixed effects,
            random = ~ 1 | subj, data=dat2,
            contrast=list(inter=poly(nlevels(dat2$inter)[,1:4])))






Steven Lacey a écrit :
 Hi, 
  
 I am using aov with an Error component to model some repeated measures data.
 By repeated measures I mean the data look something like this...
  
 subj    A    B    C
 1       4   11   15
 2       3   12   17
 3       5    9   14
 4       6   10   18
  
 For each subject I have 3 observations, one in each of three conditions (A,
 B, C). I want to test the following contrast (1, 0, -1). One solution is to
 apply the contrast weights at the subject level explicitly and then call
 t.test on the difference scores. However, I am looking for a more robust
 solution as my actual design has more within-subjects factors and one or
 more between subjects factors.
  
 A better solution is to specify the contrast in an argument to aov. The
 estimated difference of the contrast is the same as that in the paired
 t-test, but the residual df are double. While not what I expected, it
 follows from the documentation, which explicitly states that these contrasts
 are not to be used for any error term. Even though I specify 1 contrast,
 there are 2 df for a 3 level factor, and I suspect internally the error term
 is calculated by pooling across multiple contrasts. 
  
 While very useful, I am wondering if there is way to get aov to calculate
 the residual error term only based on the specified contrasts (i.e., not
 assume homogeneity of variance and sphericity) for that strata?
  
 If not, I could calculate them directly using model.matrix, but I've never
 done that. If that is the preferred solution, I'd also appreciate coding
 suggestions to do it efficiently. 
 
 
 How would I do the same thing with a two factor anova where one factor is
 within-subjects and one is between... 
                   Condition
 Mapping  Subject    A    B    C
 1        1          4   11   15
 1        2
 1        3
 1        4
 1        5
 1        6
 1        7
 1        8
 2        9
 2        10
  
 Mapping is a between-subject factor. Condition is a within-subject factor.
 There are 5 levels of mapping, 8 subjects nested in each level of mapping.
 For each of the 40 combinations of mapping and subject there are 3
 observations, one in each level of the condition factor. 
  
 I want to estimate the pooled error associated with the following set of 4
 orthogonal contrasts:
  
 condition.L:mapping.L
 condition.L:mapping.Q
 condition.L:mapping.C
 condition.L:mapping^4
  
 What is the best way to do this? One way is to estimate the linear contrast
 for condition for each subject, create a 40 row matrix where the measure for
 each combination of mapping and subject is the linear contrast on condition.
 If I pass this dataframe to aov, the mse it returns is the value I am
 looking for. 
  
 If possible, I would like to obtain the estimate without collapsing the
 dataframe, but am not sure how to proceed. Suggestions?
 
 Thanks,
 Steve
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
 


-- 
---
[EMAIL PROTECTED]
CNRS UMR 8090 - http://www-good.ibl.fr
Génomique et physiologie moléculaire des maladies métaboliques
I.B.L 2eme etage - 1 rue du Pr Calmette, B.P.245, 59019 Lille Cedex
Tel : 33 (0)3.20.87.10.44 Fax : 33 (0)3.20.87.10.31

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] programming advice?

2006-04-21 Thread Charles Annis, P.E.
Many thanks to Duncan Murdoch, Thomas Lumley, Patrick Burns, and Seth
Falcon, for the illuminating advice, which will be found in the R-help
archives.


Charles Annis, P.E.

[EMAIL PROTECTED]
phone: 561-352-9699
eFax:  614-455-3265
http://www.StatisticalEngineering.com
 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Charles Annis, P.E.
Sent: Friday, April 21, 2006 10:46 AM
To: R-help@stat.math.ethz.ch
Subject: [R] programming advice?

Dear R-helpers:

I am doing some exploratory programming and am considering a routine that
has several other routines defined within it, so that I can avoid a large
and messy global re-programming to avoid naming conflicts.  

My question is this:  Because it is interpreted, does R have to re-build
these internal routines every time the new routine is called?  I'm not
overly worried right now about speed, but since my test cases currently run
for several minutes (which is a long time to me) I don't want to learn a
lesson the hard way that you, kind readers, might help me avoid.

Thanks.

Charles Annis, P.E.

[EMAIL PROTECTED]
phone: 561-352-9699
eFax:  614-455-3265
http://www.StatisticalEngineering.com

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] aov contrasts residual error calculation

2006-04-21 Thread Steven Lacey
Jacques, 

Thanks for the reply. I am not using lme because I don’t have the time to
understand how it works and I have a balanced design, so typical linear
modelling in aov should be sufficient for my purposes. Down the road I plan
to learn lme, but I'm not there yet. So any suggestions with respect to aov
would be greatly appreciated.

Steve

-Original Message-
From: Jacques Veslot [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 21, 2006 11:58 AM
To: Steven Lacey
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] aov contrasts residual error calculation



why not using lme() ?

first, you need transform data:
dat2 - as.data.frame(lapply(subset(dat, sel=-c(A,B,C)), rep, 3))
dat2$y - unlist(subset(dat, sel=c(A,B,C)), F, F)

dat2$cond - factor(rep(c(A,B,C), each=nrow(dat)))

dat2$inter - factor(dat2$map):factor(dat2$cond)

lme1 - lme(fixed = y ~ mapping + cond + inter + other fixed effects,
random = ~ 1 |subj, data=dat2,
contrast=list(inter=poly(nlevels(dat2$inter)[,1:4]))






Steven Lacey a écrit :
 Hi,
  
 I am using aov with an Error component to model some repeated measures 
 data. By repeated measures I mean the data look something like this...
  
 subjABC
 1   411   15
 2   312   17
 3   5914
 4   610   18
  
 For each subject I have 3 observations, one in each of three 
 conditions (A, B, C). I want to test the following contrast (1, 0, 
 -1). One solution is to apply the contrast weights at the subject 
 level explicitly and then call t.test on the difference scores. 
 However, I am looking for a more robust solution as I my actual design 
 has more within-subjects factors and one or more between subjects 
 factors.
  
 A better solution is to specify the contrast in an argument to aov. 
 The estimated difference of the contrast is the same as that in the 
 paired t-test, but the residual df are double. While not what I 
 expected, it follows from the documentation, which explicitly states 
 that these contrasts are not to be used for any error term. Even 
 though I specify 1 contrast, there are 2 df for a 3 level factor, and 
 I suspect internally the error term is calculated by pooling across 
 multiple contrasts.
  
 While very useful, I am wondering if there is way to get aov to 
 calculate the residual error term only based on the specified 
 contrasts (i.e., not assume homogeneity of variance and sphericity) 
 for that strata?
  
 If not, I could calculate them directly using model.matrix, but I've 
 never done that. If that is the preferred solution, I'd also 
 appreciate coding suggestions to do it efficiently.
 
 
 How would I do the same thing with a two factor anova where one factor 
 is within-subjects and one is between...
 Condition
 Mapping SubjectABC
 11 411   15 
 12
 13
 14
 15
 16
 17
 18
 29
 210
  
 Mapping is a between-subject factor. Condition is a within-subject 
 factor. There are 5 levels of mapping, 8 subjects nested in each level 
 of mapping. For each of the 40 combinations of mapping and subject 
 there are 3 observations, one in each level of the condition factor.
  
 I want to estimate the pooled error associated with the following set 
 of 4 orthogonal contrasts:
  
 condition.L:mapping.L
 condition.L:mapping.Q
 condition.L:mapping.C
 condition.L:mapping^4
  
 What is the best way to do this? One way is to estimate the linear 
 contrast for condition for each subject, create a 40 row matrix where 
 the measure for each combination of mapping and subject is the linear 
 contrast on condition. If I pass this dataframe to aov, the mse it 
 returns is the value I am looking for.
  
 If possible, I would like to obtain the estimate without collapsing 
 the dataframe, but am not sure how to proceed. Suggestions?
 
 Thanks,
 Steve
 
 __
 R-help@stat.math.ethz.ch mailing list 
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html
 


-- 
---
[EMAIL PROTECTED]
CNRS UMR 8090 - http://www-good.ibl.fr
Génomique et physiologie moléculaire des maladies métaboliques I.B.L 2eme
etage - 1 rue du Pr Calmette, B.P.245, 59019 Lille Cedex Tel : 33
(0)3.20.87.10.44 Fax : 33 (0)3.20.87.10.31

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R debugging options

2006-04-21 Thread Spencer Graves
  Regarding a function that lists functions, have you considered 
getFunctions in library(svIDE)?  You need to provide the argument, as 
in getFunctions(1);   getFunctions() returns an error message.

  Beyond this, the objects function in S-Plus (at least version 6.2) 
has a classes argument, which the R 2.2.1 implementation does not 
have.  It doesn't look like it would be too difficult to add such an 
argument to objects in R, but I have not been in a position to 
volunteer to do it, and without that, I didn't feel it was appropriate 
for me to suggest it.

  hope this helps,
  spencer graves

John Fox wrote:

 Dear Larry,
 
 I'm not aware of an existing function that lists functions, but here's a
 simple solution:
 
 listFunctions <- function(all.names=FALSE, envir=.GlobalEnv){
     # all.names=TRUE: include names beginning with .
     # envir: environment to search
     Objects <- objects(envir, all.names=all.names)
     if (length(Objects) == 0) Objects
     else names(which(sapply(Objects,
         function(object) is.function(eval(parse(text=object),
                                           envir=envir)))))
 }
 
 Getting mtrace() to use the function names returned by listFunctions() is a
 bit tricky, because of the way mtrace() evaluates its arguments. You could
 do something like the following:
 
 for(f in listFunctions()) mtrace(char.fname=f)
 
 Perhaps someone else knows of an existing or better solution.
 
 I hope this helps,
  John
 
 
 John Fox
 Department of Sociology
 McMaster University
 Hamilton, Ontario
 Canada L8S 4M4
 905-525-9140x23604
 http://socserv.mcmaster.ca/jfox 
  
 
 
-Original Message-
From: [EMAIL PROTECTED] 
[mailto:[EMAIL PROTECTED] On Behalf Of Larry Howe
Sent: Tuesday, April 18, 2006 12:46 PM
To: r-help@stat.math.ethz.ch
Subject: Re: [R] R debugging options

On Monday April 17 2006 21:08, Francisco J. Zagmutt wrote:

 RSiteSearch("debug") or RSiteSearch("debugging") will give you a lot
 of relevant information.  I personally use library(debug) extensively
 and it should do all the tasks you asked about. There is a nice article
 describing the debug library in the 2003/3 issue of R News
 http://cran.r-project.org/doc/Rnews/Rnews_2003-3.pdf

Cheers

Francisco

Wow! That is a great package. I think it does all I need.

Is there a way to turn on debugging for all loaded functions? 
My source file contains many functions and I would prefer not 
to have to mtrace() each one. 
Something like


mtrace(how_do_I_get_a_list_of_all_loaded_functions)

?

Larry

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! 
http://www.R-project.org/posting-guide.html
 
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] aov contrasts residual error calculation

2006-04-21 Thread Jacques Veslot
with error strata, aov() becomes difficult too...

a nice and brief presentation of mixed models with R by John Fox at:
http://cran.r-project.org/doc/contrib/Fox-Companion/appendix.html




Steven Lacey a écrit :

 Jacques, 
 
 Thanks for the reply. I am not using lme because I don’t have the time to
 understand how it works and I have a balanced design, so typcial linear
 modelling in aov should be sufficient for my purposes. Down the road I plan
 to learn lme, but I'm not there yet. So any suggestions with respect to aov
 would be greatly appreciated.
 
 Steve
 
 -Original Message-
 From: Jacques Veslot [mailto:[EMAIL PROTECTED] 
 Sent: Friday, April 21, 2006 11:58 AM
 To: Steven Lacey
 Cc: r-help@stat.math.ethz.ch
 Subject: Re: [R] aov contrasts residual error calculation
 
 
 
 why not using lme() ?
 
 first, you need transform data:
 dat2 - as.data.frame(lapply(subset(dat, sel=-c(A,B,C)), rep, 3))
 dat2$y - unlist(subset(dat, sel=c(A,B,C)), F, F)
 
 dat2$cond - factor(rep(c(A,B,C), each=nrow(dat)))
 
 dat2$inter - factor(dat2$map):factor(dat2$cond)
 
 lme1 - lme(fixed = y ~ mapping + cond + inter + other fixed effects,
   random = ~ 1 |subj, data=dat2,
   contrast=list(inter=poly(nlevels(dat2$inter)[,1:4]))
 
 
   
   
 
 
 Steven Lacey a écrit :
 
Hi,
 
I am using aov with an Error component to model some repeated measures 
data. By repeated measures I mean the data look something like this...
 
subjABC
1   411   15
2   312   17
3   5914
4   610   18
 
For each subject I have 3 observations, one in each of three 
conditions (A, B, C). I want to test the following contrast (1, 0, 
-1). One solution is to apply the contrast weights at the subject 
level explicitly and then call t.test on the difference scores. 
However, I am looking for a more robust solution as I my actual design 
has more within-subjects factors and one or more between subjects 
factors.
 
A better solution is to specify the contrast in an argument to aov. 
The estimated difference of the contrast is the same as that in the 
paired t-test, but the residual df are double. While not what I 
expected, it follows from the documentation, which explicitly states 
that these contrasts are not to be used for any error term. Even 
though I specify 1 contrast, there are 2 df for a 3 level factor, and 
I suspect internally the error term is calculated by pooling across 
multiple contrasts.
 
While very useful, I am wondering if there is way to get aov to 
calculate the residual error term only based on the specified 
contrasts (i.e., not assume homogeneity of variance and sphericity) 
for that strata?
 
If not, I could calculate them directly using model.matrix, but I've 
never done that. If that is the preferred solution, I'd also 
appreciate coding suggestions to do it efficiently.


How would I do the same thing with a two factor anova where one factor 
is within-subjects and one is between...
Condition
Mapping SubjectABC
11 411   15 
12
13
14
15
16
17
18
29
210
 
Mapping is a between-subject factor. Condition is a within-subject 
factor. There are 5 levels of mapping, 8 subjects nested in each level 
of mapping. For each of the 40 combinations of mapping and subject 
there are 3 observations, one in each level of the condition factor.
 
I want to estimate the pooled error associated with the following set 
of 4 orthogonal contrasts:
 
condition.L:mapping.L
condition.L:mapping.Q
condition.L:mapping.C
condition.L:mapping^4
 
What is the best way to do this? One way is to estimate the linear 
contrast for condition for each subject, create a 40 row matrix where 
the measure for each combination of mapping and subject is the linear 
contrast on condition. If I pass this dataframe to aov, the mse it 
returns is the value I am looking for.
 
If possible, I would like to obtain the estimate without collapsing 
the dataframe, but am not sure how to proceed. Suggestions?

Thanks,
Steve

__
R-help@stat.math.ethz.ch mailing list 
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! 
http://www.R-project.org/posting-guide.html

 
 
 


-- 
---
[EMAIL PROTECTED]
CNRS UMR 8090 - http://www-good.ibl.fr
Génomique et physiologie moléculaire des maladies métaboliques
I.B.L 2eme etage - 1 rue du Pr Calmette, B.P.245, 59019 Lille Cedex
Tel : 33 (0)3.20.87.10.44 Fax : 33 (0)3.20.87.10.31

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Titles in MAplots

2006-04-21 Thread Brooks, Anthony B
Hi
Does anyone know how to set the titles in MAplots to just show the CEL file 
name?
So far I have;
 
#define 'Array' as object containing CEL names
Array <- col.names(Data)
#open bmp and make a separate bmp for each MAplot
bmp("C:/MAplot%03d.bmp")
#remove the annotation and minimise margins
par(ann=FALSE)
par(mar=c(1,1,1,1))
#MAplot
MAplot(Data...
 
Does anyone know the correct arguments? Do I need to create another parameter 
value?
 
Tony

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] aov contrasts residual error calculation

2006-04-21 Thread Spencer Graves
Hi, Steve:

  I'm on the opposite extreme:  I don't know aov, and given that it is 
largely obsolete, I'm not too eager to learn it.

  spencer graves

Steven Lacey wrote:

 Jacques, 
 
 Thanks for the reply. I am not using lme because I don’t have the time to
 understand how it works and I have a balanced design, so typcial linear
 modelling in aov should be sufficient for my purposes. Down the road I plan
 to learn lme, but I'm not there yet. So any suggestions with respect to aov
 would be greatly appreciated.
 
 Steve
 
 -Original Message-
 From: Jacques Veslot [mailto:[EMAIL PROTECTED] 
 Sent: Friday, April 21, 2006 11:58 AM
 To: Steven Lacey
 Cc: r-help@stat.math.ethz.ch
 Subject: Re: [R] aov contrasts residual error calculation
 
 
 
 why not using lme() ?
 
 first, you need transform data:
 dat2 - as.data.frame(lapply(subset(dat, sel=-c(A,B,C)), rep, 3))
 dat2$y - unlist(subset(dat, sel=c(A,B,C)), F, F)
 
 dat2$cond - factor(rep(c(A,B,C), each=nrow(dat)))
 
 dat2$inter - factor(dat2$map):factor(dat2$cond)
 
 lme1 - lme(fixed = y ~ mapping + cond + inter + other fixed effects,
   random = ~ 1 |subj, data=dat2,
   contrast=list(inter=poly(nlevels(dat2$inter)[,1:4]))
 
 
   
   
 
 
 Steven Lacey a écrit :
 
Hi,
 
I am using aov with an Error component to model some repeated measures 
data. By repeated measures I mean the data look something like this...
 
subjABC
1   411   15
2   312   17
3   5914
4   610   18
 
For each subject I have 3 observations, one in each of three 
conditions (A, B, C). I want to test the following contrast (1, 0, 
-1). One solution is to apply the contrast weights at the subject 
level explicitly and then call t.test on the difference scores. 
However, I am looking for a more robust solution as I my actual design 
has more within-subjects factors and one or more between subjects 
factors.
 
A better solution is to specify the contrast in an argument to aov. 
The estimated difference of the contrast is the same as that in the 
paired t-test, but the residual df are double. While not what I 
expected, it follows from the documentation, which explicitly states 
that these contrasts are not to be used for any error term. Even 
though I specify 1 contrast, there are 2 df for a 3 level factor, and 
I suspect internally the error term is calculated by pooling across 
multiple contrasts.
 
While very useful, I am wondering if there is way to get aov to 
calculate the residual error term only based on the specified 
contrasts (i.e., not assume homogeneity of variance and sphericity) 
for that strata?
 
If not, I could calculate them directly using model.matrix, but I've 
never done that. If that is the preferred solution, I'd also 
appreciate coding suggestions to do it efficiently.


How would I do the same thing with a two factor anova where one factor 
is within-subjects and one is between...
Condition
Mapping SubjectABC
11 411   15 
12
13
14
15
16
17
18
29
210
 
Mapping is a between-subject factor. Condition is a within-subject 
factor. There are 5 levels of mapping, 8 subjects nested in each level 
of mapping. For each of the 40 combinations of mapping and subject 
there are 3 observations, one in each level of the condition factor.
 
I want to estimate the pooled error associated with the following set 
of 4 orthogonal contrasts:
 
condition.L:mapping.L
condition.L:mapping.Q
condition.L:mapping.C
condition.L:mapping^4
 
What is the best way to do this? One way is to estimate the linear 
contrast for condition for each subject, create a 40 row matrix where 
the measure for each combination of mapping and subject is the linear 
contrast on condition. If I pass this dataframe to aov, the mse it 
returns is the value I am looking for.
 
If possible, I would like to obtain the estimate without collapsing 
the dataframe, but am not sure how to proceed. Suggestions?

Thanks,
Steve

__
R-help@stat.math.ethz.ch mailing list 
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! 
http://www.R-project.org/posting-guide.html

 
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R graph strip in Greek Letters

2006-04-21 Thread Sundar Dorai-Raj


zj yang wrote:
 Hi all,
 
 I have one question on xyplot() function.
 
 I am trying to produce a graph by this code.
 
 tmp - data.frame(a=rnorm(20),
  b=rnorm(20),
  rho=factor(rep(1:2, c(10,10))),
  k=factor(rep(1:5,4)))
 
 tmp.lattice -
 xyplot(a ~ b | rho*k, data=tmp,
   par.strip.text=list(font=5),
   strip=function(...) strip.default(..., strip.names=c(TRUE,TRUE)))
 
 names(tmp.lattice$condlevels) - c(k,r)
 tmp.lattice
 
 You can see that on the strip, I have both k and rho in greek letters. I
 ONLY want rho to be greek letter and remain k as an ordinary lower case 'k'.
 The other question is, can I change the : into = in the strip? Thanks a
 lot.
 
 Zijiang
 
   [[alternative HTML version deleted]]
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Does this help?

library(lattice)
trellis.par.set(theme = col.whitebg())

tmp <- data.frame(a = rnorm(20),
                  b = rnorm(20),
                  rho = factor(rep(1:2, c(10,10))),
                  k = factor(rep(1:5,4)))

strip.math <- function(which.given, which.panel, var.name,
                       factor.levels, ...) {
    vn <- var.name[which.given]
    fl <- factor.levels
    expr <- paste(vn, "==", fl, collapse = ",")
    expr <- paste("expression(", expr, ")", sep = "")
    fl <- eval(parse(text = expr))
    strip.default(which.given, which.panel, vn, fl, ...)
}

xyplot(a ~ b | rho*k, data = tmp, strip = strip.math)

The eval(parse(text = ...)) construct is the only way I know how to build a 
valid expression for the plotmath functions. See ?plotmath for more 
information.

HTH,

--sundar

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R and ViM

2006-04-21 Thread Jeffrey Horner
Bill West wrote:
 Yes,  my  r.vim  ftplugin file is a windows only solution.  It is not yet at
 the point, however, where it may be called a solution. :)  It currently
 only handles single lines of code.  I posted it before only as a proof of
 concept.  I should have been clearer in my earlier post.
 
 For my Linux computer I have been successfully using the R.vim script by
 Johannes Ranke found in the vim scripts: 
 
 http://www.vim.org/scripts/script.php?script_id=1048
 
 
 This also uses Perl, but depends on IO::Pty, which (although I am no expert)
 I do not believe is available for Windows.
 
 --Bill

I've been following this thread with some interest as I use both R and 
VIM, but I just want to point out that we have come full circle by *not* 
answering the original thread creator's question:

https://stat.ethz.ch/pipermail/r-help/2006-April/092554.html

My question now is, whether there are already people out there knowing
how to do this in a similar easy way as with Emacs, and if those would
be willing to share this knowledge.
I did already research on the web on this topic, but i couldn't find
satisfying answers, except one good looking approach on the ViM-website
with a perl-script called funnel.pl, which I couldn't make running on
my mac OSX 10.3.9 so far.

So this last post suggests that the VIM script (id 1048) is useful 
(which it is), but in fact the original thread creator already stated 
that it doesn't work for his platform.

The real answer is: there's probably no current 'similar way as with 
Emacs [and ESS]' to integrate R and VIM on mac OSX 10.3.9, but I bet you 
might find a better answer on the R-SIG-Mac email list here:

https://stat.ethz.ch/mailman/listinfo/r-sig-mac

So I hope this helps, Michael.

...

I point all this out to illuminate how hard it is to both convey a 
message in email and also to interpret a message in email, to the 
point that responses to questions on this list can really wander off on 
tangents (which is not necessarily a bad thing; I had no clue the R.vim 
script existed for Linux). I personally find benefit in reading both the 
R-help and R-devel mailing lists, and I commend and am thankful for all 
those email authors who convey their message in email with sincerity, 
honesty, etc.

There was a recent Wired article here:

http://www.wired.com/news/technology/0,70179-0.html

that cited a Journal of Personality and Social Psychology paper titled 
Egocentrism over e-mail: Can we communicate as well as we think?:

http://content.apa.org/journals/psp/89/6

which essentially points out the fact that determining the tone of an 
author's email is no better than chance, which I believe can lead to 
misinterpretation of message and meaning, and especially when authors 
use negative messages like RTFM (google it; you'll figure it out quickly).

-- 
Jeffrey Horner   Computer Systems Analyst School of Medicine
615-322-8606 Department of Biostatistics   Vanderbilt University

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Considering port of SAS application to R

2006-04-21 Thread Werner Wernersen
Now it sounds like a consensus that it is not advisable to do everything in R
but to use a database system and scripting language instead for the rough work.

Thanks for all of your suggestions!
  Werner
  

Steve Miller [EMAIL PROTECTED] wrote: Good suggestion. Multiple gigabytes
is stretching it with R. Use PostgreSQL, Python, and Python DBI database
connectivity to replace your SAS data step,
then use the RODBC package to import data into R convenience stores as
appropriate.
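
A minimal sketch of the RODBC step Steve mentions (the DSN name, table and
query below are hypothetical):

library(RODBC)
ch  <- odbcConnect("warehouse_dsn")   # ODBC data source pointing at the database
dat <- sqlQuery(ch, "SELECT id, x, y FROM results WHERE year = 2005")
odbcClose(ch)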

Steve Miller

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of bogdan romocea
Sent: Friday, April 21, 2006 7:59 AM
To: [EMAIL PROTECTED]
Cc: r-help
Subject: Re: [R] Considering port of SAS application to R

Forget about R for now and port the application to MySQL/PostgreSQL
etc, it is possible and worthwhile. In case you happen to use (and
really need) some SAS DATA STEP looping features you might be forced
to look into SQL cursors, otherwise the port should be (very)
straightforward.


 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Werner
 Wernersen
 Sent: Friday, April 21, 2006 7:09 AM
 To: r-help@stat.math.ethz.ch
 Subject: [R] Considering port of SAS application to R

 Hi there!

 I am considering porting a SAS application to R and I would
 like to hear your opinion if you think this is possible and
 worthwhile. SAS is mainly used to do data management and then
 to do some aggregations and simple computations on the data
 and to output a modified data set. The main problem I see is
 the size of the data file. As I have no access to SAS yet I
 cannot give real details but the SAS data file is about 7
 gigabytes large. (It's only the basic SAS system without any
 additional modules)

 What do you think, would a port to R be possible with
 reasonable effort? Is R able to handle that size of data? Or
 is R prepared to work together with some database system?

 Thanks for your thoughts!

 Best regards,
   Werner

   
 -

  [[alternative HTML version deleted]]

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide!
 http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html




-

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] AIC and numbers of parameters

2006-04-21 Thread Philip Stephens

Hi.  I'm fairly new to R and have a quick question regarding AIC, logLik 
and numbers of parameters.  I see that there has been some correspondence 
on this in the past but none of the threads seem to have been 
satisfactorily resolved.

I have been trying to use R to obtain AIC for fitted models and then to 
extrapolate to AICc.

For example, using simple x-y regression data, I fitted a linear regression 
[lm(y~x)] and an exponential curve [nls(y~A*exp(B*x))].  AICs reported for 
these models are based on degrees of freedom of 3 and 2, respectively.  It 
seems clear to me that the number of parameters in both cases is 3 (two 
coefficients plus the error term).  Consequently, I'm dubious about the 
values of AIC provided by R.  Equally, however, I'm reluctant to base my 
own calculations on logLik and my own estimates of K, without first 
determining why R uses the degrees of freedom it does.

If anyone can explain why R gives different dfs for two models each with 
two parameters, I should be very grateful.
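
One way to see exactly which df each fit is being assigned is to look at the
logLik objects directly; a self-contained sketch with simulated data (purely
illustrative, not the poster's data):

set.seed(1)
x <- 1:20
y <- 2 * exp(0.1 * x) + rnorm(20)
fit.lm  <- lm(y ~ x)
fit.nls <- nls(y ~ A * exp(B * x), start = list(A = 1, B = 0.1))
logLik(fit.lm)                  # the printed df is what AIC() uses
attr(logLik(fit.nls), "df")
AIC(fit.lm, fit.nls)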

Thanks,
Phil

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] lmer{lme4}, poisson family and residuals

2006-04-21 Thread Ben Bolker
Amelie LESCROEL lescroel_cebc at no-log.org writes:

 
 Hello,
 
 I’m trying to fit the following model:
 
 Dependent variable: MAXDEPTH (the maximum depth reached by a penguin during
 a given dive)
 
 Fixed effects: SUCCESSMN (an index of the “individual quality” of a bird),
 STUDYDAY (the day of the study, from -5 to 20, with 0=Dec 20), and the
 interaction SUCCESSMN*STUDYDAY
 
 Random effect: BIRD (the bird id, as each bird is performing several dives)
 

  It isn't immediately clear to me why the maximum depth should
be Poisson-distributed ... as R is trying to tell you, Poisson
distributions only make sense for integer data.  Your best bets
are probably (1) reconsider the error distribution, (2) perhaps
try using nlme instead of lmer -- it is more polished, and you
can use Pinheiro and Bates Mixed-Effects Models in S and S-PLUS
as a reference.

  Ben Bolker

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Re: [R] second try; writing user-defined GLM link function

2006-04-21 Thread Jessi Brown
Ok, I persuaded predict.glm to work with the default type (link)
instead of response. However, I  don't think it did what I expected
it to do, as the output (which should be log-odds, right?) leads to
non-sensical final daily survival rates (daily survival rate =
probability of surviving that particular interval). For example, for a
nest of age 67 days, the log-odds of survival are -53.88, leading to a
daily survival rate of essentially 0 - not at all what was observed!

I see two possibilities here. Either the predict function is not
working as expected (still not accounting for interval length), or
possibly something is off in my nest age temporal trend modeling
(predicted log-odds values are reasonable up to MeanAge=44 or so).

Anyone have any thoughts here?
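
For reference, the back-transformation described above from the link (log-odds)
scale to a daily survival probability is just the inverse logit:

plogis(6.01)     # about 0.998, a plausible daily survival rate
plogis(-53.88)   # effectively 0, as noted above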

 glm.24.pred <- predict(glm.24, newdata=vc.new)
 glm.24.pred
  1   2   3   4   5  
6   7
  6.0097208   5.5175350   5.0938348   4.7348110   4.4366542  
4.193   4.0077050
  8   9  10  11  12 
13  14
  3.8692940   3.7765131   3.7255530   3.7126045   3.7338582  
3.7855051   3.8637357
 15  16  17  18  19 
20  21
  3.9647409   4.0847113   4.2198379   4.3663111   4.5203220  
4.6780610   4.8357191
 22  23  24  25  26 
27  28
  4.9894870   5.134   5.2701150   5.3893566   5.4894710  
5.5666488   5.6170809
 29  30  31  32  33 
34  35
  5.6369580   5.6224707   5.5698100   5.4751664   5.3347309  
5.1446940   4.9012465
 36  37  38  39  40 
41  42
  4.6005793   4.2388830   3.8123484   3.3171662   2.7495272  
2.1056221   1.3816416
 43  44  45  46  47 
48  49
  0.5737766  -0.3217823  -1.3088443  -2.3912186  -3.5727145 
-4.8571413  -6.2483083
 50  51  52  53  54 
55  56
 -7.7500246  -9.3660995 -11.1003423 -12.9565623 -14.9385687
-17.0501707 -19.2951776
 57  58  59  60  61 
62  63
-21.6773987 -24.2006432 -26.8687203 -29.6854394 -32.6546097
-35.7800404 -39.0655408
 64  65  66  67
-42.5149201 -46.1319876 -49.9205526 -53.8844243


On 4/20/06, Prof Brian Ripley [EMAIL PROTECTED] wrote:
  glm.24.pred <- predict(glm.24, newdata=nestday, type="response", SE.fit=T)

 What is SE.fit?  The help says se.fit.  That _may_ be a problem.

 However, I think the real problem is that the link function argument
 includes a reference to vc.apfa$days that is appropriate for fitting, not
 prediction. One way out might be (untested)

 attach(va.apfa)
 glm.24 <- glm(formula = Success ~ NestHtZ + MeanAge + I(MeanAge^2) +
     I(MeanAge^3), family = logexposure(ExposureDays = days), data = vc.apfa)
 detach()
 attach(nestday)
 glm.24.pred <- predict(glm.24, newdata=nestday, type="response", SE.fit=T)
 detach()

 so that 'days' refers to the appropriate dataset both when fitting and
 predicting.

 (This is bending glm() to do something it was not designed to do, so some
 hoop-jumping is needed.)


 On Thu, 20 Apr 2006, Jessi Brown wrote:

  An update for all:
 
  Using the combined contributions from Mark and Dr. Ripley, I've been
  (apparently) successfully  formulating both GLM's and GLMM's (using
  the MASS function glmmPQL) analyzing my nest success data. The beta
  parameter estimates look reasonable and the top models resemble those
  from earlier analyses using a different nest survival analysis
  approach.
 
  However, I've now run into problems when trying to predict the daily
  survival rates from fitted models. For example, for a model
  considering nest height (NestHtZ) and nest age effects (MeanAge and
  related terms; there is an overall cubic time trend in this model), I
  tried to  predict the daily survival rate for each day out of a 67 day
  nest cycle (so MeanAge is a vector of 1 to 67) with mean nest height
  (also a vector 67 rows in length; both comprise the matrix nestday).
  Here's what happens:
 
  summary(glm.24)
 
  Call:
  glm(formula = Success ~ NestHtZ + MeanAge + I(MeanAge^2) + I(MeanAge^3),
 family = logexposure(ExposureDays = vc.apfa$days), data = vc.apfa)
 
  Deviance Residuals:
 Min   1Q   Median   3Q  Max
  -3.3264  -1.2341   0.6712   0.8905   1.5569
 
  Coefficients:
Estimate Std. Error z value Pr(|z|)
  (Intercept)   6.5742015  1.7767487   3.700 0.000215 ***
  NestHtZ   0.6205444  0.2484583   2.498 0.012504 *
  MeanAge  -0.6018978  0.2983656  -2.017 0.043662 *
  I(MeanAge^2)  0.0380521  0.0152053   2.503 0.012330 *
  I(MeanAge^3) -0.0006349  0.0002358  -2.693 0.007091 **
  ---
  Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
 
  (Dispersion parameter for binomial family taken to be 

Re: [R] Parallel computing with the snow package: external file I/O possible?

2006-04-21 Thread Martin Morgan
Waichler, Scott R [EMAIL PROTECTED] writes:

 genoud()) should be independent.  However, in the code below, the
 directory created by each node has the same random number in its name.

This is probably because 'random' numbers are generated starting from
a 'seed', the seed is determined (by default) from the system time,
and the system time on the two nodes is identical. The same seed is
being used, hence the same random number sequence.

See http://www.stat.uiowa.edu/~luke/R/cluster/cluster.html
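
A minimal workaround sketch (assuming the cluster object cl from the code
quoted below): push a different seed to each node before doing any
random-number work. The parallel RNG support described at the link above is
the more principled fix.

seeds <- c(101, 202)                              # one arbitrary seed per node
clusterApply(cl, seeds, function(s) set.seed(s))
clusterCall(cl, function() sample(1:1e6, 1))      # now differs between nodes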

Martin

 I was expecting the contents of fun() or fn() to be independent from all
 other executions of the same function.  What am I missing here?

  #  Begin code
  library(snow)
  setDefaultClusterOptions(outfile="/tmp/cluster1")
  setDefaultClusterOptions(master="moab")
  cl <- makeCluster(c("moab", "escalante"), type="SOCK")

  # Define base pathname for output from my.test()
  base.dir <- "~/"

  # Define a function that is called by clusterCall()
  my.test <- function(base.dir) {
    this.host <- as.character(system("hostname", intern=T))
    this.rnd <- sample(1:1e6, 1)
    test.file <- paste(sep="", base.dir, this.host, "_", this.rnd)
    file.create(test.file)
  }  # end my.test()

  clusterCall(cl, my.test, base.dir)
  stopCluster(cl)
  #  End code

 For example, the files moab_65835 and escalante_65835 are created.

 Regards,
 Scott Waichler
 Pacific Northwest National Laboratory
 [EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] question in fitting AR model in time series?

2006-04-21 Thread Michael
How to force some AR coefficients to be ZERO and only allow the function
ar in R to change the other coefficients?

Thanks a lot!

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Hmisc + summarize + quantile: Why only quantiles for first variable in data frame?

2006-04-21 Thread Kim Milferstedt
Hi Frank Harrell,

thanks for the response. I understand your comment but I wasn't able 
to find (or recognize) an answer on how to tell FUN explicitly to 
use matrix operations. Would you be so kind and give me an example?

Thanks so much,

Kim


Please read the documentation and see the examples.  The first 
argument to summarize is a matrix or vector and if a matrix, FUN 
must use matrix operations if you want column-by-column results.

FH
--
Frank E Harrell Jr   Professor and Chair   School of Medicine
  Department of Biostatistics   Vanderbilt University

Kim Milferstedt wrote:
Hi,
I'm working on a data set that contains a couple of factors and a 
number of dependent variables. From all of these dependent 
variables I would like to calculate mean, standard deviation and 
quantiles. With the function FUN I get all the means and stdev that 
I want but quantiles are only calculated for the first of the 
dependent variables (column 8 in the summarize command). What do I 
have to do differently in order to get all the quantiles that I want?
Thanks,
Kim
sgldm2 <- read.table("E:/analysistemp/060412_test_data2.txt", header=T)
attach(sgldm2)
names(sgldm2)
FUN <- function(x) c(Mean=mean(x, na.rm=TRUE),
                     STDEV=sd(x, na.rm=TRUE),
                     Quantile=quantile(x, probs=c(0.25, 0.50, 0.75), na.rm=TRUE))
ordering <- llist(time_h_f, Distance_f)
resALL <- summarize(sgldm2[,8:10], ordering, FUN)
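
A sketch of a column-wise FUN along the lines Frank describes (whether
summarize() wants exactly this return shape may depend on the Hmisc version;
the point is the apply(x, 2, ...) idiom):

FUN <- function(x) {
    if (!is.matrix(x)) x <- as.matrix(x)
    apply(x, 2, function(col)
          c(Mean  = mean(col, na.rm = TRUE),
            STDEV = sd(col, na.rm = TRUE),
            quantile(col, probs = c(0.25, 0.50, 0.75), na.rm = TRUE)))
}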

__

Kim Milferstedt
University of Illinois at Urbana-Champaign
Department of Civil and Environmental Engineering
4125 Newmark Civil Engineering Building
205 North Mathews Avenue MC 250
Urbana, IL 61801
USA
phone: (001) 217 333-9663
fax: (001) 217 333-6968
email: [EMAIL PROTECTED]
http://cee.uiuc.edu/research/morgenroth/index.asp
___

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] question in fitting AR model in time series?

2006-04-21 Thread Rolf Turner

See argument ``fixed'' in help(arima).
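
For the question above, that might look like this (simulated series; the fixed
vector runs ar1, ar2, ar3, intercept, and NA means "estimate this one"):

set.seed(42)
x <- arima.sim(model = list(ar = c(0.5, 0, 0.2)), n = 200)
fit <- arima(x, order = c(3, 0, 0),
             fixed = c(NA, 0, NA, NA),   # pin the second AR coefficient at 0
             transform.pars = FALSE)     # needed when AR terms are fixed
fit$coef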

cheers,

Rolf Turner
[EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Creat new column based on condition

2006-04-21 Thread Sachin J
Hi,
   
  How can I accomplish this task in R?
   
V1
10
20
30
10
10
20
 
  Create a new column V2 such that: 
  If V1 = 10 then V2 = 4
  If V1 = 20 then V2 = 6
  If V1 = 30 then V2 = 10
   
  So the O/P looks like this
   
V1  V2
10   4
20   6
30  10
10   4
10   4  
20   6
   
  Thanks in advance.
   
  Sachin

__



[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Creat new column based on condition

2006-04-21 Thread Gabor Grothendieck
Try:

V1 <- matrix(c(10, 20, 30, 10, 10, 20), nc = 1)

V2 <- 4 * (V1 == 10) + 6 * (V1 == 20) + 10 * (V1 == 30)

or

V2 <- matrix(c(4, 6, 10)[V1/10], nc = 1)

On 4/21/06, Sachin J [EMAIL PROTECTED] wrote:
 Hi,

  How can I accomplish this task in R?

V1
10
20
30
10
10
20

  Create a new column V2 such that:
  If V1 = 10 then V2 = 4
  If V1 = 20 then V2 = 6
  V1 =   30 then V2 = 10

  So the O/P looks like this

V1  V2
10   4
20   6
30  10
10   4
10   4
20   6

  Thanks in advance.

  Sachin

 __



[[alternative HTML version deleted]]

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Creat new column based on condition

2006-04-21 Thread Sachin J
Hi Gabor,
   
  The first one works fine. Just out of curiosity, in the second solution: I don't
want to create a matrix. I want to add a new column to the existing dataframe
(i.e. V2 based on the values in V1). Is there a way to do it?
   
  TIA
  Sachin
   
  

Gabor Grothendieck [EMAIL PROTECTED] wrote:
  Try:

V1 - matrix(c(10, 20, 30, 10, 10, 20), nc = 1)

V2 - 4 * (V1 == 10) + 6 * (V1 == 20) + 10 * (V1 == 30)

or

V2 - matrix(c(4, 6, 10)[V1/10], nc = 1)

On 4/21/06, Sachin J wrote:
 Hi,

 How can I accomplish this task in R?

 V1
 10
 20
 30
 10
 10
 20

 Create a new column V2 such that:
 If V1 = 10 then V2 = 4
 If V1 = 20 then V2 = 6
 V1 = 30 then V2 = 10

 So the O/P looks like this

 V1 V2
 10 4
 20 6
 30 10
 10 4
 10 4
 20 6

 Thanks in advance.

 Sachin

 __



 [[alternative HTML version deleted]]

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html




-

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Creat new column based on condition

2006-04-21 Thread Gabor Grothendieck
DF <- data.frame(V1 = c(10, 20, 30, 10, 10, 20))
DF$V2 <- with(DF, 4 * (V1 == 10) + 6 * (V1 == 20) + 10 * (V1 == 30))
DF$V3 <- c(4, 6, 10)[DF$V1/10]

or

DF <- data.frame(V1 = c(10, 20, 30, 10, 10, 20))
DF <- transform(DF, V2 = 4 * (V1 == 10) + 6 * (V1 == 20) + 10 * (V1 == 30),
                V3 = c(4, 6, 10)[V1/10])

On 4/21/06, Sachin J [EMAIL PROTECTED] wrote:

 Hi Gabor,

 The first one works fine. Just out of curiosity, in second solution: I dont
 want to create a matrix. I want to add a new column to the existing
 dataframe (i.e. V2 based on the values in V1). Is there a way to do it?

 TIA
 Sachin



 Gabor Grothendieck [EMAIL PROTECTED] wrote:

 Try:

 V1 - matrix(c(10, 20, 30, 10, 10, 20), nc = 1)

 V2 - 4 * (V1 == 10) + 6 * (V1 == 20) + 10 * (V1 == 30)

 or

 V2 - matrix(c(4, 6, 10)[V1/10], nc = 1)

 On 4/21/06, Sachin J wrote:
  Hi,
 
  How can I accomplish this task in R?
 
  V1
  10
  20
  30
  10
  10
  20
 
  Create a new column V2 such that:
  If V1 = 10 then V2 = 4
  If V1 = 20 then V2 = 6
  V1 = 30 then V2 = 10
 
  So the O/P looks like this
 
  V1 V2
  10 4
  20 6
  30 10
  10 4
  10 4
  20 6
 
  Thanks in advance.
 
  Sachin
 
  __
 
 
 
  [[alternative HTML version deleted]]
 
  __
  R-help@stat.math.ethz.ch mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide!
 http://www.R-project.org/posting-guide.html
 




 



__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] how to control the data type

2006-04-21 Thread Pontarelli, Brett
You can also try:

round(runif(1)*10^4)/10^4

--Brett

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Dirk Eddelbuettel
Sent: Thursday, April 20, 2006 8:18 PM
To: zhongmiao wang
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] how to control the data type


On 20 April 2006 at 22:00, zhongmiao wang wrote:
| Hello:
| I am generating a random number with rnorm(1). The generated number 
| has 8 decimals. I don't want so many decimals. How to control the 
| number of decimals in R?

You must have installed the professional version of R, just downgrade to the 
home version which has only four digits precision.

Seriously, you're equating display precision with the representation -- which is 
not correct.  Consider:

> a <- 1/3
> print(a)
[1] 0.3333333
> print(a, digits=18)
[1] 0.333333333333333315

where the second print statement forces a display beyond the number of
significant digits.   So with that, your rnorm(1) result will also have more
than the eight digits you saw earlier:

> print(rnorm(1), digits=18)
[1] -0.201840514213267291

Hope this helps,  Dirk

--
Hell, there are no rules here - we're trying to accomplish something. 
  -- Thomas A. Edison

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R debugging options

2006-04-21 Thread John Fox
Dear Spencer,

I wasn't aware of getFunctions(), though I've now taken a look at it. It's
pretty similar to listFunctions() (from my email to the list), except that
getFunctions() uses exists() rather than is.function() to test whether an
object is a function. There must be a reason for this, but I can't think
what it is, since in both cases the vector of object names shouldn't include
nonexistent objects.
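
As a quick illustration of the is.function() style of test (a sketch; run it in
whatever workspace you want to inspect):

objs <- objects(envir = .GlobalEnv)
names(which(sapply(objs, function(n) is.function(get(n, envir = .GlobalEnv)))))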

Regards,
 John


John Fox
Department of Sociology
McMaster University
Hamilton, Ontario
Canada L8S 4M4
905-525-9140x23604
http://socserv.mcmaster.ca/jfox 
 

 -Original Message-
 From: Spencer Graves [mailto:[EMAIL PROTECTED] 
 Sent: Friday, April 21, 2006 11:21 AM
 To: John Fox
 Cc: 'Larry Howe'; r-help@stat.math.ethz.ch; Philippe Grosjean
 Subject: Re: [R] R debugging options
 
 Regarding a function that lists functions, have you 
 considered getFunctions in library(svIDE)?  You need to 
 provide the argument, as 
 in getFunctions(1);   getFunctions() returns an error message.
 
 Beyond this, the objects function in S-Plus (at 
 least version 6.2) has a classes argument, which the R 
 2.2.1 implementation does not have.  It doesn't look like it 
 would be too difficult to add such an argument to objects 
 in R, but I have not been in a position to volunteer to do 
 it, and without that, I didn't feel it was appropriate for me 
 to suggest it.
 
 hope this helps,
 spencer graves
 
 John Fox wrote:
 
  Dear Larry,
  
  I'm not aware of an existing function that lists functions, 
 but here's 
  a simple solution:
  
  listFunctions - function(all.names=FALSE, envir=.GlobalEnv){
  # all.names=TRUE: include names beginning with .
  # envir: environment to search
  Objects - objects(envir, all.names=all.names)
  if (length(Objects) == 0) Objects
  else names(which(sapply(Objects, 
  function(object) is.function(eval(parse(text=object),
  envir=envir)
  }
  
  Getting mtrace() to use the function names returned by 
 listFunctions() 
  is a bit tricky, because of the way mtrace() evaluates its 
 arguments. 
  You could do something like the following:
  
  for(f in listFunctions()) mtrace(char.fname=f)
  
  Perhaps someone else knows of an existing or better solution.
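  
  One such alternative, sketched here on the assumption that looking objects
  up with get() is acceptable (the name listFuns is made up for illustration):
  
  listFuns <- function(envir = .GlobalEnv, all.names = FALSE) {
    objs <- ls(envir = envir, all.names = all.names)
    # keep only the names whose objects are functions
    objs[vapply(objs, function(nm) is.function(get(nm, envir = envir)), logical(1))]
  }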
  
  I hope this helps,
   John
  
  
  John Fox
  Department of Sociology
  McMaster University
  Hamilton, Ontario
  Canada L8S 4M4
  905-525-9140x23604
  http://socserv.mcmaster.ca/jfox
  
  
  
 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Larry Howe
 Sent: Tuesday, April 18, 2006 12:46 PM
 To: r-help@stat.math.ethz.ch
 Subject: Re: [R] R debugging options
 
 On Monday April 17 2006 21:08, Francisco J. Zagmutt wrote:
 
  RSiteSearch("debug") or RSiteSearch("debugging") will give you a lot
  of relevant information.  I personally use library(debug) extensively
  and it should do all the tasks you asked about.  There is a nice article
  describing the debug library in the 2003/3 issue of R News:
  http://cran.r-project.org/doc/Rnews/Rnews_2003-3.pdf
 
 Cheers
 
 Francisco
 
 Wow! That is a great package. I think it does all I need.
 
 Is there a way to turn on debugging for all loaded functions? 
 My source file contains many functions and I would prefer 
 not to have 
 to mtrace() each one.
 Something like
 
 
 mtrace(how_do_I_get_a_list_of_all_loaded_functions)
 
 ?
 
 Larry
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html
  
  
  __
  R-help@stat.math.ethz.ch mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Creat new column based on condition

2006-04-21 Thread Duncan Murdoch
On 4/21/2006 4:05 PM, Sachin J wrote:
 Hi,

   How can I accomplish this task in R?

 V1
 10
 20
 30
 10
 10
 20
  
   Create a new column V2 such that: 
   If V1 = 10 then V2 = 4
   If V1 = 20 then V2 = 6
   V1 =   30 then V2 = 10

Gabor's solution is fine; something that looks a little bit more like 
your code is this:

  V2 <- NA
  V2 <- ifelse( V1 == 10, 4, V2)
  V2 <- ifelse( V1 == 20, 6, V2)
  V2 <- ifelse( V1 == 30, 10, V2)

or

  V2 <- ifelse( V1 == 10, 4,
          ifelse( V1 == 20, 6,
            ifelse( V1 == 30, 10, NA )))

(where the NA is to handle any unexpected case where V1 isn't 10, 20 or 
30).  My preference would be to use just one assignment, and if I was 
sure 10, 20 and 30 were the only possibilities, would use

  V2 <- ifelse( V1 == 10, 4,
          ifelse( V1 == 20, 6, 10 ))

Duncan Murdoch

   So the O/P looks like this

 V1  V2
 10   4
 20   6
 30  10
 10   4
 10   4  
 20   6

   Thanks in advance.

   Sachin
 
 __
 
 
 
   [[alternative HTML version deleted]]
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Hmisc + summarize + quantile: Why only quantiles for first variable in data frame?

2006-04-21 Thread Frank E Harrell Jr
Kim Milferstedt wrote:
 Hi Frank Harrell,
 
 thanks for the response. I understand your comment but I wasn't able to 
 find (or recognize) an answer on how to tell FUN explicitely to use 
 matrix operations. Would you be so kind and give me an example?
 
 Thanks so much,
 
 Kim
 

See http://biostat.mc.vanderbilt.edu/SasByMeansExample plus an example 
in the help file for summary.formula in Hmisc which uses the apply 
function.  summary.formula and summarize are similar in the use of FUN 
(which summary.formula unfortunately calls 'fun').
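
A hedged sketch, not taken from the original posts, of what a column-wise FUN
for summarize() might look like; it assumes the first argument arrives as a
matrix with named columns and that a plain named vector should be returned:

FUN <- function(x) {
  x <- as.matrix(x)
  one.col <- function(col) c(Mean  = mean(col, na.rm = TRUE),
                             STDEV = sd(col, na.rm = TRUE),
                             quantile(col, probs = c(.25, .5, .75), na.rm = TRUE))
  stats <- apply(x, 2, one.col)              # statistics-by-columns matrix
  nm <- outer(rownames(stats), colnames(stats), paste, sep = ".")
  setNames(as.vector(stats), as.vector(nm))  # flatten, column by column
}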

Frank

 
 Please read the documentation and see the examples.  The first 
 argument to summarize is a matrix or vector and if a matrix, FUN must 
 use matrix operations if you want column-by-column results.

 FH
 -- 
 Frank E Harrell Jr   Professor and Chair   School of Medicine
  Department of Biostatistics   Vanderbilt University
 
 
 Kim Milferstedt wrote:

 Hi,
 I'm working on a data set that contains a couple of factors and a 
 number of dependent variables. From all of these dependent variables 
 I would like to calculate mean, standard deviation and quantiles. 
 With the function FUN I get all the means and stdev that I want but 
 quantiles are only calculated for the first of the dependent 
 variables (column 8 in the summarize command). What do I have to do 
 differently in order to get all the quantiles that I want?
 Thanks,
 Kim
 sgldm2 <- read.table("E:/analysistemp/060412_test_data2.txt", 
 header=T)
 attach(sgldm2)
 names(sgldm2)
 FUN <- function(x) c(Mean=mean(x,na.rm=TRUE), 
 STDEV=sd(x,na.rm=TRUE), Quantile=quantile(x, probs= 
 c(0.25,0.50,0.75),na.rm=TRUE))
 ordering <- llist(time_h_f, Distance_f)
 resALL  <- summarize(sgldm2[,8:10], ordering, FUN)


 __

 Kim Milferstedt
 University of Illinois at Urbana-Champaign
 Department of Civil and Environmental Engineering
 4125 Newmark Civil Engineering Building
 205 North Mathews Avenue MC 250
 Urbana, IL 61801
 USA
 phone: (001) 217 333-9663
 fax: (001) 217 333-6968
 email: [EMAIL PROTECTED]
 http://cee.uiuc.edu/research/morgenroth/index.asp
 ___
 
 
 


-- 
Frank E Harrell Jr   Professor and Chair   School of Medicine
  Department of Biostatistics   Vanderbilt University

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Creat new column based on condition

2006-04-21 Thread Peter Dalgaard
Duncan Murdoch [EMAIL PROTECTED] writes:

 On 4/21/2006 4:05 PM, Sachin J wrote:
  Hi,
 
How can I accomplish this task in R?
 
  V1
  10
  20
  30
  10
  10
  20
   
Create a new column V2 such that: 
If V1 = 10 then V2 = 4
If V1 = 20 then V2 = 6
V1 =   30 then V2 = 10
 
 Gabor's solution is fine; something that looks a little bit more like 
 your code is this:
 
    V2 <- NA
    V2 <- ifelse( V1 == 10, 4, V2)
    V2 <- ifelse( V1 == 20, 6, V2)
    V2 <- ifelse( V1 == 30, 10, V2)
 
 or
 
    V2 <- ifelse( V1 == 10, 4,
            ifelse( V1 == 20, 6,
              ifelse( V1 == 30, 10, NA )))
 
 (where the NA is to handle any unexpected case where V1 isn't 10, 20 or 
 30).  My preference would be to use just one assignment, and if I was 
 sure 10, 20 and 30 were the only possibilities, would use
 
    V2 <- ifelse( V1 == 10, 4,
            ifelse( V1 == 20, 6, 10 ))
 
 Duncan Murdoch


 I think I'd go for something like

   V2 <- c(4, 6, 10)[factor(V1, levels=c(10, 20, 30))]
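
  A quick check on the data from the thread (an added illustration, not part
  of the original reply): the factor is coerced to its integer codes when used
  as an index, so 10, 20, 30 select positions 1, 2, 3 of c(4, 6, 10).

   V1 <- c(10, 20, 30, 10, 10, 20)
   V2 <- c(4, 6, 10)[factor(V1, levels = c(10, 20, 30))]
   V2
  [1]  4  6 10  4  4  6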



So the O/P looks like this
 
  V1  V2
  10   4
  20   6
  30  10
  10   4
  10   4  
  20   6
 
Thanks in advance.
 
Sachin
  
  __
  
  
  
  [[alternative HTML version deleted]]
  
  __
  R-help@stat.math.ethz.ch mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide! 
  http://www.R-project.org/posting-guide.html
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
 

-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Creat new column based on condition

2006-04-21 Thread Frank E Harrell Jr
Duncan Murdoch wrote:
 On 4/21/2006 4:05 PM, Sachin J wrote:
 
Hi,
   
  How can I accomplish this task in R?
   
V1
10
20
30
10
10
20
 
  Create a new column V2 such that: 
  If V1 = 10 then V2 = 4
  If V1 = 20 then V2 = 6
  V1 =   30 then V2 = 10
 
 
 Gabor's solution is fine; something that looks a little bit more like 
 your code is this:
 
   V2 <- NA
   V2 <- ifelse( V1 == 10, 4, V2)
   V2 <- ifelse( V1 == 20, 6, V2)
   V2 <- ifelse( V1 == 30, 10, V2)

or

V2 <- 4*(V1==10) + 6*(V1==20) + 10*(V1==30)
V2[V2==0] <- NA

Frank

 
 or
 
   V2 <- ifelse( V1 == 10, 4,
           ifelse( V1 == 20, 6,
             ifelse( V1 == 30, 10, NA )))
 
 (where the NA is to handle any unexpected case where V1 isn't 10, 20 or 
 30).  My preference would be to use just one assignment, and if I was 
 sure 10, 20 and 30 were the only possibilities, would use
 
   V2 <- ifelse( V1 == 10, 4,
           ifelse( V1 == 20, 6, 10 ))
 
 Duncan Murdoch
 
   
  So the O/P looks like this
   
V1  V2
10   4
20   6
30  10
10   4
10   4  
20   6
   
  Thanks in advance.
   
  Sachin

-- 
Frank E Harrell Jr   Professor and Chair   School of Medicine
  Department of Biostatistics   Vanderbilt University

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Creat new column based on condition

2006-04-21 Thread Gabor Grothendieck
Here is a compact solution using approx:

  DF$V2 <- approx(c(10, 20, 30), c(4, 6, 10), DF$V1)$y
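
A quick check on the data from the thread (an added note, not part of the
original message): because these V1 values coincide with the interpolation
knots, approx() returns the target values exactly; values between knots would
be linearly interpolated instead.

  DF <- data.frame(V1 = c(10, 20, 30, 10, 10, 20))
  DF$V2 <- approx(c(10, 20, 30), c(4, 6, 10), DF$V1)$y
  DF$V2
 [1]  4  6 10  4  4  6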


On 4/21/06, Gabor Grothendieck [EMAIL PROTECTED] wrote:
 DF <- data.frame(V1 = c(10, 20, 30, 10, 10, 20))
 DF$V2 <- with(DF, 4 * (V1 == 10) + 6 * (V1 == 20) + 10 * (V1 == 30))
 DF$V3 <- c(4, 6, 10)[DF$V1/10]

 or

 DF <- data.frame(V1 = c(10, 20, 30, 10, 10, 20))
 DF <- transform(DF, V2 = 4 * (V1 == 10) + 6 * (V1 == 20) + 10 * (V1 == 30),
  V3 = c(4, 6, 10)[V1/10])

 On 4/21/06, Sachin J [EMAIL PROTECTED] wrote:
 
  Hi Gabor,
 
  The first one works fine. Just out of curiosity, in second solution: I dont
  want to create a matrix. I want to add a new column to the existing
  dataframe (i.e. V2 based on the values in V1). Is there a way to do it?
 
  TIA
  Sachin
 
 
 
  Gabor Grothendieck [EMAIL PROTECTED] wrote:
 
  Try:
 
  V1 <- matrix(c(10, 20, 30, 10, 10, 20), nc = 1)
 
  V2 <- 4 * (V1 == 10) + 6 * (V1 == 20) + 10 * (V1 == 30)
 
  or
 
  V2 <- matrix(c(4, 6, 10)[V1/10], nc = 1)
 
  On 4/21/06, Sachin J wrote:
   Hi,
  
   How can I accomplish this task in R?
  
   V1
   10
   20
   30
   10
   10
   20
  
   Create a new column V2 such that:
   If V1 = 10 then V2 = 4
   If V1 = 20 then V2 = 6
   V1 = 30 then V2 = 10
  
   So the O/P looks like this
  
   V1 V2
   10 4
   20 6
   30 10
   10 4
   10 4
   20 6
  
   Thanks in advance.
  
   Sachin
  
   __
  
  
  
   [[alternative HTML version deleted]]
  
   __
   R-help@stat.math.ethz.ch mailing list
   https://stat.ethz.ch/mailman/listinfo/r-help
   PLEASE do read the posting guide!
  http://www.R-project.org/posting-guide.html
  
 
 
 
 
  
 
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] online tutorials

2006-04-21 Thread John Kane
For real newbies like myself, good tutorials are hard to find, and much of the 
documentation is completely opaque at first. :)  However I would recommend 
http://www.math.ilstu.edu/dhkim/Rstuff/Rtutor.html as a good place to start 
with the real basics.  It is a bit like the sample session in the Intro to R, 
but I found it a bit more user friendly.

  Another source that I have just found is 
http://pj.freefaculty.org/R/Rtips.html, which has some good basic stuff but it 
is not a tutorial.


- Original Message 
From: Maxon, Matthew [EMAIL PROTECTED]
To: R-help@stat.math.ethz.ch
Sent: Thursday, April 20, 2006 12:24:39 PM
Subject: [R] online tutorials

I work for an investment group with a very extensive training program, and
we are having our new hires take a statistics course at University of
Chicago where they have to complete some assignments with R.  I was
wondering if there are any online tutorials that exist where we could
get our participants comfortable with R before the class itself?  I
appreciate any help at all.
 
Thanks,
Matt Maxon
Learning & Development
 
Matt Maxon
Citadel Investment Group, L.L.C.
312.395.2517 - office

 

 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] how to control the data type

2006-04-21 Thread John Kane
Have a look at  ?round for one way. 

- Original Message 
From: zhongmiao wang [EMAIL PROTECTED]
To: r-help@stat.math.ethz.ch
Sent: Thursday, April 20, 2006 11:00:25 PM
Subject: [R] how to control the data type

Hello:
I am generating a random number with rnorm(1). The generated number
has 8 decimals. I don't want so many decimals. How to control the
number of decimals in R?
Thanks!

Zhongmiao Wang

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Feeding a sequence to a function

2006-04-21 Thread Frank Black
Dear all,

I have written a function that takes two arguments, and I would like to feed 
it all pairs of values from -10 to 10.  The following code works for all 
pairs of values between 1 and 10, but given R's indexing, I can't extend it 
back to cover the zeros and the negative values.  I'd appreciate any 
suggestions for a work-around.

Thanks!  Fred

Tau <- matrix(0,10,10)
S2 <- matrix(0,10,10)
R2w <- matrix(0,10,10)

for (i in 1:10) {
 for (j in 1:10) {
  out <- function(i,j)
  Tau[i,j] <- [EMAIL PROTECTED]
  S2[i,j] <- [EMAIL PROTECTED]
  R2w[i,j] <- [EMAIL PROTECTED]
 }
}
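
One common workaround, sketched here with a hypothetical stand-in f() for the
real two-argument function (none of these names come from the original post):
map the values -10:10 onto matrix positions 1:21, and use dimnames so results
can be looked up by value.

vals <- -10:10
Tau  <- matrix(0, length(vals), length(vals), dimnames = list(vals, vals))

f <- function(i, j) list(tau = i + j)   # placeholder computation only

for (i in seq_along(vals)) {
  for (j in seq_along(vals)) {
    Tau[i, j] <- f(vals[i], vals[j])$tau
  }
}

Tau["-10", "10"]   # look up the entry for i = -10, j = 10 by name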

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] plotting order in a barchart.: two problems

2006-04-21 Thread John Kane
I am new at R and have a simple problem, I think. I
have a matrix:   

     [,1]  [,2]  [,3]
a     .5     2    .4
b      4   3.4     4
c      4     4     2

I want a barchart with the [,1] column plotted as the
left-most group on a vertical bar chart or as the top
group on a horizontal chart.  However the plotting
order is , [,3] [,2] [,1].  

I also am having a problem getting the rows to plot on
a vertical chart in the order : a , b, c .  I achieved
this in a horizontal chart by doing a cbind(c,b,a) but
for some reason I do not seem to be getting this to
work for the vertical chart.  

Nothing in Sort() or Order() seem particularly
appropriate ( or I don't understand what the help is
telling me).  

Any suggestions would be appreciated.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] how to control the data type

2006-04-21 Thread Berton Gunter
print(rnorm(1),digits=3)

The key is understanding the difference between the value and what is
printed automatically with the default number of digits given by the
options() value currently in effect.

?options  for digits
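
For example (an added illustration of the options() route; no real output is
reproduced here):

  old <- options(digits = 4)   # changes only how values are printed by default
  rnorm(1)
  options(old)                 # restore the previous setting
  signif(rnorm(1), 4)          # this, by contrast, rounds the stored value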

-- Bert Gunter
Genentech Non-Clinical Statistics
South San Francisco, CA
 

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of John Kane
 Sent: Friday, April 21, 2006 3:59 PM
 To: zhongmiao wang; r-help@stat.math.ethz.ch
 Subject: Re: [R] how to control the data type
 
 Have a look at  ?round for one way. 
 
 - Original Message 
 From: zhongmiao wang [EMAIL PROTECTED]
 To: r-help@stat.math.ethz.ch
 Sent: Thursday, April 20, 2006 11:00:25 PM
 Subject: [R] how to control the data type
 
 Hello:
 I am generating a random number with rnorm(1). The generated number
 has 8 decimals. I don't want so many decimals. How to control the
 number of decimals in R?
 Thanks!
 
 Zhongmiao Wang
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] a question on df of linear model

2006-04-21 Thread Spencer Graves
  Short answer:  3=8-5.

  Longer answer:

  (1) mf4Orth.lm:  The degrees of freedom for the output of lm is 5 = 
the 4 linear regression parameters + the estimated residual standard 
deviation.

  (2) fm2Orth.lme:  The lme fit with 8 degrees of freedom adds to 
that model random = ~I(age-11)|Subject, which estimated 3 additional 
parameters:  The standard deviation of the adjustments for each subject 
to the intercept and the I(age-11) term in the model plus the 
correlation between those two random adjustments.

  (3) The naive, traditional theory for nested hypotheses would say 
that 2*log(likelihood ratio) is approximately distributed as chi-square 
with degrees of freedom = the number of additional parameters estimated 
in the larger model but fixed in the smaller one.  This number is 3 in 
this case.  However, the traditional chi-square approximation to the 
distribution of 2*log(likelihood ratio) is known NOT to work well in 
this case because 2 of the 3 additional parameters estimated are fixed 
at a boundary in the smaller model, and the third one becomes 
meaningless in that case.  To understand this issue better and to see 
how to get around it, please read ch. 2 of Pinheiro and Bates.
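
A hedged sketch of the two fits being compared (the Orthodont data ship with
nlme; the formulas are assumed to match the book's example, and method = "ML"
is used so the two log-likelihoods are directly comparable):

library(nlme)

## lm: 4 fixed-effect coefficients + residual SD             -> 5 df
fm.lm  <- lm(distance ~ Sex * I(age - 11), data = Orthodont)

## lme: the same 4 fixed effects + 2 random-effect SDs
##      + their correlation + residual SD                    -> 8 df
fm.lme <- lme(distance ~ Sex * I(age - 11),
              random = ~ I(age - 11) | Subject,
              data = Orthodont, method = "ML")

anova(fm.lme, fm.lm)   # reports df 8 and 5; the LR test uses 8 - 5 = 3 df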

  Does this answer the question?
  hope this helps.
  spencer graves

Joe Moore wrote:

 Dear R-users:
 
 On page 155 of Mixed-effects Models in S and S-Plus, the degree of 
 freedoms of the anova comparison of lme and lm are 8 and 5.
 
 But when I use the following SAS code:
 proc glm data=ortho2;
class gender;
model distance = age|gender / solution ;
 run;
 
 The df is 3.
 
 Could you please explain this to me?
 
 Thanks
 
 Joe
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Minor documentation issue

2006-04-21 Thread Vivek Satsangi
I looked at ?seq

--
-- Vivek Satsangi
Rochester, NY USA

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Minor documentation issue

2006-04-21 Thread Vivek Satsangi
(Sorry about the last email which was incomplete. I hit 'send' accidentally).

I looked at ?seq. One of the forms given under Usage is seq(from).
This would be the form used if seq is called with only one argument.
However, this should actually say seq(to). For example,
 seq(1)
[1] 1
 seq(3)
[1] 1 2 3

Cheers,
--
-- Vivek Satsangi
Rochester, NY USA

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Minor documentation issue

2006-04-21 Thread Gabor Grothendieck
It is matched by the first argument, which is called 'from', even though,
as the documentation indicates under the explanation of the last
form, it refers to the ending value.  Note, for example, that
seq(from = 3) gives 1:3 and not 3:1.

Also the help file does say:

The interpretation of the unnamed arguments of 'seq' is _not_
 standard, ...

On 4/21/06, Vivek Satsangi [EMAIL PROTECTED] wrote:
 (Sorry about the last email which was incomplete. I hit 'send' accidentally).

 I looked at ?seq. One of the forms given under Usage is seq(from).
 This would be the form used if seq is called with only one argument.
 However, this should actually say seq(to). For example,
  seq(1)
 [1] 1
  seq(3)
 [1] 1 2 3

 Cheers,
 --
 -- Vivek Satsangi
 Rochester, NY USA

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Minor documentation issue

2006-04-21 Thread Duncan Murdoch
On 4/21/2006 9:26 PM, Vivek Satsangi wrote:
 (Sorry about the last email which was incomplete. I hit 'send' accidentally).
 
 I looked at ?seq. One of the forms given under Usage is seq(from).
 This would be the form used if seq is called with only one argument.
 However, this should actually say seq(to). For example,
 seq(1)
 [1] 1
 seq(3)
 [1] 1 2 3

Try this:

  debug(seq.default)
  seq(3)

You'll then drop into the browser at the start of executing the 
seq.default function.  You can print the value of 'to' and of 'from', 
and you'll see this:

Browse[1] to
[1] 1
Browse[1] from
[1] 3

So the documentation is correct, even if the usage in this case is the 
reverse of English usage.
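
The same thing can be seen without the debugger, since the first formal
argument of seq.default is 'from' (an added aside, not part of the original
reply):

  args(seq.default)   # 'from' comes first, so a single unnamed argument fills it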

(You'll quickly want undebug(seq.default) if you don't want to spend a 
lot of time in the debugger; seq() is a pretty commonly used function!)

Duncan Murdoch

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] problems performing bootstrap

2006-04-21 Thread John Sorkin
R2.1.1
Windows XP

I have defined a function:

best <- function (x) 
{
xx <- pmax(x[,1], x[,2])
pvalue <- t.test(xx, mu=100)
pvalue$p.value
}

and then I try to bootstrap best as follows:

 boot( array(rnorm(30,100,1),c(15,2)),best,R=100 )

and get the following message:

Error in statistic(data, original, ...) : unused argument(s) ( ...)

I would appreciate any help that you might offer.
Thanks,
John

John Sorkin M.D., Ph.D.
Chief, Biostatistics and Informatics
Baltimore VA Medical Center GRECC and
University of Maryland School of Medicine Claude Pepper OAIC

University of Maryland School of Medicine
Division of Gerontology
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524

410-605-7119
[EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Nonlinear Regression model: Diagnostics

2006-04-21 Thread Spencer Graves
  I don't know how to get the error message you reported.  The 
following modification of the first 'nls' example worked for me:

DNase1 <- subset(DNase, Run == 1)
fm1DNase1 <- nls( density ~ SSlogis(log(conc), Asym, xmid, scal), DNase1)
profile(fm1DNase1)

fm1DNase1.2 <- nls( density ~ SSlogis(log(conc), Asym, xmid, scal), 
DNase1, alg = "default", trace = TRUE)
profile(fm1DNase1.2)
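
An added aside, not from the original reply: the profile object can also be
plotted, which helps judge how well determined each parameter is.

  plot(profile(fm1DNase1))   # profile t plots for Asym, xmid and scal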

  Have you made scatterplots indicating that the model you are trying 
to fit seems plausible?  If yes, I suggest you try to produce an 
extremely simple, self-contained example that generates your error 
message, then send that to this list.  Before you submit another post, 
however, please read the posting guide! 
www.R-project.org/posting-guide.html.  Some people have reported that 
the posting guide helped them solve their own problem.  Failing that, I 
know that at least one of the R Project's leading contributors has a 
policy of not responding to posts that seem inconsistent with the 
suggestions in that guide.  Even without that, I believe that posts more 
consistent with that guide tend to be clearer and easier to understand. 
  This tends to increase chances of getting (quickly) the information 
you most need to proceed.

  hope this helps,
  spencer graves

Sachin J wrote:
 Hi,

   I am trying to run the following nonlinear regression model. 

 nreg <- nls(y ~ exp(-b*x), data = mydf, start = list(b = 0), alg = 
  "default", trace = TRUE)

   OUTPUT: 
   24619327 :  0 
 24593178 :  0.0001166910 
 24555219 :  0.0005019005 
 24521810 :  0.001341571 
 24500774 :  0.002705402 
 24490713 :  0.004401078 
 24486658 :  0.00607728 
 24485115 :  0.007484372 
 24484526 :  0.008552635 
 24484298 :  0.009314779 
 24484208 :  0.009837009 
 24484172 :  0.01018542 
 24484158 :  0.01041381 
 24484152 :  0.01056181 
 24484150 :  0.01065700 
 24484149 :  0.01071794 
 24484148 :  0.01075683 
 24484148 :  0.01078161 
 24484148 :  0.01079736 
 24484148 :  0.01080738 
 24484148 :  0.01081374 
 Nonlinear regression model
   model:  y ~ exp(-b * x) 
data:  mydf 
  b 
 0.01081374 
  residual sum-of-squares:  24484148 
 
   My question is how do I interpret the results of this model. 

profile(nreg)

    24484156 :   
    24484156 :   
    24484156 :   
    [... the same line repeated until the iteration limit was reached ...]
 Error in prof$getProfile() : number of iterations exceeded maximum of 50

   I am unable to understand the error cause. Any pointers would be of great 
 help. 

   Regards,
   Sachin
 
   
 -
 
   [[alternative HTML version deleted]]
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] problems performing bootstrap

2006-04-21 Thread Andrew Robinson
John,

boot requires that the function accept an index (or similar), which is
used to communicate the permutation of x to the function.

Try instead:

best <- function (x, i)  {
  xx <- pmax(x[i,1], x[i,2])
  pvalue <- t.test(xx, mu=100)
  pvalue$p.value
}
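
A quick check of the corrected statistic (an added illustration: the simulated
data mirror the original call, and set.seed() is used only so the example is
reproducible):

  library(boot)
  set.seed(42)
  x <- array(rnorm(30, 100, 1), c(15, 2))
  b <- boot(x, best, R = 100)   # runs without the unused-argument error
  summary(b$t)                  # bootstrap replicates of the p-value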

Cheers

Andrew

On Fri, Apr 21, 2006 at 10:03:12PM -0400, John Sorkin wrote:
 R2.1.1
 Windows XP
 
 I have defined a function:
 
 best <- function (x) 
 {
 xx <- pmax(x[,1], x[,2])
 pvalue <- t.test(xx, mu=100)
 pvalue$p.value
 }
 
 and then I try to bootstrap best as follows:
 
  boot( array(rnorm(30,100,1),c(15,2)),best,R=100 )
 
 and get the following message:
 
 Error in statistic(data, original, ...) : unused argument(s) ( ...)
 
 I would appreciate any help that you might offer.
 Thanks,
 John
 
 John Sorkin M.D., Ph.D.
 Chief, Biostatistics and Informatics
 Baltimore VA Medical Center GRECC and
 University of Maryland School of Medicine Claude Pepper OAIC
 
 University of Maryland School of Medicine
 Division of Gerontology
 Baltimore VA Medical Center
 10 North Greene Street
 GRECC (BT/18/GR)
 Baltimore, MD 21201-1524
 
 410-605-7119
 [EMAIL PROTECTED]
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

-- 
Andrew Robinson  
Department of Mathematics and StatisticsTel: +61-3-8344-9763
University of Melbourne, VIC 3010 Australia Fax: +61-3-8344-4599
Email: [EMAIL PROTECTED] http://www.ms.unimelb.edu.au

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html