Re: [R] my error with augPred

2006-09-07 Thread Spencer Graves
  Thank you for providing such a complete, self-contained example.  
I found that 'predict.nlme' does not like a factor in the 'fixed' 
argument as you used it, fixed=list(Asym~x1, R0+lrc~1).  To see this, 
I added 'x1.' as a numeric version of the factor 'x1' and reran it 
successfully: 

fm2. <- update(fm1, fixed=list(Asym~x1., R0+lrc~1), start=c(103,0,-8.5,-3))
aP2. <- augPred(fm2.)
plot(aP2.)

  Unfortunately, it looks like this work-around won't help you with 
your original problem, because there, the counterpart to 'x1' is an 
ordered factor with more than 2 levels. 

  The error message refers to 'predict.nlme'.  I know no reason why 
'predict.nlme' shouldn't work with a factor with more than 2 levels in 
this context.  If it were my problem and it was sufficiently important, 
I would make a local copy of 'predict.nlme' as follows: 

  predict.nlme <- getAnywhere(predict.nlme)

  Then I'd use 'debug(nlme:::predict.nlme)' to walk through the 
problem example line by line until I figured out what I had to change to 
make this work. 
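The workflow sketched above looks roughly like this (assuming 'fm2', the model fit from Petr's example below, is in the workspace):

```r
## Flag the hidden S3 method for debugging, then trigger it;
## R drops into the browser inside predict.nlme, where you can
## step through line by line with 'n'.
debug(nlme:::predict.nlme)
plot(augPred(fm2))
undebug(nlme:::predict.nlme)   # remove the flag when done
```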

  I hesitate to use the B word, but I think it might be 
appropriate to file a bug report on this;  perhaps someone else will do 
that.  

  I'm sorry I couldn't solve your original problem.  With luck, 
someone else will convert this example into a fix to the code. 
  Spencer Graves
 
Petr Pikal wrote:
 Hallo

 thank you for your response. I am not sure but maybe fixed effects 
 cannot be set to be influenced by a factor to be able to use augPred.

 lob <- Loblolly[Loblolly$Seed != 321, ]
 set.seed(1)
 lob <- data.frame(lob, x1 = sample(letters[1:3], replace = TRUE)) # add a factor
 lob <- groupedData(height ~ age | Seed, data = lob)
 fm1 - nlme(height ~ SSasymp(age, Asym, R0, lrc),
 data = lob,
 fixed = Asym + R0 + lrc ~ 1,
 random = Asym ~ 1,
 start = c(Asym = 103, R0 = -8.5, lrc = -3.3))

 fm2 <- update(fm1, fixed=list(Asym~x1, R0+lrc~1), start=c(103,0,-8.5,-3))
  ^^^
 and

 plot(augPred(fm2))

 Throws an error.
 So it is not possible to use augPred with such constructions.

 Best regards.
 Petr Pikal

 On 2 Sep 2006 at 17:58, Spencer Graves wrote:

 Date sent:Sat, 02 Sep 2006 17:58:05 -0700
 From: Spencer Graves [EMAIL PROTECTED]
 To:   Petr Pikal [EMAIL PROTECTED]
 Copies to:r-help@stat.math.ethz.ch
 Subject:  Re: [R] my error with augPred

   
 comments in line 

 Petr Pikal wrote:
 
 Dear all

 I try to refine my nlme models and with partial success. The model
 is refined and fitted (using Pinheiro/Bates book as a tutorial) but
 when I try to plot

 plot(augPred(fit4))

 I obtain
 Error in predict.nlme(object, value[1:(nrow(value)/nL), , drop =
 FALSE],  : 
 Levels (0,3.5],(3.5,5],(5,7],(7,Inf] not allowed for 
 vykon.fac
   

 Is it due to the fact that I have unbalanced design with not all
 levels of vykon.fac present in all levels of other explanatory
 factor variable?
   
   
 I don't know, but I'm skeptical. 
 
 I try to repeat 8.19 fig which is OK until I try:

 fit4 <- update(fit2, fixed = list(A+B~1, xmid~vykon.fac, scal~1), 
 start = c(57, 100, 700, rep(0,3), 13))

 I know I should provide an example but maybe somebody will be clever
 enough to point me to an explanation without it.
   
   
 I'm not. 

 To answer these questions without an example from you, I'd have to
 make up my own example and try to see if I could replicate the error
 messages you report, and I'm not sufficiently concerned about this
 right now to do that. 

 Have you tried taking an example from the book and deleting certain
 rows from the data to see if you can force it to reproduce your error?


 Alternatively, have you tried using 'debug' to trace through the code
 line by line until you learn enough of what it's doing to answer your
 question? 

 Spencer Graves
 
 nlme version 3.1-75
 SSfpl model
 R 2.4.0dev (but is the same in 2.3.1), W2000.

 Thank you
 Best regards.

 Petr PikalPetr Pikal
 [EMAIL PROTECTED]

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html and provide commented,
 minimal, self-contained, reproducible code.

   
 

 Petr Pikal
 [EMAIL PROTECTED]





[R] Alternatives to merge for large data sets?

2006-09-07 Thread Adam D. I. Kramer
Hello,

I am trying to merge two very large data sets, via

pubbounds.prof <-
merge(x = pubbounds, y = prof, by.x = "user", by.y = "userid", all = TRUE, sort = FALSE)

which gives me an error of

Error: cannot allocate vector of size 2962 Kb

I am reasonably sure that this is correct syntax.

The trouble is that pubbounds and prof are large; they are data frames which
take up 70M and 11M respectively when saved as .Rdata files.

I understand from various archive searches that merge can't handle that,
because merge takes n^2 memory, which I do not have.

My question is whether there is an alternative to merge which would carry
out the process in a slower, iterative manner...or if I should just bite the
bullet, write.table, and use a perl script to do the job.

Thankful as always,
Adam D. I. Kramer



[R] how to create time series object

2006-09-07 Thread gyadav

hi all

I have dates and a return series as below, but the dates are not at 
uniform intervals. Please show me how to create a time series in 
R so that the dates are also associated with the returns. 

thanks in advance

   Sayonara With Smile  With Warm Regards :-)

  G a u r a v   Y a d a v
  Senior Executive Officer,
  Economic Research & Surveillance Department,
  Clearing Corporation Of India Limited.

  Address: 5th, 6th, 7th Floor, Trade Wing 'C',  Kamala City, S.B. Marg, 
Mumbai - 400 013
  Telephone(Office): - +91 022 6663 9398 ,  Mobile(Personal) (0)9821286118
  Email(Office) :- [EMAIL PROTECTED] ,  Email(Personal) :- 
[EMAIL PROTECTED]






[R] [R-pkgs] odfWeave Version 0.4.4

2006-09-07 Thread Kuhn, Max
Version 0.4.4 of odfWeave is available from CRAN. A Windows binary
should be available shortly.

This version requires base R version 2.3.1 or greater.

Changes from the last version include

  - Non-English character sets are handled better. For example, Chinese
characters can be included in R code. See the file testCases.odt in
the examples directory for an example

  - Image specifications, such as format and size, have been moved out
of odfWeaveControl. They are now controlled by the functions
getImageDefs and setImageDefs. This change allows the user to easily
modify the image properties in code chunks so that figures can have
different sizes or types throughout the document.

  - When odfWeave is invoked, a check for a zip program is done and a
more meaningful error is reported. This should help users better
understand the odfWeave software dependencies. 

  - A new XML parser was written so that users no longer need to turn
off the size optimization feature in OpenOffice. 

  - Three bugs were fixed:

o If the user specified a relative path to the source file, an error
occurred

o Fonts contained in the style definitions are automatically
registered in the ODF document. Previously, fonts that were specified
using setStyleDefs but were not used in the document were ignored. Now,
the fonts found by getStyleDefs() at the start of odfWeave execution
are added to the document.

o A new function, odfTmpDir, is now used to set the path to the
working directory. A new directory is created in the location of
tempdir().
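The image-specification functions mentioned in the change list above can be used along the following lines. This is a hypothetical sketch: the list component names (dispWidth, dispHeight) are assumptions, so check ?getImageDefs in the installed version.

```r
## Sketch of the getImageDefs/setImageDefs workflow; the component
## names used here (dispWidth, dispHeight) are assumed, not verified.
library(odfWeave)
imageDefs <- getImageDefs()       # fetch the current image specifications
imageDefs$dispWidth  <- 4.5       # assumed: display width in inches
imageDefs$dispHeight <- 4.5       # assumed: display height in inches
setImageDefs(imageDefs)           # later code chunks pick up the new sizes
```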

As always, please send any comments, suggestions or bug reports to
max.kuhn at pfizer.com.

Max

___
R-packages mailing list
R-packages@stat.math.ethz.ch
https://stat.ethz.ch/mailman/listinfo/r-packages



Re: [R] Alternatives to merge for large data sets?

2006-09-07 Thread Prof Brian Ripley
Which version of R?

Please try 2.4.0 alpha, as it has a different and more efficient 
algorithm for the case of 1-1 matches.

On Wed, 6 Sep 2006, Adam D. I. Kramer wrote:

 Hello,
 
 I am trying to merge two very large data sets, via
 
 pubbounds.prof <-
 merge(x = pubbounds, y = prof, by.x = "user", by.y = "userid", all = TRUE, sort = FALSE)
 
 which gives me an error of
 
 Error: cannot allocate vector of size 2962 Kb
 
 I am reasonably sure that this is correct syntax.
 
 The trouble is that pubbounds and prof are large; they are data frames which
 take up 70M and 11M respectively when saved as .Rdata files.
 
 I understand from various archive searches that merge can't handle that,
 because merge takes n^2 memory, which I do not have.

Not really true (it has been changed since those days).  Of course, if you 
have multiple matches it must do so.
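For strictly 1-1 (or many-to-one) matches, the merge can also be sidestepped with match(), which only needs to allocate an index vector. A sketch using the column names from the original post, assuming each value of pubbounds$user occurs at most once in prof$userid:

```r
## Memory-light 1-1 left join via match(); assumes at most one row of
## 'prof' per value of pubbounds$user.
idx <- match(pubbounds$user, prof$userid)        # matching row of 'prof'
pubbounds.prof <- cbind(pubbounds, prof[idx, ])  # unmatched users get NA rows
```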

 My question is whether there is an alternative to merge which would carry
 out the process in a slower, iterative manner...or if I should just bite the
 bullet, write.table, and use a perl script to do the job.
 
 Thankful as always,
 Adam D. I. Kramer

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK            Fax:  +44 1865 272595



Re: [R] singular factor analysis

2006-09-07 Thread Patrick Burns
This is a very common computation in finance.

On the public domain page of the Burns Statistics website
in the financial part is the code and R help file for
'factor.model.stat'.  Most of the complication of the code
is to deal with missing values.

Patrick Burns
[EMAIL PROTECTED]
+44 (0)20 8525 0696
http://www.burns-stat.com
(home of S Poetry and A Guide for the Unwilling S User)

Spencer Graves wrote:

  Are there any functions available to do a factor analysis with 
fewer observations than variables?  As long as you have more than 3 
observations, my computations suggest you have enough data to estimate a 
factor analysis covariance matrix, even though the sample covariance 
matrix is singular.  I tried the naive thing and got an error: 

  set.seed(1)
  X <- array(rnorm(50), dim=c(5, 10))
  factanal(X, factors=1)
Error in solve.default(cv) : system is computationally singular: 
reciprocal condition number = 4.8982e-018

  I can write a likelihood for a multivariate normal and solve it, 
but I wondered if there is anything else available that could do this? 

  Thanks,
  Spencer Graves



  




[R] Model vs. Observed for a lme() regression fit using two variables

2006-09-07 Thread CG Pettersson
Dear all.

R 2.3.1, W2k.

I am working with a field trial series where, for the moment, I do 
regressions using more than one covariate to explain the protein levels 
in malting barley.

To do this I use lme() and a mixed call, structured by both experiment 
(trial) and repetition in each experiment (block). Everything works 
fine, resulting in nice working linear models using two covariates. But 
how do I visualize this in an efficient and clear way?

What I want is something like the standard output from all multivariate 
tools I have worked with (Observed vs. Predicted) with the least-squares 
line in the middle. It is naturally possible to plot each covariate 
separately, and also to use the 3D scatterplot in Rcmdr to plot both at 
the same time, but I want a plain 2D plot.

Who has made a plotting method for this and where do I find it?
Or am I missing something obvious here, that this plot is easy to 
achieve without any ready made methods?

Cheers
/CG

-- 
CG Pettersson, MSci, PhD Stud.
Swedish University of Agricultural Sciences (SLU)
Dept. of Crop Production Ecology. Box 7043.
SE-750 07 UPPSALA, Sweden.
+46 18 671428, +46 70 3306685
[EMAIL PROTECTED]



Re: [R] Model vs. Observed for a lme() regression fit using two variables

2006-09-07 Thread Andrew Robinson
Hi CG,

I think that the best pair of summary plots are 

1) the fitted values without random effects against the observed
   response variable, and

2) fitted values with random effects against the observed response
   variable.

The first plot gives a summary of the overall quality of the fixed
effects of the model, the second gives a summary of the overall
quality of the fixed effects and random effects of the model.

eg

fm1 <- lme(distance ~ age, data = Orthodont)

plot(fitted(fm1, level=0), Orthodont$distance)
abline(0, 1, col = "red")

plot(fitted(fm1, level=1), Orthodont$distance)
abline(0, 1, col = "red")

I hope that this helps.

Andrew

On Thu, Sep 07, 2006 at 11:35:40AM +0200, CG Pettersson wrote:
 Dear all.
 
 R 2.3.1, W2k.
 
 I am working with a field trial series where, for the moment, I do 
 regressions using more than one covariate to explain the protein levels 
 in malting barley.
 
 To do this I use lme() and a mixed call, structured by both experiment 
 (trial) and repetition in each experiment (block). Everything works 
 fine, resulting in nice working linear models using two covariates. But 
 how do I visualize this in an efficient and clear way?
 
 What I want is something like the standard output from all multivariate 
 tools I have worked with (Observed vs. Predicted) with the least-squares 
 line in the middle. It is naturally possible to plot each covariate 
 separately, and also to use the 3D scatterplot in Rcmdr to plot both at 
 the same time, but I want a plain 2D plot.
 
 Who has made a plotting method for this and where do I find it?
 Or am I missing something obvious here, that this plot is easy to 
 achieve without any ready made methods?
 
 Cheers
 /CG
 
 -- 
 CG Pettersson, MSci, PhD Stud.
 Swedish University of Agricultural Sciences (SLU)
 Dept. of Crop Production Ecology. Box 7043.
 SE-750 07 UPPSALA, Sweden.
 +46 18 671428, +46 70 3306685
 [EMAIL PROTECTED]
 

-- 
Andrew Robinson  
Department of Mathematics and StatisticsTel: +61-3-8344-9763
University of Melbourne, VIC 3010 Australia Fax: +61-3-8344-4599
Email: [EMAIL PROTECTED] http://www.ms.unimelb.edu.au



[R] legend problems in lattice

2006-09-07 Thread Ernst O Ahlberg Helgee
Hi!
I'm sorry to bother you, but I can't fix this.
I use the lattice function levelplot and I want the colorkey at the 
bottom; how do I get it there? I have tried changing colorkey.space and 
changing legend, but I can't get it right. Please help.

By the way, I'd like to specify strings to appear at the tick marks, and 
there I fail too. Any thoughts?

cheers
Ernst



[R] barplot: different colors for the bar and the strips

2006-09-07 Thread Hao Chen
Hi,

I am using barplot and would like to know if it is possible to have bars
filled with one color while using a different color for the shading lines. 

The following code colors the shading lines, leaving the bars in white:

 barplot(1:5, col=c(1:5), density=c(1:5)*5)

while the colors are applied to the bars when density is removed.

 barplot(1:5, col=c(1:5))

I did check ?barplot and found the following: 

col: a vector of colors for the bars or bar components. 
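One workaround is to draw the bars twice, using barplot()'s add argument: once with a solid fill, then again on top with only the shading lines in different colours. A minimal sketch:

```r
## Filled bars first, then overlay coloured shading lines on the same
## bars using add = TRUE.
barplot(1:5, col = "grey90")                  # solid fill
barplot(1:5, col = 1:5, density = (1:5) * 5,  # shading lines in colours 1:5
        add = TRUE)
```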
 
 Thanks,

 Hao
 --



Re: [R] legend problems in lattice

2006-09-07 Thread Sundar Dorai-Raj


Ernst O Ahlberg Helgee wrote:
 Hi!
 I'm sorry to bother you, but I can't fix this.
 I use the lattice function levelplot and I want the colorkey at the 
 bottom; how do I get it there? I have tried changing colorkey.space and 
 changing legend, but I can't get it right. Please help.
 
 By the way, I'd like to specify strings to appear at the tick marks, and 
 there I fail too. Any thoughts?
 
 cheers
 Ernst
 

Hi, Ernst,

Please read ?levelplot. Under the argument for colorkey you will see:

colorkey: logical specifying whether a color key is to be drawn
   alongside the plot, or a list describing the color key. The
   list may contain the following components:


   'space': location of the colorkey, can be one of 'left',
'right', 'top' and 'bottom'.  Defaults to
'right'.


So the answer to your first question is:

levelplot(..., colorkey = list(space = "bottom"))

For your second question, use the 'scales' argument. See ?xyplot for 
details. For example,

levelplot(..., scales = list(x = list(at = 1:4, labels = letters[1:4])))

HTH,

--sundar



[R] Conservative ANOVA tables in lmer

2006-09-07 Thread Martin Henry H. Stevens
Dear lmer-ers,
My thanks for all of you who are sharing your trials and tribulations  
publicly.

I was hoping to elicit some feedback on my thoughts on denominator  
degrees of freedom for F ratios in mixed models. These thoughts and  
practices result from my reading of previous postings by Doug Bates  
and others.

- I start by assuming that the appropriate denominator degrees of freedom  
lie between n - p and n - q, where n=number of observations, p=number  
of fixed effects (rank of model matrix X), and q=rank of Z:X.
- I then conclude that good estimates of P values on the F ratios lie  
between 1 - pf(F.ratio, numDF, n-p) and 1 - pf(F.ratio, numDF, n-q).
- I further surmise that the latter of these (1 - pf(F.ratio, numDF,  
n-q)) is the more conservative estimate.
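For concreteness, the two bounds can be computed directly with pf(); all the numbers below are made up for illustration:

```r
## Illustrative p-value bounds for an F ratio (hypothetical numbers)
n <- 100; p <- 4; q <- 20        # observations, rank of X, rank of [Z:X]
F.ratio <- 5.2; numDF <- 2
1 - pf(F.ratio, numDF, n - p)    # liberal bound (larger denominator df)
1 - pf(F.ratio, numDF, n - q)    # conservative bound (smaller denominator df)
```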

When I use these criteria and compare my ANOVA table to the results  
of analysis of Helmert contrasts using MCMC sample with highest  
posterior density intervals, I find that my conclusions (e.g. factor  
A, with three levels, has a significant effect on the response  
variable) are qualitatively the same.

Comments?

Hank


Dr. Hank Stevens, Assistant Professor
338 Pearson Hall
Botany Department
Miami University
Oxford, OH 45056

Office: (513) 529-4206
Lab: (513) 529-4262
FAX: (513) 529-4243
http://www.cas.muohio.edu/~stevenmh/
http://www.muohio.edu/ecology/
http://www.muohio.edu/botany/
E Pluribus Unum



[R] Stacking a list of data.frames

2006-09-07 Thread TAPO \(Thomas Agersten Poulsen\)
Dear list,

I have a list of data.frames (generated by 'by') that I want to stack into a 
single data.frame.

I can do this with cbind, but only by subsetting the list explicitly like this:

cbind(l[[1]],l[[2]],l[[3]],l[[4]])

I find this ugly and not very general.

I tried
cbind(l)
cbind(l[[1:4]])
but they do not give the right result.

Please help!

Best regards
Thomas
--
Thomas A PoulsenScientist, Ph.D.
Novozymes A/S   Protein Design / Bioinformatics
Brudelysvej 26, 1US.24  Phone: +45 44 42 27 23
DK-2880 Bagsværd.   Fax:   +45 44 98 02 46



Re: [R] Stacking a list of data.frames

2006-09-07 Thread Dimitris Rizopoulos
try this:

do.call(cbind, l)
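Note that do.call(cbind, l) binds the data frames side by side; if "stacking" (as in the subject line) means appending rows, the same idiom works with rbind. A minimal sketch with a made-up list:

```r
## Row-wise stacking with the same do.call idiom
l <- list(data.frame(x = 1:2), data.frame(x = 3:4))
do.call(rbind, l)   # one data.frame with the rows of all list elements
```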


Best,
Dimitris


Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven

Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/(0)16/336899
Fax: +32/(0)16/337015
Web: http://med.kuleuven.be/biostat/
 http://www.student.kuleuven.be/~m0390867/dimitris.htm


- Original Message - 
From: TAPO (Thomas Agersten Poulsen) [EMAIL PROTECTED]
To: r-help@stat.math.ethz.ch
Sent: Thursday, September 07, 2006 1:56 PM
Subject: [R] Stacking a list of data.frames


Dear list,

I have a list of data.frames (generated by 'by') that I want to 
stack into a single data.frame.

I can do this by cbind, but only by subsetting the list explicitly 
like this:

cbind(l[[1]],l[[2]],l[[3]],l[[4]])

I find this ugly and not very general.

I tried
cbind(l)
cbind(l[[1:4]])
but they do not give the right result.

Please help!

Best regards
Thomas
--
Thomas A PoulsenScientist, Ph.D.
Novozymes A/S   Protein Design / Bioinformatics
Brudelysvej 26, 1US.24  Phone: +45 44 42 27 23
DK-2880 Bagsværd.   Fax:   +45 44 98 02 46






[R] merging tables by columns AND rows

2006-09-07 Thread isidora k
Hi everyone!
I have 100 tables of the form:
XCOORD,YCOORD,OBSERVATION
27.47500,42.52641,177
27.48788,42.52641,177
27.50075,42.52641,179
27.51362,42.52641,178
27.52650,42.52641,180
27.53937,42.52641,178
27.55225,42.52641,181
27.56512,42.52641,177
27.57800,42.52641,181
27.59087,42.52641,181
27.60375,42.52641,180
27.61662,42.52641,181
..., ..., ...
with approximately 100 observations for each. All
these tables have the same xcoord and ycoord and I
would like to get a table of the form
XCOORD,YCOORD,OBSERVATION1,OBSERVATION2,... 
27.47500,42.52641,177,233,...
27.48788,42.52641,177,345,...
27.50075,42.52641,179,233,...
27.51362,42.52641,178,123,...
27.52650,42.52641,180,178,...
27.53937,42.52641,178,...,...
27.55225,42.52641,181,...
27.56512,42.52641,177,...
27.57800,42.52641,181,...
27.59087,42.52641,181,...
27.60375,42.52641,180,...
27.61662,42.52641,181,...
In other words I would like to merge all the tables
taking into account the common row names of their
xcoords AND ycoords.
Is there any way to do this in R?
I would be grateful for any advice.
Many Thanks
Isidora



Re: [R] merging tables by columns AND rows

2006-09-07 Thread Roger Bivand
On Thu, 7 Sep 2006, isidora k wrote:

 Hi everyone!
 I have 100 tables of the form:
 XCOORD,YCOORD,OBSERVATION
 27.47500,42.52641,177
 27.48788,42.52641,177
 27.50075,42.52641,179
 27.51362,42.52641,178
 27.52650,42.52641,180
 27.53937,42.52641,178
 27.55225,42.52641,181
 27.56512,42.52641,177
 27.57800,42.52641,181
 27.59087,42.52641,181
 27.60375,42.52641,180
 27.61662,42.52641,181
 ..., ..., ...
 with approximately 100 observations for each. All
 these tables have the same xcoord and ycoord and I
 would like to get a table of the form
 XCOORD,YCOORD,OBSERVATION1,OBSERVATION2,... 
 27.47500,42.52641,177,233,...
 27.48788,42.52641,177,345,...
 27.50075,42.52641,179,233,...
 27.51362,42.52641,178,123,...
 27.52650,42.52641,180,178,...
 27.53937,42.52641,178,...,...
 27.55225,42.52641,181,...
 27.56512,42.52641,177,...
 27.57800,42.52641,181,...
 27.59087,42.52641,181,...
 27.60375,42.52641,180,...
 27.61662,42.52641,181,...
 In other words I would like to merge all the tables
 taking into account the common row names of their
 xcoords AND ycoords.

Your data look very much like a rectangular grid. If you had either posted
from an identifiable institution or included an informative signature,
then we'd have known which field you're in, so the following is guesswork.

If all of your data is for a full grid, with the same coordinates always
in the same order, any missing values fully represented in the data, then
reading the first data set in as a data.frame or matrix, and converting it
to a SpatialGridDataFrame object (defined in the sp contributed package)
will give you a base to start from. 

From that you just add columns, one column for each data set, by reading
in just the data you need (for example using scan). This depends crucially
on the same grid being used each time, with the data in the same order. If
the coordinates differ between data sets, bets are off.

If these are spatial data, please consider the R-sig-geo mailing list for 
more targeted help.
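If only the common rows are wanted, repeated calls to merge() on both coordinate columns give an inner join. A sketch assuming the 100 tables have been read into a hypothetical list 'tabs', with the observation column renamed distinctly in each table first (otherwise merge() generates .x/.y name clashes):

```r
## Hypothetical sketch: inner-join a list 'tabs' of data frames on both
## coordinate columns.  merge() keeps only rows whose (XCOORD, YCOORD)
## pair occurs in every table; rename each OBSERVATION column first
## (e.g. OBSERVATION1, OBSERVATION2, ...) to avoid name clashes.
merged <- tabs[[1]]
for (tb in tabs[-1])
    merged <- merge(merged, tb, by = c("XCOORD", "YCOORD"))
```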

 Is there any way to do this in R?
 I would be grateful for any advice.
 Many Thanks
 Isidora
 
 

-- 
Roger Bivand
Economic Geography Section, Department of Economics, Norwegian School of
Economics and Business Administration, Helleveien 30, N-5045 Bergen,
Norway. voice: +47 55 95 93 55; fax +47 55 95 95 43
e-mail: [EMAIL PROTECTED]



Re: [R] how to create time series object

2006-09-07 Thread Gabor Grothendieck
You can use the 'zoo' or 'its' packages.  For 'zoo' see the
documents listed at the end of:

http://cran.r-project.org/src/contrib/Descriptions/zoo.html
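A minimal sketch of an irregular series in zoo (dates and returns made up for illustration):

```r
## An irregularly spaced return series indexed by its dates
library(zoo)
dates <- as.Date(c("2006-08-01", "2006-08-04", "2006-08-10"))
ret   <- c(0.012, -0.007, 0.004)
z <- zoo(ret, order.by = dates)  # each observation carries its date
```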

On 9/6/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 hi all

 I have dates and a return series as below, but the dates are not at
 uniform intervals. Please show me how to create a time series in
 R so that the dates are also associated with the returns.

 thanks in advance

   Sayonara With Smile  With Warm Regards :-)

  G a u r a v   Y a d a v
  Senior Executive Officer,
  Economic Research & Surveillance Department,
  Clearing Corporation Of India Limited.

  Address: 5th, 6th, 7th Floor, Trade Wing 'C',  Kamala City, S.B. Marg,
 Mumbai - 400 013
  Telephone(Office): - +91 022 6663 9398 ,  Mobile(Personal) (0)9821286118
  Email(Office) :- [EMAIL PROTECTED] ,  Email(Personal) :-
 [EMAIL PROTECTED]


 





[R] Axes of a histogram

2006-09-07 Thread Rotkiv, Rehceb
Hello everyone,

I would be glad if you could help out an R-beginner here... I have a
vector of categorical data like this

 v <- c(1, 1, 2, 2, 2, 3, 3, 4, 4, 4)

When I do

 hist(v)

I get the x-axis of the histogram with floating point labels: 1.0, 1.5,
2.0, etc. Is it possible to tell R that the data consists of categories,
i.e. that I only want the category names (1, 2, 3, 4) on my x-axis?

Thanks in advance,
Rehceb Rotkiv



Re: [R] Axes of a histogram

2006-09-07 Thread Dimitris Rizopoulos
probably you're looking for a barplot, e.g.,

v <- c(1, 1, 2, 2, 2, 3, 3, 4, 4, 4)
plot(factor(v))


I hope it helps.

Best,
Dimitris


Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven

Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/(0)16/336899
Fax: +32/(0)16/337015
Web: http://med.kuleuven.be/biostat/
 http://www.student.kuleuven.be/~m0390867/dimitris.htm


- Original Message - 
From: Rotkiv, Rehceb [EMAIL PROTECTED]
To: r-help@stat.math.ethz.ch
Sent: Thursday, September 07, 2006 2:35 PM
Subject: [R] Axes of a histogram


 Hello everyone,

 I would be glad if you could help out an R-beginner here... I have a
 vector of categorical data like this

 v <- c(1, 1, 2, 2, 2, 3, 3, 4, 4, 4)

 When I do

 hist(v)

 I get the x-axis of the histogram with floating point labels: 1.0, 
 1.5,
 2.0, etc. Is it possible to tell R that the data consists of 
 categories,
 i.e. that I only want the category names (1, 2, 3, 4) on my x-axis?

 Thanks in advance,
 Rehceb Rotkiv

 





Re: [R] merging tables by columns AND rows

2006-09-07 Thread isidora k
Some of the coordinates might not match and also I do
not have the same number of observations in every
table but I want to get only the common ones back.
This is where it gets tricky! I have tried merge, scan,
and every joining function I could find, but nothing
seems to do what I want.
The R-sig-geo mailing list sounds like a good idea!
Thank you!



Re: [R] Axes of a histogram

2006-09-07 Thread Marc Schwartz
On Thu, 2006-09-07 at 14:35 +0200, Rotkiv, Rehceb wrote:
 Hello everyone,
 
 I would be glad if you could help out an R-beginner here... I have a
 vector of categorial data like this
 
  v <- c(1, 1, 2, 2, 2, 3, 3, 4, 4, 4)
 
 When I do
 
  hist(v)
 
 I get the x-axis of the histogram with floating point labels: 1.0, 1.5,
 2.0, etc. Is it possible to tell R that the data consists of categories,
 i.e. that I only want the category names (1, 2, 3, 4) on my x-axis?
 
 Thanks in advance,
 Rehceb Rotkiv

You don't want a histogram, but a barplot:

  barplot(table(v))

See ?barplot and ?table
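If the categories are fixed in advance, tabulating a factor keeps empty
categories as zero-height bars. A small sketch (the level set 1:4 is just
this example's):

```r
v <- c(1, 1, 2, 2, 2, 3, 3, 4, 4, 4)
barplot(table(factor(v, levels = 1:4)))  # one bar per level, labelled 1..4
```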

HTH,

Marc Schwartz



Re: [R] barplot: different colors for the bar and the strips

2006-09-07 Thread Marc Schwartz
On Thu, 2006-09-07 at 06:18 -0500, Hao Chen wrote:
 Hi,
 
 I am using barplot and would like to know if it is possible to have bars
 filled with one color while use a different color for the shading lines. 
 
 The following code colors the shading lines, leaving the bars in white:
 
  barplot(1:5, col=c(1:5), density=c(1:5)*5)
 
 while the colors are applied to the bars when density is removed.
 
  barplot(1:5, col=c(1:5))
 
 I did check ?barplot and found the following: 
 
   col: a vector of colors for the bars or bar components. 
  
  Thanks,
 
  Hao

Note the key word 'or' in the description of the 'col' argument.

You need to make two separate calls to barplot(). The first using the
fill colors, then the second using the shading lines AND setting 'add =
TRUE', so that the second plot overwrites the first without clearing the
plot device.

 barplot(1:5, col=c(1:5))

 barplot(1:5, col = "black", density=c(1:5), add = TRUE)

Just be sure that any other arguments, such as axis limits, are
identical between the two calls.
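A sketch of the two-call overlay described above; the explicit ylim (a value
chosen for this example only) is one way to keep the two layers aligned:

```r
vals <- 1:5
barplot(vals, col = vals, ylim = c(0, max(vals)))       # filled bars
barplot(vals, col = "black", density = vals * 5,
        add = TRUE, ylim = c(0, max(vals)))             # shading lines on top
```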

HTH,

Marc Schwartz



Re: [R] Conservative ANOVA tables in lmer

2006-09-07 Thread Douglas Bates
Thanks for your summary, Hank.

On 9/7/06, Martin Henry H. Stevens [EMAIL PROTECTED] wrote:
 Dear lmer-ers,
 My thanks for all of you who are sharing your trials and tribulations
 publicly.

 I was hoping to elicit some feedback on my thoughts on denominator
 degrees of freedom for F ratios in mixed models. These thoughts and
 practices result from my reading of previous postings by Doug Bates
 and others.

 - I start by assuming that the appropriate denominator degrees lies
 between n - p and n - q, where n=number of observations, p=number
 of fixed effects (rank of model matrix X), and q=rank of Z:X.

I agree with this but the opinion is by no means universal.  Initially
I misread the statement because I usually write the number of columns
of Z as q.

It is not easy to assess rank of Z:X numerically.  In many cases one
can reason what it should be from the form of the model but a general
procedure to assess the rank of a matrix, especially a sparse matrix,
is difficult.

An alternative which can be easily calculated is n - t where t is the
trace of the 'hat matrix'.  The function 'hatTrace' applied to a
fitted lmer model evaluates this trace (conditional on the estimates
of the relative variances of the random effects).
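As a sketch of that computation, assuming a fitted lmer model 'fm' and an
lme4 version that provides hatTrace as described in the post; the model
formula, data, and F statistic here are placeholders, not a worked example:

```r
## Not run: placeholders only
fm   <- lmer(y ~ x + (1 | grp), data = d)
ddf  <- nrow(d) - hatTrace(fm)          # n minus the trace of the hat matrix
pval <- 1 - pf(F.ratio, numDF, ddf)     # F.ratio, numDF taken from anova(fm)
```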

 - I then conclude that good estimates of P values on the F ratios lie
 between 1 - pf(F.ratio, numDF, n-p) and 1 - pf(F.ratio, numDF, n-q).
 - I further surmise that the latter of these (1 - pf(F.ratio, numDF,
 n-q)) is the more conservative estimate.

 When I use these criteria and compare my ANOVA table to the results
 of analysis of Helmert contrasts using MCMC sample with highest
 posterior density intervals, I find that my conclusions (e.g. factor
 A, with three levels, has a significant effect on the response
 variable) are qualitatively the same.

 Comments?

I would be happy to re-institute p-values for fixed effects in the
summary and anova methods for lmer objects using a denominator degrees
of freedom based on the trace of the hat matrix or the rank of Z:X if
others will volunteer to respond to the "these answers are obviously
wrong because they don't agree with whatever and the idiot who wrote
this software should be thrashed to within an inch of his life"
messages.  I don't have the patience.



Re: [R] graphics - joining repeated measures with a line

2006-09-07 Thread hadley wickham
 I would like to join repeated measures for patients across two visits using
 a line. The program below uses symbols to represent each patient. Basically,
 I would like to join each pair of symbols.

This is easy in ggplot:

install.packages("ggplot")
library(ggplot)

qplot(visit, var, id = patient, type = c("line", "point"),
  colour = factor(patient))

Regards,

Hadley



[R] counting process form of a cox model (cluster(id))?

2006-09-07 Thread z . dalton
Hi,

I am currently analysing a counting-process form of a Cox model, allowing for 
the inclusion of time-dependent covariates.  An example model I have fitted is 

modlqol <- coxph(Surv(Tstart, Tstop, cens.time) ~ tmt.first + risk
  + lqol + cluster(id), data = cat)
summary(modlqol)

My question is quick.  I am looking at 1 event (death), and repeated 
measurements (the time dependent covariate 'lqol') are frequently taken on a 
subject, so I assume that measurements on the same subject will be correlated.  
For this reason, I included the cluster(id) term in the model.  However, on 
p. 70 of Therneau and Grambsch, it states 

'one concern that often arises is that observations on the same individual are 
correlated and thus would not be handled by standard methods.  This is not 
actually an issue. ...'

so, does anyone recommend that I include the 'cluster(id)' term, or does this 
only need to be used in the situation where there are multiple events (e.g. in 
the bladder cancer study by Wei, Lin and Weissfeld)?

I appreciate any help on the matter,

Thanks,

Zoe



Re: [R] Conservative ANOVA tables in lmer

2006-09-07 Thread lorenz.gygax
Dear Douglas,

 I would be happy to re-institute p-values for fixed effects in the
 summary and anova methods for lmer objects using a denominator
 degrees of freedom based on the trace of the hat matrix or the rank
 of Z:X

Please do!

 if others will volunteer to respond to the these answers are
 obviously wrong because they don't agree with whatever and the
 idiot who wrote this software should be thrashed to within an inch
 of his life messages.  I don't have the patience.

I would try to take up my share of these types of questions.

Best regards, Lorenz
- 
Lorenz Gygax
Dr. sc. nat., postdoc

Centre for proper housing of ruminants and pigs
Swiss Federal Veterinary Office
Agroscope Reckenholz-Tänikon Research Station ART

Tänikon, CH-8356 Ettenhausen / Switzerland
Tel: +41 052 368 33 84
Fax: +41 052 365 11 90
[EMAIL PROTECTED]
www.art.admin.ch



Re: [R] continuation lines in R script files

2006-09-07 Thread Evan Cooch
Joris De Wolf wrote:
 Are you sure your second solution does not work? Try again...

   

Turns out the second approach did work - but only once I stopped 
cutting-and-pasting between two different operating systems (Linux and 
Windows under Linux). Apparently, some of the cut-and-paste things I was 
doing added weird EOL characters (unseen) or some such...

Ah well.



[R] Memory allocation

2006-09-07 Thread alex lam \(RI\)
Dear list,

I have been trying to run the function qvalue under the package qvalue
on a vector with about 20 million values.

 asso_p.qvalue <- qvalue(asso_p.vector)
Error: cannot allocate vector of size 156513 Kb
 sessionInfo()
Version 2.3.1 (2006-06-01)
i686-pc-linux-gnu

attached base packages:
[1] methods   stats graphics  grDevices utils
datasets
[7] base

other attached packages:
qvalue
 1.1
 gc()
used  (Mb) gc trigger   (Mb)  max used   (Mb)
Ncells320188   8.6   23540643  628.7  20464901  546.5
Vcells 101232265 772.4  294421000 2246.3 291161136 2221.4

I have been told that the linux box has 4Gb of RAM, so it should be able
to do better than this.
I searched the FAQ and found some tips on increasing memory size, but
they seem to be Windows specific, such as memory.size() and the
--max-mem-size flag. On my linux box R didn't recognise them.

I don't understand the meaning of --max-vsize, --max-nsize and --max-ppsize.
Any help on how to increase the memory allocation on linux is much
appreciated.

Many thanks,
Alex


Alex Lam
PhD student
Department of Genetics and Genomics
Roslin Institute (Edinburgh)
Roslin
Midlothian EH25 9PS

Phone +44 131 5274471
Web   http://www.roslin.ac.uk



Re: [R] Problem with Variance Components (and general glmm confusion)

2006-09-07 Thread Toby Gardner
Dear Dr Bates,



Many thanks for such a useful response to my problem.



Regarding Variance Components...



The VarCorr function runs fine for lmer objects once the nlme package is 
removed.



Regarding the format of the nested random effects for an lmer object, you 
said:



In recent versions of lme4 you can use the specification

model2 - lmer(y ~ 1 + (1|groupA/groupB))

Your version may be correct or not.  It depends on what the distinct
levels of groupB correspond to.  The version with the / is more
reliable.



This works well. These are environmental data measured at plots within sites 
(B), within forests (A).



Here is the model (I have put a dump of the data file used for these 
analyses at the end of this email):



 modelusd <- lmer(USD~1 + (1|forest/site))

 summary(modelusd)

Linear mixed-effects model fit by REML

Formula: USD ~ 1 + (1 | forest/site)

  AIC  BIClogLik MLdeviance REMLdeviance

 816.7469 825.7788 -405.3734   815.0236 810.7469

Random effects:

 Groups  NameVariance Std.Dev.

 site:forest (Intercept)  6.2099  2.4920

 forest  (Intercept) 33.0435  5.7483

 Residual10.4335  3.2301

number of obs: 150, groups: site:forest, 15; forest, 3



Fixed effects:

Estimate Std. Error t value

(Intercept)   9.8033 3.3909  2.8911



And VarCorr confirms the variance components:



 VarCorr(modelusd)

$`site:forest`

1 x 1 Matrix of class dpoMatrix

(Intercept)

(Intercept)6.209851



$forest

1 x 1 Matrix of class dpoMatrix

(Intercept)

(Intercept)33.04345



attr(,"sc")

[1] 3.230100



And following your suggestion I used the HPDinterval to obtain a measure of 
error around the random effects:



 MC.modelusd <- mcmcsamp(modelusd, 5)

 HPDinterval(MC.modelusd)

 lower  upper

(Intercept) -3.5815626  23.202750

log(sigma^2) 2.1168335   2.594976

log(st:f.(In))   0.8209994   3.250159

log(frst.(In))   1.0676778   8.676852

deviance   814.8747050 829.055630

attr(,"Probability")

[1] 0.95



What I am really after are the intra-class correlation coefficients so I can 
demonstrate the variability in a given environmental variable at different 
spatial scales.  I can of course calculate the % variance explained for each 
random effect from the summary(lmer).  However - and this may be a stupid 
question! - but can the intervals for the StDev of the random effects also 
just be transformed to intervals of the variance (and then converted to % 
values for the intra-class correlation coefficients) by squaring?
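For the point estimates at least, the intra-class correlations follow
directly from the variance components printed above. A sketch using the
first model's numbers (whether the interval endpoints can simply be squared
in the same way is exactly the question raised here):

```r
vc <- c(site = 6.2099, forest = 33.0435, residual = 10.4335)
round(100 * vc / sum(vc), 1)  # % of total variance at each level
# site 12.5, forest 66.5, residual 21.0
```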



Ideally I would like to partition the variance explained by all (three) 
spatially nested scales - forest / site / array - where array is the sample 
unit.  Using lmer produces the model summary I want:



 modelusd2 <- lmer(USD~1 + (1|forest/site/array))

 summary(modelusd2)



Linear mixed-effects model fit by REML

Formula: USD ~ 1 + (1 | forest/site/array)

  AIC  BIClogLik MLdeviance REMLdeviance

 818.7469 830.7894 -405.3734   815.0236 810.7469

Random effects:

 Groups  NameVariance Std.Dev.

 array:(site:forest) (Intercept)  7.5559  2.7488

 site:forest (Intercept)  6.2099  2.4920

 forest  (Intercept) 33.0435  5.7484

 Residual 2.8776  1.6963

number of obs: 150, groups: array:(site:forest), 150; site:forest, 15; 
forest, 3



Fixed effects:

Estimate Std. Error t value

(Intercept)   9.8033 3.3909  2.8911



However - the mcmcsamp process fails



 MC.modelusd2 <- mcmcsamp(modelusd2, 5)



with this error message:



Error: Leading minor of order 1 in downdated X'X is not positive definite

Error in t(.Call(mer_MCMCsamp, object, saveb, n, trans, verbose)) :

unable to find the argument 'x' in selecting a method for 
function 't'





Am I trying something impossible here?



Regarding GLMMs... (now with species count data, blocking random factors and 
multiple fixed factors)



When using lmer I would suggest using method = "Laplace" and perhaps
control = list(usePQL = FALSE, msVerbose = 1) as I mentioned in
another reply to the list a few minutes ago.



This seems to work well, thanks.



With the greatest respect to all concerned, I would like to echo the request 
made by Martin Maechler on the list a few weeks ago: it would be extremely 
useful (especially for newcomers like me, and it would likely reduce the 
traffic on this list, judging by many past threads) if package authors could 
be explicit in the help files about how their functions differ (key 
advantages and disadvantages) from otherwise very similar functions in other 
packages (e.g. lmer/glmmML - although the subsequent comment by Dr Bates on 
this helped a lot).



Many thanks!



Toby Gardner



platform   i386-pc-mingw32
arch   i386
os mingw32
system i386, mingw32
status
major  2
minor  3.1
year

Re: [R] Memory allocation

2006-09-07 Thread Prof Brian Ripley
On Thu, 7 Sep 2006, alex lam (RI) wrote:

 Dear list,
 
 I have been trying to run the function qvalue under the package qvalue
 on a vector with about 20 million values.
 
  asso_p.qvalue <- qvalue(asso_p.vector)
 Error: cannot allocate vector of size 156513 Kb
  sessionInfo()
 Version 2.3.1 (2006-06-01)
 i686-pc-linux-gnu
 
 attached base packages:
 [1] methods   stats graphics  grDevices utils
 datasets
 [7] base
 
 other attached packages:
 qvalue
  1.1
  gc()
 used  (Mb) gc trigger   (Mb)  max used   (Mb)
 Ncells320188   8.6   23540643  628.7  20464901  546.5
 Vcells 101232265 772.4  294421000 2246.3 291161136 2221.4
 
 I have been told that the linux box has 4Gb of RAM, so it should be able
 to do better than this.

But it also has a 4Gb/process address space, and of that some (1Gb?) is 
reserved for the system.  So it is quite possible that with 2.2Gb used you 
are unable to find any large blocks.

 I searched the FAQ and found some tips on increasing memory size, but
 they seem to be Windows specific, such as memory.size() and the
 --max-mem-size flag. On my linux box R didn't recognise them.

?Memory-limits is the key

 Error messages beginning 'cannot allocate vector of size' indicate
 a failure to obtain memory, either because the size exceeded the
 address-space limit for a process or, more likely, because the
 system was unable to provide the memory.  Note that on a 32-bit OS
 there may well be enough free memory available, but not a large
 enough contiguous block of address space into which to map it.

 I don't understand the meaning of max-vsize, max-nsize and max-ppsize.
 Any help on how to increase the memory allocation on linux is much
 appreciated.

Get a 64-bit OS.

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595



Re: [R] Conservative ANOVA tables in lmer

2006-09-07 Thread Martin Maechler
 DB == Douglas Bates [EMAIL PROTECTED]
 on Thu, 7 Sep 2006 07:59:58 -0500 writes:

DB Thanks for your summary, Hank.
DB On 9/7/06, Martin Henry H. Stevens [EMAIL PROTECTED] wrote:
 Dear lmer-ers,
 My thanks for all of you who are sharing your trials and tribulations
 publicly.

 I was hoping to elicit some feedback on my thoughts on denominator
 degrees of freedom for F ratios in mixed models. These thoughts and
 practices result from my reading of previous postings by Doug Bates
 and others.

 - I start by assuming that the appropriate denominator degrees lies
 between n - p and n - q, where n=number of observations, p=number
 of fixed effects (rank of model matrix X), and q=rank of Z:X.

DB I agree with this but the opinion is by no means universal.  Initially
DB I misread the statement because I usually write the number of columns
DB of Z as q.

DB It is not easy to assess rank of Z:X numerically.  In many cases one
DB can reason what it should be from the form of the model but a general
DB procedure to assess the rank of a matrix, especially a sparse matrix,
DB is difficult.

DB An alternative which can be easily calculated is n - t where t is the
DB trace of the 'hat matrix'.  The function 'hatTrace' applied to a
DB fitted lmer model evaluates this trace (conditional on the estimates
DB of the relative variances of the random effects).

 - I then conclude that good estimates of P values on the F ratios lie
   between 1 - pf(F.ratio, numDF, n-p) and 1 - pf(F.ratio, numDF, n-q).
   -- I further surmise that the latter of these (1 - pf(F.ratio, numDF,
   n-q)) is the more conservative estimate.

This assumes that the true distribution (under H0) of that F ratio
*is*  F_{n1,n2}  for some (possibly non-integer)  n1 and n2.
But AFAIU, this is only approximately true at best, and AFAIU,
the quality of this approximation has only been investigated
empirically for some situations. 
Hence, even your conservative estimate of the P value could be
wrong (I mean wrong on the wrong side instead of just
conservatively wrong).  Consequently, such a P-value is only
``approximately conservative'' ...
I agree however that in some situations, it might be a very
useful descriptive statistic about the fitted model.

Martin

 When I use these criteria and compare my ANOVA table to the results
 of analysis of Helmert contrasts using MCMC sample with highest
 posterior density intervals, I find that my conclusions (e.g. factor
 A, with three levels, has a significant effect on the response
 variable) are qualitatively the same.

 Comments?

DB I would be happy to re-institute p-values for fixed effects in the
DB summary and anova methods for lmer objects using a denominator degrees
DB of freedom based on the trace of the hat matrix or the rank of Z:X if
DB others will volunteer to respond to the these answers are obviously
DB wrong because they don't agree with whatever and the idiot who wrote
DB this software should be thrashed to within an inch of his life
DB messages.  I don't have the patience.




Re: [R] Conservative ANOVA tables in lmer

2006-09-07 Thread Douglas Bates
On 9/7/06, Martin Maechler [EMAIL PROTECTED] wrote:
  DB == Douglas Bates [EMAIL PROTECTED]
  on Thu, 7 Sep 2006 07:59:58 -0500 writes:

 DB Thanks for your summary, Hank.
 DB On 9/7/06, Martin Henry H. Stevens [EMAIL PROTECTED] wrote:
  Dear lmer-ers,
  My thanks for all of you who are sharing your trials and tribulations
  publicly.

  I was hoping to elicit some feedback on my thoughts on denominator
  degrees of freedom for F ratios in mixed models. These thoughts and
  practices result from my reading of previous postings by Doug Bates
  and others.

  - I start by assuming that the appropriate denominator degrees lies
  between n - p and and n - q, where n=number of observations, p=number
  of fixed effects (rank of model matrix X), and q=rank of Z:X.

 DB I agree with this but the opinion is by no means universal.  Initially
 DB I misread the statement because I usually write the number of columns
 DB of Z as q.

 DB It is not easy to assess rank of Z:X numerically.  In many cases one
 DB can reason what it should be from the form of the model but a general
 DB procedure to assess the rank of a matrix, especially a sparse matrix,
 DB is difficult.

 DB An alternative which can be easily calculated is n - t where t is the
 DB trace of the 'hat matrix'.  The function 'hatTrace' applied to a
 DB fitted lmer model evaluates this trace (conditional on the estimates
 DB of the relative variances of the random effects).

  - I then conclude that good estimates of P values on the F ratios lie
between 1 - pf(F.ratio, numDF, n-p) and 1 - pf(F.ratio, numDF, n-q).
-- I further surmise that the latter of these (1 - pf(F.ratio, numDF,
n-q)) is the more conservative estimate.

 This assumes that the true distribution (under H0) of that F ratio
 *is*  F_{n1,n2}  for some (possibly non-integer)  n1 and n2.
 But AFAIU, this is only approximately true at best, and AFAIU,
 the quality of this approximation has only been investigated
 empirically for some situations.
 Hence, even your conservative estimate of the P value could be
 wrong (I mean wrong on the wrong side instead of just
 conservatively wrong).  Consequently, such a P-value is only
 ``approximately conservative'' ...
 I agree however that in some situations, it might be a very
 useful descriptive statistic about the fitted model.

Thank you for pointing that out Martin.  I agree.  As I mentioned a
value of the denominator degrees of freedom based on the trace of the
hat matrix is conditional on the estimates of the relative variances
of the random effects.  I think an argument could still be made for
the upper bound on the dimension of the model space being rank of Z:X
and hence a lower bound on the dimension of the space in which the
residuals lie as being n - rank[Z:X].  One possible approach would be
to use the squared length of the projection of the data vector into
the orthogonal complement of Z:X as the sum of squares and n -
rank(Z:X) as the degrees of freedom and base tests on that.  Under the
assumptions on the model I think an F ratio calculated using that
actually would have an F distribution.


 Martin

  When I use these criteria and compare my ANOVA table to the results
  of analysis of Helmert contrasts using MCMC sample with highest
  posterior density intervals, I find that my conclusions (e.g. factor
  A, with three levels, has a significant effect on the response
  variable) are qualitatively the same.

  Comments?

 DB I would be happy to re-institute p-values for fixed effects in the
 DB summary and anova methods for lmer objects using a denominator degrees
 DB of freedom based on the trace of the hat matrix or the rank of Z:X if
 DB others will volunteer to respond to the these answers are obviously
 DB wrong because they don't agree with whatever and the idiot who wrote
 DB this software should be thrashed to within an inch of his life
 DB messages.  I don't have the patience.





Re: [R] Conservative ANOVA tables in lmer

2006-09-07 Thread Peter Dalgaard
Martin Maechler [EMAIL PROTECTED] writes:

  DB == Douglas Bates [EMAIL PROTECTED]
  on Thu, 7 Sep 2006 07:59:58 -0500 writes:
 
 DB Thanks for your summary, Hank.
 DB On 9/7/06, Martin Henry H. Stevens [EMAIL PROTECTED] wrote:
  Dear lmer-ers,
  My thanks for all of you who are sharing your trials and tribulations
  publicly.
 
  I was hoping to elicit some feedback on my thoughts on denominator
  degrees of freedom for F ratios in mixed models. These thoughts and
  practices result from my reading of previous postings by Doug Bates
  and others.
 
  - I start by assuming that the appropriate denominator degrees lies
  between n - p and n - q, where n=number of observations, p=number
  of fixed effects (rank of model matrix X), and q=rank of Z:X.
 
 DB I agree with this but the opinion is by no means universal.  Initially
 DB I misread the statement because I usually write the number of columns
 DB of Z as q.
 
 DB It is not easy to assess rank of Z:X numerically.  In many cases one
 DB can reason what it should be from the form of the model but a general
 DB procedure to assess the rank of a matrix, especially a sparse matrix,
 DB is difficult.
 
 DB An alternative which can be easily calculated is n - t where t is the
 DB trace of the 'hat matrix'.  The function 'hatTrace' applied to a
 DB fitted lmer model evaluates this trace (conditional on the estimates
 DB of the relative variances of the random effects).
 
  - I then conclude that good estimates of P values on the F ratios lie
between 1 - pf(F.ratio, numDF, n-p) and 1 - pf(F.ratio, numDF, n-q).
-- I further surmise that the latter of these (1 - pf(F.ratio, numDF,
n-q)) is the more conservative estimate.
 
 This assumes that the true distribution (under H0) of that F ratio
 *is*  F_{n1,n2}  for some (possibly non-integer)  n1 and n2.
 But AFAIU, this is only approximately true at best, and AFAIU,
 the quality of this approximation has only been investigated
 empirically for some situations. 
 Hence, even your conservative estimate of the P value could be
 wrong (I mean wrong on the wrong side instead of just
 conservatively wrong).  Consequently, such a P-value is only
 ``approximately conservative'' ...
 I agree however that in some situations, it might be a very
 useful descriptive statistic about the fitted model.

I'm very wary of ANY attempt at guesswork in these matters. 

I may be understanding the post wrongly, but consider this case: Y_ij
= mu + z_i + eps_ij, i = 1..3, j=1..100

I get rank(X)=1, rank(X:Z)=3,  n=300

It is well known that the test for mu=0 in this case is obtained by
reducing data to group means, xbar_i, and then do a one-sample t test,
the square of which is F(1, 2), but it seems to be suggested that
F(1, 297) is a conservative test???!

-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907



Re: [R] singular factor analysis

2006-09-07 Thread Spencer Graves
Hi, Patrick:  Thanks very much.  I'll try it.  Spencer Graves

Patrick Burns wrote:
 This is a very common computation in finance.

 On the public domain page of the Burns Statistics website
 in the financial part is the code and R help file for
 'factor.model.stat'.  Most of the complication of the code
 is to deal with missing values.

 Patrick Burns
 [EMAIL PROTECTED]
 +44 (0)20 8525 0696
 http://www.burns-stat.com
 (home of "S Poetry" and "A Guide for the Unwilling S User")

 Spencer Graves wrote:

  Are there any functions available to do a factor analysis with 
 fewer observations than variables?  As long as you have more than 3 
 observations, my computations suggest you have enough data to 
 estimate a factor analysis covariance matrix, even though the sample 
 covariance matrix is singular.  I tried the naive thing and got an 
 error:
  set.seed(1)
 X <- array(rnorm(50), dim=c(5, 10))
  factanal(X, factors=1)
 Error in solve.default(cv) : system is computationally singular: 
 reciprocal condition number = 4.8982e-018

  I can write a likelihood for a multivariate normal and solve it, 
 but I wondered if there is anything else available that could do this?
  Thanks,
  Spencer Graves



  




Re: [R] Matrix multiplication using apply() or lappy() ?

2006-09-07 Thread Tim Hesterberg
[EMAIL PROTECTED] asked:
I am trying to divide the columns of a matrix by the first row in the 
matrix.

Dividing columns of a matrix by a vector is a pretty fundamental
operation, and the query resulted in a large number of suggestions:

x/matrix(v, nrow(x), ncol(x), byrow = TRUE)
sweep(x, 2, v, "/")
x / rep(v, each = nrow(x))
x / outer(rep(1, nrow(x)), v)
x %*% diag(1/v)
t(apply(x, 1, function(x) x/v))
x/rep(v, each=nrow(x))
t(apply(x, 1, "/", v))
library(reshape); iapply(x, 1, "/", v)  # R only
t(t(x)/v)
scale(x, center = FALSE, v)  # not previously suggested
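As a quick check that the listed expressions really do agree (toy values of
my own choosing):

```r
x <- matrix(1:6, nrow = 2)
v <- c(1, 2, 10)                    # one divisor per column
r1 <- sweep(x, 2, v, "/")
r2 <- x / rep(v, each = nrow(x))
r3 <- t(t(x) / v)
stopifnot(all.equal(r1, r2), all.equal(r2, r3))
```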


It is unsatisfactory when such a fundamental operation is
done in so many different ways.  
* It makes it hard to read other people's code.  
* Some of these are very inefficient.

I propose to create standard functions and possibly operator forms
for this and similar operators:

colPlus(x, v)   x %c+% v
colMinus(x, v)  x %c-% v
colTimes(x, v)  x %c*% v
colDivide(x, v) x %c/% v
colPower(x, v)  x %c^% v

Goals are:
* more readable code
* generic functions, with methods for objects such as data frames
  and S-PLUS bigdata objects  (this would be for both S-PLUS and R)
* efficiency -- use the fastest of the above methods, or drop to C
  to avoid replicating v.
* allow error checking (that length of v matches number of columns of x)
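As a sketch of how one of the proposed functions might look (colDivide is a hypothetical name taken from the proposal above, not an existing function; the body uses the rep() variant from the list, plus the error check named in the goals):

```r
# Hypothetical colDivide(): divide column j of x by v[j],
# with the length check the plain arithmetic forms lack.
colDivide <- function(x, v) {
  if (length(v) != ncol(x))
    stop("length(v) must equal ncol(x)")
  x / rep(v, each = nrow(x))   # recycles v down each column, column-major
}

x <- matrix(1:6, nrow = 2)     # columns: (1,2), (3,4), (5,6)
colDivide(x, c(1, 2, 3))       # divides column j by v[j]
```

The rep(v, each = nrow(x)) form avoids building a full matrix from v, which is why it is among the faster pure-R variants listed above.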

I'd like feedback (to me, I'll summarize for the list) on:
* the suggestion in general
* are names like colPlus OK, or do you have other suggestions?
* create both functions and operators, or just the functions?
* should there be similar operations for rows?  

Note:  similar operations for rows are not usually needed, because
x * v  # e.g. where v = colMeans(x)
is equivalent to (but faster than)
x * rep(v, length = length(x))
The advantage would be that
rowTimes(x, v)
could throw an error if length(v) != nrow(x)

Tim Hesterberg

P.S.  Of the suggestions, my preference is
a / rep(v, each=nrow(a))
It was to support this and similar +-*^ operations that I originally
added the each argument to rep.


| Tim Hesterberg   Research Scientist  |
| [EMAIL PROTECTED]  Insightful Corp.|
| (206)802-23191700 Westlake Ave. N, Suite 500 |
| (206)283-8691 (fax)  Seattle, WA 98109-3044, U.S.A.  |
|  www.insightful.com/Hesterberg   |

Download the S+Resample library from www.insightful.com/downloads/libraries



[R] area between two curves, but one is not continuous

2006-09-07 Thread Anton Meyer
Hello,

I want to colorize the area between two curves, but one of these
curves isn't continuous.

The best solution I found is the 2nd example in the help of polygon,
but how can I get no area filling for the missing data in the 2nd curve.

example:

x1 = c(1:8)
x2 = c(1:8)
y1 = c(1,5,6,1,4,5,5,5)
y2 = c(0,3,3,NA,NA,1,3,4)

plot(x1, y1, type="l")
lines(x2, y2)

for the missing parts I want no filling.

so for this examples the code would be:
polygon(c(1:3,3:1), c(y1[1:3], rev(y2[1:3])), col="green")
polygon(c(6:8,8:6), c(y1[6:8], rev(y2[6:8])), col="green")

How can I generalize this for a longer curve with more data?
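One way to generalize this (a sketch using the example's own vectors): find the runs where y2 is not NA with rle() and draw one polygon per run:

```r
# Sketch: fill between y1 and y2 only where y2 has data, one polygon
# per non-NA run of y2 (run boundaries found with rle()).
x1 <- 1:8
y1 <- c(1, 5, 6, 1, 4, 5, 5, 5)
y2 <- c(0, 3, 3, NA, NA, 1, 3, 4)

plot(x1, y1, type = "l")
lines(x1, y2)

ok     <- !is.na(y2)
r      <- rle(ok)
ends   <- cumsum(r$lengths)
starts <- ends - r$lengths + 1
for (k in seq_along(r$values)) {
  if (r$values[k] && r$lengths[k] > 1) {   # need >= 2 points to fill
    i <- starts[k]:ends[k]
    polygon(c(x1[i], rev(x1[i])), c(y1[i], rev(y2[i])), col = "green")
  }
}
```

The same loop works for any length of curve; only the ok vector depends on where the gaps are.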

AxM



[R] [OT] Important stat dates

2006-09-07 Thread Erin Hodgess
Dear R People:

Way Off Topic:

Is anyone aware of a website that contains important dates
in statistics history, please?

Maybe a sort of "This Day in Statistics", please?

I thought that my students might get a kick out of that.

(actually I will probably enjoy it more than them!)

Thanks for any help!

I tried (via Google) "today in statistics" and "today in statistics
history" but nothing worthwhile appeared.

Sincerely,
Erin Hodgess
Associate Professor
Department of Computer and Mathematical Sciences
University of Houston - Downtown
mailto: [EMAIL PROTECTED]



Re: [R] barplot: different colors for the bar and the strips

2006-09-07 Thread Hao Chen
Hello Marc Schwartz

On Thu, Sep 07, 2006 at 07:54:05AM -0500, Marc Schwartz wrote:
 On Thu, 2006-09-07 at 06:18 -0500, Hao Chen wrote:
  Hi,
  
  I am using barplot and would like to know if it is possible to have bars
  filled with one color while use a different color for the shading lines. 
  
  The following code colors the shading lines, leaving the bars in white:
  
   barplot(1:5, col=c(1:5), density=c(1:5)*5)
  
  while the colors are applied to the bars when density is removed.
  
   barplot(1:5, col=c(1:5))
  
  I did check ?barplot and found the following: 
  
  col: a vector of colors for the bars or bar components. 
   
   Thanks,
  
   Hao
 
 Note the key word 'or' in the description of the 'col' argument.
 
 You need to make two separate calls to barplot(). The first using the
 fill colors, then the second using the shading lines AND setting 'add =
 TRUE', so that the second plot overwrites the first without clearing the
 plot device.
 
  barplot(1:5, col=c(1:5))
 
  barplot(1:5, col = "black", density=c(1:5), add = TRUE)
 
 Just be sure that any other arguments, such as axis limits, are
 identical between the two calls.
 
 HTH,
 
 Marc Schwartz
 


Thank you very much for your help. It works but only in the order as you
put it, since the following code only shows the color, but not the
shading lines:

barplot(1:5, col = "black", density=c(1:5))
barplot(1:5, col=c(1:5), add = TRUE)

Hao Chen

---
Mining PubMed: http://www.chilibot.net
-



Re: [R] barplot: different colors for the bar and the strips

2006-09-07 Thread Marc Schwartz (via MN)
On Thu, 2006-09-07 at 12:14 -0500, Hao Chen wrote:
 Hello Marc Schwartz
 
 On Thu, Sep 07, 2006 at 07:54:05AM -0500, Marc Schwartz wrote:
  On Thu, 2006-09-07 at 06:18 -0500, Hao Chen wrote:
   Hi,
   
   I am using barplot and would like to know if it is possible to have bars
   filled with one color while use a different color for the shading lines. 
   
   The following code colors the shading lines, leaving the bars in white:
   
barplot(1:5, col=c(1:5), density=c(1:5)*5)
   
   while the colors are applied to the bars when density is removed.
   
barplot(1:5, col=c(1:5))
   
   I did check ?barplot and found the following: 
   
 col: a vector of colors for the bars or bar components. 

Thanks,
   
Hao
  
  Note the key word 'or' in the description of the 'col' argument.
  
  You need to make two separate calls to barplot(). The first using the
  fill colors, then the second using the shading lines AND setting 'add =
  TRUE', so that the second plot overwrites the first without clearing the
  plot device.
  
   barplot(1:5, col=c(1:5))
  
   barplot(1:5, col = "black", density=c(1:5), add = TRUE)
  
  Just be sure that any other arguments, such as axis limits, are
  identical between the two calls.
  
  HTH,
  
  Marc Schwartz
  
 
 
 Thank you very much for your help. It works but only in the order as you
 put it, since the following code only shows the color, but not the
 shading lines:
 
  barplot(1:5, col = "black", density=c(1:5))
 barplot(1:5, col=c(1:5), add = TRUE)
 
 Hao Chen

That is correct. The sequence is important, as the shading lines are
drawn with a transparent background, enabling the original color to be
seen.

Reversing the order, you are overplotting the shading lines with opaque
colored rectangles. Hence, the lines are lost.

HTH,

Marc



[R] rgdal on a Mac

2006-09-07 Thread Jonathan Boyd Thayn
I am trying to install the rgdal package on my Mac OS X 3.9.  I 
downloaded and installed the GDAL libraries from Fink and then tried to 
install rgdal and got the following message.  I tried to determine if 
the GDAL libraries were in my path but I'm not sure how to do that.  
Any ideas?  Thanks.


trying URL 
'http://www.biometrics.mtu.edu/CRAN/src/contrib/rgdal_0.4-10.tar.gz'
Content type 'application/x-gzip' length 4009531 bytes
opened URL
==
downloaded 3915Kb

* Installing *source* package 'rgdal' ...
gdal-config: gdal-config
./configure: line 1: gdal-config: command not found

The gdal-config script distributed with GDAL could not be found.
If you have not installed the GDAL libraries, you can
download the source from  http://www.gdal.org/
If you have installed the GDAL libraries, then make sure that
gdal-config is in your path. Try typing gdal-config at a
shell prompt and see if it runs. If not, use:
  --configure-args='--with-gdal-config=/usr/local/bin/gdal-config' echo 
with appropriate values for your installation.


The downloaded packages are in
/private/tmp/Rtmp9zhfAK/downloaded_packages
** Removing 
'/Library/Frameworks/R.framework/Versions/2.2/Resources/library/rgdal'
** Restoring previous 
'/Library/Frameworks/R.framework/Versions/2.2/Resources/library/rgdal'
ERROR: configuration failed for package 'rgdal'


Jonathan B. Thayn
Kansas Applied Remote Sensing (KARS) Program
University of Kansas
Higuchi Hall
2101 Constant Avenue
Lawrence, Kansas 66047-3759
[EMAIL PROTECTED]
www.kars.ku.edu/about/people/thayn/JonSite/Welcome.html



Re: [R] [OT] Important stat dates

2006-09-07 Thread Marc Schwartz (via MN)
On Thu, 2006-09-07 at 11:57 -0500, Erin Hodgess wrote:
 Dear R People:
 
 Way Off Topic:
 
 Is anyone aware of a website that contains important dates
 in statistics history, please?
 
 Maybe a sort of This Day in Statistics, please?
 
 I thought that my students might get a kick out of that.
 
 (actually I will probably enjoy it more than them!)
 
 Thanks for any help!
 
 I tried (via Google) today in statistics and today in statistics
 history but nothing worthwhile appeared.


Here are two pages that you might find helpful:

  http://www.york.ac.uk/depts/maths/histstat/welcome.htm

  http://www.economics.soton.ac.uk/staff/aldrich/Figures.htm

Both have additional references and reciprocal links.

HTH,

Marc Schwartz



[R] plot image matrix with row/col labels

2006-09-07 Thread Michael Friendly
I'm working with an historical image that may be (one of?) the first 
uses of gray-scale shading to show the pattern of values in a 
matrix/table, later used by Bertin in his 'reorderable matrix'
and sometimes called a scalogram.

The image is at
http://euclid.psych.yorku.ca/SCS/Gallery/images/Private/scalogram.jpg
The rows refer to the arrondissements of Paris, the columns to various
population characteristics.

I want to read it into R with rimage (read.jpeg), calculate average
shading values, and recreate an approximation to the image with the
row and column labels.  I'm stuck at this last step,
plot(imagematrix(mat))
(and also on how to read the image from a URL).

Can someone help?  My code is below

library(rimage)
image <- read.jpeg("C:/Documents/milestone/images/scalogram.jpg")
## how to read from web?
#image <- read.jpeg("http://euclid.psych.yorku.ca/SCS/Gallery/images/Private/scalogram.jpg")

# remove row/col headers
img2 - image[480:1740, 470:2350]
str(img2)

# size of each blob
ht <- floor(nrow(img2)/20)
wd <- floor(ncol(img2)/40)

# calculate trimmed mean of pixel values
mat <- matrix(nrow=20, ncol=40, 0)
for (i in 1:20) {
  for (j in 1:40) {
    rows <- seq(1+(i-1)*ht, i*ht)
    cols <- seq(1+(j-1)*wd, j*wd)
    blob <- img2[rows, cols]
    mat[i,j] <- mean(blob, trim=0.1)
  }
}


# names for arrondissements
rnames <- c(
  "01 Louvre", "02 Bourse", "03 Temple", "04 Hotel de Ville", "05 Pantheon",
  "06 Luxembourg", "07 Palais", "08 Eglise", "09 Opera", "10 St. Laurent",
  "11 Popincourt", "12 Reuilly", "13 Goeblins", "14 Observatoire", "15 Vaurigard",
  "16 Passy", "17 Batingnoles", "18 Montmartre", "19 B. Chaumont", "20 Menilmontant")

# names for population characteristics
cnames <- c(
  "01 Accrois. pop", "02 Pop specifique", "03 Habitants/menage", "04 Maisons/hectare",
  "05 Habitants/maison", "06 Appart./maison", "07 Appart. vacantes", "08 Locaux Indust.C",
  "09 Garnisson", "10 Parisiens", "11 Provinseaux", "12 Etrangers", "13 Calvinistes",
  "14 Lutheriens", "15 Isrealites", "16 Libres penseurs", "17 Illettres", "18 Enfants",
  "19 Mineurs", "20 Adultes", "21 Vieillards", "22 Electeurs", "23 Horticulture",
  "24 Industrie", "25 Commerce", "26 Transports", "27 Prof. diverses", "28 Prof. liberales",
  "29 Forces publiques", "30 Admin. publique", "31 Clerge", "32 Proprietaires rentiers",
  "33 Pop. aisee", "34 Employees", "35 Ouvriers", "36 Journaliers", "37 Domestiques",
  "38 Chevaux", "39 Chiens", "40 Moralite")

dimnames(mat) - list(rnames, cnames)

# how to plot the image matrix with row/col names???

# show the image matrix
plot(imagematrix(mat))
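For the labeling question, one option (a sketch with base graphics: image() plus axis(); plot_mat is a hypothetical helper, and the margin and cex values are guesses to fit the long labels):

```r
# Sketch: draw a matrix as a grey-scale image with row/column labels
# in the margins, using base image() + axis().
plot_mat <- function(mat) {
  op <- par(mar = c(1, 12, 12, 1))
  on.exit(par(op))
  m <- t(mat[nrow(mat):1, ])   # image() draws z columns bottom-to-top
  image(seq_len(ncol(mat)), seq_len(nrow(mat)), m,
        col = grey(seq(1, 0, length = 64)),
        axes = FALSE, xlab = "", ylab = "")
  axis(2, at = seq_len(nrow(mat)), labels = rev(rownames(mat)),
       las = 2, cex.axis = 0.6, tick = FALSE)   # row labels, left
  axis(3, at = seq_len(ncol(mat)), labels = colnames(mat),
       las = 2, cex.axis = 0.6, tick = FALSE)   # column labels, top
  box()
}

# demo with a small stand-in matrix (the real 'mat' needs the JPEG)
demo <- matrix(runif(12), 3, 4,
               dimnames = list(paste("row", 1:3), paste("col", 1:4)))
plot_mat(demo)
```

Flipping the rows before transposing keeps row 1 at the top, matching the table's reading order.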


-- 
Michael Friendly Email: friendly AT yorku DOT ca
Professor, Psychology Dept.
York University  Voice: 416 736-5115 x66249 Fax: 416 736-5814
4700 Keele Streethttp://www.math.yorku.ca/SCS/friendly.html
Toronto, ONT  M3J 1P3 CANADA



Re: [R] Alternatives to merge for large data sets?

2006-09-07 Thread Adam D. I. Kramer

On Thu, 7 Sep 2006, Prof Brian Ripley wrote:

 Which version of R?

Previously, 2.3.1.

 Please try 2.4.0 alpha, as it has a different and more efficient
 algorithm for the case of 1-1 matches.

I downloaded and installed R-latest, but got the same error message:

Error: cannot allocate vector of size 7301 Kb

...though at least the too-big size was larger this time.

My data set is not exactly 1-1; every item in prof may have one or more
matches in pubbounds, though every item in pubbounds corresponds only to
one prof.

--Adam


 On Wed, 6 Sep 2006, Adam D. I. Kramer wrote:

 Hello,

 I am trying to merge two very large data sets, via

 pubbounds.prof <-
 merge(x=pubbounds, y=prof, by.x="user", by.y="userid", all=TRUE, sort=FALSE)

 which gives me an error of

 Error: cannot allocate vector of size 2962 Kb

 I am reasonably sure that this is correct syntax.

 The trouble is that pubbounds and prof are large; they are data frames which
 take up 70M and 11M respectively when saved as .Rdata files.

 I understand from various archive searches that merge can't handle that,
 because merge takes n^2 memory, which I do not have.

 Not really true (it has been changed since those days).  Of course, if you
 have multiple matches it must do so.

 My question is whether there is an alternative to merge which would carry
 out the process in a slower, iterative manner...or if I should just bite the
 bullet, write.table, and use a perl script to do the job.

 Thankful as always,
 Adam D. I. Kramer

 -- 
 Brian D. Ripley,  [EMAIL PROTECTED]
 Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
 University of Oxford, Tel:  +44 1865 272861 (self)
 1 South Parks Road, +44 1865 272866 (PA)
 Oxford OX1 3TG, UKFax:  +44 1865 272595




[R] October R/Splus course @ 3 locations *** R/Splus Fundamentals and Programming Techniques

2006-09-07 Thread elvis
XLSolutions Corporation (www.xlsolutions-corp.com) is proud to
announce our 2-day October 2006 R/S-plus Fundamentals and Programming
Techniques : www.xlsolutions-corp.com/Rfund.htm

*** Washington DC / October 12-13, 2006
*** Seattle Wa  / October 19-20
*** San Francisco / October 26-27  

Reserve your seat now at the early bird rates! Payment due AFTER
the class

Course Description:

This two-day beginner to intermediate R/S-plus course focuses on a
broad spectrum of topics, from reading raw data to a comparison of R
and S. We will learn the essentials of data manipulation, graphical
visualization and R/S-plus programming. We will explore statistical
data analysis tools, including graphics with data sets: how to enhance
your plots, build your own packages (libraries) and connect via
ODBC, etc.
We will perform some statistical modeling and fit linear regression
models. Participants are encouraged to bring data for interactive
sessions.

With the following outline:

- An Overview of R and S
- Data Manipulation and Graphics
- Using Lattice Graphics
- A Comparison of R and S-Plus
- How can R Complement SAS?
- Writing Functions
- Avoiding Loops
- Vectorization
- Statistical Modeling
- Project Management
- Techniques for Effective use of R and S
- Enhancing Plots
- Using High-level Plotting Functions
- Building and Distributing Packages (libraries)
- Connecting; ODBC, Rweb, Orca via sockets and via Rjava


Email us for group discounts.
Email Sue Turner: [EMAIL PROTECTED]
Phone: 206-686-1578
Visit us: www.xlsolutions-corp.com/training.htm
Please let us know if you and your colleagues are interested in this
class to take advantage of the group discount. Register now to secure
your seat!

Interested in R/Splus Advanced course? email us.


Cheers,
Elvis Miller, PhD
Manager Training.
XLSolutions Corporation
206 686 1578
www.xlsolutions-corp.com
[EMAIL PROTECTED]



Re: [R] Alternatives to merge for large data sets?

2006-09-07 Thread bogdan romocea
One obvious alternative is an SQL join, which you could do directly in
a DBMS, or from R via RMySQL / RSQLite /... Keep in mind that creating
indexes on user/userid before the join may save a lot of time.
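A sketch of that route with RSQLite (assumes a current DBI/RSQLite; the toy tables and the user/userid columns mirror the thread, with made-up contents):

```r
library(DBI)  # RSQLite supplies the SQLite backend for DBI
con <- dbConnect(RSQLite::SQLite(), ":memory:")  # use a file for big data

prof      <- data.frame(userid = 1:3, dept = c("a", "b", "c"))
pubbounds <- data.frame(user = c(1, 1, 2, 3), n = c(10, 20, 30, 40))
dbWriteTable(con, "prof", prof)
dbWriteTable(con, "pubbounds", pubbounds)

# index the join column first, as suggested above
dbExecute(con, "CREATE INDEX idx_user ON pubbounds(user)")

# the join is done by SQLite rather than by merge()
res <- dbGetQuery(con, "
  SELECT p.userid, p.dept, b.n
  FROM prof p LEFT JOIN pubbounds b ON p.userid = b.user")
dbDisconnect(con)
res   # one row per prof/pubbounds match
```

With an on-disk database file, the join never requires the whole merged result to be built in R's memory at once.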


 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Adam
 D. I. Kramer
 Sent: Thursday, September 07, 2006 2:46 PM
 To: Prof Brian Ripley
 Cc: r-help@stat.math.ethz.ch
 Subject: Re: [R] Alternatives to merge for large data sets?


 On Thu, 7 Sep 2006, Prof Brian Ripley wrote:

  Which version of R?

 Previously, 2.3.1.

  Please try 2.4.0 alpha, as it has a different and more efficient
  algorithm for the case of 1-1 matches.

 I downloaded and installed R-latest, but got the same error message:

 Error: cannot allocate vector of size 7301 Kb

 ...though at least the too-big size was larger this time.

 My data set is not exactly 1-1; every item in prof may have
 one or more
 matches in pubbounds, though every item in pubbounds
 corresponds only to
 one prof.

 --Adam

 
  On Wed, 6 Sep 2006, Adam D. I. Kramer wrote:
 
  Hello,
 
  I am trying to merge two very large data sets, via
 
  pubbounds.prof <-
  merge(x=pubbounds, y=prof, by.x="user", by.y="userid", all=TRUE, sort=FALSE)
 
  which gives me an error of
 
  Error: cannot allocate vector of size 2962 Kb
 
  I am reasonably sure that this is correct syntax.
 
  The trouble is that pubbounds and prof are large; they are
 data frames which
  take up 70M and 11M respectively when saved as .Rdata files.
 
  I understand from various archive searches that merge
 can't handle that,
  because merge takes n^2 memory, which I do not have.
 
  Not really true (it has been changed since those days).  Of
 course, if you
  have multiple matches it must do so.
 
  My question is whether there is an alternative to merge
 which would carry
  out the process in a slower, iterative manner...or if I
 should just bite the
  bullet, write.table, and use a perl script to do the job.
 
  Thankful as always,
  Adam D. I. Kramer
 
  --
  Brian D. Ripley,  [EMAIL PROTECTED]
  Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
  University of Oxford, Tel:  +44 1865 272861 (self)
  1 South Parks Road, +44 1865 272866 (PA)
  Oxford OX1 3TG, UKFax:  +44 1865 272595
 





[R] Running/submitting script files

2006-09-07 Thread Zodet, Marc W. (AHRQ)
All:

 

Is there any way to run an R script file (i.e., *.R) from the command
prompt in the console window.  Ultimately, I'm looking to put such code
in a script file so that it can set off other R scripts/programs as
needed.
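If the "console window" means the R console itself, the usual tool is source(), which runs a script file in the current session; a self-contained sketch using a temporary file (the file name and its contents are made up for illustration):

```r
# Run one R script from another (or from the R prompt) with source().
f <- tempfile(fileext = ".R")
writeLines("answer <- 6 * 7", f)   # stand-in for an existing script

source(f)   # executes the file; objects it creates appear here
answer      # 42
```

A master script can source() several such files in turn to "set off" other R programs; the OS-level alternatives (R CMD BATCH) are covered in the replies below in this thread.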

 

Thanks.

 

Marc

 





Re: [R] Running/submitting script files

2006-09-07 Thread Dirk Eddelbuettel

On 7 September 2006 at 14:39, Zodet, Marc W. (AHRQ) wrote:
| Is there any way to run an R script file (i.e., *.R) from the command
| prompt in the console window.  Ultimately, I'm looking to put such code
| in a script file so that it can set off other R scripts/programs as
| needed.

Which platform?  On Linux/Unix, Jeffrey Horner's rinterp does just that.
Currently at version 0.0.4 and may undergo a renaming in the near future ...

http://wiki.r-project.org/rwiki/doku.php?id=developers:rinterp

Hth, Dirk

-- 
Hell, there are no rules here - we're trying to accomplish something. 
  -- Thomas A. Edison



Re: [R] Running/submitting script files

2006-09-07 Thread roger bos
Using R in batch mode should work on both Windows and Linux:

R CMD BATCH (assuming that R.exe is in your path)

Even without R's location in your path, you could issue the following
command at the prompt (in Windows; the quotes are needed because of the
space in "Program Files"):
"c:\Program Files\R\R-2.3.1\bin\R.exe" CMD BATCH --vanilla --slave i:\R_HOME\batch_file.R

where batch_file.R has the script you want to run.  I use this command in
windows task scheduler.



On 9/7/06, Dirk Eddelbuettel [EMAIL PROTECTED] wrote:


 On 7 September 2006 at 14:39, Zodet, Marc W. (AHRQ) wrote:
 | Is there any way to run an R script file (i.e., *.R) from the command
 | prompt in the console window.  Ultimately, I'm looking to put such code
 | in a script file so that it can set off other R scripts/programs as
 | needed.

 Which platform?  On Linux/Unix, Jeffrey Horner's rinterp does just that.
 Currently at version 0.0.4 and may undergo a renaming in the near future
 ...

 http://wiki.r-project.org/rwiki/doku.php?id=developers:rinterp

 Hth, Dirk

 --
 Hell, there are no rules here - we're trying to accomplish something.
  -- Thomas A. Edison






[R] Running wilcox.test function on two lists

2006-09-07 Thread Raj, Towfique
Dear all,
I'm a newbie to R and I would really appreciate any help with the following:

I have two lists, l1 and l2:

l1:
$A*0101
[1] 0.076 0.109 0.155  0.077 0.09 0  0  0.073
[9] 0.33  0.0034 0.0053


$A*0247
[1] 0 0 0.5 .004 0 0 0

$A*0248
[1] 0 0 0.3 0 0.06



l2:

$A*1101
[1] 0.17  0.24  0.097  0.075  0.067

$A*0247
numeric(0)

$A*0248
[1] 0.031



Basically, what I want to do is run wilcox.test() on each entry pair
in the list.

1) I want to loop through the list to run wilcox.test for each entry
of the list. How would I do that? mapply()?

  wilcox.test(l0[[1]],l1[[1]]) for the first one and so on

2) I want to exclude the list entry which has no values (i.e. A*0247).

3) Finally, I only want the to see the 'p-value' for each list names.
The output I want capture is only the 'p-value' object from
wilcox.test.

name      p-value
A*0101    0.8329
...

I'm grateful for any help, or any pointers to a good online tutorial.

Thanks a lot in advance,

-T.



Re: [R] Model vs. Observed for a lme() regression fit using two variables

2006-09-07 Thread CG Pettersson
Hi Andrew,

Thanks a lot, That would give me what I want.
But using my own data and models resulted in this:

 plot(fitted(tcos31.c.cp, level=1), FCR.c$g.cp)
Error in xy.coords(x, y, xlabel, ylabel, log) :
'x' and 'y' lengths differ

This is quite correct, as there are some missing values in the covariate
and I made the model using the 'na.action=na.omit' option.

I know there is a way of using the model to fix this, but haven't been
able to get the code right during the afternoon.

How do I code this and where should I have looked?

Cheers
/CG




On Thu, September 7, 2006 12:03 pm, Andrew Robinson said:
 Hi CG,

 I think that the best pair of summary plots are

 1) the fitted values without random effects against the observed
response variable, and

 2) fitted values with random effects against the observed response
variable.

 The first plot gives a summary of the overall quality of the fixed
 effects of the model, the second gives a summary of the overall
 quality of the fixed effects and random effects of the model.

 eg

 fm1 <- lme(distance ~ age, data = Orthodont)

 plot(fitted(fm1, level=0), Orthodont$distance)
 abline(0, 1, col="red")

 plot(fitted(fm1, level=1), Orthodont$distance)
 abline(0, 1, col="red")

 I hope that this helps.

 Andrew

 On Thu, Sep 07, 2006 at 11:35:40AM +0200, CG Pettersson wrote:
 Dear all.

 R 2.3.1, W2k.

 I am working with a field trial series where, for the moment, I do
 regressions using more than one covariate to explain the protein levels
 in malting barley.

 To do this I use lme() and a mixed call, structured by both experiment
 (trial) and repetition in each experiment (block). Everything works
 fine, resulting in nice working linear models using two covariates. But
 how do I visualize this in an efficient and clear way?

 What I want is something like the standard output from all multivariate
 tools I have worked with (Observed vs. Predicted) with the least square
 line in the middle. It is naturally possible to plot each covariate
 separate, and also to use the 3D scatterplot in Rcmdr to plot both at
 the same time, but I want a plain 2d plot.

 Who has made a plotting method for this and where do I find it?
 Or am I missing something obvious here, that this plot is easy to
 achieve without any ready made methods?

 Cheers
 /CG

 --
 CG Pettersson, MSci, PhD Stud.
 Swedish University of Agricultural Sciences (SLU)
 Dept. of Crop Production Ecology. Box 7043.
 SE-750 07 UPPSALA, Sweden.
 +46 18 671428, +46 70 3306685
 [EMAIL PROTECTED]


 --
 Andrew Robinson
 Department of Mathematics and StatisticsTel: +61-3-8344-9763
 University of Melbourne, VIC 3010 Australia Fax: +61-3-8344-4599
 Email: [EMAIL PROTECTED] http://www.ms.unimelb.edu.au



-- 
CG Pettersson, MSci, PhD Stud.
Swedish University of Agricultural Sciences (SLU)
Dep. of Crop Production Ekology. Box 7043.
SE-750 07 Uppsala, Sweden
[EMAIL PROTECTED]



Re: [R] Running wilcox.test function on two lists

2006-09-07 Thread Dimitrios Rizopoulos
try something like the following:

lis1 <- c(lapply(1:10, rnorm, n = 10))
lis2 <- c(lapply(1:10, rnorm, n = 10))
lis1[[5]] <- lis2[[8]] <- numeric(0)

ind <- sapply(lis1, length) > 0 & sapply(lis2, length) > 0
lis1 <- lis1[ind]
lis2 <- lis2[ind]

mapply(function(x, y) wilcox.test(x, y)$p.value, lis1, lis2)


I hope it helps.

Best,
Dimitris


Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven

Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/(0)16/336899
Fax: +32/(0)16/337015
Web: http://med.kuleuven.be/biostat/
  http://www.student.kuleuven.be/~m0390867/dimitris.htm


Quoting Raj, Towfique [EMAIL PROTECTED]:

 Dear all,
 I'm a newbie to R and I would really appreciate any help with the following:

 I have two lists, l1 and l2:

 l1:
 $A*0101
 [1] 0.076 0.109 0.155  0.077 0.09 0  0  0.073
 [9] 0.33  0.0034 0.0053


 $A*0247
 [1] 0 0 0.5 .004 0 0 0

 $A*0248
 [1] 0 0 0.3 0 0.06

 

 l2:

 $A*1101
 [1] 0.17  0.24  0.097  0.075  0.067

 $A*0247
 numeric(0)

 $A*0248
 [1] 0.031

 

 Basically, what I want to do is run wilcox.test() on each entry pair
 in the list.

 1) I want to loop through the list to run wilcox.test for each entry
 of the list. How would I do that? mapply()?

   wilcox.test(l0[[1]],l1[[1]]) for the first one and so on

 2) I want to exclude the list entry which has no values (i.e. A*0247).

 3) Finally, I only want the to see the 'p-value' for each list names.
 The output I want capture is only the 'p-value' object from
 wilcox.test.

 namep-value
A*0101  0.8329
    

 I'm grateful for any help, or any pointers to a good online tutorial.

 Thanks a lot in advance,

 -T.






Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm



[R] pairwise.t.test vs. t. test

2006-09-07 Thread Li,Qinghong,ST.LOUIS,Molecular Biology
Hi,

If I set p.adjust.method="none", does that mean that the output p-values from 
pairwise.t.test will be the same as those from individual t.tests (with 
var.equal=T, alternative="t")?

I actually got different p values from the two tests. See below. Is it supposed 
to be this way?

Thanks
Johnny

 x
 [1] 61.6 52.7 61.3 65.2 62.8 63.7 64.8 58.7 44.9 57.0 64.3 55.1 50.0 41.0
[15] 43.0 45.9 52.2 45.5 46.9 31.6 40.6 44.8 39.4 31.0 37.5 32.6 23.2 34.6
[29] 38.3 38.1 19.5 21.2 15.8 33.3 28.6 25.8
 Grp
 [1] Yng Yng Yng Yng Yng Yng Yng Yng Yng Yng Yng Yng Med Med Med Med Med Med
[19] Med Med Med Med Med Med Old Old Old Old Old Old Old Old Old Old Old Old
Levels: Yng Med Old
  pairwise.t.test(x=x, g=Grp, p.adjust.method="none")

Pairwise comparisons using t tests with pooled SD 

data:  x and Grp 

Yng Med
Med 1.0e-06 -  
Old 2.0e-12 2.6e-05

P value adjustment method: none 


 t.test(x=x[1:12], y=x[25:36], var.equal=T, alternative="t")

Two Sample t-test

data:  x[1:12] and x[25:36] 
t = 10.5986, df = 22, p-value = 4.149e-10
alternative hypothesis: true difference in means is not equal to 0 
95 percent confidence interval:
 24.37106 36.22894 
sample estimates:
mean of x mean of y 
 59.34167  29.04167 
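For what it's worth, the two calls differ because pairwise.t.test() pools the SD across all groups by default (pool.sd = TRUE), while a separate t.test() pools only the two groups compared. A sketch with simulated (made-up) data: turning pooling off and passing var.equal = TRUE through to t.test() reproduces the two-sample p-value:

```r
# Demonstrate the pooled-SD difference with three simulated groups.
set.seed(1)
x   <- c(rnorm(12, 60, 7), rnorm(12, 45, 7), rnorm(12, 30, 7))
Grp <- factor(rep(c("Yng", "Med", "Old"), each = 12),
              levels = c("Yng", "Med", "Old"))

# pool.sd = FALSE makes each pair its own two-sample t-test;
# var.equal = TRUE is passed on to t.test().
pw <- pairwise.t.test(x, Grp, p.adjust.method = "none",
                      pool.sd = FALSE, var.equal = TRUE)
tt <- t.test(x[1:12], x[25:36], var.equal = TRUE)

all.equal(pw$p.value["Old", "Yng"], tt$p.value)   # TRUE: same test
```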




[R] reading images in R

2006-09-07 Thread Nair, Murlidharan T
 

Are there functions to read image files in jpg, gif or even a pdf file?

Thanks ../Murli

 

 




[R] labeling graphs

2006-09-07 Thread Nair, Murlidharan T
I am trying to add text at a specific location on my graph. I know this
can be done in R but I can't recall how.

I was trying to use locator() to identify the position and then identify(),
but I can't get it to work. Can someone jog my memory?

Thanks ../Murli

 




[R] augPred plot in nlme library

2006-09-07 Thread Afshartous, David
All,
 
I'm trying to create an augPred plot in the nlme library, similar to the
plot on 
p. 43 of Pinheiro & Bates (Mixed-Effects Models in S and S-PLUS) for
their Pixel data.
 
My data structure is the same as the example but I still get the error
msg below.
 

> comp.adj.UKV <- groupedData(adj.UKV ~ Time | Patient_no/Lisinopril, 
      data = comp.adj.UKV.frm, order.groups = FALSE)
 
> fm1comp <- lme(adj.UKV ~ Time + Time.sq, data = comp.adj.UKV,
      random = list(Patient_no = ~ 1))
 
> plot(augPred(fm1comp, level = 1))
Error in model.frame(formula, rownames, variables, varnames, extras,
extranames,  : 
variable lengths differ

I've checked all the variable lengths, and have also made sure that
factors are correctly defined as factors.  Is there anything special I
need to do for augPred to work correctly?  I checked the help but
didn't find much.
 
cheers,
dave
 
 
 
 
 
David Afshartous, PhD
University of Miami
School of Business
Rm KE-408
Coral Gables, FL 33124
 



Re: [R] pairwise.t.test vs. t. test

2006-09-07 Thread MARK LEEDS
No, because the formulas for the test statistics (even assuming that 
variances are equal) of the two different tests are different. In the 
pairwise t-test, the pairwise differences are viewed as one sample, so it 
turns into a one-sample test. Any intro stats book will have the formulas.

 
 
 mark





- Original Message - 
From: Li,Qinghong,ST.LOUIS,Molecular Biology [EMAIL PROTECTED]
To: r-help@stat.math.ethz.ch
Sent: Thursday, September 07, 2006 5:07 PM
Subject: [R] pairwise.t.test vs. t. test


 [quoted message trimmed; see the original post above]


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] pairwise.t.test vs. t. test

2006-09-07 Thread Chuck Cleland
MARK LEEDS wrote:
 no, because the formula for the test statistics ( even assuming that 
 variances are equal ) of the two different tests are different. in the 
 pairwise t test, the pairwise differences are
 viewed as one sample so it turns into a one sample test. any intro stat book 
 will have the formulas.
  
  mark

  Actually, I think the difference is due to the SD being pooled across
all 3 groups in the pairwise.t.test, but just 2 groups in t.test.
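  This can be checked directly. A minimal sketch, assuming
pairwise.t.test's pool.sd argument (which switches off the pooling across
all groups): with pool.sd = FALSE each comparison uses only the two groups
involved, and the unadjusted p-values then match separate Welch t.test()
calls.

```r
# Johnny's data from the original post
x <- c(61.6, 52.7, 61.3, 65.2, 62.8, 63.7, 64.8, 58.7, 44.9, 57.0, 64.3, 55.1,
       50.0, 41.0, 43.0, 45.9, 52.2, 45.5, 46.9, 31.6, 40.6, 44.8, 39.4, 31.0,
       37.5, 32.6, 23.2, 34.6, 38.3, 38.1, 19.5, 21.2, 15.8, 33.3, 28.6, 25.8)
Grp <- factor(rep(c("Yng", "Med", "Old"), each = 12),
              levels = c("Yng", "Med", "Old"))

# default: SD pooled across all three groups (what the poster ran)
p.pooled <- pairwise.t.test(x, Grp, p.adjust.method = "none")$p.value

# pool.sd = FALSE: each pair analysed on its own (Welch t-tests)
p.two <- pairwise.t.test(x, Grp, pool.sd = FALSE,
                         p.adjust.method = "none")$p.value

# the pairwise Welch p-value agrees with a direct two-sample t.test()
p.direct <- t.test(x[Grp == "Yng"], x[Grp == "Old"])$p.value
all.equal(p.two["Old", "Yng"], p.direct)
```

Note this reproduces t.test() with its default var.equal = FALSE; the
poster's var.equal = TRUE calls would still differ from the default
pairwise.t.test output, because the latter pools the SD over all three
groups.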

 [quoted message trimmed; see the original post above]

-- 
Chuck Cleland, Ph.D.
NDRI, Inc.
71 West 23rd Street, 8th floor
New York, NY 10010
tel: (212) 845-4495 (Tu, Th)
tel: (732) 512-0171 (M, W, F)
fax: (917) 438-0894



Re: [R] rgdal on a Mac

2006-09-07 Thread Roger Bivand
On Thu, 7 Sep 2006, Jonathan Boyd Thayn wrote:

 I am trying to install the rgdal package on my Mac OS X 3.9.  I 
 downloaded and installed the GDAL libraries from Fink and then tried to 
 install rgdal and got the following message.  I tried to determine if 
 the GDAL libraries were in my path but I'm not sure how to do that.  
 Any ideas?  Thanks.
 

(The R-sig-geo mailing list may be a more appropriate place to look for an 
answer)

Unfortunately, I as maintainer of the package have no access to OSX. I do 
know that OSX users have installed rgdal successfully, and installation 
instructions are on the Rgeo website:

http://www.sal.uiuc.edu/tools/tools-sum/rgeo/rgeo-detail/map-packages-on-cran

OSX: The rgdal source package from CRAN can be installed on OSX by first 
installing PROJ.4 and GDAL, then installing sp, and finally download the 
source package tarball to a suitable temporary location, and install with 
R CMD INSTALL ... your options ... rgdal*.tar.gz. Your options give the 
locations, if required, of --with-gdal-config=, --with-proj-include=, 
and/or --with-proj-lib=, all within --configure-args='' as described in 
section 1.2.2 of the 'Writing R Extensions' manual.

But this presupposes that you can find the installed software on your 
system yourself, something that is difficult to do at a distance. If OSX 
has the locate utility, you could run it in a terminal, or search for the 
files needed (in Finder??), but an OSX user would know the correct way 
forward. I expect that you have installed PROJ.4 too - do either of 
proj -lp or gdalinfo --formats or ogrinfo --formats at a terminal prompt 
say anything useful to indicate that the applications using the libraries 
are available and working?

 
 trying URL 
 'http://www.biometrics.mtu.edu/CRAN/src/contrib/rgdal_0.4-10.tar.gz'
 Content type 'application/x-gzip' length 4009531 bytes
 opened URL
 ==
 downloaded 3915Kb
 
 * Installing *source* package 'rgdal' ...
 gdal-config: gdal-config
 ./configure: line 1: gdal-config: command not found
 
 The gdal-config script distributed with GDAL could not be found.
 If you have not installed the GDAL libraries, you can
 download the source from  http://www.gdal.org/
 If you have installed the GDAL libraries, then make sure that
 gdal-config is in your path. Try typing gdal-config at a
 shell prompt and see if it runs. If not, use:
   --configure-args='--with-gdal-config=/usr/local/bin/gdal-config'
 with appropriate values for your installation.
 
 
 The downloaded packages are in
   /private/tmp/Rtmp9zhfAK/downloaded_packages
 ** Removing 
 '/Library/Frameworks/R.framework/Versions/2.2/Resources/library/rgdal'
 ** Restoring previous 
 '/Library/Frameworks/R.framework/Versions/2.2/Resources/library/rgdal'
 ERROR: configuration failed for package 'rgdal'
 
 
 Jonathan B. Thayn
 Kansas Applied Remote Sensing (KARS) Program
 University of Kansas
 Higuchi Hall
 2101 Constant Avenue
 Lawrence, Kansas 66047-3759
 [EMAIL PROTECTED]
 www.kars.ku.edu/about/people/thayn/JonSite/Welcome.html
 

-- 
Roger Bivand
Economic Geography Section, Department of Economics, Norwegian School of
Economics and Business Administration, Helleveien 30, N-5045 Bergen,
Norway. voice: +47 55 95 93 55; fax +47 55 95 95 43
e-mail: [EMAIL PROTECTED]



Re: [R] pairwise.t.test vs. t. test

2006-09-07 Thread MARK LEEDS
Thanks. I assumed we were talking about the standard textbook difference 
between the t-test and the pairwise t-test.
My bad.



[quoted message trimmed; see the original post above]




Re: [R] pairwise.t.test vs. t. test

2006-09-07 Thread Peter Dalgaard
MARK LEEDS [EMAIL PROTECTED] writes:

 thanks. i assumed we we were talking about the standard textbook difference 
 between the t test and pairwise t test.
 my bad.

Notice the difference between "paired" and "pairwise"...
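For anyone tripped up by the same terms, a tiny sketch of the distinction
(hypothetical data, for illustration only): a *paired* t-test is a
one-sample test on within-unit differences, while pairwise.t.test runs a
two-sample comparison for every pair of groups.

```r
set.seed(1)  # hypothetical before/after measurements on the same 10 units
before <- rnorm(10, mean = 10)
after  <- before + rnorm(10, mean = 1)

# paired: equivalent to a one-sample t-test on the differences
p1 <- t.test(after, before, paired = TRUE)$p.value
p2 <- t.test(after - before)$p.value   # same p-value by construction

# pairwise: one two-sample comparison per pair of groups
g <- gl(3, 10, labels = c("before", "after", "later"))
y <- c(before, after, after + 1)
pairwise.t.test(y, g, p.adjust.method = "none")
```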


[quoted message trimmed; see the original post above]

-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907



[R] Probabilites for all groups using knn function in R

2006-09-07 Thread Liang Wei
Hello, dear useR,

Is there any way to get the posterior probabilities for each group from 
knn(), instead of only the proportion for the winning class?

For example, knn(train=Train[,-c(1:3)], test=Test, cl=group.id.train, k=K, prob=TRUE)
will give you the proportion of votes for the winning class, but I also want 
the vote proportions for the other classes. How can I get those?
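[Editor's sketch] knn() in the class package only reports the winning
share, so the other shares have to be reconstructed. One hedged
work-around, not part of knn's documented interface: find the k nearest
neighbours by hand (plain Euclidean distance, no distance-tie handling)
and tabulate the votes for every class. knn.votes is a made-up name for
illustration.

```r
library(class)  # only for the knn() cross-check at the end

# vote share for every class, for each test row (simple Euclidean kNN)
knn.votes <- function(train, test, cl, k) {
  cl <- factor(cl)
  t(apply(as.matrix(test), 1, function(p) {
    d  <- sqrt(colSums((t(as.matrix(train)) - p)^2))  # distances to train set
    nn <- order(d)[seq_len(k)]                        # indices of k nearest
    table(cl[nn]) / k                                 # per-class vote share
  }))
}

tr <- iris[c(1:25, 51:75, 101:125), 1:4]
te <- iris[c(26:30, 76:80, 126:130), 1:4]
g  <- iris$Species[c(1:25, 51:75, 101:125)]

v  <- knn.votes(tr, te, g, k = 5)  # one row per test case, one column per class
# cross-check the winning share against knn(..., prob = TRUE)
pr <- knn(tr, te, g, k = 5, prob = TRUE)
cbind(round(v, 2), winner = as.character(pr))
```

Rows of v sum to 1; note that ties in distance may make this disagree
with knn(), which uses its own tie-breaking (see the use.all argument).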

Thanks very much in advance!

Sincerely,

Leon




Re: [R] labeling graphs

2006-09-07 Thread Gabor Grothendieck
Issue this command and then click anywhere on the plot.

loc <- locator(1); do.call("text", c(loc, "abc"))


On 9/7/06, Nair, Murlidharan T [EMAIL PROTECTED] wrote:
 I am trying to add text at specific location on my graph. I know this
 can be done in R but I can't recollect.

 I was trying using locator() to identify the position and use identify()
 but I can get it to work. Can someone jog my memory?

 Thanks ../Murli








Re: [R] area between two curves, but one is not continuous

2006-09-07 Thread Gabor Grothendieck
If you don't need borders on the polygons then it can be simply done two points
at a time checking that neither point is an NA:

# data
x1 <- x2 <- 1:8
y1 <- c(1,5,6,1,4,5,5,5)
y2 <- c(0,3,3,NA,NA,1,3,4)

# plot
plot(x1, y1, type = "l")
lines(x2, y2)

# fill in area between curves with green, two points at a time
for(i in seq(2, length(x1)))
   if (!any(is.na(y2[c(i-1, i)])))
      polygon(c(x1[i-1], x1[i], x2[i], x2[i-1]),
              c(y1[i-1], y1[i], y2[i], y2[i-1]),
              col = "green", border = 0)



On 9/7/06, Anton Meyer [EMAIL PROTECTED] wrote:
 Hello,

 I want to colorize the area between two curves, but one of these
 curves isn't continuous.

 The best solution I found is the 2nd example in the help of polygon,
 but how can I get no area filling for the missing data in the 2nd curve.

 example:

 x1 = c(1:8)
 x2 = c(1:8)
 y1 = c(1,5,6,1,4,5,5,5)
 y2 = c(0,3,3,NA,NA,1,3,4)

 plot(x1, y1, type = "l")
 lines(x2,y2)

 for the missing parts I want no filling.

 so for this examples the code would be:
 polygon(c(1:3,3:1), c(y1[1:3], rev(y2[1:3])), col = "green")
 polygon(c(6:8,8:6), c(y1[6:8], rev(y2[6:8])), col = "green")

 How can I generalize this for a longer curve with more data?



[R] Weighted association map

2006-09-07 Thread kone
Could somebody program this kind of plot type in R, if none exists,  
based on MDS or correlation tables or some more suitable method? What  
do you think about the idea? Does it work? Does anything similar or better exist?

http://weightedassociationmap.blogspot.com/


Atte Tenkanen
University of Turku, Finland
