Re: [R] Cannot update some packages after upgrade to 2.1.1

2005-07-16 Thread Prof Brian Ripley
-lf77blas is part of ATLAS, so I do suspect the RPM builder had ATLAS 
installed.

lme4 needs a compatible Matrix installed.

I do think installing from the sources would solve this, but probably you 
need to discuss this with the RPM maintainer as a dependency appears to 
be missing.

On Fri, 15 Jul 2005, Kevin E. Thorpe wrote:

 I just upgraded to version 2.1.1 (from 2.0.1) today.

> R.version
  _
 platform i686-pc-linux-gnu
 arch i686
 os   linux-gnu
 system   i686, linux-gnu
 status
 major2
 minor1.1
 year 2005
 month06
 day  20
 language R

 I am using SuSE 9.2 and did the upgrade using rpm -U with the RPM
 available on CRAN.  After upgrading r-base, I ran update.packages().
 Four previously installed packages failed to update:

   Matrix (0.95-5 to 0.97-4)
   gam (0.93 to 0.94)
   lme4 (0.95-3 to 0.96-1)
   mgcv (1.3-1 to 1.3-4)

 In the case of Matrix, gam and mgcv I get the message:

   [long path]/ld: cannot find -lf77blas

 In the case of lme4 the messages are:

 ** preparing package for lazy loading
Error in setMethod("coef", signature(object = "lmList"), function(object,  :
 no existing definition for function 'coef'
 Error: unable to load R code in package 'lme4'
 Execution halted
 ERROR: lazy loading failed for package 'lme4'

 I have searched the SuSE repository for any package that provides
 f77blas but came up empty.

 I also could not identify any relevant messages in the mailing list
 archives.

 Is R looking for that library because it was present on the machine
 the RPM was built on?  Would building R myself solve the missing library
 problem or did I do something wrong?

 -- 
 Kevin E. Thorpe
 Biostatistician/Trialist, Knowledge Translation Program
 Assistant Professor, Department of Public Health Sciences
 Faculty of Medicine, University of Toronto
 email: [EMAIL PROTECTED]  Tel: 416.946.8081  Fax: 416.971.2462

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595



Re: [R] Coxph with factors

2005-07-16 Thread Kylie-Anne Richards
Thank you for your help.

 In any case, to specify f.pom you need it to be a factor with the same set 
 of levels.  You don't say what the lowest level of pom is, but if it is, 
 say, -3.

 f.pom=factor(-3, levels=seq(-3,2.5, by=0.5))


For this particular model, f.pom starts at -5.5 going to 2 in 0.5 
increments. I seem to have misunderstood your explanation, as R is still 
producing an error.

final <- coxph(Surv(time.sec,done)~f.pom+vo+po,data=DATA)
final.surv <- survfit((final), 
data.frame(po=0,vo=1,f.pom=factor(-5.5,levels=seq(-5.5,2,by=0.5)),individual=T))
summary(final.surv)
Error in x2 %*% coef : non-conformable arguments
Execution halted


 I would first note that the survival function at zero covariates is not a 
 very useful thing and is usually numerically unstable, and it's often more 
 useful to get the survival function at some reasonable set of covariates.


Please correct me if I'm wrong, I was under the impression that the survival 
function at zero covariates gave the baseline distribution. I.e. if given 
the baseline prob.,S_0, at time t, one could calculate the survival prob for 
specified covariates by 
S_0^exp(beta(vo)*specified(vo)+beta(po)*specified(po)+beta(f.pom at the 
level of interest)) for time t.

Since I was unable to get survfit to work with specified covariates, I was 
using the survival probs of the 'avg' covariates, S(t), to determine the 
baseline at time t, i.e. 
S(t)^(1/exp(beta(vo)*mean(vo)+beta(po)*mean(po)+beta(f.pom-5.5)*mean(f.pom-5.5)+beta(f.pom-5.0)*mean(f.pom-5.0)+).
 
And then proceeding as mentioned in the above paragraph (clearly not an 
efficient way of doing things).


- Original Message - 
From: Thomas Lumley [EMAIL PROTECTED]
To: Kylie-Anne Richards [EMAIL PROTECTED]
Cc: r-help@stat.math.ethz.ch
Sent: Saturday, July 16, 2005 12:31 AM
Subject: Re: [R] Coxph with factors


 On Fri, 15 Jul 2005, Kylie-Anne Richards wrote:

 FIRST Q: The default uses the mean of 'vo' and mean of 'po', but what is 
 it
 using for the factors?? Is it the sum of the coef of the factors divided 
 by
 the number of factors??

 It uses the mean of each factor variable.  The $means component of the fit 
 gives the values it uses.

 SECOND Q: For a model with covariates I would normally specify:
 final.surv <- survfit((final), data.frame(po=0,vo=0,pom=0,individual=T)) to
 get the baseline survival prob.; what would I specify for a model with a
 factor, i.e., 'f.pom' ??

 I would first note that the survival function at zero covariates is not a 
 very useful thing and is usually numerically unstable, and it's often more 
 useful to get the survival function at some reasonable set of covariates.

 In any case, to specify f.pom you need it to be a factor with the same set 
 of levels.  You don't say what the lowest level of pom is, but if it is, 
 say, -3.

 f.pom=factor(-3, levels=seq(-3,2.5, by=0.5))


  =thomas




[R] cbind a list of matrices

2005-07-16 Thread Mauro Gasparini

Dear users,

I have a list of several matrices with the same number of columns,
how do I rbind them all with a vectorized command?

A related simpler question is, how do I vectorize the instruction
that rbinds together several copies of the same matrix?

Thanks.

-- 
Mauro Gasparini



Re: [R] R: to the power

2005-07-16 Thread Bernardo Rangel Tura
At 10:11 12/7/2005, [EMAIL PROTECTED] wrote:

hi all

why does R do this:

(-8)^(1/3)=NaN

the answer should be : -2

Allan

In my computer:
> (-8)^(1/3)
[1] NaN

> -8^(1/3)
[1] -2

> -(8^(1/3))
[1] -2


Is the problem -8, or is it (-8)?


[]s
Tura 





Re: [R] cbind a list of matrices

2005-07-16 Thread Ted Harding
On 16-Jul-05 Mauro Gasparini wrote:
 
 Dear users,
 
 I have a list of several matrices with the same number of columns,
 how do I rbind them all with a vectorized command?
 
 A related simpler question is, how do I vectorize the instruction
 that rbinds together several copies of the same matrix?

Didn't you simply try:

 A <- matrix(c(1.1,1.2,1.3,1.4,1.5,1.6),ncol=3)
 B <- matrix(c(2.1,2.2,2.3,2.4,2.5,2.6),ncol=3)
 C <- matrix(c(3.1,3.2,3.3,3.4,3.5,3.6),ncol=3)
 A
 [,1] [,2] [,3]
[1,]  1.1  1.3  1.5
[2,]  1.2  1.4  1.6
 B
 [,1] [,2] [,3]
[1,]  2.1  2.3  2.5
[2,]  2.2  2.4  2.6
 C
 [,1] [,2] [,3]
[1,]  3.1  3.3  3.5
[2,]  3.2  3.4  3.6
 rbind(A,B,C)
 [,1] [,2] [,3]
[1,]  1.1  1.3  1.5
[2,]  1.2  1.4  1.6
[3,]  2.1  2.3  2.5
[4,]  2.2  2.4  2.6
[5,]  3.1  3.3  3.5
[6,]  3.2  3.4  3.6
 rbind(A,A,A)
 [,1] [,2] [,3]
[1,]  1.1  1.3  1.5
[2,]  1.2  1.4  1.6
[3,]  1.1  1.3  1.5
[4,]  1.2  1.4  1.6
[5,]  1.1  1.3  1.5
[6,]  1.2  1.4  1.6

If there's an exception under which the above does not work,
I'd be interested to hear of it!

Best wishes,
Ted.



E-Mail: (Ted Harding) [EMAIL PROTECTED]
Fax-to-email: +44 (0)870 094 0861
Date: 16-Jul-05   Time: 10:29:59
-- XFMail --



Re: [R] cbind a list of matrices

2005-07-16 Thread Peter Dalgaard
(Ted Harding) [EMAIL PROTECTED] writes:

 On 16-Jul-05 Mauro Gasparini wrote:
  
  Dear users,
  
  I have a list of several matrices with the same number of columns,
  how do I rbind them all with a vectorized command?
  
  A related simpler question is, how do I vectorize the instruction
  that rbinds together several copies of the same matrix?
 
 Didn't you simply try:
 
  A <- matrix(c(1.1,1.2,1.3,1.4,1.5,1.6),ncol=3)
  B <- matrix(c(2.1,2.2,2.3,2.4,2.5,2.6),ncol=3)
  C <- matrix(c(3.1,3.2,3.3,3.4,3.5,3.6),ncol=3)
  A
  [,1] [,2] [,3]
 [1,]  1.1  1.3  1.5
 [2,]  1.2  1.4  1.6
  B
  [,1] [,2] [,3]
 [1,]  2.1  2.3  2.5
 [2,]  2.2  2.4  2.6
  C
  [,1] [,2] [,3]
 [1,]  3.1  3.3  3.5
 [2,]  3.2  3.4  3.6
  rbind(A,B,C)
  [,1] [,2] [,3]
 [1,]  1.1  1.3  1.5
 [2,]  1.2  1.4  1.6
 [3,]  2.1  2.3  2.5
 [4,]  2.2  2.4  2.6
 [5,]  3.1  3.3  3.5
 [6,]  3.2  3.4  3.6
  rbind(A,A,A)
  [,1] [,2] [,3]
 [1,]  1.1  1.3  1.5
 [2,]  1.2  1.4  1.6
 [3,]  1.1  1.3  1.5
 [4,]  1.2  1.4  1.6
 [5,]  1.1  1.3  1.5
 [6,]  1.2  1.4  1.6
 
 If there's an exception under which the above does not work,
 I'd be interested to hear of it!

If the matrices literally are in a list, you might need

do.call(rbind, l)

which for multiple copies (5, say) of the same matrix might be done via

M <- matrix(1:4,2)
MM <- do.call(rbind,replicate(5, M, simplify=FALSE))

although it might be more efficient with

MM <- matrix(t(M),5*2,2,byrow=T) 
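As a quick illustration (my own check, not part of the original exchange), the two routes agree, and do.call() also handles a list built elsewhere:

```r
# Check that do.call(rbind, ...) matches explicit rbind;
# the matrices here are illustrative.
A <- matrix(1:6, ncol = 3)
B <- matrix(7:12, ncol = 3)
l <- list(A, B)
stopifnot(identical(do.call(rbind, l), rbind(A, B)))

# Five stacked copies of M, both ways:
M <- matrix(1:4, 2)
MM1 <- do.call(rbind, replicate(5, M, simplify = FALSE))
MM2 <- matrix(t(M), 5 * 2, 2, byrow = TRUE)
stopifnot(identical(MM1, MM2))
```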

-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907



Re: [R] Can't get sample function from An Introduction to R to work

2005-07-16 Thread Spencer Graves
  When you copied it into MS Word, did you Copy -> Paste Special -> 
Unformatted Text?  This trick sometimes might expose (or eliminate) 
nonprinting characters or characters with special attributes that might 
have created the problem.

  spencer graves

David Groos wrote:

 Thank you-all very much for your help, your responses and help has been 
 very encouraging.  The following doesn't close the case but it tables 
 it...
 
 First I copied Ken's code into my R Console and...it worked great!  
 That was baffling as it looked identical to mine.
 
 I did not explicitly say earlier that the code I sent out I had copied 
 from the console, pasted into MS Word, changed font size, then pasted 
 it into the e-mail--in other words, that was a copy of one of the codes 
 that didn't work.
 
 Anyway,  I then copied the non-working, pasted it into Console, 
 expecting that it would say Error: syntax error upon pressing the 
 return key at the end of this line:
 
 + tst <- (yb1-yb2)/sqrt(s*(1/n1+1/n2))
 
 but it didn't!  and the code worked this time, also!
 I then went about doing what I could to replicate the error from before 
 and was as unsuccessful in doing that as I was in making it work, 
 earlier.
 
 
 On Jul 15, 2005, at 4:43 PM, Peter Dalgaard wrote:
 
 
Bret Collier [EMAIL PROTECTED] writes:


David,
If below is exactly what you typed, check your code again, I think you
are missing a '}' after the last 2 parentheses.

That's not supposed to cause a syntax error, just another '+'.
 
 (right! that's what I thought...)
 
I can copy and paste the code as written and not get an error:


> twosam <- function(y1, y2) {
+ n1 <- length(y1); n2 <- length(y2)
+ yb1 <- mean(y1); yb2 <- mean(y2)
+ s1 <- var(y1); s2 <- var(y2)
+ s <- ((n1-1)*s1 + (n2-1)*s2)/(n1+n2-2)
+ tst <- (yb1-yb2)/sqrt(s*(1/n1+1/n2))
+ }

Perhaps this was the 1st time David got his typing right? The error is
of the sort you'd get if you had

 s <- ((n1-1)*s1 + (n2-1)*s2/(n1+n2-2)
 
 Interestingly enough, I did try this line when first I was trying to 
 make the code work and with just 1 ) at the end I didn't get the 
 error message, but then again I wasn't able to make the whole program 
 work, either.
 
 In conclusion, I can't explain why it didn't first work time nor why I 
 couldn't replicate the error.  I think I ought to e-mail the Mac R 
 folks about this.
 
 Again, Thanks,
 
 -David
 

-- 
Spencer Graves, PhD
Senior Development Engineer
PDF Solutions, Inc.
333 West San Carlos Street Suite 700
San Jose, CA 95110, USA

[EMAIL PROTECTED]
www.pdf.com http://www.pdf.com
Tel:  408-938-4420
Fax: 408-280-7915



Re: [R] rmpi in windows

2005-07-16 Thread Uwe Ligges
Uzuner, Tolga wrote:
 Hi Folks,
 Has anyone been able to get rmpi to work under windows ?

Rmpi uses the LAM implementation of MPI.
See
http://www.lam-mpi.org/faq/category12.php3
and read FAQ 2 which implicitly tells us that there is no native port, 
hence you cannot run it under Windows.
The package maintainer may know better.

Uwe Ligges


BTW: Why do you ask twice (in private message and on R-help)? I am 
reading R-help anyway ...




 Thanks,
 Tolga
 
 Please follow the attached hyperlink to an important disclaimer
 http://www.csfb.com/legal_terms/disclaimer_europe.shtml
 
 
 
 ==
 Please access the attached hyperlink for an important electronic 
 communications disclaimer: 
 
 http://www.csfb.com/legal_terms/disclaimer_external_email.shtml
 



Re: [R] Proportion test in three-chices experiment

2005-07-16 Thread Spencer Graves
  Have you considered BTm in library(BradleyTerry)?  Consider the 
following example:

  cond1 <- data.frame(winner=rep(LETTERS[1:3], e=2),
+   loser=c("B","C","A","C","A","B"),
+   Freq=1:6)
  cond2 <- data.frame(winner=rep(LETTERS[1:3], e=2),
+   loser=c("B","C","A","C","A","B"),
+   Freq=6:1)
  fit1 <- BTm(cond1 ~ ..)
  fit2 <- BTm(cond2 ~ ..)
  fit12 <- BTm(rbind(cond1, cond2) ~ ..)
  Dev12 <- (fit1$deviance+fit2$deviance
+   -fit12$deviance)
  pchisq(Dev12, 2, lower=FALSE)
[1] 0.8660497

  This says the difference between the two data sets, cond1 and cond2, 
are not statistically significant.

  Do you present each subject with only one pair?  If yes, then this 
model is appropriate.  If no, then the multiple judgments by the same 
subject are not statistically independent, as assumed by this model. 
However, if you don't get statistical significance via this kind of 
computation, it's unlikely that a better model would give you 
statistical significance.  If you get a p value of, say, 0.04, then the 
difference is probably NOT statistically significant.

  The p value you get here would be an upper bound.  You could get a 
lower bound by using only one of the three pairs presented to each 
subject selected at random.  If that p value were statistically 
significant, then I think it is safe to say that your two sets of 
conditions are significantly different.  For any value in between, it 
would depend on how independent the three choices by the same subject are. 
You might, for example, delete one of the three pairs at random and use 
the result of that comparison.

  There are doubtless better techniques, but I'm not familiar with 
them.  Perhaps someone else will reply to my reply.

  spencer graves

Rafael Laboissiere wrote:

 Hi,
 
 I wish to analyze with R the results of a perception experiment in which
 subjects had to recognize each stimulus among three choices (this was a
 forced-choice design).  The experiment runs under two different
 conditions and the data is like the following:
 
N1 : count of trials in condition 1
p11, p12, p13: proportions of choices 1, 2, and 3 in condition 1

N2 : count of trials in condition 2
p21, p22, p23: proportions of choices 1, 2, and 3 in condition 2

 How can I test whether the triple (p11,p12,p13) is different from the
 triple (p21,p22,p23)?  Clearly, prop.test does not help me here, because
 it relates to two-choice tests.
 
 I apologize if the answer is trivial, but I am relatively new to R and
 could not find any pointers in the FAQ or in the mailing list archives.
 
 Thanks in advance for any help,
 

-- 
Spencer Graves, PhD
Senior Development Engineer
PDF Solutions, Inc.
333 West San Carlos Street Suite 700
San Jose, CA 95110, USA

[EMAIL PROTECTED]
www.pdf.com http://www.pdf.com
Tel:  408-938-4420
Fax: 408-280-7915



Re: [R] cbind a list of matrices

2005-07-16 Thread S.O. Nyangoma

 Didn't you simply try:
 
  A <- matrix(c(1.1,1.2,1.3,1.4,1.5,1.6),ncol=3)
  B <- matrix(c(2.1,2.2,2.3,2.4,2.5,2.6),ncol=3)
  C <- matrix(c(3.1,3.2,3.3,3.4,3.5,3.6),ncol=3)
  A
 [,1] [,2] [,3]
 [1,]  1.1  1.3  1.5
 [2,]  1.2  1.4  1.6
  B
 [,1] [,2] [,3]
 [1,]  2.1  2.3  2.5
 [2,]  2.2  2.4  2.6
  C
 [,1] [,2] [,3]
 [1,]  3.1  3.3  3.5
 [2,]  3.2  3.4  3.6
  rbind(A,B,C)
 [,1] [,2] [,3]
 [1,]  1.1  1.3  1.5
 [2,]  1.2  1.4  1.6
 [3,]  2.1  2.3  2.5
 [4,]  2.2  2.4  2.6
 [5,]  3.1  3.3  3.5
 [6,]  3.2  3.4  3.6
  rbind(A,A,A)
 [,1] [,2] [,3]
 [1,]  1.1  1.3  1.5
 [2,]  1.2  1.4  1.6
 [3,]  1.1  1.3  1.5
 [4,]  1.2  1.4  1.6
 [5,]  1.1  1.3  1.5
 [6,]  1.2  1.4  1.6
 


This would be adequate for a low-dimensional dataset, e.g. the example 
you have given. What if you have a list of thousands of matrices? Is 
there an efficient way of doing this other than looping?
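One approach (a sketch of my own, not from the thread): do.call() passes every element of the list as an argument to a single rbind() call, so no explicit R-level loop over the matrices is needed, however many there are:

```r
# Build a list of 1000 small matrices (illustrative data).
mats <- replicate(1000, matrix(rnorm(6), ncol = 3), simplify = FALSE)

# One call stacks them all; no explicit loop over the list.
big <- do.call(rbind, mats)
stopifnot(identical(dim(big), c(2000L, 3L)))
```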



Re: [R] Padding in lattice plots

2005-07-16 Thread Federico Gherardini
On Friday 15 July 2005 17:00, Deepayan Sarkar wrote:
 On 7/15/05, Federico Gherardini [EMAIL PROTECTED] wrote:
  On Friday 15 July 2005 14:42, you wrote:
   Hi all,
   I've used the split argument to print four lattice plots on a single
   page. The problem now is that I need to reduce the amount of white
   space between the plots. I've read other mails in this list about the
   new trellis parameters layout.heights and layout.widhts but I haven't
   been able to use them properly. I've tried to input values between 0
   and 1 as the padding value (both left and right and top and bottom) but
   nothing changed. It seems I can only increase the padding by using
   values > 1. Any ideas?
  
   Thanks in advance for your help
   Federico Gherardini
 
   It seems like I've found an answer myself: you have to use negative
   values to decrease the padding. I thought it was something like the cex
   parameter, which acts like a multiplier

 I thought so too.

  but this is not the case.

 Could you post what you used? There are several different padding
 parameters you need to set to 0, did you change them all?

 Deepayan

Hi Deepayan
This is what I used. I don't know if I did everything the proper way, but 
at least I got the result I was seeking! :)

trellis.par.set(list(layout.heights = list(top.padding = -1)))

trellis.par.set(list(layout.heights = list(bottom.padding = -1, 
axis.xlab.padding = 1, xlab = -1.2)))

trellis.par.set(list(layout.widths = list(left.padding = -1)))

trellis.par.set(list(layout.widths = list(right.padding = -1, 
ylab.axis.padding = -0.5)))
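For what it's worth (my own consolidation, not from the thread), the four calls can be written as a single trellis.par.set() with one theme list, using the same list form as above:

```r
# Equivalent single call (sketch): one theme list carrying all the
# padding settings from the four separate calls.
trellis.par.set(list(
  layout.heights = list(top.padding = -1, bottom.padding = -1,
                        axis.xlab.padding = 1, xlab = -1.2),
  layout.widths  = list(left.padding = -1, right.padding = -1,
                        ylab.axis.padding = -0.5)))
```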

Do these settings make any sense?

Federico



[R] summary: cbind a list of matrices

2005-07-16 Thread Mauro Gasparini


Thanks to all kind people who answered. Attached is a useful reply,
among many others.

--
Mauro Gasparini

Professore Straordinario di Statistica
Dipartimento di Matematica, Politecnico di Torino
Corso Duca degli Abruzzi 24 I-10129 Torino, Italy 


tel: +39 011 564 7546
fax: +39 011 564 7599
email: [EMAIL PROTECTED]
www: http://calvino.polito.it/~gasparin/

Coordinatore Scientifico del
Centro Universitario di Statistica per le Scienze Biomediche
dell'Università Vita-Salute San Raffaele di Milano
www: http://www.cussb.unihsr.it/
---BeginMessage---

Two different techniques for the two questions:

do.call('rbind', list.of.matrices)

orig.mat[rep(1:nrow(orig.mat), 5), ]

See S Poetry for details.

Patrick Burns
[EMAIL PROTECTED]
+44 (0)20 8525 0696
http://www.burns-stat.com
(home of S Poetry and A Guide for the Unwilling S User)

Mauro Gasparini wrote:


Dear users,

I have a list of several matrices with the same number of columns,
how do I rbind them all with a vectorized command?

A related simpler question is, how do I vectorize the instruction
that rbinds together several copies of the same matrix?

Thanks.

 




---End Message---

Re: [R] Can't get sample function from An Introduction to R to work

2005-07-16 Thread Thomas Lumley
On Fri, 15 Jul 2005, David Groos wrote:

 Thank you-all very much for your help, your responses and help has been
 very encouraging.  The following doesn't close the case but it tables
 it...

 First I copied Ken's code into my R Console and...it worked great!
 That was baffling as it looked identical to mine.

 I did not explicitly say earlier that the code I sent out I had copied
 from the console, pasted into MS Word, changed font size, then pasted
 it into the e-mail--in other words, that was a copy of one of the codes
 that didn't work.


Mysterious syntax errors on OS X can result from copying and pasting into 
R. Files that look perfectly innocent can contain Windows line endings 
(\r\n) or en-dashes instead of hyphens.  These might well be transparently 
fixed in the process of converting to email.

If you actually typed into the R console then this won't explain it, of 
course.

-thomas



Re: [R] Coxph with factors

2005-07-16 Thread Thomas Lumley
On Sat, 16 Jul 2005, Kylie-Anne Richards wrote:

 Thank you for your help.
 
  In any case, to specify f.pom you need it to be a factor with the same set 
 of levels.  You don't say what the lowest level of pom is, but if it is, 
 say, -3.
 
 f.pom=factor(-3, levels=seq(-3,2.5, by=0.5))
 

 For this particular model, f.pom starts at -5.5 going to 2 in 0.5 increments. 
 I seem to have misunderstood your explanation, as R is still producing an 
 error.


In the model you showed, there were no factor levels below -2.5.  You need 
to make sure that the levels are the same in the initial data and the data 
supplied to survfit.  Check this with levels().
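As a sketch of that check (the objects DATA, final and the variable f.pom are the hypothetical ones from the earlier posts), building the new factor from levels() of the original guarantees the two level sets match:

```r
# Reuse the levels of the fitted data's factor rather than retyping
# the sequence; a mismatch is what causes the non-conformable error.
nd <- data.frame(po = 0, vo = 1,
                 f.pom = factor(-5.5, levels = levels(DATA$f.pom)))
stopifnot(identical(levels(nd$f.pom), levels(DATA$f.pom)))
final.surv <- survfit(final, newdata = nd)
```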

 
 I would first note that the survival function at zero covariates is not a 
 very useful thing and is usually numerically unstable, and it's often more 
 useful to get the survival function at some reasonable set of covariates.
 

 Please correct me if I'm wrong, I was under the impression that the survival 
 function at zero covariates gave the baseline distribution. I.e. if given the 
 baseline prob.,S_0, at time t, one could calculate the survival prob for 
 specified covariates by 
 S_0^exp(beta(vo)*specified(vo)+beta(po)*specified(po)+beta(f.pom at the level 
 of interest)) for time t.

 Since I was unable to get survfit to work with specified covariates, I was 
 using the survival probs of the 'avg' covariates, S(t), to determine the 
 baseline at time t, i.e. 
 S(t)^(1/exp(beta(vo)*mean(vo)+beta(po)*mean(po)+beta(f.pom-5.5)*mean(f.pom-5.5)+beta(f.pom-5.0)*mean(f.pom-5.0)+).
  
 And then proceeding as mentioned in the above paragraph (clearly not an 
 efficient way of doing things).


Yes, but you don't need to go via the baseline.  The survival curves for 
any two covariate vectors z1 and z2 are related by

S(t; z1)= S(t; z2)^(z1-z2)

For convenience of mathematical notation, mathematical statisticians write 
everything in terms of z2=0, and call this the baseline. In the real 
world, though, you are better off with a baseline defined at a covariate 
value somewhere in the vicinity of the actual data. If, as is often the 
case, the zero covariate value is a long way from the observed data, both 
the computation of the survival curve at zero and the transformation to 
the covariates you want are numerically ill-conditioned.

So, you can use the baseline returned by survfit(z2), which is at 
z2=fit$means, to do anything you can do with the baseline at z=0, and the 
computations will be more accurate.
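In code, that rescaling might look like the following sketch (final is the hypothetical coxph fit from the earlier posts; for a model with factors, the covariate vector must be on the same dummy-variable scale as final$means):

```r
# Sketch: move the curve returned at z2 = final$means to another
# covariate vector z1, without going via the z = 0 baseline.
sf <- survfit(final)                  # survival curve at the covariate means
z1 <- final$means                     # start from the means...
z1["vo"] <- 1; z1["po"] <- 0          # ...and set the values of interest
eta  <- sum(coef(final) * (z1 - final$means))
S.z1 <- sf$surv ^ exp(eta)            # S(t; z1) = S(t; means)^exp(beta'(z1 - means))
```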

-thomas



Re: [R] R: to the power

2005-07-16 Thread Thomas Lumley
On Sat, 16 Jul 2005, Bernardo Rangel Tura wrote:

 At 10:11 12/7/2005, [EMAIL PROTECTED] wrote:

 hi all

 why does R do this:

 (-8)^(1/3)=NaN

 the answer should be : -2


Yes and no.

The problem is that the reciprocal of 3 is not exactly representable as a 
floating point number (it has an infinite binary expansion 
.010101010101...)

So the R expression 1/3 actually returns a number slightly different from 
one-third. It is a fraction with denominator a power of two (probably 
2^53).  Now, -8 to a power that is a fraction with denominator a power of 2 
is not a real number, so, NaN.

It would be nice if R could realize that you meant the cube root of -8, 
but that requires either magical powers or complicated and unreliable 
heuristics.  The real solution might be a function like
   root(x,a,b)
to compute  x^(a/b), where a and b could then be exactly representable 
integers. If someone wants to write one
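A sketch of such a function (my own, not from the thread): for odd integer roots of a negative real, take the root of the magnitude and restore the sign; even roots of negatives are left as NaN.

```r
# Sketch of root(x, a, b) computing x^(a/b) with integer a and b.
# Handles negative x when b is odd; even roots of negatives stay NaN.
root <- function(x, a, b) {
  stopifnot(a == round(a), b == round(b), b != 0)
  if (x >= 0) return(x^(a/b))
  if (b %% 2 == 1) return(sign(x)^a * abs(x)^(a/b))
  NaN
}
root(-8, 1, 3)   # -2 (up to floating-point rounding)
```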

-thomas



Re: [R] Coxph with factors

2005-07-16 Thread Thomas Lumley
On Sat, 16 Jul 2005, Thomas Lumley wrote:

 Yes, but you don't need to go via the baseline.  The survival curves for
 any two covariate vectors z1 and z2 are related by

 S(t; z1)= S(t; z2)^(z1-z2)

Actually
    S(t; z1) = S(t; z2)^exp(beta'(z1-z2))

of course.

-thomas



Re: [R] Proportion test in three-chices experiment

2005-07-16 Thread Jonathan Baron
I suspect that there are more direct ways to do this test, but it 
is unclear to me just what the issue is.  For example, if there
are many subjects and very few stimuli for each, you might want
to get some sort of measure of ability for each subject (many
possibilities here, then test the measure across subjects with a
t test.  The measure must be chosen so that you can specify a
null hypothesis.  It must be directional.

If you have a few subjects and many trials per subject, then you
could do a significance test for each subject.
You want a directional test, because you have a specific
hypothesis, namely, that the correct answer will occur more often 
than predicted from the marginal frequencies in the 3x3 table.
(I assume it is a 3x3 table with stimuli as rows and responses as 
columns, and you want to show that the diagonal cells are higher
than predicted.) One possibility is kappa, which is in the vcd
package, and also in psy and concord, in somewhat different
forms.

Usually in this sort of experiment, though, there isn't much of
an issue about whether subjects are transmitting information at
all.  Rather the issue is testing alternative models of what they 
are doing.

Jon

On 07/16/05 06:33, Spencer Graves wrote:
 Have you considered BTm in library(BradleyTerry)?  Consider the
 following example:
 
   cond1 <- data.frame(winner=rep(LETTERS[1:3], e=2),
 +   loser=c("B","C","A","C","A","B"),
 +   Freq=1:6)
   cond2 <- data.frame(winner=rep(LETTERS[1:3], e=2),
 +   loser=c("B","C","A","C","A","B"),
 +   Freq=6:1)
   fit1 <- BTm(cond1 ~ ..)
   fit2 <- BTm(cond2 ~ ..)
   fit12 <- BTm(rbind(cond1, cond2) ~ ..)
   Dev12 <- (fit1$deviance+fit2$deviance
 +   -fit12$deviance)
   pchisq(Dev12, 2, lower=FALSE)
 [1] 0.8660497
 
 This says the difference between the two data sets, cond1 and cond2,
 are not statistically significant.
 
  Do you present each subject with only one pair?  If yes, then this
 model is appropriate.  If no, then the multiple judgments by the same
 subject are not statistically independent, as assumed by this model.
 However, if you don't get statistical significance via this kind of
 computation, it's unlikely that a better model would give you
 statistical significance.  If you get a p value of, say, 0.04, then the
 difference is probably NOT statistically significant.
 
 The p value you get here would be an upper bound.  You could get a
 lower bound by using only one of the three pairs presented to each
 subject selected at random.  If that p value were statistically
 significant, then I think it is safe to say that your two sets of
 conditions are significantly different.  For any value in between, it
  would depend on how independent the three choices by the same subject are.
 You might, for example, delete one of the three pairs at random and use
 the result of that comparison.
 
 There are doubtless better techniques, but I'm not familiar with
 them.  Perhaps someone else will reply to my reply.
 
 spencer graves
 
 Rafael Laboissiere wrote:
 
  Hi,
 
  I wish to analyze with R the results of a perception experiment in which
  subjects had to recognize each stimulus among three choices (this was a
  forced-choice design).  The experiment runs under two different
  conditions and the data is like the following:
 
 N1 : count of trials in condition 1
 p11, p12, p13: proportions of choices 1, 2, and 3 in condition 1
 
 N2 : count of trials in condition 2
 p21, p22, p23: proportions of choices 1, 2, and 3 in condition 2
 
  How can I test whether the triple (p11,p12,p13) is different from the
  triple (p21,p22,p23)?  Clearly, prop.test does not help me here, because
   it relates to two-choice tests.
 
  I apologize if the answer is trivial, but I am relatively new to R and
  could not find any pointers in the FAQ or in the mailing list archives.
 
  Thanks in advance for any help,
 
 
 --
 Spencer Graves, PhD
 Senior Development Engineer
 PDF Solutions, Inc.
 333 West San Carlos Street Suite 700
 San Jose, CA 95110, USA
 
 [EMAIL PROTECTED]
 www.pdf.com http://www.pdf.com
 Tel:  408-938-4420
 Fax: 408-280-7915
 

-- 
Jonathan Baron, Professor of Psychology, University of Pennsylvania
Home page: http://www.sas.upenn.edu/~baron



[R] topical guide to R packages

2005-07-16 Thread Vivek Rao
I would like to see R packages arranged by topic. CRAN
Task Views are at 
http://lib.stat.cmu.edu/R/CRAN/src/contrib/Views/ ,
but I'd like something more detailed. For example, the
IMSL Fortran library, version 4 is easy to navigate
and has procedures arranged according to the following
topics:

   Basic Statistics
   Regression
   Correlation
   Analysis of Variance
   Categorical and Discrete Data Analysis
   Nonparametric Statistics
   Tests of Goodness of Fit and Randomness
   Time Series Analysis and Forecasting
   Covariance Structures and Factor Analysis
   Discriminant Analysis
   Cluster Analysis
   Sampling
   Survival Analysis, Life Testing, and Reliability
   Multidimensional Scaling
   Density and Hazard Estimation
   Line Printer Graphics
   Probability Distribution Functions and Inverses
   Random Number Generation
   Utilities

As a start, here are the R packages I know of for time
series analysis:

dyn: Time Series Regression
dynlm: Dynamic Linear Regression
GeneTS: Microarray Time Series and Network Analysis
its: Irregular Time Series
sspir: State Space Models in R 
tseries: Time series analysis and computational
finance
urca: Unit root and cointegration tests for time
series data
uroot: Unit root tests and graphics for seasonal time
series 

I would like to see lists corresponding to other
categories of IMSL. Another classification system is
GAMS at http://gams.nist.gov/Classes.plain . Section
L, Statistics, probability, would be relevant.

Regards,
Vivek Rao

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] xlabel and ylabel in ROC curve ??

2005-07-16 Thread luk
 
When I draw a ROC curve, the x label is '1-spec:' and the y label is 'sens:'. My
question is: can I change them to '1-specificity' and 'sensitivity'? Thanks
for your help. Lu.
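Most R plotting methods accept the usual xlab/ylab graphics arguments, so the labels can usually be overridden at plot time. A minimal sketch, assuming the curve comes from the ROCR package (the original message does not say which package was used) and using invented data:

```r
library(ROCR)
set.seed(1)
labels <- rbinom(100, 1, 0.5)       # invented binary outcomes
scores <- labels + rnorm(100)       # invented classifier scores
pred <- prediction(scores, labels)
perf <- performance(pred, "tpr", "fpr")
plot(perf, xlab = "1-specificity", ylab = "sensitivity")
```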


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Confidence Intervals for Arbitrary Functions

2005-07-16 Thread Jeff Newmiller
I have a rather basic background in statistics, and am looking for
assistance in solving what I expect is a common type of problem.

I have measurements of physical processes, and mathematical models of
those processes that I want to feed the measurements into. A simple case
is using measurements of electric power entering and leaving a
power conversion device, sampled at regular intervals, and summed to
estimate energy in and out, and dividing the energy out by the energy in
to get an estimate of efficiency.  I know that power efficiency varies
with power level, but for this calculation I am interested in
quantifying the overall efficiency rather than the instantaneous
efficiency.

If the energy quantities are treated as a normally-distributed random
variable (per measurement uncertainty), is there a package that simplifies
the determination of the probability distribution function for the
quotient of these values? Or, in the general sense, if I have a function
that computes a measure of interest, are such tools general enough to
handle this? (The goal being to determine a confidence interval for the
computed quantity.)

As an attempt to understand the issues, I have used SQL to generate
discrete sampled normal distributions, and then computed new abscissa
values using a function such as division and computing the joint
probability as the ordinate, and then re-partitioned the result into new
bins using GROUP BY.  This is general enough to handle non-normal
distributions as well, though I don't know how to quantify the numerical
stability/accuracy of this computational procedure. However, this is
pretty tedious... it seems like R ought to have some straightforward
solution to this problem, but I don't seem to know what search terms to
use.

---
Jeff NewmillerThe .   .  Go Live...
DCN:[EMAIL PROTECTED]Basics: ##.#.   ##.#.  Live Go...
  Live:   OO#.. Dead: OO#..  Playing
Research Engineer (Solar/BatteriesO.O#.   #.O#.  with
/Software/Embedded Controllers)   .OO#.   .OO#.  rocks...1k

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] topical guide to R packages

2005-07-16 Thread Duncan Murdoch
On 7/16/2005 1:17 PM, Vivek Rao wrote:
 I would like to see R packages arranged by topic. CRAN
 Task Views are at 
 http://lib.stat.cmu.edu/R/CRAN/src/contrib/Views/ ,
 but I'd like something more detailed. For example, the
 IMSL Fortran library, version 4 is easy to navigate
 and has procedures arranged according to the following
 topics:
 
Basic Statistics
Regression
Correlation
Analysis of Variance
Categorical and Discrete Data Analysis
Nonparametric Statistics
Tests of Goodness of Fit and Randomness
Time Series Analysis and Forecasting
Covariance Structures and Factor Analysis
Discriminant Analysis
Cluster Analysis
Sampling
Survival Analysis, Life Testing, and Reliability
Multidimensional Scaling
Density and Hazard Estimation
Line Printer Graphics
Probability Distribution Functions and Inverses
Random Number Generation
Utilities
 
 As a start, here are the R packages I know of for time
 series analysis:
 
 dyn: Time Series Regression
 dynlm: Dynamic Linear Regression
 GeneTS: Microarray Time Series and Network Analysis
 its: Irregular Time Series
 sspir: State Space Models in R 
 tseries: Time series analysis and computational
 finance
 urca: Unit root and cointegration tests for time
 series data
 uroot: Unit root tests and graphics for seasonal time
 series 
 
 I would like to see lists corresponding to other
 categories of IMSL. Another classification system is
 GAMS at http://gams.nist.gov/Classes.plain . Section
 L, Statistics, probability, would be relevant.

I don't see a Task View for time series, so why don't you write it, for 
a start?

Duncan Murdoch

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] PBSmapping and shapefiles

2005-07-16 Thread Denis Chabot
Hi,

Is there a way, preferably with R, to read shapefiles and transform
them into a format that I could then use with package PBSmapping?

I have been able to read such files into R with maptools' read.shape  
and plot it with plot.Map, but I'd like to bring the data to  
PBSmapping and plot from there. I also looked at the package  
shapefile, but it does not seem to do what I want either.

Sincerely,

Denis Chabot

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Confidence Intervals for Arbitrary Functions

2005-07-16 Thread Gabor Grothendieck
On 7/16/05, Jeff Newmiller [EMAIL PROTECTED] wrote:
 I have a rather basic background in statistics, and am looking for
 assistance in solving what I expect is a common type of problem.
 
 I have measurements of physical processes, and mathematical models of
 those processes that I want to feed the measurements into. A simple case
 is using measurements of electric power entering and leaving a
 power conversion device, sampled at regular intervals, and summed to
 estimate energy in and out, and dividing the energy out by the energy in
 to get an estimate of efficiency.  I know that power efficiency varies
 with power level, but for this calculation I am interested in the
 quantifying the overall efficiency rather than the instantaneous
 efficiency.
 
 If the energy quantities are treated as a normally-distributed random
 variable (per measurement uncertainty), is there a package that simplifies
 the determination of the probability distribution function for the
 quotient of these values? Or, in the general sense, if I have a function
 that computes a measure of interest, are such tools general enough to
 handle this? (The goal being to determine a confidence interval for the
 computed quantity.)
 
 As an attempt to understand the issues, I have used SQL to generate
 discrete sampled normal distributions, and then computed new abscissa
 values using a function such as division and computing the joint
 probability as the ordinate, and then re-partitioned the result into new
 bins using GROUP BY.  This is general enough to handle non-normal
 distributions as well, though I don't know how to quantify the numerical
 stability/accuracy of this computational procedure. However, this is
 pretty tedious... it seems like R ought to have some straightforward
 solution to this problem, but I don't seem to know what search terms to
 use.
 

There is some discussion about the ratio of normals at:

   http://www.pitt.edu/~wpilib/statfaq.html

but you may just want to use bootstrapping:

  library(boot)
  library(simpleboot)
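For the efficiency example above (overall efficiency = sum of energy out / sum of energy in), a hedged sketch of the bootstrap route with invented data and variable names:

```r
library(boot)
set.seed(1)
# Invented paired interval measurements of input and output power
power_in  <- rnorm(200, mean = 100, sd = 5)
power_out <- 0.9 * power_in + rnorm(200, sd = 2)
dat <- data.frame(power_in, power_out)

# Statistic: overall efficiency computed on a resampled set of intervals
eff <- function(d, idx) sum(d$power_out[idx]) / sum(d$power_in[idx])

b <- boot(dat, eff, R = 1000)
boot.ci(b, type = "perc")   # percentile confidence interval
```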

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R: to the power

2005-07-16 Thread François Pinard
[Thomas Lumley]

 It would be nice if R could realize that you meant the cube root
 of -8, but that requires either magical powers or complicated and
 unreliable heuristics.  The real solution might be a function like
 root(x,a,b) to compute x^(a/b), where a and b could then be exactly
 representable integers. If someone wants to write one...

While this could be done with moderate difficulty for the simpler cases,
one cannot reasonably ask R to be and do everything. :-)
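For those simpler cases, a sketch of the suggested root() might look like the following (a hypothetical helper, not part of base R; it handles negative x only when b is odd):

```r
root <- function(x, a, b) {
  if (x >= 0) return(x^(a/b))
  if (b %% 2 == 1) return(sign(x)^a * abs(x)^(a/b))
  NaN  # an even real root of a negative number does not exist
}
root(-8, 1, 3)  # -2
```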

So far, I see R more on the numerical side of things.  If you want
precise, exact solutions to various mathematical problems, you might
consider installing a Computer Algebra System on your machine, next to
R, for handling the symbolic side of things.

One such system which is both free and very capable might be Maxima.
Its convoluted story is rooted 40 years in the past.  Some may say it
lacks some chrome; don't be misled by that, the engine is pretty solid.
Peek at http://maxima.sourceforge.net if you think you need such a
beast.  Beware: to use it, you need either GCL or Clisp pre-installed.

-- 
François Pinard   http://pinard.progiciels-bpi.ca

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] xfig device - depth

2005-07-16 Thread Thomas Zanon
Hi,
 I hope this is the right list for my posting, since I've never posted
to any R list before.
I'm using the xfig graphics device quite extensively, and as far as I
can tell this device writes all objects into xfig layer 100 (based on
what I saw in the devPS.c file, if that is the file that produces the
xfig output; depth 100 is hardcoded there). Are there any plans to
implement xfig layer-depth control in the xfig graphics device, or am
I just missing something?
Thanks a lot in advance
Thomas Zanon

-- 
*
Thomas Zanon

2116 Hamerschlag Hall
Department of Electrical and Computer Engineering
Carnegie Mellon University
5000 Forbes Avenue
Pittsburgh, PA 15213-3890

phone office: (412) 268-6638
fax:  (412) 268-3204
email+UID:[EMAIL PROTECTED]
homepage: http://www.ece.cmu.edu/~zanon

* 




__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Time Series Count Models

2005-07-16 Thread Brett Gordon
Hello,

I'm trying to model the entry of certain firms into a larger number of
distinct markets over time. I have a short time series, but a large
cross section (small T, big N).

I have both time varying and non-time varying variables. Additionally,
since I'm modeling entry of firms, it seems like the number of
existing firms in the market at time t should depend on the number of
firms at (t-1), so I would like to include the lagged cumulative count.

My basic question is whether it is appropriate (in a statistical
sense) to include both the time varying variables and the lagged
cumulative count variable. The lagged count aside, I know there are
standard extensions to count models to handle time series. However,
I'm not sure if anything changes when lagged values of the cumulative
dependent variable are added (i.e. are the regular standard errors
correct, are estimates consistent, etc).

Can I still use one of the time series count models while including
this lagged cumulative value?

I would greatly appreciate it if anyone can direct me to relevant
material on this. As a note, I have already looked at Cameron and
Trivedi's book.

Many thanks,

Brett

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Is it possible to coerce R to continue proceeding the next command in a loop after an error message ?

2005-07-16 Thread Yimeng Lu
Hello R-users,

In a loop, if a function such as nls gives an error, is it possible to
make R continue with the next command in the same loop?

Thanks so much for your advice!

Hanna Lu
[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Is it possible to coerce R to continue proceeding the next command in a loop after an error message ?

2005-07-16 Thread Duncan Murdoch
Yimeng Lu wrote:
 Hello R-users,
 
 In a loop, if a function, such as nls, gives an error, is it possible to
 coerce R to continue proceeding the next command with the same
 loop?
 

Yes, see the try() function.  The basic usage is something like

value <- try( some calculation )
if (inherits(value, "try-error"))   handle the error
else  handle a correct calculation
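A self-contained sketch of that pattern in a loop, with a deliberately failing call standing in for a model fit such as nls:

```r
inputs <- list(4, 9, "oops", 16)       # the third element will cause an error
results <- vector("list", length(inputs))
for (i in seq_along(inputs)) {
  value <- try(sqrt(inputs[[i]]), silent = TRUE)
  if (inherits(value, "try-error")) next   # skip the failed iteration
  results[[i]] <- value
}
unlist(results)  # 2 3 4
```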

Duncan Murdoch

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Is it possible to coerce R to continue proceeding the next command in a loop after an error message ?

2005-07-16 Thread Spencer Graves
  ?try

  spencer graves

Yimeng Lu wrote:
 Hello R-users,
 
 In a loop, if a function, such as nls, gives an error, is it possible to
 coerce R to continue proceeding the next command with the same
 loop?
 
 Thanks so much for your advice!
 
 Hanna Lu
   [[alternative HTML version deleted]]
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

-- 
Spencer Graves, PhD
Senior Development Engineer
PDF Solutions, Inc.
333 West San Carlos Street Suite 700
San Jose, CA 95110, USA

[EMAIL PROTECTED]
www.pdf.com http://www.pdf.com
Tel:  408-938-4420
Fax: 408-280-7915

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] nlme and spatially correlated errors

2005-07-16 Thread Spencer Graves
  Have you tried anova(fit1, fit2), where

  fit1 - lme(one model...)
  fit2 - lme(a submodel ... )

This anova does about the best that anyone knows how to do -- or at 
least did 7 years ago when it was written.  If the submodel changes the 
fixed effects, you should use method='ML'.  If the submodel changes 
the noise model specification, use method='REML'.  See Pinheiro and 
Bates (2000) Mixed-Effect Models in S and S-Plus (Springer).  If you 
need something more precise than the standard approximations, try 
simulate.lme.
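A minimal sketch of that comparison, using the nlme package's built-in Orthodont data rather than the poster's model (here the submodel drops a fixed effect, so method = "ML" is used):

```r
library(nlme)
fit1 <- lme(distance ~ age + Sex, random = ~ 1 | Subject,
            data = Orthodont, method = "ML")
fit2 <- lme(distance ~ age, random = ~ 1 | Subject,
            data = Orthodont, method = "ML")
anova(fit1, fit2)   # likelihood-ratio test of the Sex fixed effect
```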

  buena suerte!
  spencer graves


Patricia Balvanera wrote:

 Dear R users,
 
 I am using lme and nlme to account for spatially correlated errors as 
 random effects. My basic question is about being able to correct F, p, R2 
 and parameters of models that do not take into account the nature of such 
 errors using gls, glm or nlm and replace them for new F, p, R2 and 
 parameters using lme and nlme as random effects.
 
 I am studying distribution patterns of 50 tree species along a gradient. 
 That gradient
 was sampled through 27 transects, with 10 plots within each transect. For 
 each plot I
 have data on presence/absence, abundance and basal area of the species. I 
 also have data
 for 4 environmental variables related to water availability (soil water 
 retention
 capacity, slope, insolation, altitude) and X and Y coordinates for each 
 plot. I explored
 wether the relationship between any of the response variables 
 (presence/absence,
 abundance, basal area) and the environmental variables was linear, 
 polynomial, or
 non-linear.
 
 My main interest in this question is that I proceeded to correct for spatial
 autocorrelation (both within transects and overall) following the 
 procedures suggest by
 Crawley 2002 for linear models
 e.g. (GUAMAC = a species, CRAS = soil water retention capacity, TRANSECTO = 
 transect)
   model1 <- gls(GUAMAC ~ CRAS)
   model2 <- lme(GUAMAC ~ CRAS, random = ~ 1 | TRANSECTO)
   model3 <- lme(GUAMAC ~ CRAS, random = GUAMAC ~ CRAS | TRANSECTO)
   model4 <- lme(GUAMAC ~ CRAS, random = GUAMAC ~ CRAS - 1 | TRANSECTO)
   AIC(model1, model2, model3, model4)
 df AIC
 model1 3 3730.537
 model2 4 3698.849
 model3 6 3702.408
 model4 4 3704.722
   plot(Variogram(model2, form = ~ X + Y))
   model5 <- update(model2, corr = corSpher(c(30, 0.8), form = ~ X + Y, nugget = TRUE))
   plot(Variogram(model5, resType = "n"))
   summary(model5)
 
 In this case I obtain new F for the independent variable INSOLACION, new R2 
 for the whole model and new parameters for the linear model.
 
 I have also applied this procedure to polynomial models and to glms with 
 binomial errors
 (presence/absence) with no problem.
 
 I am nevertheless stuck with non-linear models. I am using the protocols 
 you suggested
 in the 1998 manuals by Pinheiro and Bates, and those suggested by Crawley 
 2002.
 Please find enclose an example with an
 exponential model (which I chose for being simple). In fact the linear 
 models I am using
 are a bit more complicated.
 (HELLOT is a species, INSOLACION = INSOLATION, basal = basal area of the 
 species, TRANSECTO = transect)
 
   HELLOT ~ exp(A + (B * INSOLACION))
   basal.HELLOT <- function(A, B, INSOLACION) exp(A + (B * INSOLACION))
   HELLOT ~ basal.HELLOT(A, B, INSOLACION)
   basal.HELLOT <- deriv(~ exp(A + (B * INSOLACION)),
     LETTERS[1:2], function(A, B, INSOLACION){})
   model1 <- nlme(model = HELLOT ~ exp(A + (B * INSOLACION)), fixed = A + B ~ 1,
     random = A + B ~ 1, groups = ~ TRANSECTO, start = list(fixed = c(5.23, -0.05)))
 
 It runs perfectly and gives new values for parameters A and B, but would 
 only give me F for fixed effects of A and B, while what I am really looking 
 for is F for fixed effects of INSOLACION and the R2 of the new model.
 
 Thank you so much in advance for your help
 
 
 
 Dra. Patricia Balvanera
 Centro de Investigaciones en Ecosistemas, UNAM-Campus Morelia
 Apdo. Postal 27-3, Xangari
 58090 Morelia, Michoacán, Mexico
 Tel. (52-443)3-22-27-07, (52-55) 56-23-27-07
 FAX (52-443) 3-22-27-19, (52-55) 56-23-27-19
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

-- 
Spencer Graves, PhD
Senior Development Engineer
PDF Solutions, Inc.
333 West San Carlos Street Suite 700
San Jose, CA 95110, USA

[EMAIL PROTECTED]
www.pdf.com http://www.pdf.com
Tel:  408-938-4420
Fax: 408-280-7915

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] printing the name of the arguments passed to a function

2005-07-16 Thread John Sorkin
R2.1.1
Win 2k
I have a function, B, within a function, A.
I would like to have B print the name of the argument passed to it (not
the value of the arguments).
i.e.,
 
A <- function()
{
B <- function(x,y)
{
 fit1 <- lm(y ~ x, data = jo)
 print(summary(fit1))
 # I want B to print the string "age" and the string "height".
}
B(age, height)
}
 
I would like have function B print the name of the arguments passed to
it, i.e. I want B to print the word
age  
and the word
height
 
I would appreciate any suggestions that would help me accomplish this
task.
 
Thanks,
John
 
John Sorkin M.D., Ph.D.
Chief, Biostatistics and Informatics
Baltimore VA Medical Center GRECC and
University of Maryland School of Medicine Claude Pepper OAIC
 
University of Maryland School of Medicine
Division of Gerontology
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
 
410-605-7119 
- NOTE NEW EMAIL ADDRESS:
[EMAIL PROTECTED]

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Can't get sample function from An Introduction to R to work

2005-07-16 Thread David Groos

On Jul 16, 2005, at 9:58 AM, Thomas Lumley wrote:

 On Fri, 15 Jul 2005, David Groos wrote:

 Thank you all very much for your help; your responses have been
 very encouraging.  The following doesn't close the case, but it tables
 it...

 First I copied Ken's code into my R Console and...it worked great!
 That was baffling as it looked identical to mine.

 I did not explicitly say earlier that the code I sent out I had copied
 from the console, pasted into MS Word, changed font size, then pasted
 it into the e-mail--in other words, that was a copy of one of the 
 codes
 that didn't work.


 Mysterious syntax errors on OS X can result from copying and pasting 
 into R. Files that look perfectly innocent can contain Windows line 
 endings (\r\n) or en-dashes instead of hyphens.  These might well be 
 transparently fixed in the process of converting to email.
I'll put this, as well as the copying/pasting as unformatted text as 
recommends Spencer, into my bag of tricks.

 If you actually typed into the R console then this won't explain it, 
 of course.
And indeed, I did type right into the R console, so the mystery 
remains...
Thanks,
David

   -thomas


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] 2D contour predictions

2005-07-16 Thread Spencer Graves
  It's not so simple, but consider the following:


x <- rep(1:6, each = 2, length = 24)
y <- rep(1:6, each = 4)
z <- rep(0:1, length = 24)
set.seed(1)
tstDF <- data.frame(x = x, y = y, z = z,
w = (x - 3.5)^2 + (y - 3.5) + z + 0.1 * rnorm(24))
fit <- lm(w ~ x + y + z + I(x^2) + I(y^2), tstDF)
x0 <- seq(1, 5, .5)
y0 <- seq(1, 6, .5)
Grid <- expand.grid(x = x0, y = y0)
Grid$z <- rep(0.5, dim(Grid)[1])
pred <- predict(fit, Grid)

contour(x0, y0, array(pred, dim = c(length(x0), length(y0))))

  This kind of thing is discussed in Venables and Ripley (2002) Modern 
Applied Statistics with S, 4th ed. (Springer).  I highly recommend this 
book.

  spencer graves

Michael Hopkins wrote:

 
 Hi All
 
 I have been fitting regression models and would now like to produce some
 contour  image plots from the predictors.
 
 Is there an easy way to do this?  My current (newbie) experience with R
 would suggest there is but that it's not always easy to find it!
 
  f3 <- lm( fc ~ poly( speed, 2 ) + poly( torque, 2 ) + poly( sonl, 2 ) +
  poly( p_rail, 2 ) + poly( pil_sep, 2 ) + poly( maf, 2 ) + (speed + torque +
  sonl + p_rail + pil_sep + maf)^2 )
  
  hat <- predict( f3 )
 
 contour( sonl, maf, hat )
 
 Error in contour.default(sonl, maf, hat) :
 increasing 'x' and 'y' values expected
 
 image(sonl, maf, hat)
 
 Error in image.default(sonl, maf, hat) : increasing 'x' and 'y' values
 expected
 
 I have tried naïve sorting but no luck with that.
 
 I suspect I may need to produce a data grid of some kind but I'm not clear
 how I would use R to specify such a 2D slice in the 6D design space.
 
 TIA
 
 Michael
 
 P.S.  Whilst I'm asking the list - is there an easier way of expressing a
 full 2nd order model than the one I used above?  I'm sure there is but
 finding it etc etc...
 
 
 _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
 
 _/_/   _/_/_/ Hopkins Research Ltd
_/_/   _/_/
   _/_/_/_/   _/_/_/  http://www.hopkins-research.com/
  _/_/   _/   _/
 _/_/   _/ _/   'touch the future'

 _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

-- 
Spencer Graves, PhD
Senior Development Engineer
PDF Solutions, Inc.
333 West San Carlos Street Suite 700
San Jose, CA 95110, USA

[EMAIL PROTECTED]
www.pdf.com http://www.pdf.com
Tel:  408-938-4420
Fax: 408-280-7915

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] printing the name of the arguments passed to a function

2005-07-16 Thread Spencer Graves
  How about the following:

 > tstFn <- function(x){
+   xName <- deparse(substitute(x))
+   cat(xName)
+   "done"
+ }
 > tstFn(Varname)
Varname[1] "done"
 
 > tstF1 <- function(x){
+   tstF2 <- function(y){
+     cat(deparse(substitute(y)),
+         "doesn't work\n")
+   }
+   tstF2(x)
+   xName <- deparse(substitute(x))
+   tstF3 <- function(y, yName){
+     cat(yName, "works\n")
+   }
+   tstF3(x, xName)
+   "done"
+ }
 > tstF1(Varname)
x doesn't work
Varname works
[1] "done"

 spencer graves

John Sorkin wrote:

 R2.1.1
 Win 2k
 I have a function, B, within a function, A.
 I would like to have B print the name of the argument passed to it (not
 the value of the arguments).
 i.e.,
  
 A <- function()
 {
 B <- function(x,y)
 {
  fit1 <- lm(y ~ x, data = jo)
  print(summary(fit1))
  # I want B to print the string "age" and the string "height".
 }
 B(age, height)
 }
  
 I would like have function B print the name of the arguments passed to
 it, i.e. I want B to print the word
 age  
 and the word
 height
  
 I would appreciate any suggestions that would help me accomplish this
 task.
  
 Thanks,
 John
  
 John Sorkin M.D., Ph.D.
 Chief, Biostatistics and Informatics
 Baltimore VA Medical Center GRECC and
 University of Maryland School of Medicine Claude Pepper OAIC
  
 University of Maryland School of Medicine
 Division of Gerontology
 Baltimore VA Medical Center
 10 North Greene Street
 GRECC (BT/18/GR)
 Baltimore, MD 21201-1524
  
 410-605-7119 
 -- NOTE NEW EMAIL ADDRESS:
 [EMAIL PROTECTED]
 
   [[alternative HTML version deleted]]
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

-- 
Spencer Graves, PhD
Senior Development Engineer
PDF Solutions, Inc.
333 West San Carlos Street Suite 700
San Jose, CA 95110, USA

[EMAIL PROTECTED]
www.pdf.com http://www.pdf.com
Tel:  408-938-4420
Fax: 408-280-7915

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Strange problems with lattice plots

2005-07-16 Thread Wladimir Eremeev
Dear r-help,
I need to draw some lattice plots and have stuck in strange problems.
   
   I have a data frame of 405 rows:
 str(dframe)
`data.frame':   405 obs. of  4 variables:
 $ year : num  1990 1990 1990 1990 1990 1990 1990 1990 1990 1990 ...
 $ month: num  1 2 3 4 5 6 7 8 9 10 ...
 $ V_A2 : num  NA NA NA NA NA ...
 $ V_A3 : num  NA NA NA NA NA ...

Variable month changes from 1 to 27 as the data frame includes monthly
measurements and some aggregations of them.
Variable year varies from 1990 to 2004.
Variables V_A2 and V_A3 contain floating point values and several NAs (as
sometimes measurements were unavailable).

I draw plots:

xyplot((V_A3/25) ~ year |
   factor(month, labels = c(month.name, "Annual mean",
  "Annual integral", "Integral (Jan-Mar)", "Integral (Apr-Jun)",
  "Integral (Jul-Sep)", "Integral (Oct-Dec)",
  "Integral (Jan-Feb)", "Integral (Jan-Mar)", "Integral (Jan-Apr)",
  "Integral (Jan-May)", "Integral (Jan-Jun)",
  "Integral (Jan-Jul)", "Integral (Jan-Aug)", "Integral (Jan-Sep)",
  "Integral (Jan-Oct)")),
   data = dframe, as.table = TRUE, type = c("o", "g", "r"), layout = c(3, 9),
   ylab = "Flow, Sv", xlab = NULL)

However, instead of plots I am getting the following error
Error in do.call(pmax, lapply(cond, is.na)) :
symbol print-name too long

Shortening factor labels doesn't help.
Removing them at all solves the problem, but this doesn't meet my needs, as I 
have to
explain the meaning of each plot.

Could you please explain what this error means, and what should I do
to avoid it and get my plots?

The workarounds I see now are either to make subsets (I wonder whether
simply adding a subset argument to the xyplot call would work) or to
add the captions to the plots afterwards in a graphics editor.
I would like to avoid both.

Thank you very much for helping me to solve the problem!

---
Best regards,
Wladimirmailto:[EMAIL PROTECTED]
==
Research Scientist, PhD   Leninsky Prospect 33,
Space Monitoring  Ecoinformation Systems Sector, Moscow, Russia, 119071,
Institute of Ecology, Phone: (095) 135-9972;
Russian Academy of Sciences   Fax: (095) 135-9972

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Strange problems with lattice plots

2005-07-16 Thread Wladimir Eremeev

Yes!!! I have overcome it!

The solution is

xyplot((V_A3/25) ~ year | factor(month),
strip = strip.custom(factor.levels = c(month.name, "Annual mean",
  "Annual integral", "Integral (Jan-Mar)", "Integral (Apr-Jun)",
  "Integral (Jul-Sep)", "Integral (Oct-Dec)",
  
 and so on...


Sorry to disturb.
 
WE Dear r-help,
WE I need to draw some lattice plots and have stuck in strange problems.
   
WEI have a data frame of 405 rows:
 str(dframe)
WE `data.frame':   405 obs. of  4 variables:
WE  $ year : num  1990 1990 1990 1990 1990 1990 1990 1990 1990 1990 ...
WE  $ month: num  1 2 3 4 5 6 7 8 9 10 ...
WE  $ V_A2 : num  NA NA NA NA NA ...
WE  $ V_A3 : num  NA NA NA NA NA ...

WE Variable month changes from 1 to 27 as the data frame includes monthly
WE measurements and some aggregations of them.
WE Variable year varies from 1990 to 2004.
WE Variables V_A2 and V_A3 contain floating point values and several NAs (as
WE sometimes measurements were unavailable).

WE I draw plots:

WE xyplot((V_A3/25)~year|
WEfactor(month,labels=c(month.name,Annual mean,
WE   Annual integral,Integral
WE (Jan-Mar),Integral (Apr-Jun),Integral (Jul-Sep),Integral
WE (Oct-Dec),
WE   Integral (Jan-Feb),Integral
WE (Jan-Mar),Integral (Jan-Apr),Integral (Jan-May),Integral
WE (Jan-Jun),
WE   Integral (Jan-Jul),Integral
WE (Jan-Aug),Integral (Jan-Sep),Integral (Jan-Oct))),
WEdata=dfame,as.table=TRUE,type=c(o,g,r),layout=c(3,9),
WEylab=Flow, Sv,xlab=NULL)

WE However, instead of plots I am getting the following error
WE Error in do.call(pmax, lapply(cond, is.na)) :
WE symbol print-name too long

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html