Re: [R] (Statistics question) - Nonlinear regression and simultaneous equation

2007-07-06 Thread Spencer Graves
  Not all parameters are estimable in some systems of equations, such 
as the classical errors-in-variables regression. 

  Consistency is an asymptotic property:  On average, as the sample 
size increases without bound, a consistent estimator will converge to 
what you want.  I'm no expert in asymptotics, but I recall theorems that 
suggest that the estimator obtained from a single step in a maximum 
likelihood estimation can be consistent -- provided the information is 
available in the data and the structure of the model.  The issue is not 
whether you use SVM (support vector machines?), FIML (full-information 
maximum likelihood?), or 2SLS (two-stage least squares?), or only the 
first step. 

  Is there information in your data for estimating all the 
parameters in your model?  By information here, I mean something like 
Fisher information:  the negative expectation of the matrix of second 
partial derivatives of the log(likelihood) with respect to the 
parameters you want to estimate.  Is this matrix ill-conditioned?  What 
happens to its eigenvalues as your hypothetical sample size increases 
without bound? 
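To make this concrete, here is a hedged sketch (the model and data are invented for illustration, not from the original question) of estimating the observed information for a simple normal model and examining its conditioning:

```r
## Toy illustration (assumed model, not the poster's): estimate the
## observed (Fisher) information as the Hessian of the negative
## log-likelihood at the MLE, then inspect its eigenvalues.
set.seed(1)
x <- rnorm(100, mean = 2, sd = 1)

## negative log-likelihood in (mu, log(sigma))
negll <- function(th) {
  -sum(dnorm(x, mean = th[1], sd = exp(th[2]), log = TRUE))
}

fit <- optim(c(0, 0), negll, hessian = TRUE)
info <- fit$hessian                 # observed information matrix
ev <- eigen(info, symmetric = TRUE)$values
ev                      # all positive if the parameters are estimable
max(ev) / min(ev)       # condition number; very large values warn of trouble
```

If some eigenvalues collapse toward zero as the (hypothetical) sample size grows, the corresponding parameter combinations are not estimable from the data.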

  If these comments do not seem relevant to your question, please 
provide more detail of your specific application, preferably including 
commented, minimal, self-contained, reproducible code, as requested at 
the end of every email forwarded by r-help. 

  Hope this helps. 
  Spencer Graves

[EMAIL PROTECTED] wrote:
 Hi, I have a fundamental question about which I'm a bit confused. If any 
 guru in this circle could help me out, I would really appreciate it. I have 
 a system of equations in which some of the endogs appear on the right-hand 
 sides of some equations. To solve this, one needs a technique like 2SLS or 
 FIML to circumvent inconsistency of the estimated coefficients. My question 
 is: if I apply a nonlinear regression like SVM regression, do I still need 
 to worry about endogeneity? That is, do I only need to care about the 1st 
 step of 2SLS? That would mean I only need to carry out the SVM regression 
 on all the exogs. Am I missing anything here? Thank you so much. Regards, 
 - adschai


 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




Re: [R] Empirical copula in R

2007-07-04 Thread Spencer Graves
  I just got 203 hits from RSiteSearch("copula") and 60 from 
RSiteSearch("copula", "fun").  Most if not all of the first 24 of the 
hits in the latter referred to the 'copula' package.  Have you reviewed 
these?  The 25th hit in the latter referred to an 'fgac' package for 
'Generalized Archimedean Copula', with a Brazilian author and 
maintainer, who presumably is not related to the 'copula' author and 
maintainer at U. Iowa. 

  Also, have you tried contacting the official maintainers of 
these packages?  You can get an email address for them from 
help(package=copula) and help(package=fgac). 

  Hope this helps. 
  Spencer Graves

GWeiss wrote:
 Hi,

 I would like to implement the empirical copula in R, does anyone know if it
 is included in a package? I know it is not in the Copula package. This one
 only includes a gof-test based on the empirical copula process.

 Thanks for your help!
 Gregor




Re: [R] how to solve a min problem

2007-07-03 Thread Spencer Graves
  Do you mean

  minimize mu with 0 < b_func(S+mu) < 800? 

  For this kind of problem, I'd first want to know the nature of 
b_func.  Without knowing more, I might try to plot b_func(S+mu) vs. 
mu, then maybe use 'optimize'. 
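Since b_func is unknown, here is a hedged sketch with an invented monotone stand-in for b_func, showing the plot and how 'uniroot' can locate where the constraint b_func(S+mu) = 800 becomes binding:

```r
## 'b_func' is hypothetical here: a toy monotone stand-in, since the
## poster's real function was not shown.
b_func <- function(z) 1000 / (1 + exp(-z))

S  <- 0
mu <- seq(-10, 10, length = 201)
plot(mu, b_func(S + mu), type = "l",
     ylab = "b_func(S + mu)")      # visualize where 0 < b_func < 800 holds
abline(h = c(0, 800), lty = 2)

## locate the boundary b_func(S + mu) = 800 numerically
mu.hi <- uniroot(function(m) b_func(S + m) - 800,
                 interval = c(-10, 10), tol = 1e-9)$root
mu.hi
```

Once the feasible interval for mu is visible, 'optimize' (or simple inspection for a monotone objective) can finish the job.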

  If this is not what you mean, please be more specific:  I'm 
confused. 

  Hope this helps. 
  Spencer Graves

domenico pestalozzi wrote:
 I know it's possible to solve max and min problems by using these functions:

 nlm, optimize, optim

 but I don't know how to use them (...if possible...) to solve this problem.

 I have a personal function called  b_func(S) where S is an input array (1 X
 n)  and I'd like:

 minimize mean(S) with 0 < b_func < 800.

 I know that the solution exists, but is it possible to calculate it in R?
 The b_func is non linear and it calculates a particular value using S as
 input and applying a convergent iterative algorithm.

 thanks


 domenico






Re: [R] unequal variance assumption for lme (mixed effect model)

2007-07-01 Thread Spencer Graves
  The 'weights' argument on 'lm' is assumed to identify a vector of 
the same length as the response, giving numbers that are inversely 
proportional to the variance for each observation. 

  However, 'lm' provides no capability to estimate weights.  If you 
want to do that, the varFunc capabilities in the 'nlme' package are the 
best tool I know for that purpose. 
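A hedged sketch (toy data and names invented here) contrasting fixed weights in lm() with a per-group variance ratio estimated by gls() from 'nlme':

```r
## Toy two-group data: group B has 3 times the standard deviation of A.
library(nlme)
set.seed(42)
g <- factor(rep(c("A", "B"), each = 50))
y <- rnorm(100, mean = ifelse(g == "A", 0, 1),
           sd   = ifelse(g == "A", 1, 3))
d <- data.frame(y = y, g = g)

## lm(): the weights must be known in advance,
## inversely proportional to each observation's variance
fit.lm <- lm(y ~ g, data = d, weights = ifelse(g == "A", 1, 1/9))

## gls(): the relative variance of group B is estimated from the data
fit.gls <- gls(y ~ g, data = d, weights = varIdent(form = ~ 1 | g))
fit.gls$modelStruct$varStruct     # estimated variance ratio, near 3 here
```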

  If someone thinks there are better tools available for estimating 
heteroscedasticity, I hope s/he will enlighten us both. 

  Hope this helps.
  Spencer Graves   

shirley zhang wrote:
 Thanks for Spencer and Simon's help.  I've got very interesting
 results based on your suggestions.

 One more question,  how to handle unequal variance problme in lm()?
 Isn't the weights option also, which means weighted least squares,
 right?  Can you give me an example of setting this parameter in lm()
 to account for  different variance assumption in each group?

 Thanks again,
 Shirley


 On 6/29/07, Spencer Graves [EMAIL PROTECTED] wrote:
 comments in line

 shirley zhang wrote:
  Hi Simon,
 
  Thanks for your reply. Your reply reminds me of that book. I've read it
  a long time ago, but haven't tried the weights option in my projects
  yet :)
 
  Is the heteroscedastic test always less powerful because we have to
  estimate the within group variance from the given data?
 
 SG:  In general, I suspect we generally lose power when we estimate more
 parameters.

 SG:  You can check this using the 'simulate.lme' function, whose use is
 illustrated in the seminal work reported in sect. 2.4 of Pinheiro and
 Bates (2000) Mixed-Effects Models in S and S-Plus (Springer).
  Should we check whether each group has equal variance before using
  weights=varIdent()? If we should, what is the function for linear
  mixed model?
 
 SG:  The general advice I've seen is to avoid excessive
 overparameterization of heteroscedasticity and correlations.  However,
 parsimonious correlation and heteroscedasticity models would likely be
 wise.  Years ago, George Box expressed concern about people worrying too
 much about outliers, which are often fairly obvious and relatively easy
 to detect, while they worried too little, he thought, about dependence,
 especially serial dependence, which is generally more difficult to
 detect and creates bigger problems in inference than outliers.  He
 wrote, "Why worry about mice when there are tigers about?"

 SG:  Issues of this type can be fairly easily evaluated using
 'simulate.lme'.

  Hope this helps.
  Spencer Graves
  Thanks,
  Shirley
 
  On 6/27/07, Simon Blomberg [EMAIL PROTECTED] wrote:
 
  The default settings for lme do assume equal variances within groups.
  You can change that by using the various varClasses. see 
 ?varClasses. A
  simple example would be to allow unequal variances across groups. 
 So if
  your call to lme was:
 
  lme(...,random=~1|group,...)
 
  then to allow each group to have its own variance, use:
 
  lme(...,random=~1|group, weights=varIdent(form=~1|group),...)
 
  You really really should read Pinheiro & Bates (2000). It's all 
 there.
 
  HTH,
 
  Simon.
 
  , On Wed, 2007-06-27 at 21:55 -0400, shirley zhang wrote:
 
  Dear Douglas and R-help,
 
  Does lme assume normal distribution AND equal variance among groups
  like anova() does? If it does, is there any method like unequal
  variance T-test (Welch T) in lme when each group has unequal 
 variance
  in my data?
 
  Thanks,
  Shirley
 
 
  --
  Simon Blomberg, BSc (Hons), PhD, MAppStat.
  Lecturer and Consultant Statistician
  Faculty of Biological and Chemical Sciences
  The University of Queensland
  St. Lucia Queensland 4072
  Australia
 
  Room 320, Goddard Building (8)
  T: +61 7 3365 2506
  email: S.Blomberg1_at_uq.edu.au
 
  The combination of some data and an aching desire for
  an answer does not ensure that a reasonable answer can
  be extracted from a given body of data. - John Tukey.
 
 
 
 
 




Re: [R] unequal variance assumption for lme (mixed effect model)

2007-06-29 Thread Spencer Graves
comments in line

shirley zhang wrote:
 Hi Simon,

 Thanks for your reply. Your reply reminds me of that book. I've read it
 a long time ago, but haven't tried the weights option in my projects
 yet :)

 Is the heteroscedastic test always less powerful because we have to
 estimate the within group variance from the given data?
   
SG:  In general, I suspect we generally lose power when we estimate more 
parameters. 

SG:  You can check this using the 'simulate.lme' function, whose use is 
illustrated in the seminal work reported in sect. 2.4 of Pinheiro and 
Bates (2000) Mixed-Effects Models in S and S-Plus (Springer). 
 Should we check whether each group has equal variance before using
 weights=varIdent()? If we should, what is the function for linear
 mixed model?
   
SG:  The general advice I've seen is to avoid excessive 
overparameterization of heteroscedasticity and correlations.  However, 
parsimonious correlation and heteroscedasticity models would likely be 
wise.  Years ago, George Box expressed concern about people worrying too 
much about outliers, which are often fairly obvious and relatively easy 
to detect, while they worried too little, he thought, about dependence, 
especially serial dependence, which is generally more difficult to 
detect and creates bigger problems in inference than outliers.  He 
wrote, "Why worry about mice when there are tigers about?" 

SG:  Issues of this type can be fairly easily evaluated using 
'simulate.lme'. 

  Hope this helps. 
  Spencer Graves
 Thanks,
 Shirley

 On 6/27/07, Simon Blomberg [EMAIL PROTECTED] wrote:
   
 The default settings for lme do assume equal variances within groups.
 You can change that by using the various varClasses. see ?varClasses. A
 simple example would be to allow unequal variances across groups. So if
 your call to lme was:

 lme(...,random=~1|group,...)

 then to allow each group to have its own variance, use:

 lme(...,random=~1|group, weights=varIdent(form=~1|group),...)

 You really really should read Pinheiro & Bates (2000). It's all there.

 HTH,

 Simon.

 , On Wed, 2007-06-27 at 21:55 -0400, shirley zhang wrote:
 
 Dear Douglas and R-help,

 Does lme assume normal distribution AND equal variance among groups
 like anova() does? If it does, is there any method like unequal
 variance T-test (Welch T) in lme when each group has unequal variance
 in my data?

 Thanks,
 Shirley

   
 --
 Simon Blomberg, BSc (Hons), PhD, MAppStat.
 Lecturer and Consultant Statistician
 Faculty of Biological and Chemical Sciences
 The University of Queensland
 St. Lucia Queensland 4072
 Australia

 Room 320, Goddard Building (8)
 T: +61 7 3365 2506
 email: S.Blomberg1_at_uq.edu.au

 The combination of some data and an aching desire for
 an answer does not ensure that a reasonable answer can
 be extracted from a given body of data. - John Tukey.


 





Re: [R] Dominant eigenvector displayed as third (Marco Visser)

2007-06-29 Thread Spencer Graves
  There is no dominant eigenvalue:  The eigenvalues of that matrix 
are the 6 distinct sixth roots of 5.  All have modulus (or absolute 
value) = 5^(1/6) = 1.307660.  When I raised them all to the 6th power, 
all 6 were 5+0i. 

  Someone else can tell us why this is, but this should suffice as 
an initial answer to your question. 
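The claim is easy to check directly, using the matrix from the question below:

```r
## Verifying the statement above: all six eigenvalues of the matrix
## have the same modulus 5^(1/6), so no single eigenvalue dominates.
a <- c(0,0,0,0,0,5, 1,0,0,0,0,0, 0,1,0,0,0,0,
       0,0,1,0,0,0, 0,0,0,1,0,0, 0,0,0,0,1,0)
mat <- matrix(a, ncol = 6, byrow = TRUE)
ev <- eigen(mat)$values
Mod(ev)    # six values, all equal to 5^(1/6) = 1.307660
ev^6       # all six equal 5 (+0i), as noted above
```

Which of these equal-modulus eigenvalues is listed "first" is then just an artifact of the ordering used by the underlying LAPACK routine, which is why R and Matlab can disagree.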

  Hope this helps. 
  Spencer Graves

Marco Visser wrote:
 Dear R users  Experts,

 This is just a curiosity: I was wondering why the dominant eigenvector 
 and eigenvalue of the following matrix are given as the third. I guess 
 this could complicate automatic selection procedures. 

 0 0 0 0 0 5
 1 0 0 0 0 0
 0 1 0 0 0 0
 0 0 1 0 0 0
 0 0 0 1 0 0
 0 0 0 0 1 0

 Please copy  paste the following into R;

 a=c(0,0,0,0,0,5,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0)
 mat=matrix(a, ncol=6,byrow=T)
 eigen(mat)

 The matrix is a population matrix for a plant pathogen (Powell et al 2005).

 Basically I would really like to know why this happens so I will know if it 
 can occur 
 again. 

 Thanks for any comments,

 Marco Visser


 Comment: In Matlab the dominant eigenvector and eigenvalue 
 of the described matrix are given as the sixth. Again, no idea why.

 reference

 J. A. Powell, I. Slapnicar and W. van der Werf. Epidemic spread of a 
 lesion-forming 
 plant pathogen - analysis of a mechanistic model with infinite age structure. 
 (2005) 
 Linear Algebra and its Applications 298. p 117-140.  











[R] R.matlab questions

2007-06-28 Thread Spencer Graves
Hello:

  Two questions about R.matlab:

  1.  How to break a hung R-Matlab connection?

  2.  How to execute R.matlab commands from within a function?


BREAKING AN R-Matlab CONNECTION

  Sometimes an attempted R.matlab command locks up my computer.  The 
standard R break process interrupts the R command.  However, when I do 
that, the command to Matlab is still pending, and I don't know an easy 
way to interrupt that.  A simple, self-contained example appears below.

  The easiest way I've found so far to interrupt Matlab is to quit R. 
This will finally release Matlab.


CALLING R.matlab FUNCTIONS FROM WITHIN A FUNCTION

  An R.matlab function call that works as a direct R command hangs for 
me when executed within an R function.  A simple, self-contained example 
appears below.

How can I work around this?


  Thanks,
  Spencer Graves
Using Matlab 7.3.0 (R2006b) under Windows XP Pro.
> sessionInfo()
R version 2.5.0 (2007-04-23)
i386-pc-mingw32

locale:
LC_COLLATE=English_United States.1252;LC_CTYPE=English_United
States.1252;LC_MONETARY=English_United
States.1252;LC_NUMERIC=C;LC_TIME=English_United States.1252

attached base packages:
[1] splines   stats graphics  grDevices utils datasets
[7] methods   base

other attached packages:
R.matlab R.oo  fda  zoo
  1.1.3  1.2.7  1.2.1  1.3-1

###

#EXAMPLE:  CALLING R.matlab FROM WITHIN A FUNCTION?

# 1.  library(R.matlab)
library(R.matlab)
# 2.  Create a Matlab client object to support communications
(matlab <- Matlab())
# Optionally set setVerbose(..., -2) to get max info
setVerbose(matlab, -2)

# 3.  Start Matlab
# 4.  Ask Matlab to become a slave

#Matlab> MatlabServer

# 5.  Open the connection from R to MatlabServer
(isOpenMatlab <- open(matlab))

# NOTE:  If Matlab is not frozen:
#R> close(matlab)
# returns local control to Matlab.
# Control of Matlab can be returned to R at any time
# by repeating steps 4  5.

# 6.  matlab.compute.a
compute <- evaluate(matlab, "a = 1+2")
(a0 <- getVariable(matlab, "a"))

# The above works fine for me.

# 7.  R.matlab.compute.a function
R.matlab.compute.a <- function(text="a = 1+2", matlabClient=matlab){
# text = a Matlab expression that stores 'a'
   ev0 <- evaluate(matlabClient, text)
   getVariable(matlabClient, "a")
}

# The following locks up both R and Matlab for me:
R.matlab.compute.a()



Re: [R] stepAIC on lm() where response is a matrix..

2007-06-27 Thread Spencer Graves
  I see several options for you: 

  1.  Write a function 'dropterm.mlm', copying 'dropterm.lm' and 
modifying it as you think appropriate.  The function 'dropterm.lm' is 
hidden in a namespace, which you can see from 'methods(dropterm)'.  To 
get it, either use getAnywhere(dropterm.lm) or MASS:::dropterm.lm.

  2.  Use 'stepAIC' in the univariate mode.  If they both select the 
same model, it would strongly suggest that you would get the same answer 
from a multivariate version.  Fit that multivariate version and be happy. 

  3.  If univariate analyses produce different models and you want a 
common one, take the models you get, and interpolate manually a list of 
alternative plausible models between the two best univariate models.  
Then fit those manually and select the one with the smallest AIC. 
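Option 2 might look like this (toy data; the names R1, R2, X, Y echo the question, but the data are invented):

```r
## Run stepAIC separately on each response; if both select the same
## terms, refit the multivariate model with those terms.
library(MASS)
set.seed(1)
X  <- rnorm(50); Y <- rnorm(50)
R1 <- 2 * X + rnorm(50)
R2 <- 3 * X + rnorm(50)

s1 <- stepAIC(lm(R1 ~ X + Y), trace = FALSE)
s2 <- stepAIC(lm(R2 ~ X + Y), trace = FALSE)
formula(s1)
formula(s2)

## If both univariate searches keep the same terms, say only X,
## fit the multivariate version and be happy:
M <- lm(cbind(R1, R2) ~ X)
```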

  Hope this helps. 
  Spencer Graves

vinod gullu wrote:
 dear R users,

 I have fit lm() on a matrix of responses, 
 i.e. M1 = lm(cbind(R1,R2) ~ X+Y+0). When I use
 summary(M1), it shows details for R1 and R2
 separately. Now I want to use stepAIC on these models.
 But when I use stepAIC(M1), an error message comes
 saying that dropterm.mlm is not implemented. What is
 the way out to use stepAIC in such cases?

 regards,


  
 





Re: [R] Gaussian elimination - singular matrix

2007-06-27 Thread Spencer Graves
  RSiteSearch("generalized inverse", "fun") produced 194 hits for me 
just now, including references to the following:

Ginv {haplo.stats}
ginv {MASS}
invgen {far}
Ginv {haplo.score}

At least some of these should provide a solution to a singular system. 
You could further provide side constraints as Moshe suggested, but I 
suspect that 'ginv', for example, might return the minimum length 
solution automatically.
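A hedged sketch of that guess, using 'ginv' from MASS on a small singular system (the matrix here is invented for illustration):

```r
## ginv() gives the Moore-Penrose generalized inverse; for a consistent
## singular system A x = b it returns the minimum-length solution.
library(MASS)
A <- matrix(c(1, 2,
              2, 4), nrow = 2, byrow = TRUE)   # rank 1, hence singular
b <- c(1, 2)                                   # lies in the column space of A
x <- ginv(A) %*% b
A %*% x                                        # reproduces b
```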

Hope this helps.
Spencer Graves

Moshe Olshansky wrote:
 All the nontrivial solutions to AX = 0 are the
 eigenvectors of A corresponding to eigenvalue 0 (try
 the eigen function).
 The non-negative solution may or may not exist.  For
 example, if A is a 2x2 matrix with Aij = 1 for 1 <= i,j <= 2,
 then the only non-trivial solution to AX = 0 is X =
 a*(1,-1), where a is any nonzero scalar.  So in this
 case there is no non-negative solution. 
 Let X1, X2,...,Xk be all the k independent
 eigenvectors corresponding to eigenvalue 0, i.e. AXi =
 0 for i = 1,2,...,k.  Any linear combination of them,
 X = a1*X1 + ... + ak*Xk, is also a
 solution of AX = 0.  Let B be the matrix whose COLUMNS
 are the vectors X1,...,Xk (B = (X1,...,Xk)).  Then
 finding a1,...,ak for which all elements of X are
 non-negative is equivalent to finding a vector a =
 (a1,...,ak) such that B*a >= 0.  Of course a =
 (0,...,0) is a solution.  The question is whether there
 exists another one.  Try to solve the following Linear
 Programming problem:
 max a1
 subject to B*a >= 0
 (you can start with a = (0,...,0), which is a feasible
 solution).
 If you get a non-zero solution, fine.  If not, try to
 solve
 min a1
 subject to B*a >= 0
 and if you still get 0, try this with max a2, then min a2,
 max a3, min a3, etc.  If all the 2k problems have only
 the 0 solution, then there are no nontrivial nonnegative
 solutions.  Otherwise you will find one.
 Instead of 2k LP (Linear Programming) problems you can
 look at one QP (Quadratic Programming) problem:
 max a1^2 + a2^2 + ... + ak^2 
 subject to B*a >= 0
 If there is a nonzero solution a = (a1,...,ak), then X
 = a1*X1 + ... + ak*Xk is what you are looking for. 
 Otherwise there is no nontrivial nonnegative solution.
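The eigenvector part of this recipe can be sketched in R (toy singular matrix invented for illustration; the LP/QP step would need an optimization package and is omitted):

```r
## Kernel of A from eigenvectors with eigenvalue 0, followed by a sign
## flip when the whole kernel vector happens to come out nonpositive.
A <- matrix(c( 1, -1,
              -1,  1), nrow = 2, byrow = TRUE)  # singular: eigenvalues 2 and 0
e <- eigen(A)
ker <- e$vectors[, abs(e$values) < 1e-8, drop = FALSE]
v <- ker[, 1]
if (all(v <= 0)) v <- -v   # a kernel vector times -1 is still a kernel vector
v                          # here proportional to (1, 1): nonnegative
A %*% v                    # ~ 0, confirming v is in the kernel
```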

 --- Bruce Willy [EMAIL PROTECTED] wrote:

   
 I am sorry, there is just a mistake: the solution
 cannot be unique (because it is a vector space)
 (but then I might normalize it).
 
 Can R find one anyway?
 
 This is equivalent to finding an eigenvector, in fact.
 
 From: [EMAIL PROTECTED]
 To: r-help@stat.math.ethz.ch
 Date: Wed, 27 Jun 2007 22:51:41 +
 Subject: [R] Gaussian elimination - singular matrix
 
 Hello, I hope it is not a too stupid question. I have a singular
 matrix A (so not invertible). I want to find a nontrivial
 nonnegative solution to AX=0 (the kernel of A). It is a special
 matrix A (so maybe this nonnegative solution is unique). The
 authors of the article suggest a Gaussian elimination method.
 Do you know if R can do that automatically? I have seen that
 solve has an option LAPACK, but it does not seem to work for
 me :(  Thank you very much

 
 _
   
 Le blog Messenger de Michel, candidat de la Nouvelle
 Star : analyse, news, coulisses… A découvrir ! 
 [[alternative HTML version deleted]] 

 
 _
   
 Le blog Messenger de Michel, candidat de la Nouvelle
 Star : analyse, news, coulisses… A découvrir !

  [[alternative HTML version deleted]]

 

 





Re: [R] nlme correlated random effects

2007-06-26 Thread Spencer Graves
  I haven't seen a reply to this, so I will offer a comment in case 
you haven't already solved the problem. 

  Have you consulted the Mixed-Effects Bible for S-Plus / R, 
Pinheiro and Bates (2000) Mixed-Effects Models in S and S-Plus 
(Springer)?  If yes, have you worked through appropriate portions of the 
book and the companion script files available with the standard R 
distribution in ~R\library\nlme\scripts?  I just did grep 'pdB' *.* 
in that directory and found 5 uses of pdBlocked, 3 in ch04.R, and 1 each 
in ch06.R and ch08.R.  Also, RSiteSearch("pdBlocked with 2 random 
effects") produced 69 hits for me just now.  You may not find anything 
useful there, but 69 does not seem too large a list to search, and there 
is a reasonable chance of finding something useful among them. 

  Beyond this, a recommended approach to difficult questions like 
this is to try to simplify it to the maximum extent possible.  For 
example, it sounds to me like your question, "can I use pdBlocked with 
only 2 random effects", could be answered without the complexity of 
'nlme'.  What phony data set can you generate with the minimum number of 
observations and variables that could help answer this question?  The 
process of producing such a simplified example might produce an answer 
to your question.  If it doesn't, then you can submit that simple, 
self-contained example to this list.  Doing so will increase the chances 
of a useful reply. 

  I know this doesn't answer your question, but I hope it helps. 
  Best Wishes,
  Spencer Graves

Daniel O'Shea wrote:
 I am examining the following nlme model.

 asymporig <- function(x,th1,th2) th1*(1-exp(-exp(th2)*x))
 mod1 <- nlme(fa20~(ah*habdiv+ad*log(d)+ads*ds+ads2*ds2+at*trout)+asymporig(da.p,th1,th2),
 fixed=ah+ad+ads+ads2+at+th1+th2~1,
 random=th1+th2~1,
 start=c(ah=.9124,ad=.9252,ads=.5,ads2=-.1,at=-1,th1=2.842,th2=-6.917),
 data=pca1.grouped)

 However, the two random effects (th1 and th2) which describe the asymptotic 
 relationship between richness (fa20) and area (da.p) are correlated: -0.904 
 with approximate 95% ci of -0.99 to -.32.
 I examined the anova of mod1 with both random effects and mod2 with just th1 
 and mod1 is preferred.  I also examined pdDiag(th1 + th2~1) for another model 
 (mod3) and based on the anova the original mod1 is preferred.

 My question is: can I use pdBlocked with only 2 random effects, and if 
 so, how would I specify that in the model?  Or is the 95% ci for the 
 correlation wide enough to ignore?

 Dan





Re: [R] PCA for Binary data

2007-06-12 Thread Spencer Graves
  The problem with applying prcomp to binary data is that it's not 
clear what problem you are solving. 

  The standard principal components and factor analysis models 
assume that the observations are linear combinations of unobserved 
common factors (shared variability), normally distributed, plus normal 
noise, independent between observations and variables.  Those 
assumptions are clearly violated for binary data. 

  RSiteSearch("PCA for binary data") produced references to 'ade4' 
and 'FactoMineR'.  Have you considered these?  I have not used them, but 
'FactoMineR' includes functions for 'Multiple Factor Analysis for Mixed 
[quantitative and qualitative] Data'. 
  
  Hope this helps. 
  Spencer Graves

Josh Gilbert wrote:
 I don't understand, what's wrong with using prcomp in this situation?

 On Sunday 10 June 2007 12:50 pm, Ranga Chandra Gudivada wrote:
   
 Hi,

 I was wondering whether there is any package implementing Principal
 Component Analysis for Binary data

   Thanks chandra


 -



 





Re: [R] How do I obtain standard error of each estimated coefficients in polr

2007-06-12 Thread Spencer Graves
  I'm confused: 

  Have you considered the examples in the 'polr' help file?  The 
first example ends with summary(house.plr).  The print of this summary 
includes standard errors.  If you want those numbers for subsequent 
computations, you can try str(summary(house.plr)) or 
names(summary(house.plr)).  If you want to be more sophisticated, 
class(summary(house.plr)) says it is "summary.polr".  Then 
methods(class="summary.polr") says there exists a function 
'print.summary.polr', which is, however, invisible.  If you want to see 
it, getAnywhere('print.summary.polr') will produce the code. 
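A sketch built on the help-file example mentioned above (the column name "Std. Error" is the one used by summary.polr):

```r
## Extract standard errors from the summary of the polr help-file example.
library(MASS)
house.plr <- polr(Sat ~ Infl + Type + Cont, weights = Freq,
                  data = housing, Hess = TRUE)
co <- summary(house.plr)$coefficients   # columns: Value, Std. Error, t value
co[, "Std. Error"]
```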

  If this does NOT answer your question, please provide commented, 
minimal, self-contained, reproducible code, as requested in the posting 
guide www.R-project.org/posting-guide.html. 

  Hope this helps. 
  Spencer Graves

[EMAIL PROTECTED] wrote:
 Hi,

 I obtained all the coefficients that I need from polr. However, I'm wondering 
 how I can obtain the standard error of each estimated coefficient. I saved 
 the Hessian and did something like summary(polrObj), but I don't see any 
 standard errors like when doing regression using lm. Any help would be really 
 appreciated. Thank you!

 - adschai





Re: [R] Fwd: Using odesolve to produce non-negative solutions

2007-06-11 Thread Spencer Graves
in line

Martin Henry H. Stevens wrote:
 Hi Jeremy,
 First, setting hmax to a small number could prevent a large step, if 
 you think that is a problem. Second, however, I don't see how you can 
 get a negative population size when using the log trick. 
SG:  Can lsoda estimate complex or imaginary parameters? 

 I would think that that would completely prevent any negative values 
 of N (i.e., e^-10 > 0). Can you explain? Or do you want to avoid 
 that trick? The only other solver I know of is rk4, and it is not 
 recommended.
 Hank
 On Jun 11, 2007, at 11:46 AM, Jeremy Goldhaber-Fiebert wrote:

 Hi Spencer,

 Thank you for your response. I also did not see anything on the lsoda 
 help page which is the reason that I wrote to the list.

 From your response, I am not sure if I asked my question clearly.

 I am modeling a group of people (in a variety of health states) 
 moving through time (and getting infected with an infectious 
 disease). This means that the count of the number of people in each 
 state should be positive at all times.

 What appears to happen is that lsoda asks for a derivative at a given 
 point in time t and then adjusts the state of the population. 
 However, perhaps due to numerical instability, it occasionally lowers 
 the population count below 0 for one of the health states (perhaps 
 because its step size is too big or something).

 I have tried both the logarithm trick
snip



Re: [R] Nonlinear Regression

2007-06-10 Thread Spencer Graves
  Have you worked through the examples in the 'nls' help file, 
especially the following: 

 DNase1 <- subset(DNase, Run == 1)
 fm3DNase1 <- nls(density ~ Asym/(1 + exp((xmid - log(conc))/scal)),
  data = DNase1,
  start = list(Asym = 3, xmid = 0, scal = 1),
  trace = TRUE)

   Treated <- Puromycin[Puromycin$state == "treated", ]
 weighted.MM <- function(resp, conc, Vm, K)
 {
 ## Purpose: exactly as white book p. 451 -- RHS for nls()
 ##  Weighted version of Michaelis-Menten model
 ## --
 ## Arguments: 'y', 'x' and the two parameters (see book)
 ## --
 ## Author: Martin Maechler, Date: 23 Mar 2001

 pred <- (Vm * conc)/(K + conc)
 (resp - pred) / sqrt(pred)
 }

 Pur.wt <- nls( ~ weighted.MM(rate, conc, Vm, K), data = Treated,
   start = list(Vm = 200, K = 0.1),
   trace = TRUE)
112.5978 :  200.0   0.1
17.33824 :  205.67588840   0.04692873
14.6097 :  206.33087396   0.05387279
14.59694 :  206.79883508   0.05457132
14.59690 :  206.83291286   0.05460917
14.59690 :  206.83468191   0.05461109

# In the call to 'nls' here, 'Vm' and 'K' are in 'start' and must 
therefore be parameters to be estimated. 
# The other names passed to the global 'weighted.MM' must be columns of 
'data = Treated'. 

# To get the residual sum of squares, first note that it is printed as 
the first column in the trace output. 

# To get that from Pur.wt, I first tried 'class(Pur.wt)'. 
# This told me it was of class 'nls'. 
# I then tried methods(class = 'nls'). 
# One of the functions listed was 'residuals.nls'.  That gave me the 
residuals. 
# I then tried 'sum(residuals(Pur.wt)^2)', which returned 14.59690. 

  Hope this helps. 
  Spencer Graves
p.s.  Did this answer your question?  Your example did not seem to me to 
be self-contained, which makes it more difficult for me to know if I'm 
misinterpreting your question.  If the example had been self-contained, 
I might have replied a couple of days ago. 

tronter wrote:
 Hello

 I followed the example in page 59, chapter 11 of the 'Introduction to R'
 manual. I entered my own x,y data. I used the least squares. My function has
 5 parameters: p[1], p[2], p[3], p[4], p[5]. I plotted the x-y data. Then I
 used lines(spline(xfit,yfit)) to overlay best curves on the data while
 changing the parameters. My question is how do I calculate the residual sum
 of squares. In the example they have the following:

 df <- data.frame(x = x, y = y)

 fit <- nls(y ~ SSmicmen(s, Vm, K), df)

 fit


 In the second line how would I input my function? Would it be:

 fit <- nls(y ~ myfunction(p[1], p[2], p[3], p[4], p[5]), df), where
 myfunction is the actual function? My function doesn't have a name, so should
 I just enter it?

 Thanks





Re: [R] Fwd: Using odesolve to produce non-negative solutions

2007-06-08 Thread Spencer Graves
  On the 'lsoda' help page, I did not see any option to force some 
or all parameters to be nonnegative. 

  Have you considered replacing the parameters that must be 
nonnegative with their logarithms?  This effectively moves the 0 lower 
limit to (-Inf) and seems to have worked well for me in the past.  
Often, it can even make the log likelihood or sum of squares surface 
more elliptical, which means that the standard normal approximation for 
the sampling distribution of parameter estimates will likely be more 
accurate. 
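
A minimal sketch of the log transformation (my own construction, not code 
from the thread): integrate u = log(N) so that N = exp(u) stays positive by 
construction.  This uses deSolve's lsoda (the successor of odesolve) with an 
illustrative exponential-decay model. 

```r
## Hedged sketch of the log-state trick with deSolve::lsoda.
## The decay model and parameter values are illustrative assumptions.
library(deSolve)
decay <- function(t, u, parms) {
  N  <- exp(u)             # back-transform to the natural scale
  dN <- -parms["r"] * N    # dN/dt for simple exponential decay
  list(dN / N)             # du/dt = (dN/dt)/N by the chain rule
}
out <- lsoda(y = c(u = log(100)), times = seq(0, 10, 1),
             func = decay, parms = c(r = 0.5))
exp(out[, "u"])            # solution on the original scale, always > 0
```
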

  Hope this helps. 
  Spencer Graves
p.s.  Your example seems not to be self-contained.  If I could have 
easily copied it from your email and run it myself, I might have been 
able to offer more useful suggestions. 

Jeremy Goldhaber-Fiebert wrote:
 Hello,

 I am using odesolve to simulate a group of people moving through time and 
 transmitting infections to one another. 

 In Matlab, there is a NonNegative option which tells the Matlab solver to 
 keep the vector elements of the ODE solution non-negative at all times. What 
 is the right way to do this in R?

 Thanks,
 Jeremy

 P.S., Below is a simplified version of the code I use to try to do this, but 
 I am not sure that it is theoretically right 

 dynmodel <- function(t, y, p) 
 { 
 ## Initialize parameter values

   birth <- p$mybirth(t)
   death <- p$mydeath(t)
   recover <- p$myrecover
   beta <- p$mybeta
   vaxeff <- p$myvaxeff
   vaccinated <- p$myvax(t)

   vax <- vaxeff*vaccinated/100

 ## If the state currently has negative quantities (shouldn't have), then 
 reset to reasonable values for computing meaningful derivatives

   for (i in 1:length(y)) {
   if (y[i] < 0) {
   y[i] <- 0
   }
   }

   S <- y[1]
   I <- y[2]
   R <- y[3]
   N <- y[4]

   shat <- (birth*(1-vax)) - (death*S) - (beta*S*I/N)
   ihat <- (beta*S*I/N) - (death*I) - (recover*I)
   rhat <- (birth*(vax)) + (recover*I) - (death*R)

 ## Do we overshoot into negative space? If so, shrink the derivative to 
 ## bring the state to 0, 
 ## then rescale the components that take the derivative negative

   if (shat + S < 0) {
   shat_old <- shat
   shat <- -1*S
   scaled_transmission <- (shat/shat_old)*(beta*S*I/N)
   ihat <- scaled_transmission - (death*I) - (recover*I)
   
   }   
   if (ihat + I < 0) {
   ihat_old <- ihat
   ihat <- -1*I
   scaled_recovery <- (ihat/ihat_old)*(recover*I)
   rhat <- scaled_recovery + (birth*(vax)) - (death*R)
   
   }   
   if (rhat + R < 0) {
   rhat <- -1*R
   }   

   nhat <- shat + ihat + rhat

   if (nhat + N < 0) {
   nhat <- -1*N
   }   

 ## return derivatives

   list(c(shat,ihat,rhat,nhat),c(0))

 }





Re: [R] factor analysis

2007-06-06 Thread Spencer Graves
  I haven't seen an answer to this post, so I thought I would try to 
generate a response. 

  Regarding your first question (Can i use this factor analysis 
somehow despite the poor cumulative variance of the first three factors 
?), I would ask: for what purpose?  And what are the alternatives? 

  The second question on the null hypothesis can be answered by 
looking at the degrees of freedom:  That will often identify the null 
hypothesis for a test with a chi-square statistic.  Let p = number of 
original variables, which I assume is 10 in your case as you list 10 
eigenvalues.  Let f = number of factors = 3 in your case.  The degrees 
of freedom is the number of free parameters estimated in a model.  With 
two nested models, the degrees of freedom is the difference in the 
numbers of parameters estimated in the two models. 

  I can think of several obvious hypotheses, in this case:  The two 
extremes are that there are no significant correlations and that a 
saturated model is required.  The former requires no parameters to 
estimate the correlation matrix, while the latter requires choose(p, 2) 
= 45 with p = 10.  To estimate a model with only one factor requires p 
parameters, one for the eigenvalue and (p-1) for the eigenvector / 
direction / factor loadings.  (The sums of squares of the elements of 
each eigenvector = 1.  The factor loadings = the eigenvector times the 
square root of the corresponding eigenvalue.)  Thus, the free parameters 
for a one-factor model = 10.  If this hypothesis compared one factor to 
none, the degrees of freedom would be 10 - 0 = 10.  Similarly, if the 
null hypothesis were saturated, the degrees of freedom would be 45 - 10 
= 35. 

  Next consider a 2-factor model.  In addition to the p coefficients 
estimated for one factor, we must estimate an additional p-1, one 
eigenvalue and p-2 for a unit vector orthogonal to the one we already 
have.  This is 19 degrees of freedom.  Similarly for a 3-factor model, 
we must estimate an additional p-2 parameters, one eigenvalue plus p-3 
for a unit vector orthogonal to the two we already have.  This gives us 
19 + 8 = 27.  Finally a 4-factor model would require estimating p-3 
additional parameters for a total of 34. 

  Now compare the degrees of freedom for the 3-factor model with all 
the others just listed to find one where the difference is the number 
you got, 18.  If we do this we find that 45 - 27 = 18.  From this, we 
conclude that the null hypothesis is the saturated model, i.e., no 
factor structure identifiable from these data. 

  As a check, let's look at your 4-factor model:  45 - 34 = 11.  
This says that your 4-factor model is NOT significantly different from a 
saturated model, i.e., it is adequate.  Returning to the 3-factor model, 
the low p-value in that case says that 3 factors is not enough:  4 
factors provides a more accurate representation. 
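
The bookkeeping above can be checked in one line: for an f-factor model on p 
variables, the degrees of freedom of the sufficiency test reduce to 
((p - f)^2 - p - f)/2, which reproduces both values quoted here. 

```r
## Degrees of freedom for the test that f factors are sufficient,
## with p = 10 variables as in the question.
p <- 10
dfFA <- function(f) ((p - f)^2 - p - f) / 2
dfFA(3)  # 18, matching the 3-factor chi-square output
dfFA(4)  # 11, matching the 4-factor check
```
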

  Does this make sense? 

  Note, however, that the above assumes your observations are all 
statistically independent.  If that's not the case, then the assumptions 
behind this test are not satisfied.  Similarly, if the observations are 
not normally distributed, you can't trust this test.  I often check 
normality using 'qqnorm'.  However, if your observations were collected 
in batches, for example, then I would not expect them to be independent. 

  Finally, even though this analysis suggests that a 4-factor model 
is better, I might still use the 3-factor model if it gave me something 
I could interpret and the 4-factor model didn't. 

  Hope this helps. 
  Spencer Graves
p.s.  I might have answered this a day or two earlier, but the lack of a 
simple, self-contained example meant that I would have to work harder to 
understand your question and craft an answer.  

bunny , lautloscrew.com wrote:
 Hi there,

  I've trouble understanding the factanal output of R.
  I am running a FA on a dataset with 10 variables.

  I plotted eigenvalues to find out how many factors to try.
  I think the elbow is @ 3 factors.
 here are my eigenvalues: 2.6372766 1.5137754 1.0188919 0.8986154  
 0.8327583 0.7187473 0.6932792 0.5807489 0.5709594 0.5349477
 (of the correlation matrix)

  I guess this is basically what screeplot does as well.

  And here's my problem:
  unfortunately the cumulative variance @ 3 factors is only .357.
  There are no crossloadings, and the interpretation of the factors and  
  their loadings definitely makes sense so far.

  Can I use this factor analysis somehow despite the poor cumulative  
  variance of the first three factors?
  Changing the rotation didn't help much.

 The test of the hypothesis says the following:

 Test of the hypothesis that 3 factors are sufficient.
 The chi square statistic is 46.58 on 18 degrees of freedom.
 The p-value is 0.000244

  Does this mean the H0 is that 3 factors are sufficient and I can't  
  reject?


 4 factors say:
 Test of the hypothesis that 4 factors

Re: [R] R-squared in mixed-effects models

2007-06-06 Thread Spencer Graves
  RSiteSearch("R^2 in lme") produced 34 hits for me just now, 
including 5 from an exchange on this list dated 2007.05.14. 

  They should answer your question better than I can. 

  Hope this helps. 
  Spencer Graves

Richard Gunton wrote:
 Hello,

 I'm fitting general linear models using the function lme() from the package
 nlme.  My variables include a number of covariates and a single random factor,
 because the experiment was laid out in blocks.  I'd like to have a statistic
 to measure the proportion of explained variance from my models.  With ordinary
 multiple regression I'd use R-squared, but I can't find any similar items in
 the output from lme().  Does anyone know something I can use in these or any
 other package?

 Thanks,

 Richard.

 --
 Richard Gunton
 PhD student - Ecology and Evolution group
 School of Biology, University of Leeds, LS2 9JT, UK
 Room 10.16, Miall Building   Tel: 0113 3432825

 http://www.personal.leeds.ac.uk/~bgyrg

 ~ Opinions expressed in this message are not attributable to the University of
 Leeds ~





Re: [R] Can I treat subject as fixed effect in linear model

2007-06-06 Thread Spencer Graves
  The short answer is that you could fit that fixed-effect model 
using 'lm', for example.  That would make sense if you wanted to make 
inference only about that particular group of 20 subjects AND you 
thought it was inappropriate to consider that their contribution to a 
model would follow a normal distribution. 

  If you want to make inference beyond that group of 20 subjects, 
then the fixed effect analysis is not appropriate.  If you thought it 
was inappropriate to think that individual adjustments for each subject 
were normally distributed, then the question is how far from normal 
might they be. 

  I don't think Model2 is illegal in the sense that you are not 
likely to be sent to prison for using it.  However, I wouldn't do it.  
I'd use your Model 1 and make all the plots that seem consistent 
with that model, as described in Pinheiro and Bates (2000) Mixed-Effects 
Models in S and S-Plus (Springer).  If the plots (or something else) 
suggested that some of my assumptions were inappropriate, then I'd 
consider other alternative models.  However, that could be a lot of 
work, and I wouldn't undertake such an effort without some pretty strong 
justification. 

  Hope this helps. 
  Spencer

shirley zhang wrote:
 Hi,

 There are 20 subjects grouped by Gender, each subject has 2 tissues
 (normal vs. cancer).

 In fact, it is a 2-way anova (factors: Gender and tissue) with tissue
 nested in subject. I've tried the following:

 Model 1: lme(response ~ tissue*Gender, random = ~1|subject)
 Model 2: response ~ tissue*Gender + subject
 Model 3: response ~ tissue*Gender


 It seems like Model 1 is the correct one since my experiment design is
 nested design. However, can anybody tell me whether Model2 is
 completely illegal?

 Thanks





[R] 'varFunc' class with additive variance? (was: can I get same results using lme and gls?)

2007-05-21 Thread Spencer Graves
Hi, Doug and others: 

  What might be the best tools for modeling an additive variance 
structure for residuals, something like the following: 

   var(resid) = s0^2 * (1 + var.pred) + daysFromTraining * var(process 
migration per day),

  where var.pred = relative variance of prediction error = something 
roughly like crossprod(x, solve(crossprod(X), x)),
  and var(process migration per day) = the variance of a random walk 
of some process characteristic. 

  My data are residuals from predictions from models produced by a 
data mining algorithm with various choices for training and test sets.  
I've been using 'gls' to fit models using 'varFunc' classes.  However, 
'varComb' models the relative standard deviation as a product of 
components from different sources.  I'm thinking of creating 'varSumSq' 
functions by modifying your 'varComb' code to model an additive (not 
multiplicative) variance structure. 

  Might there be other tools for modeling an additive variance 
structure?  Alternatively, does it sound sensible to create 'varSumSq' 
functions similar to 'varComb'? 

  Thanks,
  Spencer Graves

Douglas Bates wrote:
 On 5/20/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

   
 I was wondering how to get the same results with gls and lme. In my lme, the
 design matrix for the random effects is (should be) a identity matrix and
 therefore G should add up with R to produce the R matrix that gls would 
 report
 (V=ZGZ'+R). Added complexity is that I have 3 levels, so I have R, G and say 
 H
 (V=WHW'+ZGZ'+R). The lme is giving me the correct results, I am having 
 trouble
 finding the right corresponding specification for the gls.
 

 Thanks for including a reproducible example.  However, I'm a bit at a
 loss as to why you would want to try to create a gls model that fits a
 mixed-effects model that has random effects for intercept and slope at
 two nested levels.  I don't think that corCompSymm will do what you
 want but, to tell the truth, I have difficulty in thinking of the
 model in that form.  I much prefer the mixed-effects form.


   
 Thanks for your help.

 Toby


 dtaa =
 read.table("http://www.ats.ucla.edu/stat/mplus/examples/ma_snijders/mlbook1.dat",
 sep = ",")
 dta1 = reshape(dtaa, list(c("V10", "V12")), "score", direction = "long",
 drop = c("V2","V3","V4","V5","V6","V7","V8","V9","V11","V13","V14","V15","V16","V17","V18","V19","V20","V21","V22","V23","V24","V25"))
 colnames(dta1)[1] = "schoolNR"
 dta2 = dta1[order(dta1$id), ]
 head(dta2)
 timef = factor(dta2$time)

 summary(mdl1l <- lme(score ~ timef - 1, dta2, ~ timef - 1 | schoolNR/id, method = "ML"))
 summary(mdl1g <- gls(score ~ timef - 1, dta2, corCompSymm(, ~ timef | schoolNR/id),
 varIdent(, ~ 1 | id*timef), , "ML"))


 





Re: [R] [OT] Is data copyrightable?

2007-05-12 Thread Spencer Graves
Dear Hadley:

  P.s.  Ben Klemens (2006) Math you can't use (Brookings) cites cases 
where people have been successfully sued for copyright infringement for 
using a theorem they independently discovered.  That's pretty scary to 
me and seems totally unreasonable, but it is apparently the law, at least 
in the US.

  Spencer Graves


  Brian's reply seems more consistent with what I've heard than
Peter's.

  The briefest summary I know of copyright law is that expression
but not ideas can be copyrighted.  Copyright law exists to promote
useful arts, and a compilation of data is intended to be useful.
Google led me to http://ahds.ac.uk/copyrightfaq.htm#faq1, which says that
"data or other materials which (a) are arranged in a systematic or
methodical way, or (b) are individually accessible by electronic or
other means" can be copyrighted.

  Beyond that, there is a fair use doctrine, which in the US at
least allows use in many cases by educators in public institutions, but
the same use by someone not affiliated with a public school might be an
infringement.  Ten years ago, I heard from attorneys at the University
of Wisconsin that a college prof can run copies of a journal article and
distribute them to his class without worrying about copyright
infringement (provided any money collected is clearly designed to cover
costs, not make a profit), but the same copies prepared by Kinko's off
campus for the same class (sold perhaps at the same price) must get
copyright permission.

  Hope this helps.
  Spencer Graves

Peter Dalgaard wrote:
 hadley wickham wrote:
   
 Dear all,

 This is a little bit off-topic, but I was wondering if anyone has any
 informed opinion on whether data (ie. a dataset) is copyrightable?

 Hadley
   
 
 In general not, I believe. E.g., I didn't have to ask formal permission 
 to use data from Altman's book in mine (and I did check with my 
 publisher). I suspect that things can get murkier than that though; I 
 seem to recall stories of plagiarism cases in relation to collections of 
 mathematical tables. Beware also that there can be other legal 
 complications, including rights to first publication of new results, 
 which usually implies that you cannot publish entire datasets until 
 their publication potential has been exhausted. And of course, proper 
 attribution is required for reasons of scientific integrity and general 
 courtesy. (Disclaimer: I Am Not A Lawyer, esp. not a US one...)





Re: [R] Nonlinear constrains with optim

2007-05-10 Thread Spencer Graves
Hi, Patrick, Paul, et al.: 

see in line 

Patrick Burns wrote:
 I don't know of any sources, but the idea is quite simple.

 For each constraint that is broken, the penalty is the amount
 by which the constraint is broken times a penalty rate.  The
 total penalty to add to the objective is the sum of penalties
 over all constraints.

 There is a catch or two when using this with derivative-based
 optimizers.  The objective typically becomes non-differentiable
 at the boundary, and optimizers can get confused.  
I believe I've gotten good results with penalties that are the SQUARE of 
the amount by which the constraints were violated.  These are 
continuously differentiable and so don't confuse the derivative-based 
optimizers much. 

Also, I start with a small penalty, then increase it until I get a 
solution that seems sensible.  If you can't accept a solution even 
slightly outside your constraints, move the point at which the penalty 
starts a little inside the boundary. 
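
A minimal sketch of the squared-penalty idea (my own construction, not code 
from the thread; the objective and constraint are illustrative): 

```r
## Hedged sketch: minimize obj(x) subject to cons(x) <= 0 by adding
## lambda * max(cons(x), 0)^2, escalating lambda as described above.
obj  <- function(x) (x[1] - 2)^2 + (x[2] - 1)^2   # illustrative objective
cons <- function(x) x[1]^2 + x[2]^2 - 1           # constraint: unit disk
penalized <- function(x, lambda)
  obj(x) + lambda * max(cons(x), 0)^2
ans <- optim(c(0, 0), penalized, lambda = 1)      # mild penalty first
for (lambda in c(10, 100, 1000))                  # then escalate, restarting
  ans <- optim(ans$par, penalized, lambda = lambda)
ans$par   # close to the constrained optimum on the disk boundary
```
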

  Hope this helps. 
  Spencer Graves

 They might
 be less confused with smaller penalty rates.  However if the
 penalty rate is too small, then you can get a solution that breaks
 one or more penalties.

 Starting from a solution given by Rgenoud or its ilk is probably
 a good idea.
   
 Patrick Burns
 [EMAIL PROTECTED]
 +44 (0)20 8525 0696
 http://www.burns-stat.com
 (home of S Poetry and A Guide for the Unwilling S User)

 Paul Smith wrote:

   
 Dear All

 I am dealing at the moment with optimization problems with nonlinear
 constraints. rgenoud is quite apt to solve that kind of problem, but
 the precision of the optimal values for the parameters is sometimes
 far from what I need. Optim seems to be more precise, but it can only
 accept box-constrained optimization problems. I read in the list
 archives that optim can also be used with nonlinear constrains through
 penalizations. However, I am not familiar with the technique of
 penalizations. Could someone please indicate to me a site or a book to
 learn about that penalization technique?

 Thanks in advance,

 Paul



  

 





Re: [R] Optim

2007-05-10 Thread Spencer Graves
Hello, Wassim: 

GENERAL THEORY: 

  To expand on Ravi's comments, what can you tell us about the 
problem?  For example, if you have only 1 parameter, you can plot the 
log(likelihood) over a wide enough range so you can be confident you've 
covered all local maxima.  Then pick the max of the local maxima.  If 
there are only 2 parameters, you can make contour plots. 

  If this is not convenient, what else can you tell us about the 
problem?  For example, why are there local maxima?  If there are 
identifiability issues as Ravi suggested, what can you do to 
characterize and eliminate them -- using either constraints or 
transformations? 

  Also, can you find an upper bound with a unique maximum?  If yes, 
and if you've found one local maximum for your likelihood, you could (in 
theory at least) construct the set of all points where the upper bound 
is above the local max you have.   


PRAGMATICS IN R: 

  If you don't have time or knowledge to do something more 
sophisticated, you can try starting 'optim' at multiple places, store 
the answers and pick the winner. 

  Also, have you considered method = 'SANN'?  Simulated Annealing is 
designed specifically to produce something sensible with nasty 
problems.  It won't guarantee that you've found the optimal, but it 
might get you close. 

  For functions that are poorly conditioned, I've had reasonable 
luck using different methods, using the optimal found by one method as 
starting values for another method.  Also consider 'nlminb'. 
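
The multiple-starts strategy can be sketched as follows (an illustrative toy 
objective with two local minima, not the poster's likelihood): 

```r
## Hedged sketch of multi-start optimization: run optim() from several
## random starting points, keep every result, and pick the winner.
set.seed(1)
nll <- function(p) (p[1]^2 - 4)^2 + p[2]^2        # toy objective, minima at p1 = +/-2
starts <- matrix(runif(20, -3, 3), ncol = 2)      # 10 random 2-d starts
fits <- apply(starts, 1, function(s) optim(s, nll))
best <- fits[[which.min(sapply(fits, `[[`, "value"))]]
best$par    # the best local optimum found across all starts
```
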

  hope this helps. 
  spencer graves

Ravi Varadhan wrote:
 Let us first assume that you have enumerated all the local maxima, which is
 by no means a trivial thing to assure.  How different are the likelihood
 values?  If they are significantly different, then take the parameter
 estimates corresponding to the largest likelihood.  If they are not
 significantly different but the corresponding parameter estimates differ
 widely, then you may have identifiability issues.  

 Ravi.

 
 ---

 Ravi Varadhan, Ph.D.

 Assistant Professor, The Center on Aging and Health

 Division of Geriatric Medicine and Gerontology 

 Johns Hopkins University

 Ph: (410) 502-2619

 Fax: (410) 614-9625

 Email: [EMAIL PROTECTED]

 Webpage:  http://www.jhsph.edu/agingandhealth/People/Faculty/Varadhan.html

  

 
 


 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Wassim Kamoum
 Sent: Thursday, May 10, 2007 3:46 PM
 To: r-help@stat.math.ethz.ch
 Subject: [R] Optim

 Hello, 
   I'm maximizing a likelihood function with the function optim, but for
 different initial parameters (in the input of the optim function), I find
 different values for the likelihood function and the parameter estimates;
 the cause is that the algorithm has not found the global maximum for the
 function but only a local maximum. What must I do to obtain the global
 maximum for the likelihood function?
   Thanks

   
 -

   [[alternative HTML version deleted]]






Re: [R] getting informative error messages

2007-05-08 Thread Spencer Graves
Hi, Tony: 

  Are you familiar with the 'debug' command?  I agree that more 
informative error messages and 'traceback' would be nice, but I've found 
the 'debug' facility quite useful.  [I even sometimes prepare a shell of 
a function 'fn', then say debug(fn) and fn(), and complete writing the 
function in its native environment where I can more easily check what 
each step does.]  I've heard that 'debug' does not work well with S4 
class generics, but I have not so far had to deal with that.  {There is 
also a 'debug' package, which is completely separate from the debug 
command in the 'base' package.  I've heard that it has more extensive 
capabilities, but I've never used it.}

  I suspect you may already know 'debug', but for those who don't, I 
think it's worth noting its utility for this kind of thing. 
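
A tiny illustration of that workflow (my own toy function): 

```r
## Toy function flagged for line-by-line debugging with base R's debug().
fn <- function(x) {
  y <- x^2
  z <- sqrt(y) + 1   # when stepping through, inspect y here
  z
}
debug(fn)    # the next call to fn() opens the browser
# fn(3)      # step with 'n', inspect variables by name, quit with 'Q'
undebug(fn)  # remove the flag when done
```
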

  Hope this helps. 
  Spencer Graves

Tony Plate wrote:
 Certain errors seem to generate messages that are less informative than 
 most -- they just tell you which function an error happened in, but 
 don't indicate which line or expression the error occurred in.

 Here's a toy example:

   f <- function(x) {a <- 1; y <- x[list(1:3)]; b <- 2; return(y)}
   options(error=NULL)
   f(1:3)
 Error in f(1:3) : invalid subscript type
   traceback()
 1: f(1:3)
  

 In this function, it's clear that the error is in subscripting 'x', but 
 it's not always so immediately obvious in lengthier functions.

 Is there anything I can do to get a more informative error message in 
 this type of situation?  I couldn't find any help in the section 
 Debugging R Code in R-exts (or anything at all relevant in R-intro).

 (Different values for options(error=...) and different formatting of the 
 function made no difference.)

 -- Tony Plate

   sessionInfo()
 R version 2.5.0 (2007-04-23)
 i386-pc-mingw32

 locale:
 LC_COLLATE=English_United States.1252;LC_CTYPE=English_United 
 States.1252;LC_MONETARY=English_United 
 States.1252;LC_NUMERIC=C;LC_TIME=English_United States.1252

 attached base packages:
 [1] stats graphics  grDevices utils datasets  methods
 [7] base

 other attached packages:
 tap.misc
 1.0
  





Re: [R] getting informative error messages

2007-05-08 Thread Spencer Graves
Dear Prof. Ripley: 

  1.  I very much appreciate your contributions to the R project. 

  2.  Whether with release 2.4.0 or earlier, I noticed that 
'traceback()' had become less informative.  This loss was more than 
offset when I learned to use the 'debug' function in the 'base' 
package:  It increased my productivity so much that it helped complete 
my transition from S-Plus:  The last few times I had to use S-Plus, I 
ported them to R, got the code working using 'debug', then ported the 
results back to S-Plus.  That was quicker for me than debugging directly 
in S-Plus.

  3.  Thanks again for your many contributions to the R project and 
to my education more generally. 

  Best Wishes,
  Spencer Graves
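
[Editor's sketch, not from the original thread:] one non-interactive way to see what the toy example's error carries is to capture the condition object with tryCatch; as Prof. Ripley notes below, the call it records is the innermost call that set a context (f(1:3), since '[' is a primitive). The interactive tools mentioned in this thread -- debug()/debugonce() and options(error = recover) -- are what actually localize the failing expression.

```r
## Capture the error condition from the toy example and inspect it.
f <- function(x) { a <- 1; y <- x[list(1:3)]; b <- 2; return(y) }
err <- tryCatch(f(1:3), error = function(e) e)
print(conditionCall(err))     # the innermost call with a context: f(1:3)
print(conditionMessage(err))  # the "invalid subscript type" message
## Interactively, debugonce(f); f(1:3) or options(error = recover)
## stops execution where each frame can be inspected.
```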

Prof Brian Ripley wrote:
 It is not clear to me what you want here.

 Errors are tagged by a 'call', and f(1:3) is the innermost 'call' (special 
 primitives do not set a context and so do not count if you consider '[' 
 to be a function).

 The message could tell you what the type was, but it does not and we have 
 lost the pool of active contributors we once had to submit tested patches 
 for things like that.


 On Mon, 7 May 2007, Tony Plate wrote:

   
 Certain errors seem to generate messages that are less informative than
 most -- they just tell you which function an error happened in, but
 don't indicate which line or expression the error occurred in.

 Here's a toy example:

 
 f <- function(x) {a <- 1; y <- x[list(1:3)]; b <- 2; return(y)}
 options(error=NULL)
 f(1:3)
   
 Error in f(1:3) : invalid subscript type
 
 traceback()
   
 1: f(1:3)
 
 In this function, it's clear that the error is in subscripting 'x', but
 it's not always so immediately obvious in lengthier functions.

 Is there anything I can do to get a more informative error message in
 this type of situation?  I couldn't find any help in the section
 Debugging R Code in R-exts (or anything at all relevant in R-intro).

 (Different values for options(error=...) and different formatting of the
 function made no difference.)

 -- Tony Plate

 
 sessionInfo()
   
 R version 2.5.0 (2007-04-23)
 i386-pc-mingw32

 locale:
 LC_COLLATE=English_United States.1252;LC_CTYPE=English_United
 States.1252;LC_MONETARY=English_United
 States.1252;LC_NUMERIC=C;LC_TIME=English_United States.1252

 attached base packages:
 [1] stats graphics  grDevices utils datasets  methods
 [7] base

 other attached packages:
 tap.misc
1.0
 

 





Re: [R] factanal AIC?

2007-05-04 Thread Spencer Graves
Dear R Developers: 

  What is the proper log(likelihood) for 'factanal'?  I believe it 
should be something like the following: 

  (-n/2)*(k*(log(2*pi)+1)+log(det(S)))

or, without the k*(log(2*pi)+1) term: 

  (-n/2)*log(det(S)),

where n = the number of (multivariate) observations. 

  The 'factanal' help file says the factanal component 'criteria' is:  
"The results of the optimization: the value of the negative 
log-likelihood and information on the iterations used." 

  However, I'm not able to get this.  If it's a log(likelihood), 
then replacing a data set m1 by rbind(m1, m1) should double the 
log(likelihood).  However it has no impact on it.  Consider the 
following modification of the first example in the 'factanal' help page: 

  v1 <- c(1,1,1,1,1,1,1,1,1,1,3,3,3,3,3,4,5,6)
  v2 <- c(1,2,1,1,1,1,2,1,2,1,3,4,3,3,3,4,6,5)
  v3 <- c(3,3,3,3,3,1,1,1,1,1,1,1,1,1,1,5,4,6)
  v4 <- c(3,3,4,3,3,1,1,2,1,1,1,1,2,1,1,5,6,4)
  v5 <- c(1,1,1,1,1,3,3,3,3,3,1,1,1,1,1,6,4,5)
  v6 <- c(1,1,1,2,1,3,3,3,4,3,1,1,1,2,1,6,5,4)
  m1 <- cbind(v1,v2,v3,v4,v5,v6)
  tst <- factanal(m1, factors=3)
  fit1 <- factanal(m1, factors=3)
  fit2 <- factanal(rbind(m1, m1), factors=3)
  fit1$criteria["objective"]
objective
0.4755156
  fit2$criteria["objective"]
objective
0.4755156

  From the following example, it seems that the k*(log(2*pi)+1) 
term is omitted: 

  x2 <- c(-1,1)
  X2.4 <- as.matrix(expand.grid(x2, x2, x2, x2))
  factanal(X2.4, 1)$criteria
  objective counts.function counts.gradient
  0   7   7

  However, I can't get the 'fit1$criteria' above, even ignoring the 
sample size.  Consider the following: 

   S3 <- tcrossprod(fit1$loadings)+diag(fit1$uniquenesses)
  log(det(S3))
[1] -6.725685

  Shouldn't this be something closer to the 0.4755 for fit1$criteria? 

  Thanks,
  Spencer Graves

JENS:  For your purposes, I suggest you try to compute 
(-n/2)*log(det(tcrossprod(fit$loadings)+diag(fit$uniquenesses))).  If you 
do this, you might get more sensible results with your examples. 

  Hope this helps. 
  Spencer Graves
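
[Editor's sketch:] one hedged hypothesis consistent with the observations above -- rbind(m1, m1) leaving 'objective' unchanged -- is that 'objective' is the sample-size-free ML discrepancy F = log det(Sigma) + tr(solve(Sigma) %*% S) - log det(S) - p. This is an assumption about factanal's internals, not something the help page states; the check below compares the two numbers on the same m1 example.

```r
## Hypothesis (assumption, not documented): factanal's 'objective' is the
## ML discrepancy F = log|Sigma| + tr(Sigma^{-1} S) - log|S| - p.
v1 <- c(1,1,1,1,1,1,1,1,1,1,3,3,3,3,3,4,5,6)
v2 <- c(1,2,1,1,1,1,2,1,2,1,3,4,3,3,3,4,6,5)
v3 <- c(3,3,3,3,3,1,1,1,1,1,1,1,1,1,1,5,4,6)
v4 <- c(3,3,4,3,3,1,1,2,1,1,1,1,2,1,1,5,6,4)
v5 <- c(1,1,1,1,1,3,3,3,3,3,1,1,1,1,1,6,4,5)
v6 <- c(1,1,1,2,1,3,3,3,4,3,1,1,1,2,1,6,5,4)
m1 <- cbind(v1, v2, v3, v4, v5, v6)
fit1 <- factanal(m1, factors = 3)
Sigma <- tcrossprod(fit1$loadings) + diag(fit1$uniquenesses)
S <- fit1$correlation              # the sample correlation matrix used
p <- ncol(S)
F.disc <- log(det(Sigma)) + sum(diag(solve(Sigma, S))) - log(det(S)) - p
c(objective = unname(fit1$criteria["objective"]), discrepancy = F.disc)
```

If the two numbers agree, a log-likelihood would then require multiplying by -n/2 and restoring the constant terms, which is why 'objective' alone is not a drop-in LL for AIC.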

Jens Oehlschlägel wrote:
 Dear list members,

 Could any expert on factor analysis be so kind to explain how to calculate 
 AIC on the output of factanal. Do I calculate AIC wrong or is 
 factanal$criteria["objective"] not a negative log-likelihood?

 Best regards


 Jens Oehlschlägel



 The AIC values calculated using summary.factanal below don't appear correct to me:

      n items factors total.df rest.df model.df           LL       AIC      AICc       BIC
  1  1000    20       1      210     170       40 -5.192975386  90.38595  93.80618  286.6962
  2  1000    20       2      210     151       59 -3.474172303 124.94834 132.48026  414.5059
  3  1000    20       3      210     133       77 -1.745821627 157.49164 170.51984  535.3888
  4  1000    20       4      210     116       94 -0.120505369 188.24101 207.97582  649.5700
  5  1000    20       5      210     100      110 -0.099209921 220.19842 247.66749  760.0515
  6  1000    20       6      210      85      125 -0.072272695 250.14455 286.18574  863.6140
  7  1000    20       7      210      71      139 -0.054668588 278.10934 323.36515  960.2873
  8  1000    20       8      210      58      152 -0.041708051 304.08342 358.99723 1050.0622
  9  1000    20       9      210      46      164 -0.028686298 328.05737 392.87174 1132.9292
  10 1000    20      10      210      35      175 -0.015742783 350.03149 424.78877 1208.8887
  11 1000    20      11      210      25      185 -0.007007901 370.01402 454.55947 1277.9487
  12 1000    20      12      210      16      194 -0.005090725 388.01018 481.99776 1340.1147


 summary.factanal <- function(object, ...){
   if (inherits(object, "try-error")){
 c(n=NA, items=NA, factors=NA, total.df=NA, rest.df=NA, model.df=NA, 
 LL=NA, AIC=NA, AICc=NA, BIC=NA)
   }else{
 n <- object$n.obs
 p <- length(object$uniquenesses)
 m <- object$factors
 model.df <- (p*m) + (m*(m+1))/2 + p - m^2
 total.df <- p*(p+1)/2
 rest.df <- total.df - model.df # = object$dof
 LL <- -as.vector(object$criteria["objective"])
 k <- model.df
 aic <- 2*k - 2*LL
 aicc <- aic + (2*k*(k+1))/(n-k-1)
 bic  <- k*log(n) - 2*LL
 c(n=n, items=p, factors=m, total.df=total.df, rest.df=rest.df, 
 model.df=model.df, LL=LL, AIC=aic, AICc=aicc, BIC=bic)
   }
 }

 multifactanal <- function(factors=1:3, ...){
   names(factors) <- factors
   ret <- lapply(factors, function(factors){
 try(factanal(factors=factors, ...))
   })
   class(ret) <- "multifactanal"
   ret
 }

 summary.multifactanal <- function(object,...){
   do.call(rbind, lapply(object, summary.factanal))
 }

 print.multifactanal <- function(x,...){
   ret <- summary.multifactanal(x)
   print(ret, ...)
   invisible(ret)
 }

 # simulate a true 4-factor model
 n <- 1000
 ktrue <- 4
 kfac <- 5
 true <- matrix(rnorm(n*ktrue), ncol=ktrue)
 x <- matrix(rep(true, kfac)+rnorm(n*ktrue*kfac

Re: [R] Matlab import

2007-04-19 Thread Spencer Graves
  Have you tried 'getVariable' from Matlab?  I also recently failed 
to get 'readMat' to work for me -- probably because I hadn't saved the 
files using the options Henrik suggested below. 

  Fortunately, I was able to get something like the following to work: 

# Start Matlab server
Matlab$startServer()

# Create a Matlab client object used to communicate with Matlab
matlab <- Matlab()

# Get max info for diagnosis
setVerbose(matlab, -2)

# Confirm that 'matlab' is running
open(matlab)

# Load the raw input data in matFile.mat into Matlab
evaluate(matlab, "load matFile")

# Transfer it to R. 
varInMatfile <- getVariable(matlab, "variableInMatFile")
 
# Done
close(matlab)

  This was with Matlab 7.3.0 (R2006b) and the following versions of 
everything else: 

 sessionInfo()
R version 2.4.1 (2006-12-18)
i386-pc-mingw32

locale:
LC_COLLATE=English_United States.1252;LC_CTYPE=English_United 
States.1252;LC_MONETARY=English_United 
States.1252;LC_NUMERIC=C;LC_TIME=English_United States.1252

attached base packages:
[1] stats graphics  grDevices utils datasets  methods 
[7] base

other attached packages:
R.matlab R.oo
 1.1.3  1.2.7

   Hope this helps. 
  Spencer Graves

Henrik Bengtsson wrote:
 Hi,

 as already mentioned, do not save MAT files in ASCII format but save
 to binary formats, i.e. do *not* use -ascii.  Moreover, from
 ?readMat, you find that:

  From Matlab v7, _compressed_ MAT version 5 files are used by
  default [3]. These are not supported. Use 'save -V6' in Matlab to
  write a MAT file compatible with Matlab v6, that is, to write a
  non-compressed MAT version 5 file. Note: Do not mix up version
  numbers for the Matlab software and the Matlab file formats.

 You haven't told us what version of R you are using (nor what
 version of R.matlab), but from the error message I suspect you are
 using Matlab v7, correct?  If so, try to save with

save('test.mat', 'matrixM', '-V6')

 and tell us if it works.

 Cheers

 Henrik


 On 4/10/07, Schmitt, Corinna [EMAIL PROTECTED] wrote:
   
 Hallo,

 
 With readMat, don't use the -ascii option (which you didn't have in your
 first posting). I've never tried reading matlab's ascii format. In any 
 case,
 readMat reads matlab's binary format.
   
 - Tom
   
 I did the saving again without 'ascii' option but the import also did not 
 work. I get the following error message:

 
 library(R.matlab)
 mats <- readMat("Z:/Software/R-Programme/test2.dat")
   
 Error in list(readMat("Z:/Software/R-Programme/test2.dat") = 
 environment,  :

 [2007-04-10 14:57:52] Exception: Tag type not supported: miCOMPRESSED
   at throw(Exception(...))
   at throw.default("Tag type not supported: ", tag$type)
   at throw("Tag type not supported: ", tag$type)
   at readMat5DataElement(this)
   at readMat5(con, firstFourBytes = firstFourBytes, maxLength = maxLength)
   at readMat.default("Z:/Software/R-Programme/test2.dat")
   at readMat("Z:/Software/R-Programme/test2.dat")
 
 Any further idea,
 Corinna

 


 Schmitt, Corinna wrote:
 
 Hallo,

   
 I've used Henrik Bengtsson's R.matlab package several times to
 
 successfully
 
 read in matlab data files. It's normally as easy as:
 
 library(R.matlab)
 mats <- readMat("matrixM.txt")
 
 - Tom
 
 I have imported this package, too. And tried your commands with the new
 txt-file as mentioned in my last mail to the mailing list.

 I get the following error message:
 mats = readMat("Z:/Software/R-Programme/test.dat")
 Error in if (version == 256) { : argument has length 0
 In addition: Warning message:
 Unknown endian: . Will assume Bigendian. in: readMat5Header(this,
 firstFourBytes = firstFourBytes)

 What did I do wrong? Please check my txt-file, which was constructed with the
 matlab command save('test.dat','matrixM','-ascii')

 Thanks, Corinna

 


 Schmitt, Corinna wrote:
   
 Dear R-Experts,

 here again a question concerning matlab. With the command matrixM=[1 2
 3;4 5 6] a matrix under Matlab was constructed. It was than stored with
 the command save('matrixM.txt','matrixM').

 Now I tried to import the data in R with the help of the command
 Z=matrix(scan("Z:/Software/R-Programme/matrixM.txt"))

 An error occurred.

 The result should be a matrix with the entries as mentioned above.
 Perhaps I made already an error in matlab.

 Has anyone got an idea how to import the data and store it in R? In R I
 want to make further calculations with the matrix. I just installed
 R.matlab but could not find an example which could help me.

 Thanks, Corinna

 MATLAB 5.0 MAT-file, Platform: PCWIN, Created on: Tue Apr 10 13:17:44
 2007
 [binary MAT-file content, not displayable as text]
   

Re: [R] Matlab import

2007-04-10 Thread Spencer Graves
  Have you considered exporting from Matlab using the '-ascii' 
option, then reading it with 'read.table', after perhaps checking the 
first few rows using, e.g., 'readLines', and confirming that all records 
have the same length using 'count.fields'? 

  Hope this helps. 
  Spencer Graves
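
[Editor's sketch of the pipeline suggested above:] eyeball the raw rows with readLines, confirm rectangular records with count.fields, then read with read.table. The file "matrixM.txt" here is a hypothetical stand-in for what Matlab's save(...,'-ascii') would produce.

```r
## Create a stand-in for the hypothetical ASCII export from Matlab's
## save('matrixM.txt','matrixM','-ascii'), then apply the checks.
writeLines(c(" 1 2 3", " 4 5 6"), "matrixM.txt")
head(readLines("matrixM.txt"))                  # step 1: inspect raw rows
stopifnot(length(unique(count.fields("matrixM.txt"))) == 1)  # step 2: rectangular?
M <- as.matrix(read.table("matrixM.txt"))       # step 3: read as numeric matrix
dim(M)
```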

Schmitt, Corinna wrote:
 Hallo,

   
 With readMat, don't use the -ascii option (which you didn't have in your
 first posting). I've never tried reading matlab's ascii format. In any case,
 readMat reads matlab's binary format. 
 

   
 - Tom
 

 I did the saving again without 'ascii' option but the import also did not 
 work. I get the following error message:

   
  library(R.matlab)
  mats <- readMat("Z:/Software/R-Programme/test2.dat")
 
  Error in list(readMat("Z:/Software/R-Programme/test2.dat") = 
  environment,  : 
 
 [2007-04-10 14:57:52] Exception: Tag type not supported: miCOMPRESSED
   at throw(Exception(...))
    at throw.default("Tag type not supported: ", tag$type)
    at throw("Tag type not supported: ", tag$type)
    at readMat5DataElement(this)
    at readMat5(con, firstFourBytes = firstFourBytes, maxLength = maxLength)
    at readMat.default("Z:/Software/R-Programme/test2.dat")
    at readMat("Z:/Software/R-Programme/test2.dat")
   


 Any further idea,
 Corinna

 


 Schmitt, Corinna wrote:
   
 Hallo,

 
 I've used Henrik Bengtsson's R.matlab package several times to
   
 successfully
   
 read in matlab data files. It's normally as easy as:
   
  library(R.matlab)
  mats <- readMat("matrixM.txt")
   
 - Tom
   
 I have imported this package, too. And tried your commands with the new
 txt-file as mentioned in my last mail to the mailing list. 

  I get the following error message:
  mats = readMat("Z:/Software/R-Programme/test.dat")
  Error in if (version == 256) { : argument has length 0
  In addition: Warning message:
  Unknown endian: . Will assume Bigendian. in: readMat5Header(this,
  firstFourBytes = firstFourBytes) 

  What did I do wrong? Please check my txt-file, which was constructed with the
  matlab command save('test.dat','matrixM','-ascii')

 Thanks, Corinna

 


 Schmitt, Corinna wrote:
 
 Dear R-Experts,

 here again a question concerning matlab. With the command matrixM=[1 2
 3;4 5 6] a matrix under Matlab was constructed. It was than stored with
 the command save('matrixM.txt','matrixM').

 Now I tried to import the data in R with the help of the command
  Z=matrix(scan("Z:/Software/R-Programme/matrixM.txt"))

 An error occurred. 

 The result should be a matrix with the entries as mentioned above.
 Perhaps I made already an error in matlab. 

  Has anyone got an idea how to import the data and store it in R? In R I
  want to make further calculations with the matrix. I just installed
  R.matlab but could not find an example which could help me.

 Thanks, Corinna

 MATLAB 5.0 MAT-file, Platform: PCWIN, Created on: Tue Apr 10 13:17:44
 2007 
  [binary MAT-file content, not displayable as text]
 


   




 





Re: [R] R in Industry

2007-02-08 Thread Spencer Graves
  As Duncan indicated, I think R wins overwhelmingly  on this point: 

  What should you do if a key software vendor decides to increase 
their license fees beyond reason or obsolete a key product that burdens 
you with excessive transition costs?  Similarly, what do you do if you 
want to migrate a special application from some obscure operating system 
onto Windows or Linux, or you need some enhancements that should be 
minor but your vendor wants an excessive fee for that service?  If they 
see you as the only customer for a certain modification, their fees may 
be reasonable from their perspective. 

  With R, you can get the source code, so adapting it, modifying it, 
etc., should rarely be a problem.  With commercial software, you almost 
never get the source code, and you should consult attorneys before 
attempting to code something required to escape from a vendor whose fee 
structure is becoming prohibitive.  In many situations, just analyzing 
the legal issues could cost you more than paying someone to modify R 
code to support your changing needs. 

  Spencer Graves

Charilaos Skiadas wrote:
 On Feb 8, 2007, at 12:48 PM, Ben Fairbank wrote:
   
  If my company
 came to depend heavily on a fairly obscure R package (as we are
 contemplating doing), what guarantee is there that it will be  
 available
 next month/year/decade?  I know of none, nor would I expect one.
 

 I would imagine that if there was a package that really needed  
 updating, then your company could hire an R programmer for a short  
 time to fix whatever needs fixing, and that would be a much smaller  
 expense than licensing an expensive package like those other ones out  
 there.

 But perhaps I am completely wrong in this, I am relatively far from  
 the industry world.

 Haris





Re: [R] Rating competitors

2006-12-26 Thread Spencer Graves
  Have you considered Bradley-Terry models?   RSiteSearch("bradley", 
"functions") just returned 31 hits for me. 

  Hope this helps. 
  Spencer Graves
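
[Editor's sketch, hedged:] the Bradley-Terry idea can be fit with plain logistic regression, which addresses the poster's concern about arbitrary, rarely repeated pairings: each encounter is a row with +1 for the first competitor and -1 for the second, and the response is whether the first one won. One competitor's column is dropped to fix the rating scale. The names and true strengths below are simulated, not from any real data.

```r
## Bradley-Terry ratings via logistic regression on a +1/-1 design.
set.seed(1)
players <- c("A", "B", "C", "D")
skill <- c(A = 0, B = 0.5, C = 1, D = 1.5)   # true log-strengths (simulated)
games <- t(combn(players, 2))                # every pairing once
games <- games[rep(1:nrow(games), 50), ]     # replay each pairing 50 times
eta <- skill[games[, 1]] - skill[games[, 2]]
y <- rbinom(nrow(games), 1, plogis(eta))     # 1 = first-listed player won
X <- matrix(0, nrow(games), length(players), dimnames = list(NULL, players))
X[cbind(1:nrow(games), match(games[, 1], players))] <- 1
X[cbind(1:nrow(games), match(games[, 2], players))] <- -1
fit <- glm(y ~ X[, -1] - 1, family = binomial)  # drop "A" to fix the scale
coef(fit)   # estimated strengths of B, C, D relative to A
```

Uneven or one-off pairings simply become rows with different +1/-1 patterns; the design matrix handles them without any balanced schedule.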

Jeff Newmiller wrote:
 I am looking for hints on how to estimate ratings for competitors
 in an ongoing pairwise competition using R... my particular area of
 interest being the game of Go, but the idea of identifying ratings
 (on a continuous scale) rather than relative rankings seems easily
 generalized to other competitions so I thought someone might be
 studying something related already.

 I presume the rating of a competitor would be best modeled as a random
 variate on the rating scale, and an encounter between two
 competitors would be represented by a binary result.  Logistic regression
 seems promising, but I am at a loss how to represent the model since
 the pairings are arbitrary and not necessarily repeated often.

 I have read about some approaches to estimating ratings for Go,
 but they seem to involve optimization using assumed distributions
 rather than model fitting which characterizes analysis in R.

 Does any of this sound familiar? Suggestions for reading, anyone?





Re: [R] Find S4 Generic?

2006-12-18 Thread Spencer Graves
Dear Prof. Ripley: 

  Thanks very much.  That works.  I got stuck on the help page for 
dumpMethods and failed to check See Also. 

  Best Wishes,
  Spencer Graves
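
[Editor's sketch:] for readers without distrEx installed, the getMethod/getMethods route Prof. Ripley points to can be illustrated with a toy S4 generic; "E2" below is a stand-in, not distrEx's E.

```r
## Toy illustration of retrieving S4 method code (E2 is a stand-in
## generic; distrEx's E is not loaded here).
library(methods)
setGeneric("E2", function(object, ...) standardGeneric("E2"))
setMethod("E2", "numeric", function(object, ...) mean(object))
getMethod("E2", "numeric")   # prints the method's source code
E2(c(1, 2, 3, 4))
```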

Prof Brian Ripley wrote:
 Do you want E (type 'E') or its methods (getMethods(E) works for me)?

 On Sun, 17 Dec 2006, Spencer Graves wrote:

  How can I get the R code for E in the distrEx package?  The
 function call 'dumpMethods("E", "E.R")' created E.R in the working
 directory.  Unfortunately, it apparently contains 0 bytes.

 See ?dumpMethods: you need to specify 'where' (as it says you may).




[R] Find S4 Generic?

2006-12-17 Thread Spencer Graves
  How can I get the R code for E in the distrEx package?  The 
function call 'dumpMethods("E", "E.R")' created E.R in the working 
directory.  Unfortunately, it apparently contains 0 bytes. 

  Thanks,
  Spencer Graves




Re: [R] DOE teaching suggestions?

2006-12-11 Thread Spencer Graves
Hi, Erin:

  Also, have you seen the BHH2 package, companion to Box, Hunter and 
Hunter (2005) Statistics for Experimenters, 2nd ed. (Wiley)?

  Hope this helps.
  Spencer Graves

  Are you planning to have them design and conduct an actual
physical experiment as part of the class?  You may know that Bill Hunter
(the second Hunter of Box, Hunter & Hunter) wrote articles about doing
this, and I found it extremely helpful.   Things happen with real
physical experiments that can't be duplicated with any kind of computer
simulation.

  I think I've gotten good results assigning team projects of their
own choosing.  I found it necessary to have design review
presentations in the middle of the class.  These presentations give you
feedback on their understanding of the class material to that date.
They also give you an opportunity to suggest improvements before they
actually do the experiment.

  This is not what you asked, but I hope you find it useful, anyway.
  Best Wishes,
  Spencer Graves

Erin Hodgess wrote:
 Dear R People:

 I will be teaching an undergraduate Design of Experiments class
 in the Spring Semester.  It will be very much an applied course.

 My question, please:  has anyone used R for a course like this, please?

 I've tried Rcmdr for a regression course and just plain command
 line for a time series course.

 Should I use Rcmdr, or teach them to use the command line, OR is there
 something else, please?

 Thanks in advance!

 Sincerely,
 Erin Hodgess
 Associate Professor
 Department of Computer and Mathematical Sciences
 University of Houston - Downtown
 mailto: [EMAIL PROTECTED]





Re: [R] DOE teaching suggestions?

2006-12-11 Thread Spencer Graves
Hi, Erin:

  Also, have you seen the BsMD package (Bayes Screening and Model 
Discrimination), also discussed in Box, Hunter and Hunter (2005).

  Spencer Graves
##
  Also, have you seen the BHH2 package, companion to Box, Hunter and
Hunter (2005) Statistics for Experimenters, 2nd ed. (Wiley)?

  Hope this helps.
  Spencer Graves

  Are you planning to have them design and conduct an actual
physical experiment as part of the class?  You may know that Bill Hunter
(the second Hunter of Box, Hunter & Hunter) wrote articles about doing
this, and I found it extremely helpful.   Things happen with real
physical experiments that can't be duplicated with any kind of computer
simulation.

  I think I've gotten good results assigning team projects of their
own choosing.  I found it necessary to have design review
presentations in the middle of the class.  These presentations give you
feedback on their understanding of the class material to that date.
They also give you an opportunity to suggest improvements before they
actually do the experiment.

  This is not what you asked, but I hope you find it useful, anyway.
  Best Wishes,
  Spencer Graves

Erin Hodgess wrote:
 Dear R People:

 I will be teaching an undergraduate Design of Experiments class
 in the Spring Semester.  It will be very much an applied course.

 My question, please:  has anyone used R for a course like this, please?

 I've tried Rcmdr for a regression course and just plain command
 line for a time series course.

 Should I use Rcmdr, or teach them to use the command line, OR is there
 something else, please?

 Thanks in advance!

 Sincerely,
 Erin Hodgess
 Associate Professor
 Department of Computer and Mathematical Sciences
 University of Houston - Downtown
 mailto: [EMAIL PROTECTED]





Re: [R] Help with response CIs for lme

2006-12-04 Thread Spencer Graves
  No, your example computation does NOT produce the desired lme 
prediction intervals.  I ran your script and got exactly the same 
numbers for upper and lower limits.  Even without any consideration of 
statistical theory, this suggests either shockingly precise statistical 
estimation or a problem with your algorithm. 

  The theory behind such intervals is sufficiently complicated that 
the 'nlme' developers have not found a need sufficient to justify the 
effort required to develop the required capability.  A reply by James 
Rogers to a similar question a couple of years ago concluded, It is not 
easy to write such a function for the general case, but it may be 
relatively easy to write your own for special cases of lme models. 
(http://finzi.psych.upenn.edu/R/Rhelp02a/archive/42781.html)

  To briefly outline some of the difficulties, we first should ask 
if you want confidence intervals for the MEAN of future values or for 
the future values themselves?  And how do you want to handle the random 
effects?  If you want the mean of the fixed effects, averaging over 
random effects, that should be fairly easy.  Consider, for example, the 
following modification of the example on the 'lme' help page: 

fm1 <- lme(distance ~ age, data = Orthodont) # random is ~ age
summary(fm1)
snip

Fixed effects: distance ~ age
Value Std.Error DF   t-value p-value
(Intercept) 16.76 0.7752461 80 21.620375   0
age  0.660185 0.0712533 80  9.265334   0
 Correlation:
(Intr)
age -0.848

  From the Std.Error and Correlation, we could reconstruct the 
covariance matrix of the parameter estimates, b.hat.  Call this S.b.  
Then the estimate for Ey for some new set of covariates coded in a row 
vector x' is var(Ey.hat) = x' S.b x.  The square root of this number is 
a standard deviation, and you could add and subtract some multiple like 
2 of this number from x' b.hat to get the desired confidence interval. 

  If I wanted to do that myself, I might read the code for 
summary.lme, after using getAnywhere(summary.lme) to get it. 

  I know this doesn't solve your problem, but I hope it helps.  
  Spencer Graves
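
[Editor's sketch of the x' S.b x computation above, for the Orthodont example:] this gives an interval for the fixed-effects mean only, ignoring the random effects and the uncertainty in the estimated variance components, exactly the easy special case described. It assumes the vcov() and fixef() accessors rather than reading summary.lme's internals.

```r
## Rough interval for E[distance] at age = 11, fixed effects only.
library(nlme)
fm1 <- lme(distance ~ age, data = Orthodont)
x <- c(1, 11)                        # covariate row: intercept, age = 11
S.b <- vcov(fm1)                     # covariance of the fixed-effect estimates
est <- drop(x %*% fixef(fm1))        # x' b.hat
se <- sqrt(drop(t(x) %*% S.b %*% x)) # sqrt(x' S.b x)
est + c(lower = -2, upper = 2) * se  # approximate 95% interval for the mean
```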

Michael Kubovy wrote:
 Hi,

 Can someone please offer a procedure for going from CIs produced by  
 intervals.lme() to fixed-effects response CIs.

 Here's a simple example:

 library(mlmRev)
 library(nlme)
 hsb.lme <- lme(mAch ~ minrty * sector, random = ~ 1 | cses, Hsb82)
 (hsb.int <- intervals(hsb.lme))
 (hsb.new <- data.frame(
  minrty = rep(c('No', 'Yes'), 2),
  sector = rep(c('Public', 'Catholic'), each = 2)))
 cbind(hsb.new, predict(hsb.lme, hsb.new, level = 0))

 Is the following correct (I know from the previous command that the  
 estimate is correct)?
 cbind(hsb.new, rbind(hsb.int[[1]][1,], hsb.int[[1]][1,]+hsb.int[[1]] 
 [2,], hsb.int[[1]][1,]+hsb.int[[1]][3,], hsb.int[[1]][1,]+hsb.int[[1]] 
 [2,] + hsb.int[[1]][3,] + hsb.int[[1]][4,]))

 If so, is there an easier way to write it?
 _
 Professor Michael Kubovy
 University of Virginia
 Department of Psychology
 USPS: P.O.Box 400400Charlottesville, VA 22904-4400
 Parcels:Room 102Gilmer Hall
  McCormick RoadCharlottesville, VA 22903
 Office:B011+1-434-982-4729
 Lab:B019+1-434-982-4751
 Fax:+1-434-982-4766
 WWW:http://www.people.virginia.edu/~mk9y/





Re: [R] Quadratic Optimization

2006-12-02 Thread Spencer Graves
Hi, Prof. Ripley: 

snip


 But that is a single equality quadratic constraint, and I believe 
 'quadratic constraints' (note, plural) conventionally means multiple 
 inequality constraints.  That meaning is a hard problem that needs 
 specialized software (most likely using interior-point methods).

SpG   Maximize a'x subject to x'Ax=c.

 Not I believe the usual meaning (nor what Googling 'quadratic 
 constraints' came up with for me).

Thanks for the clarification.  Spencer Graves



Re: [R] lme function

2006-12-01 Thread Spencer Graves
  RSiteSearch("lme spatial correlation", "functions") produced 10 
hits for me just now.  The sixth title on that list was "spatial 
correlation structure" 
(http://finzi.psych.upenn.edu/R/library/nlme/html/corSpher.html).  This 
is the help page for the corSpher function.  The Examples section 
there includes references to selected pages in Pinheiro and Bates (2000) 
Mixed-Effects Models in S and S-Plus (Springer), which  for me is 
essential documentation for 'lme' and is the best book I know on 
mixed-effects models generally.  The value of that book is greatly 
enhanced by the availability of script files ch01.R, ch02.R, ..., 
ch06.R, ch08.R (in the ~R\library\nlme\scripts subdirectory of 
your R installation directory).  These contain R code to reproduce all 
the data analyses in the book.  There are a very few cases where the 
syntax is different between R and that documented in the book [e.g., x^2 
must be I(x^2)].  Before I found the script files, I couldn't understand 
why I got substantially different results from the book when just typing 
the commands into R. 

   Hope this helps. 
  Spencer Graves

Mark Wilson wrote:
 Hello.

 As advised by Mick Crawley in his book on S+, I'm trying to use the lme
 function to examine a linear relationship between two variables measured at
 60 locations in 12 sites, while taking account of any spatial
 autocorrelation (i.e. similarity in variation between the two variables that
 is due to site). I am using the function as follows:

 model <- lme(yvariable ~ xvariable, random = ~ xvariable | site)

 If you know your way around this function, I would be very grateful if you
 could confirm that this approach is a valid one, or point out why it isn't.
 I'd also be very keen to hear any suggestions regarding alternative ways to
 address the spatial autocorrelation in my data (I'm hoping to arrive at a
 slightly more elegant solution than simply taking site averages for each of
 the two variables and running a correlation using these mean values).

 Thanks,

 Mark




Re: [R] Quadratic Optimization

2006-12-01 Thread Spencer Graves
  Unless I'm missing something, optimizing a linear function with 
quadratic constraints is almost trivial with Lagrange multipliers. 

  Maximize a'x subject to x'Ax = c. 

  S = Lagrange objective = a'x + lam*(x'Ax - c). 
  dS/dx = a + 2*lam*Ax, which must vanish at the optimum. 
 
  So x is proportional to x1 = solve(A, a). 

  Rescaling to satisfy the constraint, x = x1*sqrt(c/(x1'Ax1)). 

  In R, you need to know that t() is the transpose function for a matrix. 
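
  A short sketch of this closed-form recipe (assuming A is symmetric 
positive definite; the numbers are illustrative, not from the original 
question):

```r
## Maximize a'x subject to x'Ax = c0, via the Lagrange solution above.
set.seed(1)
A  <- crossprod(matrix(rnorm(16), 4, 4))  # a random SPD matrix
a  <- rnorm(4)
c0 <- 2
x1 <- solve(A, a)                         # the optimal direction
x  <- x1 * sqrt(c0 / drop(t(x1) %*% A %*% x1))  # rescale onto x'Ax = c0
drop(t(x) %*% A %*% x)                    # recovers c0 up to rounding
```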

  I thought I had seen mention of a contributed package for 
optimization with nonlinear constraints.  However, you don't need that 
here. 

  In case this does not solve your problem, my crude engineer's 
approach to constrained optimization includes the following: 

  (1) Find transformation to send the constraints to +/-Inf. 

  (2) If that fails, add the constraints as a penalty.  Start with a 
low penalty and gradually increase it if necessary until you solve the 
problem.  Of course, to do this, you have to make sure your objective 
function returns valid numbers outside the constrained region. 
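
  A crude sketch of the penalty idea in (2), on a toy version of the 
same problem (all values illustrative):

```r
## Maximize a'x subject to x'Ax = c0 by penalizing constraint
## violation, refitting with an increasing penalty and warm starts.
A <- diag(2); a <- c(1, 2); c0 <- 1
obj <- function(x, pen)
  -sum(a * x) + pen * (drop(t(x) %*% A %*% x) - c0)^2
x0 <- c(1, 0)
for (pen in c(1, 10, 100, 1000))
  x0 <- optim(x0, obj, pen = pen)$par  # warm start from previous fit
x0  # approaches the constrained optimum as pen grows
```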

  How's this? 
  Spencer Graves

amit soni wrote:
 Hi,

 I need to solve an optimization problem in R having linear objective function 
 and quadratic constraints(number of variables is around 80).  What are the 
 possible choices to do this in R.
 optim() function only allows box constrained problems. Is it possible in 
 nlm()? Or please tell me if there is any other routine.

 Thanks

 Amit




  
 


   [[alternative HTML version deleted]]





Re: [R] help

2006-11-30 Thread Spencer Graves
  In case you haven't already received an adequate reply (which I 
haven't seen) or figured this out on your own, I will comment.  Consider 
the following modifications of an example in the 'lmer' documentation: 

(fm0.0 <- lmer(Reaction ~ (1|Subject), sleepstudy))
(fm0.1 <- lmer(Reaction ~ 1 + (1|Subject), sleepstudy))
(fm0.s <- lmer(Reaction ~ Subject + (1|Subject), sleepstudy))

  The first two models are equivalent, as can be seen from looking 
at the output.  In the formula language, something like Reaction~X 
means to estimate an intercept plus an X effect.  If you want a 
no-constant model, you must specify Reaction ~ -1+X.  When X is a 
factor, Reaction ~ -1+X effectively fits the same model as 
Reaction~X using an alternative parameterization.  If X is numeric, 
Reaction~X means estimate b0 and b1 in Reaction = b0 + b1*X + error.  
Meanwhile, Reaction ~ -1+X means estimate only b1 in Reaction = b1*X + 
error.  In this latter case, the introduction of -1 actually changes 
the model. 
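
  A tiny illustration of these formula conventions, using 'lm' on 
simulated data for simplicity (the same rules apply in 'lmer'):

```r
## y = 3 + 2*x + noise; compare the default and no-constant fits.
set.seed(42)
x <- 1:20
y <- 3 + 2 * x + rnorm(20)
coef(lm(y ~ x))       # estimates b0 (intercept) and b1
coef(lm(y ~ -1 + x))  # estimates b1 only; the intercept is dropped
```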

  The third model Reaction~Subject+(1|Subject) is a confusion:  The 
~Subject part asks lmer to estimate a separate parameter for each 
Subject.  The (1|Subject) term asks lmer to estimate the standard 
deviation for between-Subject random variability after the fixed effects 
are removed.  Since Subject is also listed as a fixed effect in this 
model, the model is overparameterized:  I'm not certain, but it appears 
to me that the software doesn't know whether to allocate 
subject-specific deviations from the overall mean to the fixed effects 
coefficients or the random effect, and it appears to do a little of both.

  It might be nice if 'lmer' included a check for factors appearing 
as both fixed and random effects.  However, I believe that 'lme4' (and R 
more generally) is primarily a research platform for new statistical 
algorithm development.  Most of the R Core Team work to maintain R under 
the GNU license primarily because it supports their research (and 
educational) objectives.  The product therefore may not strive to be as 
supportive for naive users as commercial software. 

  Hope this helps. 
  Spencer Graves

Aimin Yan wrote:
 consider p as random effect with 5 levels, what is difference between these 
 two models?

   p5.random.p <- lmer(Y 
 ~p+(1|p),data=p5,family=binomial,control=list(usePQL=FALSE,msV=1))
   p5.random.p1 <- lmer(Y 
 ~1+(1|p),data=p5,family=binomial,control=list(usePQL=FALSE,msV=1))

 thanks,

 Aimin Yan





Re: [R] problem with the plm package (2)

2006-11-29 Thread Spencer Graves
  I haven't seen any replies to this, so I will offer some 
suggestions even though I'm not familiar with 'plm'. 

  1.  The standard formula syntax in S-Plus and R is that y ~ x is 
a request to estimate b0 and b1 in y = b0+b1*x+error.  Since this 
already includes 1, it is essentially equivalent to y~1+x.  To 
estimate a noconstant model, use something like y~x-1. 

  2.  Have you tried an email directly to the official plm 
maintainer?  An email address for this can be found using 
help(package=plm). 

  3.  Panel data is a special case of mixed effects data.  The best 
software I know for that is the 'nlme' package, which is part of the 
standard R distribution.  The best book on the subject that I know is 
Pinheiro and Bates (2000) Mixed-Effects Models in S and S-Plus 
(Springer).  The ~R\library\nlme\scripts subdirectory of your R 
installation directory contains files with names like ch01.R, 
ch02.R, etc.  These provide R scripts for producing the analyses in 
the book.  You can make local copies of these scripts and work through 
them line by line, trying different things, etc.  The R syntax is 
slightly different from that in the book, e.g., x^2 in a formula must be 
specified as I(x^2).  These differences are overall very minor and easy 
to learn.  However, using exactly what appears in the book without 
modification will in a few cases be misinterpreted, leading to very 
different results.  I had a difficult time understanding what was 
happening until I found this. 

  Hope this helps.
  Spencer Graves

Giangiacomo Bravo wrote:
 Thanks a lot for the quick answerd I received and that helped me to 
 solve my first (simple) problem with the plm package.

 Unfortunatley, here's another one:
 I do not know why, but I'm unable to estimate a simple model in the 
 form y ~  1 + x

 When I try
   zz <- plm(y ~ 1 + x, data=mydata)
 I obtain
 Errore in X.m[, coef.within, drop = F] : numero di dimensioni errato
 which means that the number of dimensions is wrong in X.m[, 
 coef.within, drop = F].

 However, I have no problem to estimate a more complex model, e.g.
   zz <- plm(y ~ 1 + x + I(x^2), data=mydata)
 in this case everything is ok.

 What's wrong?

 Thank you,
 Giangiacomo





Re: [R] My own correlation structure with nlme

2006-11-25 Thread Spencer Graves
  I haven't seen a reply to this post, so I'll offer some comments, 
even though I can't answer the question directly. 

  1.  Given your question, I assume you have consulted Pinheiro and 
Bates (2000) Mixed-Effects Models in S and S-Plus (Springer).  If you 
have not, I'm confident you will find material relevant to your question 
there, especially in chapters 6-8. 

  2.  Are you aware that the standard R installation includes a 
subdirectory ~R\library\nlme\scripts, which contains copies of R 
commands to create all the analyses in the book?  In particular, 
ch06.R and ch08.R might be particularly relevant to your question.  
If you have not made local copies of these and walked through the code 
line by line, I suggest you do so.  I've learned a lot doing that. 

  3.  Which versions of R and 'nlme' are you using?  Some minor 
enhancements to the help files were added a few months ago, and there 
might be something helpful in the current examples that wasn't there 
before. 

  4.  What have you done to develop simple toy examples to help you 
test your understanding of different aspects of the code?  This 
technique has helped me solve many problems. 

  5.  Are you familiar with the use of the methods command to 
trace logic flow?  Consider for example the following: 

 > methods(corMatrix)
 [1] corMatrix.corAR1*  corMatrix.corARMA* corMatrix.corCAR1*   
 [4] corMatrix.corCompSymm* corMatrix.corIdent*corMatrix.corNatural*
 [7] corMatrix.corSpatial*  corMatrix.corStruct*   corMatrix.corSymm*   
[10] corMatrix.pdBlocked*   corMatrix.pdCompSymm*  corMatrix.pdDiag*
[13] corMatrix.pdIdent* corMatrix.pdMat*   corMatrix.reStruct*  

   Non-visible functions are asterisked

  6.  Are you familiar with using getAnywhere to obtain code that 
may be hidden with namespaces?  For example, I just learned that 
'corAR1' and 'corMatrix.corAR1' are two distinct functions.  I found the 
latter with this methods command, and I got the code to it using 
getAnywhere. 
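
  Points 5 and 6 together look like this at the console (a small 
illustration, using the 'nlme' generic mentioned above):

```r
## Find the S3 methods for a generic, then retrieve one that is
## hidden in the nlme namespace (marked with '*' by methods()).
library(nlme)
methods(corMatrix)             # lists corMatrix.corAR1*, etc.
getAnywhere(corMatrix.corAR1)  # prints the non-visible method's code
```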

  7.  Are you familiar with the 'debug' command (in the 'base' 
package, not the 'debug' package, which is probably much more powerful 
but I haven't used the latter)?  This allows a user to trace through 
code line by line, examining the contents of objects -- and even 
modifying them if you want.  This is an extremely powerful way to learn 
what a piece of code does. 
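
  A minimal example of turning this tracing on and off (the stepping 
itself happens interactively):

```r
## Flag a function for line-by-line debugging with base R's debug().
f <- function(x) {
  s <- sum(x)
  s / length(x)
}
debug(f)     # the next call to f() will be stepped through line by line
## f(1:4)    # interactive: 'n' steps, objects can be inspected, 'Q' quits
undebug(f)   # turn tracing back off
```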

  8.  If the above does not produce the answers you seek with a 
reasonable additional effort, please submit another post.  To increase 
your chances of getting replies that are quicker and more helpful than 
this one, please include commented, minimal, self-contained, 
reproducible code.  I'm confident that I could have helped you more with 
less effort if your 'pairCorr' code had included a sample call that 
I could copy from your email into R and see if I get the same error 
message you got.  If I failed to get the same error as you saw, that 
suggests a problem in your installation.  If I got the same error 
message, there is a good chance that I could figure out how to get around that 
and provide more helpful suggestions in less time than I've been 
thinking about this. 

  Hope this helps. 
  Spencer Graves

Mohand wrote:
 Dear all,

 I am trying to define my own corStruct which is different from the 
 classical one available in nlme. The structure of this correlation is 
 given below.
 I am wondering to know how to continue with this structure by using 
 specific functions (corMatrix, getCovariate, Initialize,...) in order to 
 get a structure like corAR1, corSymm which will be working for my data.

 Thanks  in advance.

 Regards

 M. Feddag




 *Correlation structure _
 _*


  pairCorr <- function(A, B, lowerTriangle = TRUE){
    n <- length(A)
    m <- length(B)
    if (n != m) stop("A and B must have same length")
    result <- matrix(0, n, n)
    same.A <- outer(A, A, "==")
    same.B <- outer(B, B, "==")
    A.matches.B <- outer(A, B, "==")
    B.matches.A <- outer(B, A, "==")
    result <- result + 0.5*same.A + 0.5*same.B -
   0.5*A.matches.B - 0.5*B.matches.A
    rownames(result) <- colnames(result) <- paste(A, B, sep = "")
    if (lowerTriangle) result <- result[row(result) > col(result)]
    result
  }





Re: [R] Nonlinear statistical modeling -- a comparison of R and AD Model Builder

2006-11-24 Thread Spencer Graves
Hi, Mike  Dave: 

  Have you considered nonlinear mixed effects models for the types 
of problems considered in the comparison paper you cite?  Those 
benchmark trials consider T years of data ... for A age classes and 
the total number of parameters is m = T+A+5.  Without knowing more 
about the problem, I suspect that the T year parameters and the A age 
class parameters might be better modeled as random effects.  If this 
were done, the optimization problem would then involve 7 parameters, the 
5 fixed-effect parameters suggested by the computation of m plus two 
variance parameters, one for the random year effects and another for 
the random age class effect.  This would replace the problem of 
maximizing, e.g., a likelihood over T+A+5 parameters with one of 
maximizing a marginal likelihood over 2+5 parameters after integrating 
out the T and A random effects. 

  These integrations may not be easy, and I might stick with the 
fixed-effects solution if I couldn't get answers in the available time 
using a model I thought would be theoretically more appropriate.  Also, 
I might use the fixed-effects solution to get starting values for an 
attempt to maximize a more appropriate marginal likelihood.  For the 
latter, I might first try 'nlme'.  If that failed, I might explore 
Markov Chain Monte Carlo (MCMC).  I have not done MCMC myself, but the 
MCMCpack R package looks like it might make it feasible for the types 
of problems considered in this comparison.  The CRAN summary of that 
package led me to an Adobe Acrobat version of a PPT slide presentation 
that seemed to consider just this type of problem (e.g., 
http://mcmcpack.wustl.edu/files/MartinQuinnMCMCpackslides.pdf). 

  Have you considered that? 
  Hope this helps. 
  Spencer Graves

Mike Prager wrote:
 dave fournier [EMAIL PROTECTED] wrote:

   
 I  think that many R users underestimate the numerical challenges
 that some of the typical nonlinear statistical model used in different
 fields present. R may not be a suitable platform for development for
 such models.

 Around 10 years ago John Schnute, Laura Richards, and Norm Olsen
 with Canadian federal fisheries undertook an investigation
 comparing various statistical modeling packages for a simple
 age-structured statistical model of the type commonly used in
 fisheries. [...] It is possible
 to produce a working model with the present day version of R so that
 R can now be directly compared with AD Model Builder for this type of model.

 The results are that AD Model builder is roughly 1000 times faster than
 R for this problem. ADMB takes about 2 seconds to converge while
 R takes over 90 minutes.
 

 Our group's experiences reflect, at least qualitatively, what
 Dave says above.  We use R for analyzing results from models
 written in his AD Model Builder, and a couple of years ago, we
 started programming one of our models directly in R.  We quickly
 abandoned that idea because of lengthy execution time under R.
 That is not a judgement of either piece of software.  R and ADMB
 are designed for different types of task, and it seems to me
 that they complement each other well.

 That experience was in part the genesis of our X2R software (now
 at CRAN -- pardon the plug), which saves results from ADMB
 models into a format that R can read as a list.  We feel that
 now we have the best of both worlds -- fast execution with ADMB,
 followed by the programming ease and excellent graphics of R for
 analysis of results and projections under numerous scenarios.





Re: [R] how to forecast the GARCH volatility?

2006-11-23 Thread Spencer Graves
  comments in line

Rick Xu wrote:
 Dear All,

 I have loaded package(tseries), but when I run
 predict.garch(...) R tells me could not find function
 predict.garch, however ?predict.garch shows me
   
  The secret is methods dispatch. 

 > predict
function (object, ...) 
UseMethod("predict")
<environment: namespace:stats>
 
  The 'predict' function calls 'UseMethod', which in turn appends 
class(object) to predict, and looks for a function by that name.  If 
it doesn't find one, it looks for 'predict.default'.  If it doesn't find 
that, it gives an error message: 

 > predict(1:4)
Error in predict(1:4) : no applicable method for 'predict'

  To find the methods available for 'predict', call 'methods': 

 > methods(predict)
 [1] predict.ar*predict.Arima*   
 [3] predict.arima0*predict.garch*   
 snip 
   Non-visible functions are asterisked

  To get a Non-visible function, use 'getAnywhere': 

 > getAnywhere(predict.garch)
 something. I am confused about this. How can I
 forecast garch volatility?

 I have tried:
 predict(...,n.ahead=...),give me fitted value
   
  What did you not like about this?  
 predict(...,n),give me NA,NA

   
  How many arguments do you have in ...?  The help file 
'?predict.garch' says 'Usage': 

 predict(object, newdata, genuine = FALSE, ...)

  R matches names to arguments by position or by name.  With names, 
R allows partial matching except for function arguments that appear 
after the special ... argument.  The interpretation of the incomplete 
example you gave depends on how many arguments you have before n.  
'predict(garch.object, n)' would interpret 'n' as 'newdata'.  If 'n' 
doesn't look like 'newdata', it might explain why you got NA,NA.  These 
aspects of R are probably described many places, but one I just checked 
was sec. 3.1 in Venables and Ripley (2000) S Programming (Springer). 
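
  A toy function with a signature like predict.garch's shows the 
matching rules (this is an illustration only, not the real method):

```r
## Positional vs. named matching: the second positional argument
## lands in 'newdata' whether or not that is what was intended.
f <- function(object, newdata, genuine = FALSE) {
  list(newdata.missing = missing(newdata), genuine = genuine)
}
f(1, 10)              # 10 matches 'newdata' by position
f(1, genuine = TRUE)  # 'newdata' stays missing; 'genuine' set by name
```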

  Hope this helps. 
  Spencer Graves




Re: [R] SNA packages/network package

2006-11-23 Thread Spencer Graves
  I haven't seen a reply to this post, so I will offer some comments 
even though I do not recall any previous exposure to social network 
analysis or the 'sna' package.  I'm sorry I can't answer your question 
directly.  However, if you wouldn't mind posting a minimal example of 
something you've tried and ask your question in terms of that example, 
you might get a better response.  Please try to make your example self 
contained in the sense that someone like me can copy it from your email, 
paste it into an R session, and presumably see what you see.  When I 
listed objects(2) with the 'sna' package in the second position on the 
search path, I got 201 different objects.  I didn't see an obvious entry 
point for the package. 

  Hope this helps. 
  Spencer Graves

Bagatti Davide wrote:
 Hello everyone,

 I am an italian student who is working with packages SNA (social network
 analysis) and network.
 I ask  you if there is a simple way to write a R-script using these packages
 to extract from an adjacency matrix the following things:

 -number of cliques which are in the network;
 -number of k-cores (e.g. 3-cores, 4-cores);
 -cycles (e.g. 3-cycles, 4-cycles)
 -hub  authorities.

 Thank-you very much

 Davide

   [[alternative HTML version deleted]]





Re: [R] stl and the relative scale data-seasonal

2006-11-19 Thread Spencer Graves
  I'm not sure I understand what you want.  However, I will suppose 
I wanted essentially the same image as plot(stl..) but with individual 
control over ylim for each of the 4 plots.  In that case, I might make a 
local copy of the actual plot function using 'getAnywhere(plot.stl)' 
and modify it.  I got to the point by noting that 'plot' in most cases 
calls 'UseMethod("plot")'.  This suggested I check 'methods(plot)'.  
The response to this included 'plot.stl', which would be called by 
'UseMethod("plot")' whenever the first argument of 'plot' was of class 
'stl'. 

  Then I'd modify my local copy of 'plot.stl' to produce what I 
wanted (using 'debug' to walk through the code line by line if my 
modifications didn't work right and I didn't understand immediately 
why).  This may be as much work as extracting all the components and 
plot 4 separate figures, but I see no simpler way. 
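
  For comparison, the "extract and plot separately" route is itself 
only a few lines; a sketch using the built-in 'co2' series (the 
questioner's own data and layout preferences may differ):

```r
## Extract the stl components and draw all four panels on one
## common vertical scale.
fit   <- stl(co2, s.window = "periodic")
comps <- fit$time.series             # seasonal, trend, remainder
ylim  <- range(co2, comps)
op <- par(mfrow = c(4, 1), mar = c(2, 4, 1, 1))
plot(co2, ylim = ylim, ylab = "data")
for (j in colnames(comps))
  plot(comps[, j], ylim = ylim, ylab = j)
par(op)
```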

  Hope this helps. 
  Spencer Graves
p.s.  By 'debug' here, I mean the 'debug' in the 'base' package, not the 
'debug' package, which I've never used but is doubtless more powerful 
(though may also require more effort and thought from the user).  For 
more information on debug{base}, I suggest you consult the documentation 
and perhaps the archives.  RSiteSearch("graves debug") produced 127 hits 
for me just now.  The most interesting for you among the first few might 
be "http://finzi.psych.upenn.edu/R/Rhelp02a/archive/79251.html". 

Berta wrote:
 I am trying to plot the time series decomposition using plot(stl..). 
 Eventhough I understand why the scale of the 4 plots is better to be 
 unequal, I would like to plot all 4  in the same scale (otherwise 
 interpretation at a simple look may be misleading). Is there a way I could 
 do so (easier than extracting all the components and plot 4 separate figures 
 using 4 plot-commands)?

 Thanks a lot in advance.

 Berta.





Re: [R] nlme

2006-11-19 Thread Spencer Graves
  A more sensible option in my experience would be to transform the 
parameter space to send the boundaries to +/-Inf.  Suggested reading for 
'nlme' includes Pinheiro and Bates (2000) Mixed-Effects Models in S and 
S-Plus (Springer) and Bates and Watts (1988) Nonlinear Regression 
Analysis and Its Applications (Wiley). 
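
  A sketch of the transformation idea, using 'nls' on simulated data 
for brevity (the same trick applies inside an 'nlme' model function): 
a decay rate k must stay positive, so estimate lk = log(k) on an 
unconstrained scale and back-transform.

```r
## Exponential decay with k > 0 enforced via k = exp(lk).
set.seed(2)
tt <- seq(0, 5, length = 30)
y  <- 2 * exp(-0.7 * tt) + rnorm(30, sd = 0.05)
fit <- nls(y ~ A * exp(-exp(lk) * tt), start = list(A = 1, lk = 0))
exp(coef(fit)[["lk"]])  # back-transformed estimate of k; always positive
```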

  Spencer Graves

Douglas Bates wrote:
 On 11/19/06, Fluss [EMAIL PROTECTED] wrote:
   
 Hello!
 I am trying to fit a mixed non-linear model using nlme.
 How can I constrain the fixed parameter space (add bounds) as in nls.
 

 By rewriting the nlme function, an option I wouldn't recommend.  :-)





Re: [R] Need help in nlmeODE

2006-11-19 Thread Spencer Graves
  I've never used nlmeODE, but this post is almost 3 days old, so 
I'll offer some comments.  I didn't see an example in the 'nlmeODE' help 
file, but I did find a reference to 'PKPDmodels', and the help file for 
'PKPDmodels' includes several examples.  Have you worked through those?  
If you have and you still would like some help, please submit another 
post with a copy to the 'nlmeODE' maintainer [identified, e.g., via 
help(package='nlmeODE')].  In that post, please include minimal, 
self-contained, reproducible code showing something you tried and 
explaining why it doesn't meet your needs (as suggested in the posting 
guide www.R-project.org/posting-guide.html). 

  Hope this helps. 
  Spencer Graves

Ron Fisher wrote:
 I am trying to use nlmeODE. But I don't know what is ObsEq. How you can get 
 it from your ODEs system? Could someone help me out?

 Best,

 Ron





Re: [R] deriv when one term is indexed

2006-11-19 Thread Spencer Graves
  You ask whether there would be a way to adapt the deriv and 
deriv3 functions to deal with formulas that have an indexed term, 
effectively allowing differentiation with respect to a vector.  As Simon 
Blomberg famously said, "This is R. There is no if. Only how." (*)

  The implementation, however, might not be easy.  Let's start by 
looking at 'deriv': 

 > deriv
function (expr, ...) 
UseMethod("deriv")
<environment: namespace:stats>

  To get beyond this, try the following: 

 > methods(deriv)
[1] deriv.default deriv.formula

  Next, look at 'deriv.default': 

 > deriv.default
function (expr, namevec, function.arg = NULL, tag = ".expr", 
hessian = FALSE, ...) 
.Internal(deriv.default(expr, namevec, function.arg, tag, hessian))
<environment: namespace:stats>

  The call to '.Internal' says that 'deriv.default' is written in 
compiled code of some kind.  If we similarly check 'deriv.formula', and 
'deriv3', we get the same results. 

  Someone else might offer a more optimistic perspective, but this 
seems like a project beyond my available time at the moment. 

  If you told us more about your motivating example, someone might 
be able to help with a special case. 

  Sorry I couldn't be more encouraging. 
  Spencer Graves
(*) Reference: 
library(fortunes)
Ftns <- read.fortunes()
sel <- regexpr("This is R", Ftns$quote)
which(sel > 0)
[1] 109
 > fortune(109)

Ken Knoblauch wrote:
 I don't know why I get a different result than you do, though I'll
 play around with it some and check the code to see if I can figure
 out what is going on, on my end.

 I will add that by transforming GL so that it spans the interval [0, 1]
 instead
 of [0, 255], the correlations among the parameters decreased, also
 evidenced in
 the plot and the pairs plots of the profile object, but this didn't change
 whether these functions worked for me on the model when i did not add
 derivative information.

 But, this is a bit of a diversion, since we haven't yet addressed the
 question
 to which my Subject heading refers and remains the question for which
 I'm most seeking an answer, direction or guidance, and that is
 whether there would be a way to adapt the deriv and deriv3 functions
 to deal with formulas that have an indexed term as in the one
 in my situation, i.e.,

 Lum ~ Blev + beta[Gun] * GL^gamm

 Any ideas on that?  Thank you.

 ken




 Gabor Grothendieck wrote:
   
 I mixed up examples.  Here it is again.  As with the last one
 confint(dd.plin) gives an error (which I assume is a problem with
 confint that needs to be fixed) but other than that it works without
 issuing errors and I assume you don't need the confint(dd.plin)
 in any case since dd.plin is just being used to get starting values.

 
 gg <- model.matrix(~ Gun/GL - Gun, dd)
 dd.plin <- nls(Lum ~ gg^gamm, dd, start = list(gamm = 2.4),
   
 +alg = "plinear"
 +)
  
 confint(dd.plin)
   
 Waiting for profiling to be done...
 Error in eval(expr, envir, enclos) : (subscript) logical subscript too
 long
 
 B <- as.vector(coef(dd.plin))
 st <- list(Blev = B[2], beta = B[3:5], gamm = B[1])



 dd.nls <- nls(Lum ~ Blev + beta[Gun] * GL^gamm,
   
 +data = dd, start = st
 +)
 
 confint(dd.nls)
   
 Waiting for profiling to be done...
2.5%97.5%
 Blev  -1.612492e-01 2.988387e-02
 beta1  6.108282e-06 8.762679e-06
 beta2  1.269000e-05 1.792914e-05
 beta3  3.844042e-05 5.388546e-05
 gamm   2.481102e+00 2.542966e+00
 
 dd.deriv2 <- function (Blev, beta, gamm, GL)
   
 + {
 +.expr1 <- GL^gamm
 +.value <- Blev + rep(beta, each = 17) * .expr1
 +.grad <- array(0, c(length(.value), 5), list(NULL, c("Blev",
 +"beta.rouge", "beta.vert", "beta.bleu", "gamm")))
 +.grad[, "Blev"] <- 1
 +.grad[1:17, "beta.rouge"] <- .expr1[1:17]
 +.grad[18:34, "beta.vert"] <- .expr1[1:17]
 +.grad[35:51, "beta.bleu"] <- .expr1[1:17]
 +.grad[, "gamm"] <- ifelse(GL == 0, 0, rep(beta, each = 17) * (.expr1 *
 + log(GL)))
 +attr(.value, "gradient") <- .grad
 +.value
 + }
 
 dd.nls.d2 <- nls(Lum ~ dd.deriv2(Blev, beta, gamm, GL), data = dd,
   
 +start = list(Blev = B[2], beta = B[3:5], gamm =
 B[1]))
 
 confint(dd.nls.d2)
   
 Waiting for profiling to be done...
2.5%97.5%
 Blev  -1.612492e-01 2.988391e-02
 beta1  1.269000e-05 1.792914e-05
 beta2  3.844041e-05 5.388546e-05
 beta3  6.108281e-06 8.762678e-06
 gamm   2.481102e+00 2.542966e+00
 
 R.version.string # XP
   
 [1] R version 2.4.0 Patched (2006-10-24 r39722)
 
 On 11/18/06, Gabor Grothendieck [EMAIL PROTECTED] wrote:
 
 This works for me in terms of giving results without error messages
 except for the confint(dd.plin) which I assume you don't really need
 anyways.

   
 gg <- model.matrix(~ Gun/GL - Gun, dd)
 dd.plin <- nls(Lum ~ gg^gamm, dd, start = list(gamm = 2.4

Re: [R] Repeated measures by lme and aov give different results

2006-11-16 Thread Spencer Graves
  RSiteSearch("lme and aov") returned 350 hits for me just now.  I'm 
sure that many are not relevant to your question, but I believe some 
are.  Beyond this, there is now an R Wiki, accessible via 
www.r-project.org -> Documentation: Wiki (or directly as 
http://wiki.r-project.org/rwiki/doku.php?id=guides:lmer-tests&s=lme%20and%20aov).
  
The first hit in a search there for "lme and aov" is an edited 
transcript of a long thread in R-help starting Sept 7, 2006, from a 
comment by Hank Stevens, with Douglas Bates as leading actor. 
  


  If that fails to answer your questions on this, please submit 
another post.  Please realize however that the expected number and 
quality of replies is inversely proportional to some large power of the 
length and complexity of your question. 

  Hope this helps. 
  Spencer Graves

Vicki Allison wrote:
 I am analyzing data from an experiment with two factors: Carbon (+/-)
 and O3 (+/-), with 4 replicates of each treatment, and 4 harvests over a
 year.  The treatments are assigned in a block design to individual
 Rings.

 I have approaches this as a repeated measures design.  Fixed factors
 are Carbon, O3 and Harvest, with Ring assigned as a random variable.  I
 have performed repeated measures analysis on this data set two different
 ways: one utilizing lme (as described in Crawley, 2002), and the second
 using aov (based on Baron and Li, 2006).  Using lme I get very
 conservative p-values, while aov gives me significant p-values,
 consistent with those I obtain performing this analysis in SYSTAT.  Can
 anyone explain how these models differ, and which is more appropriate to
 the experimental design I have described?  The code I use, and the
 output obtained follow:

 1  lme model

 library(nlme)
 M5 <- lme(ln_tot_lgth ~ Carbon*O3*Harv., random = ~-1|Ring)
 anova(M5, type="marginal")

 # Output
                  numDF denDF   F-value p-value
  (Intercept)         1    44 176.59692  <.0001
  Carbon              1    12   0.42187  0.5282
  O3                  1    12   0.06507  0.8030
  Harv.               1    44  17.15861  0.0002
  Carbon:O3           1    12   0.23747  0.6348
  Carbon:Harv.        1    44   0.85829  0.3593
  O3:Harv.            1    44   0.04524  0.8325
  Carbon:O3:Harv.     1    44   0.05645  0.8133
   
 plot(M5)
 


 2  aov model

 M6 <- aov(ln_tot_lgth ~ O3*Harv.*Carbon + Error (Ring/Carbon+O3))
 summary(M6)
 plot(M6)

 # Output
 Error: Ring
   Df  Sum Sq Mean Sq F value  Pr(>F)  
 O3 1 1.76999 1.76999  8.2645 0.01396 *
 Carbon 1 0.64766 0.64766  3.0241 0.10760  
 O3:Carbon  1 0.15777 0.15777  0.7366 0.40756  
 Residuals 12 2.57002 0.21417  

 Error: Within
 Df Sum Sq Mean Sq F value   Pr(>F)
 Harv.1 33.541  33.541 84.0109 9.14e-12 ***
 O3:Harv. 1  0.001   0.001  0.0036   0.9524
 Harv.:Carbon 1  0.414   0.414  1.0362   0.3143
 O3:Harv.:Carbon  1  0.020   0.020  0.0508   0.8226
 Residuals   44 17.567   0.399   


 *** Note change of location***

 Victoria Allison
 Landcare Research
 Private Bag 92170
 Auckland 1142
 New Zealand
 Phone: +64 9 574 4164
 
 WARNING: This email and any attachments may be confidential ...{{dropped}}

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] nlme for time of structural change?

2006-11-15 Thread Spencer Graves
Hi, All: 

  Does 'nlme' require the nonlinear function to be differentiable?  
I'm fitting structural change models to many related time series, and 
I'd like to improve the estimates of the change point times through some 
type of pooling.  Unfortunately, I've so far been unable to get 'nlme' 
to work for me.  The following toy example is the closest I've come to 
what I want using 'nlme': 

library(nlme)
tstDF5 <- data.frame(t.=rep(1:5, 3), subj=rep(1:3, e=5),
 y=c(rep(0:1, c(1,4)), rep(0:1, c(2,3)),
 rep(0:1, c(3,2)) ) )
breakpoint0seg2t <- function(t., lT){
  t1 <- 5*plogis(-lT)
  ((t. <= t1)+(t1 < t.))
}
tstFit - nlme(y~breakpoint0seg2t(t., lT1),
data=tstDF5, fixed=lT1~1,
random=list(subj=pdDiag(l1~1)), start=0 )

Error in chol((value + t(value))/2) : the leading minor of order 1 is 
not positive definite

  The function 'breakpoint0seg2t' is constant except at the data 
points, where it is discontinuous.  Is this error message generated by 
the fact that the first derivative of 'breakpoint0seg2t' is 0 almost 
everywhere?  If yes, what are the options for getting around this? 
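One standard workaround (an editor's sketch, not part of the original post) is to replace the hard step with a smooth logistic transition, so the model has a nonzero gradient everywhere; the 'scale' argument below is a hypothetical tuning constant, and shrinking it recovers the hard step in the limit:

```r
## Sketch only: smooth a changepoint with plogis() so gradient-based
## fitters such as 'nlme' have a nonzero derivative to follow.
smooth.step <- function(t., t1, scale = 0.25) plogis((t. - t1)/scale)

t.grid <- seq(1, 5, length = 101)
y.sm   <- smooth.step(t.grid, t1 = 2.5)  # differentiable everywhere
smooth.step(2.5, 2.5)                    # 0.5: halfway up at the changepoint
```

After fitting with a moderate 'scale', one can refit with progressively smaller values to approach the discontinuous model.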

  The real problem behind this toy example involves fitting either 
step functions or linear or quadratics in time with any number of 
breakpoints.  I'm currently estimating the required parameters using the 
'strucchange' package.  That works, but I'm having trouble with this 
enhancement. 

  Thanks,
  Spencer Graves

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Confidence interval for relative risk

2006-11-12 Thread Spencer Graves
  When I have refereed manuscripts for publication and have been 
unable to get their answers, I have told the authors they need more 
explanation of their methodology -- while summarizing what I tried in a 
few lines.  I've even told some that it would make it vastly easier for 
the referees and increase the potential readership for their article if 
they make R code available -- at least downloadable somewhere and 
preferably in a contributed R package, where it could attract interest 
in the article from audiences who would not likely find it any other 
way.  I've done this for articles that were NOT written to specifically 
describe statistical software. 

  I have not followed all of this thread.  However, one suggestion I 
saw looked like it used the delta method, if I understood correctly 
from skimming without studying the details carefully.  Have you also 
considered 2*log(likelihood ratio) being approximately chi-square? 
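For a 2x2 table, the 2*log(likelihood ratio) idea can be sketched by fitting a binomial GLM with a log link, whose profiled confint() gives a likelihood-ratio interval for the log relative risk. The counts below are hypothetical, not the original authors' data:

```r
## Hedged sketch with made-up counts: profile-likelihood CI for a
## relative risk via a log-link binomial GLM.
library(MASS)   # supplies confint.glm (profile-likelihood intervals)
events <- c(10, 4)       # hypothetical event counts per group
n      <- c(100, 100)    # hypothetical group sizes
group  <- factor(c("exposed", "unexposed"), levels = c("unexposed", "exposed"))

fit <- glm(cbind(events, n - events) ~ group,
           family = binomial(link = "log"))
exp(coef(fit)[["groupexposed"]])     # relative risk: 0.10/0.04 = 2.5
exp(confint(fit)["groupexposed", ])  # LR-based interval, no constant added
```

Since the interval comes from the profiled likelihood, no constant needs to be added to any cell here; tables with zero cells still require care.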

  Just my 2e-9 Euros (or 2e-7 Yen or Yuan). 
   Spencer Graves

Michael Dewey wrote:
 At 12:35 12/11/2006, Peter Dalgaard wrote:
   
 Michael Dewey [EMAIL PROTECTED] writes:

 
 At 15:54 10/11/2006, Viechtbauer Wolfgang (STAT) wrote:

 Thanks for the suggestion Wolfgang, but whatever the original authors
 did that is not it.
   
 Did you ever say what result they got?

 -p

 

 No, because I did not want to use the original numbers in the 
 request. So as the snippet below indicates I changed the numbers. If 
 I apply Wolfgang's suggestion (which I had already thought of but 
 discarded) I get about 13 for the real example where the authors quote about 
 5.

 My question still remains though as to how I can get a confidence 
 interval for the risk ratio without adding a constant to each cell.

   
 it credible. I show below my attempts to
 do this in R. The example is slightly changed
 from the authors'.
   

 Michael Dewey
 http://www.aghmed.fsnet.co.uk

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] POSIXlt converted to POSIXct in as.data.frame()

2006-11-12 Thread Spencer Graves
  I'm not qualified to say much about POSIX, but I haven't seen a 
reply to this in almost 3 days, so I'll offer a comment in the hopes 
that it might be useful or that someone else might correct any 
misstatement I might make: 

  First, I didn't find a discrepancy. 
  sessionInfo()
R version 2.4.0 (2006-10-03)
i386-pc-mingw32

locale:
LC_COLLATE=English_United States.1252;LC_CTYPE=English_United 
States.1252;LC_MONETARY=English_United 
States.1252;LC_NUMERIC=C;LC_TIME=English_United States.1252

attached base packages:
[1] methods   stats graphics  grDevices utils datasets
[7] base

  I used the following modification  extension of your example: 

my_POSIXlt <- strptime(c("11-09-2006", "11-10-2006", "11-11-2006",
  "11-12-2006", "11-13-2006"), "%m-%d-%Y")
str(my_POSIXlt)
class(my_POSIXlt)
my_Date <- as.Date(my_POSIXlt)
str(my_Date)
myCharacters <- format(my_Date)
class(myCharacters)

my_DF <- data.frame(my_POSIXlt)
str(my_DF)
DF_Date <- as.Date(my_DF$my_POSIXlt)
str(DF_Date)
DF_Date

all.equal(DF_Date, my_Date)
# TRUE

  The data.frame function converts POSIXlt, which is a list, to 
POSIXct, which is a numeric vector with attributes: 

  attributes(my_DF$my_POSIXlt)
$class
[1] "POSIXt"  "POSIXct"

$tzone
[1] ""

  is.numeric(my_DF$my_POSIXlt)
[1] TRUE

  If you are getting a day shift, I would guess that you might have 
a different version of some of the software or a difference in your 
locale.  I just did 'update.packages()' yesterday, so if I'm out of date 
on something, I hope someone will help me understand and fix the problem. 

  Beyond this, have you reviewed the ?POSIXt help file plus Gabor 
Grothendieck and Thomas Petzoldt. R help desk: Date and time classes in 
R. R News, 4(1):29-32, June 2004 (available from www.r-project.org -> 
Newsletter)? 

  Hope this helps. 
  Spencer Graves

Roger Bivand wrote:
 In trying to use as.Date(), I've come across the conversion of POSIXlt to 
 POSIXct when a POSIXlt variable is included in a data frame:

 my_POSIX <- strptime(c("11-09-2006", "11-10-2006", "11-11-2006", 
   "11-12-2006", "11-13-2006"), "%m-%d-%Y")
 str(my_POSIX)
 my_Date <- as.Date(my_POSIX)
 str(my_Date)
 data <- format(my_Date)
 str(data)
 my_DF <- data.frame(my_POSIX)
 str(my_DF)
 DF_Date <- as.Date(my_DF$my_POSIX)
 str(DF_Date)
 DF_Date

 The consequence (for my LC_TIME and machine time zone) is that when 
 as.Date() is applied to the data frame column, it dispatches on 
 as.Date.POSIXct() not as.Date.POSIXlt(), causing a day shift (actually 60 
 minutes, but because as.Date.POSIXct() says floor(), it ends up being a 
 whole day). Should data.frame() be changing POSIXlt to POSIXct?

 As as.data.frame.POSIXlt() is written, it says:

   
 as.data.frame.POSIXlt
 
 function (x, row.names = NULL, optional = FALSE, ...) 
 {
 value <- as.data.frame.POSIXct(as.POSIXct(x), row.names, 
 optional, ...)
 if (!optional) 
 names(value) <- deparse(substitute(x))[[1]]
 value
 }
 <environment: namespace:base>

 which seems a little brutal. Using I() seems to be a work-around:

 my_DF <- data.frame(my_POSIXa=I(my_POSIX))
 str(my_DF)
 class(my_DF$my_POSIXa)
 DF_Date - as.Date(my_DF$my_POSIXa)
 str(DF_Date)
 DF_Date

 In the original problem (conversion of a list read from PostgreSQL to 
 a data frame with as.data.frame()), having to know that you need to insert 
 I() is perhaps unexpected.

 Roger



__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Problems with metaMDS from vegan

2006-11-12 Thread Spencer Graves
  Have you sent this question to the vegan maintainer, identified, 
e.g., with help(package=vegan)?  I've added that name as a cc to 
this email. 

  Hope this helps. 
  Spencer Graves

Peter Roosen wrote:
 Hello all,

 I recently used the Vegan library quite extensively (in the context of
 text similarity assessment) on an Ubuntu 6.06 LTS system with R version
   2.2.1  (2005-12-20 r36812). The Vegan lib is version 1.6-10.

 I hit on a problem yesterday, though, when trying to install R and Vegan
 on two further computers - one Windows XP and one further Ubuntu 6.06
 machine, taking either R version 2.4.0 on XP or 2.2.1 (as above) on
 Ubuntu, and the newer Vegan version 1.8-3 on both machines. On the new
 Ubuntu machine I even tried to regress to the Vegan 1.6-10, but to no
 avail, as the error remains the same:

 As soon as a I try to process an R matrix (see below) to obtain the MDS
 I am confronted with:

  meta <- metaMDS(distab.dist, distance="bray", k, trymax=50)
 Fehler in rowSums(x, na.rm = TRUE) : 'x' muss ein Array mit mindestens
 zwei Dimensionen sein
 Ausführung angehalten

 (
 translated:
 error in rowSums(x, na.rm = TRUE) : 'x' must be an array of at least two
 dimensions.
 Execution stopped
 )

 This error is not appearing on identical data when using my older
 installation. The only relevant googled mentioning of that error points
 to a surplus (0,0) entry in the matrix to be processed, but this is
 definitely not the case here. I crosschecked with *identical* data sets
 on the mentioned systems.

 Following are my (rather small) processing program and a (small cut of
 a) matrix that produces the error, but runs smoothly on the older
 version mentioned:

 R batch script:
 
 library(vegan)

 simitab <- read.table("r.csv", header = TRUE, row.names = 1, sep = ";")
 olimit <- max(simitab)
 distab <- simitab / olimit

 distab.dist <- vegdist(distab)

 k <- 2
 meta <- metaMDS(distab.dist, distance="bray", k, trymax=50)

 postscript("metaMDS-CASE-DIMENd.ps", horizontal=TRUE)

 plot(meta$points, col="blue", xlab="Dim 1", ylab="Dim 2",
 main=sprintf("metaMDS für Variante = \"CASE\", dim = %d, Stress = %.2f
 %%", k, meta$stress))
 text(meta$points+xinch(0.09), labels=names(distab), cex=0.8)
 ###

 Cut of data set:
 ###
 US-1020525;US-1027783;US-1032733
 US-1020525;1.;0.00903941;0.93719674
 US-1027783;0.00903941;1.;0.01013081
 US-1032733;0.93719674;0.01013081;1.
 ###
 (Remark: I did *not* test exactly the given small data set cut but took
   the larger original one, being a 100*100 matrix + headers that I'd
 rather not post in here.)


 I'd appreciate any help in making the script functional again on our
 newer installations!

 Regards,

  Peter

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Value at Risk historical simulation

2006-11-12 Thread Spencer Graves
  Have you looked at the 'VaR' package?  If nothing in this package 
seems adequate for your purposes, please provide minimal, 
self-contained, reproducible code while explaining what it seems to 
lack, etc., as suggested in the posting guide 
www.R-project.org/posting-guide.html. 

  Hope this helps. 
  Spencer Graves

Benjamin Dickgiesser wrote:
 Hi

 Has someone got a package/script at hand to do a historical simulation
 to calculate the Value at Risk?

 If your not sure what Historical Simulation is:
 In simple terms, Historical Simulation (HS) is just taking sample
 percentiles over a moving sample. Suppose we want to use HS to predict
 a portfolio's Value-at-Risk at a confidence level of 99 percent and
 the window size is chosen to be 250 trading days. Then the 1 percent
 sample percentile is some amount between the second worst portfolio
 loss and the third worst portfolio loss (since 250 × 0.01 = 2.5). We
 decide to determine the Value-at-Risk through interpolation; in this
 case the VaR lies halfway between the second worst portfolio loss and
 the third worst portfolio loss.

 Benjamin
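The procedure described above can be sketched in a few lines of R (an editor's illustration with simulated returns standing in for real portfolio data; 'window' and 'level' are made-up names):

```r
## Sketch only: Historical Simulation VaR as a rolling sample quantile
## of losses over a moving window.
set.seed(1)
returns <- rnorm(1000, sd = 0.01)  # stand-in for daily portfolio returns
losses  <- -returns
window  <- 250                     # moving sample size (trading days)
level   <- 0.99                    # VaR confidence level

## quantile() interpolates between order statistics, matching the
## interpolation described above for 250 * 0.01 = 2.5.
hsVaR <- sapply(window:length(losses), function(t)
  quantile(losses[(t - window + 1):t], probs = level, names = FALSE))
length(hsVaR)   # one VaR estimate per day once the window is full
```

The `type` argument of `quantile()` selects among nine interpolation rules, so the exact treatment of the 2.5th order statistic can be tuned to match a given definition of HS VaR.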

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] help with nlme function

2006-11-12 Thread Spencer Graves
  The primary (if not the only) documentation for the 'nlme' 
function, beyond that in the 'nlme' package, is Pinheiro and Bates. 

  Your example is not self contained, which makes it not feasible 
for me to say much about your problem.  Have you tried to reduce the 
size and complexity of your example as much as possible and still get 
the same error?  Also, have you worked the examples in the 'nlme' help 
files plus Pinheiro and Bates, then gradually increasing the complexity 
of one of their examples until you get what you want?  (In particular, 
have you worked through the files ch06.R and ch08.R in 
~\library\nlme\scripts in the installation of R on your hard drive?)  
With luck, these steps will either answer your question or produce a 
simple, self-contained example you can send to this list to illustrate 
your question. 

  If I tried all that without success, I'd try using 'debug' to walk 
through the code line by line until I reached enlightenment.  The first 
step on this journey is the following: 

  methods(nlme)
[1] nlme.formula nlme.nlsList

  This already suggests an alternative path you might try, namely 
'nlme(fit)' with the output of 'nlsList'.  Have you tried that?  If that 
produced the same result. I'd make a local copy of 'nlme.formula', then 
issue 'debug(nlme.formula)' and follow that with your call to 'nlme'.  
This will allow you to walk through the function line by line until you 
see where it dies.  In the process, you will likely find the problem. 

  Hope this helps. 
  Spencer Graves

Bill Shipley wrote:
 Hello.  I am trying to fit a nonlinear mixed model involving 3 parameters.
 I have successfully made a self-starting function.  getInitial() correctly
 outputs the initial estimates.  I can also use the nlsList with this
 function to get the separate nonlinear fits by group.  However, I get an
 error message when using the nlme function.  Here is the relevent code:

 fit <- nlsList(photosynthesis~photo(irradiance,Q,Am,LCP)|species/plant/leaf,
 data=marouane.data,
 + na.action=na.omit)

 This works, showing that the function photo works as a self-starting
 function.

 nlme(model=photosynthesis~photo(irradiance,Q,Am,LCP),
 + data=marouane.data,fixed=Q+Am+LCP~1,
 + random=Q+Am+LCP~1|species,na.action=na.omit)
 Error: subscript out of bounds

 This is what happens when I use the nlme function.  I don't know what
 subscript out of bounds means but I assume that there is something wrong
 with the syntax of my code.

 The data frame (marouane.data) has dimensions of 1000 and 9.  The first
 three columns give the group structure (species, plants within species,
 leaves within plants).  species is a factor while plants is coded 1 or 2
 and leaves is also coded 1 or 2.  The other columns give values of
 measured variables.

 Could someone explain what I am doing wrong?  Alternatively, is there
 another text besides Pinheiro  Bates that explains the basic syntax of the
 nlme function?

 Thanks.
 Bill Shipley

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Problems with metaMDS from vegan

2006-11-12 Thread Spencer Graves
  Three more suggestions: 

  1.  Have you tried to develop an example so small and simple that 
it can be provided with a few lines of code in an email like this (as 
suggested in the trailer to every email distributed by r-help)?  If you 
include an example like that, it will increase your chances of getting 
an answer from the maintainer.  You should not criticize the maintainer 
for not replying to your earlier email:  I did not see in your email any 
information that would allow anyone to solve your problem unless s/he 
just happened to be unbelievably lucky.  Please include 'sessionInfo()' 
with your email, in case the problem happens to be related to something 
in your local operating environment. 

  2.  Have you tried 'traceback()' after your failed call to 
'metaMDS'?  With luck, this will identify the function actually 
generating the error.  Then you can list that function and try to find 
"rowSums(x, na.rm = TRUE)".  Then hunt for "x" in that code.  The 
function that includes this call to 'rowSums' expects 'x' to be an array 
of at least two dimensions.  Either you pass it something that does NOT 
meet that description or someplace x might be created from selecting 
rows or columns of a larger array and either 0 or 1 rows were selected, 
thereby reducing the dimension of the array to 0 or 1.  For example, 
consider the following: 

  x <- X[sel, ]

  If 'sel' selects only 1 row of 'X' here, 'x' will be a vector.  
This can be fixed by replacing it with the following: 

  x <- X[sel, , drop=FALSE]

  In this case, 'x' is still a two-dimensional array, now with only 
one row.  If 'sel' selects 0 rows, 'x' will still be a two-dimensional 
array, now with 0 rows. 
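The dimension-dropping behaviour just described is easy to see directly (a standalone illustration, not vegan code):

```r
## Single-bracket indexing drops a dimension unless drop = FALSE:
X <- matrix(1:6, nrow = 3)
sel <- c(TRUE, FALSE, FALSE)           # selects exactly one row

dim(X[sel, ])                          # NULL: result collapsed to a vector
dim(X[sel, , drop = FALSE])            # 1 2 : still a matrix with one row
dim(X[rep(FALSE, 3), , drop = FALSE])  # 0 2 : zero rows, two dims kept
```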

  This may allow you to fix the code.  Then you can send the 
suggested fix to the vegan maintainer.  This person is doubtless very 
busy and may not have time to try to figure out why something doesn't 
work, especially when you provide nothing more than a vague claim that 
it doesn't work, as I indicated above. 

  3.  If 1 and 2 fail to provide enlightenment, I would then try 
the 'debug' functions (provided it was important enough for me to do so, 
of course).  The 'debug' function makes it easy to walk through the code 
line by line, checking the contents of different variables, etc.,  until 
the problem is identified.  Before I try 'debug', I make a local copy of 
the problem function, which I study as I walk through the code.  I've 
done this many times with my own code and with the code of others.  I 
know of two cases where this will not work: 

3.1.  If I find the problem line but don't understand the 
code enough to know what it means, I won't know how to fix it.  However, 
at least I can mention what I found to someone else:  This can make it 
easier for someone else to see how to get around the problem. 

   3.2.  The 'debug' function does not work with generic 
functions following the S4 standard.  In my cursory review of the vegan 
package, I did not find anything suggesting use of S4, so with luck, 
you won't need to deal with that. 

   NOTE:  I have not used the 'debug' package; I understand it to 
be a separate implementation, unrelated to the 'debug' function, that 
provides a much more flexible version of similar capabilities. 

  If you follow these three suggestions, there's a good chance you 
will find how to get around the problem.  If that fails, you will almost 
certainly have something that will make it much easier for someone else 
to isolate and fix the problem. 

  Hope this helps. 
  Spencer Graves

Peter Roosen wrote:
 Hello Spencer,

   
   Have you sent this question to the vegan maintainer, identified, 
 e.g., with help(package=vegan)?
 

 I wrote a mail directly to Jari prior to posting in this group. My 
 posting here was no sooner than waiting (in vain) for an answer for 
 about a week.

   
  I've added that name as a cc to 
 this email. 

   Hope this helps. 
 

 We'll see! :) But thanks a lot anyhow for your help!

   Peter

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Variable limit in nlm?

2006-11-12 Thread Spencer Graves
  Since no one has replied in 2 days, I will offer comments: 

  1.  Your example is not self contained.  When I copied your code 
into R 2.4.0 and tried it just now, I got 'Error in fnoise(y) : object 
y not found'.  I don't know the answer to your question, and I don't 
know how I can answer it without a self-contained example. 

  2.  Is it feasible for you to upgrade to a newer version?  If you 
are using a shared machine and can't get the system administrator to 
upgrade, can you install R 2.4.0 in your own local space?  Or can you 
port your data to another machine where you can install the latest 
version of R? 

  3.  Have you worked the examples with the 'nlm' documentation?  
When I've done this kind of thing in the past, I've sometimes found 
problems with how I was trying to use the function. 

  4.  Have you tried to cut your problem down to something much 
simpler (and self contained)?  If that does not lead you to an answer to 
your question, it should at least provide something that could help 
someone else help you. 

  I know my comments don't solve your problem, but hopefully they 
will get you closer. 

  Spencer Graves

Robert Shnidman wrote:
 Admittedly I am using an old version 1.7.1, but can anyone tell if this 
 is or was a problem. I can only get nlm (nonlinear minimization) to 
 adjust the first three components of function variable. No gradient or 
 hessian is supplied. E.G.;

 fnoise
 function(y) { y[5]/(y[4]*sp2) * exp(-((x[,3]-y[1]-y[2]*x[,1]-y[3]
 *x[,2])/y[4])^2/2) + (1-y[5])/(y[9]*sp2) * exp(-((x[,3]-y[6]-y[7]*x[,1]-y[8]
 *x[,2])/y[9])^2/2) }

 nlm(sum(-log(fnoise(y))),c(5,1,1,10,.75,1,.2,.2,6),hessian=TRUE,print.level=2))

 Thanks,
 Bob

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Comparison between GARCH and ARMA

2006-11-11 Thread Spencer Graves
  Have you tried to Monte Carlo ARMA and GARCH?  If you plot the 
resulting series in various ways, I suspect the differences will be 
apparent to you.  If you'd like more from this list, I suggest you 
illustrate your question with commented, minimal, self-contained, 
reproducible code, as suggested in the posting guide 
www.R-project.org/posting-guide.html. 

  Hope this helps. 
  Spencer Graves

[EMAIL PROTECTED] wrote:
 Hi

 i was just going by this thread, i thought of igniting my mind and got 
 something wierd so i thought of making it wired. 

 i think whether you take ARMA or GARCH. In computer science these are 
 feedback systems or put it simply new values are function of past values. 
 In ARMA case it is the return series and the error series. In case of 
 GARCH it is the errors and stdev or returns and shock with proportionality 
 of coefficient. In any case we are trying to find the returns only. What if 
 i put stdev in ARMA and returns in GARCH ? I want to ask what it would end 
 up showing me. For me both are having similar structure having two parts :

 1. regression depending on past values

 2. trying to reduce errors by averaging them

 i hope i am correct. please correct me where i am wrong.

 thanks and regards
   Email(Office) :- [EMAIL PROTECTED] ,  Email(Personal) :- 
 [EMAIL PROTECTED]




 Wensui Liu [EMAIL PROTECTED] 
 Sent by: [EMAIL PROTECTED]
 08-11-06 12:24 AM

 To
 Leeds, Mark (IED) [EMAIL PROTECTED]
 cc
 r-help@stat.math.ethz.ch, Megh Dal [EMAIL PROTECTED]
 Subject
 Re: [R] Comparison between GARCH and ARMA






 Mark,

 I totally agree that it doesn't make sense to compare arma with garch.

 but to some extent, garch can be considered arma for conditional
 variance. similarly, arch can be considered ma for conditional
 variance.
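The sense in which this is true can be made precise: writing v[t] = r[t]^2 - h[t], the GARCH(1,1) variance equation implies r[t]^2 = w + (alpha + beta)*r[t-1]^2 + v[t] - beta*v[t-1], an ARMA(1,1) for the squared returns. A small simulation (an editor's sketch, not from the thread) illustrates it:

```r
## Simulate a GARCH(1,1) and check that r^2 behaves like an ARMA(1,1)
## whose AR coefficient is roughly alpha + beta.
set.seed(42)
n <- 10000; w <- 1e-5; alpha <- 0.1; beta <- 0.8
h <- numeric(n); r <- numeric(n)
h[1] <- w/(1 - alpha - beta)    # start at the unconditional variance
r[1] <- sqrt(h[1])*rnorm(1)
for (t in 2:n) {
  h[t] <- w + alpha*r[t-1]^2 + beta*h[t-1]
  r[t] <- sqrt(h[t])*rnorm(1)
}
fit <- arima(r^2, order = c(1, 0, 1))
coef(fit)[["ar1"]]   # expected to be near alpha + beta = 0.9
```

Note this is only a representation of the squared returns; the innovations v[t] are heteroskedastic, so it does not make ARMA estimation a substitute for GARCH estimation.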

 the above is just my understanding, which might not be correct.

 thanks.

 On 11/7/06, Leeds, Mark (IED) [EMAIL PROTECTED] wrote:
   
 Hi : I'm a R novice but I consider myself reasonably versed in time
 series related material and
 I have never heard of an equivalence between Garch(1,1) for volatility
 and an ARMA(1,1) in the squared returns
 and I'm almost sure there isn't.

 There are various problems with what you wrote.

 1) r(t) = h(t)*z(t) not h(i) but that's not a big deal.

 2) you can't write the equation in terms of r(t) because r(t) =
 h(t)*z(t) and h(t) is UPDATED FIRST
 And then the relation r(t) = h(t)*z(t) is true ( in the sense of the
 model ). So, r(t) is
 a function of z(t) , a random variable, so trying to use r(t) on the
 left hand side of the volatility
 equation doesn't make sense at all.

 3) even if your equation was valid, what you wrote is not an ARMA(1,1).
 The AR term is there but the MA term
 ( the beta term ) has an r_t-1 term in it when r_t-1 is on the left
 side. An MA term in an ARMA framework
 multiplies lagged noise terms, not the lag of what's on the left side.
 That's what the AR term does.

 4) even if your equation was correct in terms of it being a true
 ARMA(1,1) , you
 have common coefficients on the AR term and MA term ( beta ), so you
 would need constraints to tell the
 model that this was the same term in both places.

 5) basically, you can't do what you
 are trying to do, so you shouldn't expect any consistency in estimates
 of the intercept for the reasons stated above.
 why are you trying to transform in such a way anyway ?

 Now back to your original garch(1,1) model

 6) a garch(1,1) has a stationarity condition that alpha + beta is less
 than 1
 So this has to be satisfied when you estimate a garch(1,1).

 It looks like this condition is satisfied so you should be okay there.

 7) also, if you are really assuming/believe that the returns have mean
 zero to begin with, without subtraction,
 then you shouldn't be subtracting the mean before you estimate,
 because essentially you will be subtracting noise and throwing out
 useful information that could be used in estimating the garch(1,1) parameters.
 Maybe you aren't assuming that the mean is zero and you are making the
 mean zero, which is fine.

 I hope this helps you. I don't mean to be rude but I am just trying to
 get across that what you
 are doing is not valid. If you saw the equivalence somewhere in the
 literature,
 let me know because I would be interested in looking at it.


 mark






 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Megh Dal
 Sent: Tuesday, November 07, 2006 2:24 AM
 To: r-help@stat.math.ethz.ch
 Subject: [R] Comparison between GARCH and ARMA

 Dear all R user,

 Please forgive me if my problem is too simple.
 Actually my problem is basically Statistical rather directly R related.
 Suppose I have return series ret
 with mean zero. And I want to fit a Garch(1,1)
 on this.

 my model is r[t] = h[i]*z[t]

 h[t] = w + alpha*r[t-1]^2 + beta*h[t-1]

 I want to estimate the three parameters here;

 the R syntax is as follows:

 # download data:
 data

Re: [R] Making a case for using R in Academia

2006-11-09 Thread Spencer Graves
  I haven't followed this thread, but I suggest you not judge solely 
from current and past usage, because I think R's market share is 
increasing everywhere, and it's increasing the fastest in two groups: 

  (1) People who are price sensitive, e.g., the professors in New 
Zealand who produced the first prototype of R. 

  (2) People involved in new algorithm development:  R is the 
language of choice for a rapidly growing group of people involved in new 
algorithm development.  If I see software that does something similar to 
what I want, if it's in R, I can get the source, walk through it line by 
line to see what it does and quickly modify it to see if my idea seems 
to work in my application.  For commercial software, I've got to start 
from scratch.  This increases the work required to learn and modify 
something by at least an order of magnitude. 

  Because of this latter point, I now look for an R companion for 
technical books and articles I read:  I will learn more, faster if R 
code exists than if it doesn't. 

  Bottom line:  You need to prepare your students for the future, 
not the past. 

  Hope this helps. 
  Spencer Graves

Doran, Harold wrote:
 I would turn this on its head. The problem with social science grad
 schools is that students are not expected to know R. In my org doing
 psychometrics, we won't even consider an applicant if they only know
 SPSS. 

   
 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of 
 Charilaos Skiadas
 Sent: Thursday, November 09, 2006 11:18 AM
 To: r-help@stat.math.ethz.ch
 Subject: Re: [R] Making a case for using R in Academia

 As an addendum to all this, here is one of the responses I got 
 from one of my colleagues:

 "The problem with R is that our students in many social science 
 fields are expected to know SPSS when they go to graduate school. 
 Not having a background in SPSS would put these students at a 
 disadvantage."

 Is this really the case? Does anyone have any such statistics?

 Charilaos Skiadas
 Department of Mathematics
 Hanover College
 P.O.Box 108
 Hanover, IN 47243


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Comparing models in multiple regression and hierarchical linear regression

2006-11-08 Thread Spencer Graves
  The questions you ask about the interactions in the model not 
making sense relate, I believe, to a multiple comparisons issue that is 
not adequately addressed by the stepAIC analysis you did.  To understand 
this, note first that you've got something close to 2^(2^5) possible 
models:  With 5 main effects, there are 2^5 = 32 possible interaction 
terms (one per subset of the main effects).  The function stepAIC tries to select from among 
models representing all possible subsets of these 2^5 interaction 
terms.  Just for clarity on this point, suppose we had 2 main effects, A 
and B, not 5 as you have.  Then there are 2^2 = 4 possible interaction 
terms:  1, A, B, and A:B.  Then stepAIC would want to select the best 
model from among the 2^4 = 16 models consisting of these 4 terms plus 
all possible combinations of them.  With 5 main effects, this set of 
possible models balloons to 2^32 = 4.3e9.  Some of them don't make sense 
by themselves, e.g., A:B without A and B.  However, I don't think that 
will reduce this 4.3e9 by more than an order of magnitude or so -- and 
maybe only a few percent.  If we ignore this reduction, then the 
Bonferroni correction suggests we multiply all your p values by 4.3e9.  
If you still have any that are less than, say, 0.05, then you could 
believe those.  Otherwise, I'd be skeptical.  A better way of handling 
this, I believe, is Bayesian Model Averaging.  An R package BMA is 
supposed to address that, but I don't know if it will help you. 
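A quick sketch of the arithmetic above (illustrative only):

```r
# Illustrative arithmetic for the counts above: 2^5 interaction terms,
# hence 2^(2^5) candidate models, and a crude Bonferroni correction
# applied to two hypothetical p values.
n.terms  <- 2^5                        # 32 possible interaction terms
n.models <- 2^n.terms                  # about 4.3e9 candidate models
p.bonferroni <- pmin(1, c(1e-11, 1e-3) * n.models)
p.bonferroni                           # only the first survives at 0.05
```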

  Other questions:  What are your 5 explanatory variables, NR, NS, 
PA, KINDERWR, and WM?  In particular, are they numbers representing at 
least an ordinal scale or are they categories?  If numbers, you've got 
possibilities for parabolic terms that you haven't even considered.  If 
categories, how many levels are there?  If some of them have large 
numbers of levels, you may want to consider those factors as random 
effects.  In that case, you need something to do 'mixed-effects' 
modeling.  For that, people use the 'nlme' or 'lme4' packages. 

  Have you tried to find a statistician with whom you could consult 
or even possibly collaborate on this? 

  Hope this helps. 
  Spencer Graves

Jenifer Larson-Hall wrote:
 I don’t know if this question properly belongs on this list, but I’ll ask it 
 here because I’ve been using R to run linear regression models, and it is 
 only in using R (after switching from using SPSS) that I have discovered the 
 process of fitting a linear model. However, after reading Crawley (2002), Fox 
 (2002), Verzani (2004), Dalgaard (2002) and of course searching the R-help 
 archives I cannot find an answer to my question.
   I have 5 explanatory variables (NR, NS, PA, KINDERWR, WM) and one 
 response variable (G1L1WR). A simple main effects model finds that only PA is 
 statistically significant, and an anova comparison between a 5-variable main 
 effects model and a 1-variable main effects model finds no difference between 
 the models. So it is possible to simplify the model to just G1L1WR ~ PA. This 
 leaves me with a residual standard error of 0.3026 on 35 degrees of freedom 
 and an adjusted R2 of 0.552.
   I also decided, following Crawley’s (2002) advice, to create a maximal 
 model, G1L1WR ~ NR*NS*PA*KINDERWR*WM. This full model is not a good fit, but 
 a stepAIC through the model revealed the model which had a maximal fit:

 maximal.fit=lm(formula = G1L1WR ~ NR + KINDERWR + NS + WM + PA + NR:KINDERWR 
 + NR:NS + KINDERWR:NS + NR:WM + KINDERWR:WM + NS:WM + NR:PA + KINDERWR:PA + 
 NS:PA + WM:PA + NR:KINDERWR:NS + NR:KINDERWR:WM + NR:NS:WM + KINDERWR:NS:WM + 
 NR:NS:PA + KINDERWR:NS:PA + KINDERWR:WM:PA + NR:KINDERWR:NS:WM, data = 
 lafrance.NoNA)

 All of the terms of this model have statistically significant t-tests; the residual 
 standard error has gone down to 0.2102, and the adjusted R2 has increased to 
 0.7839. An anova shows a clear difference between the simplified model and 
 the maximal fit model. My question is, should I really pick the maximal fit 
 over the simple model when it is really so much harder to understand? I guess 
 there’s really no easy answer to that, but if that’s so, then my question 
 is—would there be anything wrong with me saying that sometimes you might 
 value parsimony and ease of understanding over best fit? Because I don’t 
 really know what the maximal fit model buys you. It seems unintelligible to 
 me. All of the terms are involved in interactions to some extent, but there 
 are 4-way interactions and 3-way interactions and 2-way interactions and I’m 
 not sure even how to understand it. A nice tree model showed that at higher 
 levels of PA, KINDERWR and NS affected scores. That I can understand, but 
 that is not reflected in this model.

   An auxiliary question, probably easier to answer, is how could I do 
 hierarchical linear regression? The authors knew that PA would be the largest 
 contributor to the response variable because of previous research

Re: [R] multivariate splines

2006-11-08 Thread Spencer Graves
  You say, "I have looked at a few spline packages in R, but didn't 
find what I was looking for.  Could somebody please point me in the 
right direction?"  It's difficult to comment without more information on 
what you've tried and why you thought it was not adequate.  Please 
provide commented, minimal, self-contained, reproducible code, 
explaining very briefly what you've tried and why it did not seem 
satisfactory (as suggested in the posting guide 
www.R-project.org/posting-guide.html).  In case you are not familiar 
with 'RSiteSearch', I just tried RSiteSearch("multivariate splines"):  
it produced 45 hits, some of which might interest you. 

  Hope this helps. 
  Spencer Graves
p.s.  I received permission from Paul Dierckx to build an R package 
using the software associated with his book, "Curve and Surface 
Fitting with Splines" (Oxford University Press, 1993).  However, I 
haven't started on that project, partly because I'm not too familiar 
with splines and partly for lack of time.  Dierckx's software seems to 
allow the fitting of minimal splines, and I have not found a package in R 
that supports that. 
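As a hedged illustration of the tensor-product idea in base R (not the Dierckx software, and not a package recommendation): 1-D cubic splines can be applied dimension by dimension.

```r
# Minimal sketch of 2-D tensor-product cubic spline interpolation using
# base R's splinefun() along each dimension in turn; all names are
# illustrative, and this evaluates at a single point for simplicity.
tensor_spline2d <- function(V, gx, gy, x0, y0) {
  # interpolate each row of V (fixed x, varying y) at y0 ...
  col.vals <- apply(V, 1, function(row) splinefun(gy, row)(y0))
  # ... then interpolate the resulting curve in x at x0
  splinefun(gx, col.vals)(x0)
}
gx <- seq(0, 1, length.out = 9)
gy <- seq(0, 2, length.out = 11)
V  <- outer(gx, gy, function(x, y) sin(pi * x) * cos(y))
tensor_spline2d(V, gx, gy, 0.3, 1.1)   # approx. sin(0.3*pi)*cos(1.1)
```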

Tamas K Papp wrote:
 Hi,

 I am looking for an R package that would calculate multivariate (mostly
 2d and 3d, tensor) cubic interpolating splines, so that I could
 evaluate these splines (and their derivatives) at many points (unknown
 at the time of calculating the spline polynomials) repeatedly.

 To make things concrete, I have an array V with

 dim(V) = k

 and gridpoint vectors grid=list(...), length(grid[[i]])==k[i], and
 would like to get some representation of the spline convenient for
 subsequent calculations.

 I have looked at a few spline packages in R, but didn't find what I
 was looking for.  Could somebody please point me in the right
 direction?

 Thanks,

 Tamas


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Which genetic optimization package allows customized crossover and mutation operation

2006-11-08 Thread Spencer Graves
  Are you familiar with www.bioconductor.org?  They have a 
listserve with people who may know more about the question you asked. 

  Spencer Graves

sun wrote:
 Hi,

   I am looking for a genetic optimization package that allows me to define my own 
 chromosome solution and crossover/mutation operations. I checked genoud and 
 genopt, not with much effort of course, with no luck.

 Anyone can shed some light on this? Thanks.


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Measuring the effects of history on event probabilities

2006-11-04 Thread Spencer Graves
  I haven't seen other replies, so I'll offer a feeble comment:  
It's not clear to me which events you want to estimate probabilities 
for.  If you would still like help from this listserve, I 
suggest you send us a toy example with typical (possibly made up) data 
consisting of 5-10 observations on 2-4 individuals.  Then tell us what 
you'd like to do in terms of this toy example, possibly including things 
you've tried and why they didn't seem to give you what you wanted. 
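For instance, a toy data set of the kind suggested here might look like this (all values made up):

```r
# Toy example of the suggested form: made-up interview outcomes for 3
# individuals over 5 years, plus one simple look at temporal correlation
# (regress each outcome on the previous year's outcome within individual).
toy <- expand.grid(id = 1:3, year = 2001:2005)
set.seed(1)
toy$interviewed <- rbinom(nrow(toy), 1, 0.7)
toy <- toy[order(toy$id, toy$year), ]
toy$prev <- ave(toy$interviewed, toy$id,               # lag within id
                FUN = function(x) c(NA, head(x, -1)))
summary(glm(interviewed ~ prev, family = binomial, data = toy))
```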

  Hope this helps. 
  Spencer Graves

Jon Minton wrote:
 This is probably very simple but my brain has frozen over. (I'm trying to
 warm it with coffee)

  

 I have observations of around 22000 individuals over 13 successive years:
 they were either 'interviewed' at time t or 'not interviewed'.

 What's the most appropriate function/approach to use to find out the extent
 to which individuals' event outcomes are temporally correlated?

  

 Thanks,

  

  

 Jon Minton

  

  


   [[alternative HTML version deleted]]


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] help went away from top emacs window :-(

2006-11-03 Thread Spencer Graves
  This change has created problems for me using XEmacs 21.4.19 with 
ESS 5.3.1

  ?help

Error in 
print.help_files_with_topic("c:/progra~1/R/R-2.4.0/library/utils/chm/help") 
:
CHM file could not be displayed

  sessionInfo()
R version 2.4.0 (2006-10-03)
i386-pc-mingw32

locale:
LC_COLLATE=English_United States.1252;LC_CTYPE=English_United 
States.1252;LC_MONETARY=English_United 
States.1252;LC_NUMERIC=C;LC_TIME=English_United States.1252

attached base packages:
[1] methods   stats graphics  grDevices utils datasets
[7] base

  Restoring the previous default with options(chmhelp=FALSE) 
eliminates this problem.  From the time I first noticed this problem 
until just prior to reading this thread, I used RGui to get help.  (It 
wasn't hard to think of this, because 'install.packages' under ESS 
waits for me to select a mirror -- but without presenting the list to 
select from!  Result:  I used RGui instead of Rterm under XEmacs to 
run install.packages.) 

  FYI. 
  Thanks for all your great efforts on R.  It has definitely made me 
much more productive, and I believe it is making substantive 
contributions to the improvement of people's quality of life all over 
the world. 

  Best Wishes,
  Spencer Graves

Richard M. Heiberger wrote:
 I agree with both Duncan and Martin.  With Duncan, that this is not
 a Windows FAQ.  With Martin, that it is an FAQ.  I recommend that it
 go on the R FAQ page, prior to the current 7.24.  Here is a draft of the
 question and an outline of the answer.

 7.24 Why does the help information, for example when I type ?lm,
   show up in (a browser window, an emacs buffer in its own frame, an
   emacs buffer in the same frame as *R*, in a Windows-style help window,
   in a new frame inside the Rgui window, in a new frame on the computer
   screen)?  I really want it in the (choose from the same list).

 The answer includes at least the following:
 On Windows, set options(chmhelp=TRUE/FALSE).
 With Java, use help.start()
 In emacs, set the emacs variable ess-help-frame
 





 On the correct default for ESS, I am not sure for myself.
 I can see a case both for help in emacs buffers and in the
 chmhelp window.  This week I am tending to the chmhelp window,
 even when on emacs.

 Rich


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] help went away from top emacs window :-(

2006-11-03 Thread Spencer Graves
I concur:  Thanks very much for looking into this.  Spencer Graves

Duncan Murdoch wrote:
 Thanks for looking into this.

 On 11/3/2006 9:53 AM, Richard M. Heiberger wrote:
 I located what might be the problem.  There is a different misbehavior
 with Gnu emacs.  I don't have Xemacs.  I have an idea how to fix it but
 haven't worked out the details.


 ess-inf.el in the lines
 (defconst inferior-R-1-input-help (format "^ *help *(%s)" 
 ess-help-arg-regexp))
 (defconst inferior-R-2-input-help (format "^ *\\? *%s" 
 ess-help-arg-regexp))
 (defconst inferior-R-page  (format "^ *page *(%s)" 
 ess-help-arg-regexp))

 detects help requests from the commandline in the *R* buffer and 
 redirects
 their output to an emacs *help[R] (whatever)* buffer.

 In gnu emacs on windows, with options(chmhelp=TRUE), the effect is an 
 empty
 *help[R] (whatever)* buffer.  What I think happened is that ESS 
 trapped the
 request and created the buffer, then R never got the right content to 
 send
 to that buffer because R thinks that chm is handling it.

 The fix probably requires ESS to check the value of the chmhelp option
 and suppress the interception of help requests if the option has the 
 value
 TRUE.  ESS also needs to monitor if that option has been changed.  
 This might
 mean that ESS always intercepts help requests and checks the value of 
 the option
 each time.

 You should also look at options(htmlhelp) - a user might have chosen 
 that style.

 Duncan

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] nlme Error: Subscript out of bounds

2006-11-03 Thread Spencer Graves
  I can't begin to guess.  If your example were self contained (and 
preferably simple), it would be easier to diagnose. 

  When I have problems like this, I often try to find a simple 
example that produced the same error message.  That process often leads 
me to a solution.  If it doesn't it often produces a sufficiently simple 
example that I can describe it easily in a few lines of a question to 
r-help.  I might also use the 'debug' function (in the 'base' package.  
I have not used the 'debug' package, which may be more powerful but 
possibly harder to use.).  To access a chain of earlier recommendations 
on debug, I tried 'RSiteSearch("graves debug")', then 
'RSiteSearch("graves debug lme")'.  The fifth hit from the latter is 
http://finzi.psych.upenn.edu/R/Rhelp02a/archive/77298.html. 
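A toy illustration of that workflow (hypothetical function, not nlme itself):

```r
# Toy reproduction of a "subscript out of bounds" error, the kind of
# simple example described above (hypothetical function, not nlme).
f <- function(i) letters[[i]]                 # fails when i > 26
msg <- tryCatch(f(30), error = conditionMessage)
msg                                           # mentions "subscript out of bounds"
# Interactively, one would instead run traceback() right after the
# error, or debug(f) to single-step f and see where it fails.
```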

  Hope this helps. 
  Spencer Graves

Lisa Avery wrote:
 Hello, I am new to non-linear growth modelling in R and I am trying to
 reproduce an analysis that was done (successfully) in S-Plus.  

  

 I have a simple non-linear growth model, with no nesting. I have attempted
 to simplify the call as much as possible (by creating another grouped
 object, instead of using subset= and compacting the fixed and random
 expressions.)

  

 This is a what the grouped data object looks like:

  

 levelI.data[1:2,]

 Grouped Data: GMAE ~ AGE | STUDYID

STUDYID TIMESCORE   INCURVES  MOST FIRST  AGE

 1910051 ACTUAL (unaided) in JAMA curves Level I   Level I   49.11301

 2010052 ACTUAL (unaided) in JAMA curves Level I   Level I   56.53745

GMFM GMFCS  GMAE  YRS

 19 91.03394 1 74.16 4.092751

 20 95.35018 1 84.05 4.711454

  

 Here is the nlme model:

  

 cp.grad <- deriv(~ (100/(1+exp(-L)))*(1-exp(-exp(logR)*x)), c("L", "logR"),
 function(x = 0:100, L, logR) NULL)

 levelI.nlme <- nlme(GMAE ~ cp.grad(AGE, limit, lograte), 

 data = levelI.data, 

 fixed = limit + lograte ~ 1, 

 random = limit + lograte ~ 1, 

 start = c(2.0, -3.0))

  

 I get a "subscript out of bounds" error - which I am not finding very helpful
 because I don't know where things are going wrong.

  

 Bill Shipley posted a similar problem with nlme called with a self-starting
 function - but here I don't use a self-starting function and I give the
 starting values explicitly so I assume that this is not the same problem he
 is having.

  

 What am I doing wrong?  Any insights would be very much appreciated.

  

 Kind Regards, Lisa Avery


   [[alternative HTML version deleted]]


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Variance Component/ICC Confidence Intervals via Bootstrap or Jackknife

2006-10-29 Thread Spencer Graves
  I can think of two ways to get confidence intervals on intraclass 
correlations (ICCs) and more accurate intervals for variance 
components:  (1) modifying 'simulate.lme' to store the estimated 
variance components as well as logLik and (2) using 'lmer' and 
'mcmcsamp' in library(lme4). 

  The difficulty with (1) is that you have to make a local copy of 
'simulate.lme', then figure out where and how to modify it.  I've just 
looked at the code, and it looks like the required modifications should 
be fairly straightforward.  The problem with (2) is that you would have 
to learn the syntax for a nested model for 'lmer'.  It's different from 
that for 'lme' but not difficult.  The 'lmer' function is newer and 
better in many ways but is not as well documented and does not have as 
many helper functions yet.  The best documentation available for it may 
be the 'MlmSoftRev' vignette in the 'mlmRev' package plus the R News 
5/1 article from May 2005.  If you are not familiar with vignettes, 
RSiteSearch("graves vignette") produced 74 hits for me just now.  Finding 
'vignette' in the first hit led me to an earlier description of vignettes. 

  If it were my problem, I'd probably try the second, though I might 
try both and compare.  If you try them both, I'd be interested in the 
comparison.  If the answers were substantially different, I'd worry. 

  Hope this helps. 
  Spencer Graves
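As a hedged sketch (simulated data; the VarCorr row layout depends on the model, so check it directly), the point estimates of the ICCs in question can be computed from an 'lme' fit like this:

```r
# Sketch: extract v_a, v_b, v_e from a two-level nested lme fit and form
# the two intraclass correlations under discussion.  Data are simulated
# and all names are illustrative; inspect VarCorr(fit) yourself, since
# its row layout depends on the fitted model.
library(nlme)
set.seed(4)
d <- data.frame(a = gl(6, 8), b = gl(12, 4))
d$y <- rnorm(6)[d$a] + 0.5 * rnorm(12)[d$b] + rnorm(48)
fit <- lme(y ~ 1, random = ~1 | a/b, data = d)
vc  <- suppressWarnings(as.numeric(VarCorr(fit)[, "Variance"]))
vc  <- vc[!is.na(vc)]                 # c(v_a, v_b, v_e) in this layout
icc1 <- vc[1] / sum(vc)               # v_a / (v_a + v_b + v_e)
icc2 <- (vc[1] + vc[2]) / sum(vc)     # (v_a + v_b) / (v_a + v_b + v_e)
```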

Rick Bilonick wrote:
 I'm using the lme function in nlme to estimate the variance components
 of a fully nested two-level model:

 Y_ijk = mu + a_i + b_j(i) + e_k(j(i))

 lme computes estimates of the variances for a, b, and e, call them v_a,
 v_b, and v_e, and I can use the intervals function to get confidence
 intervals. My understanding is that these intervals are probably not
 that robust plus I need intervals on the intraclass correlation
 coefficients:

 v_a/(v_a + v_b + v_e)

 and

 (v_a + v_b)/(v_a + v_b + v_e).

 I would like to use a bootstrap or a jackknife estimate of these
 confidence intervals. Is there any package available?

 I've tried to write the R code myself but I'm not sure if I understand
 exactly how to do it. The way I've tried to do a bootstrap estimate is
 to sample with replacement for i units, then sample with replacement the
 j(i) units, and then finally sample with replacement the k(j(i)) units.

 But the few references I've been able to track down (Arvesen, Biometrics,
 1970 is one), seem to say that I should just sample with replacement the
 i units. Plus they seem to indicate that a log transform is needed. The
 Arvesen reference used something like log(v_a/v_e) as an estimator
 for sigma_a^2/sigma_e^2 and getting an interval and then transforming to
 get to an interval for the ICC (although it's not clear to me how to get
 the other ICC in a two-level nested design).

 Any insights would be appreciated.

 Rick B.


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Mixed conditional logit model

2006-10-29 Thread Spencer Graves
  The 'lmer' function in library(lme4) and the 'glmmPQL' function in 
library(MASS) both will estimate mixed-effects logit / logistic 
regression models.  (If anyone thinks there is a difference between 
'logit' and 'logistic regression' models, I hope they will disabuse me 
of my ignorance.) 
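A minimal sketch of such a fit (simulated data; all names illustrative):

```r
# Hedged sketch of a mixed-effects logit via glmmPQL on simulated data.
# With lme4 the analogous call is lmer(y ~ x + (1 | g), family =
# binomial) -- glmer() in later lme4 versions.
library(MASS)     # glmmPQL
library(nlme)     # fixef; glmmPQL fits via lme internally
set.seed(2)
d <- data.frame(g = gl(10, 20), x = rnorm(200))
d$y <- rbinom(200, 1, plogis(0.5 * d$x + rnorm(10)[d$g]))
fit <- glmmPQL(y ~ x, random = ~1 | g, family = binomial, data = d)
fixef(fit)        # fixed-effect estimates for the intercept and x
```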

  Hope this helps. 
  Spencer Graves

Ryuichi Tamura wrote:
 Hi,

 Sven Müller sven.mueller at tu-dresden.de writes:

   
 i wonder whether it is possible to estimate a mixed (random parameters) 
 logit model in R. 
 

 I wrote a collection of R functions for estimating discrete choice models 
 by simulated maximum likelihood. It includes:
 - likelihood and gradient functions for estimating mixed mnl, mixed 
panel mnl with some specific random structures for coefs
   (normal, lognormal, triangular, uniform)
 - some IIA testing functions, including the 'artificial variable test' in
Mcfadden and Train (2001)
 - functions for generating Halton sequences for 'efficient' simulation.

 Please email me if you are interested.
 However, it is not so difficult to write programs for your own needs.
 URL for main reference is Kenneth Train's Home Page:
 http://elsa.berkeley.edu/~train/
 My R functions are based on the GAUSS programs listed on this site.


 Regards,

 Ryuichi Tamura


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Help with random effects and smoothing splines in GAMM

2006-10-29 Thread Spencer Graves
  Your random specification 'list(x=~1, s(z,bs="cr")=~1)' 
generates for me a syntax error.  To understand it, please see the 
documentation for 'list'.  The help file for 'list' says, "The arguments 
to 'list' or 'pairlist' are of the form 'value' or 'tag=value'", and I 
believe that 'tag' must be a legal R name.  In your 'random' 
specification, R wants to interpret 's(z, bs="cr")' as a 'tag'.  This 
generates a 'syntax error', because it is not a legal R name.  To see 
how to get around this, I suggest you work through all the examples in 
the 'gamm' help file.  If that is not adequate, I suggest you also 
review the 'gamm' article in the June 2001 issue of R News (vol. 1/2, 
available from www.r-project.org -> Documentation -> Newsletter). 
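For illustration only (hypothetical variable names): tags in 'random' must be legal R names, typically grouping factors, so a syntactically valid call looks like:

```r
# Sketch: a syntactically legal 'random' argument -- the list tag is a
# grouping factor ('id' here is hypothetical), not a smooth term.
# Whether this is the model the poster wants is a separate question.
library(mgcv)
# gamm(y ~ x + s(z, bs = "cr"), random = list(id = ~1),
#      family = binomial, data = raw)   # raw assumed to hold id, y, x, z
```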

  If you want more help from this listserve, please provide 
commented, minimal, self-contained, reproducible code, as suggested in 
the posting guide www.R-project.org/posting-guide.html.  Your example 
was not self contained. 

  Hope this helps. 
  Spencer Graves

Liang, Hua wrote:
 I am trying to fit a longitudinal dataset using generalized mixed effects models
 via the R function gamm() as follows:

  

 library(mgcv)

 gamm0.fit <- gamm(y ~ x + s(z, bs = "cr"),

 random = list(

 x = ~1,

 s(z, bs = "cr") = ~1

 ),

  family = binomial, data = raw)

 the data is given by raw=(id, y,x,z). It doesn't work. If you can tell
 me how to fix this problem, it will be appreciated.

  

  

  

  



   [[alternative HTML version deleted]]


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Get the names of the columns in a tserie

2006-10-29 Thread Spencer Graves
  Have you tried 'str(X)' on the 'X' object that 'read.ts' created 
for you?  If you've done that and still can't solve your problem, I 
suggest you review the "Introduction to R" manual, available as the 
first option on the left from the html file you get from 
'help.start()'.  If that still does not answer your question, I suggest 
you review the 'zoo' vignette.  For more information about how to access 
that, I suggest you try 
http://finzi.psych.upenn.edu/R/Rhelp02a/archive/67006.html (which was 
the first of 35 hits that I got just now in response to 
'RSiteSearch("graves zoo vignette")'). 

  If you'd like further help from this listserve, please provide 
commented, minimal, self-contained, reproducible code, as suggested in 
the posting guide www.R-project.org/posting-guide.html.  Your example 
was not self contained.  Also, R is case sensitive, so Tseries is 
different from tseries.  I believe you can increase your chances of 
getting more and more useful replies quicker by thinking about what 
someone would have to know to be able to answer your question.  In 
particular, a very short, self contained example sometimes invites 
people who may never have used the 'tseries' package before to 
experiment with your example.  By doing so, they get a very quick 
introduction to 'tseries', and with only a little luck, you get a quick 
answer to your question.  It's win-win, and I've learned a lot doing 
just that.  Without a simple, self-contained example, I know it will 
likely take me a lot longer to even understand what you are asking, and 
I will likely not be very confident that I even understand your question 
even if I attempt an answer. 

  Hope this helps. 
  Spencer Graves
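For what it may be worth, a hedged sketch (hypothetical file name): the object 'read.ts' returns is a multivariate "ts", so 'colnames' recovers the header names.

```r
# Sketch (hypothetical file): colnames() on the "mts" object returned by
# read.ts() gives the header names, usable directly as plot labels.
library(tseries)
# X <- read.ts("X.dat", start = c(1995, 1), frequency = 261,
#              header = TRUE, sep = ";")
# colnames(X)                           # names of the columns
# plot(X[, 1], ylab = colnames(X)[1])   # label a plot with one of them
```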

lvdtime wrote:
 Please, I need this information, it's important for my work

 Thank you.


 lvdtime wrote:
   
 Hello everybody,

 I'm a beginner in R, and I'm currently working on Tseries (analysis of a
 portfolio)

 I imported the data like this (library tseries) :
 X <- read.ts("X.dat", start=c(1995,1), frequency=261, header=T, sep=";")

 There is a header which contains the names of each column (codes of
 shares)

 I'd like to know if it is possible to get the names of the columns (to
 display it in the plots : ylab=name of the col)
 To summarize, I wonder if a code like label(X[,i]) exists...


 Thank you for your help


 LVDTime


 



__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Variance Component/ICC Confidence Intervals via Bootstrap or Jackknife

2006-10-29 Thread Spencer Graves
see in line 

Rick Bilonick wrote:
 On Sun, 2006-10-29 at 11:06 -0800, Spencer Graves wrote:
   
   I can think of two ways to get confidence intervals on intraclass 
 correlations (ICCs) and more accurate intervals for variance 
 
snip
   
 



 The way I've done the bootstrapping (sampling at each level) sounds the
 same as using a simulation. But articles and references I've found
 indicate that only the highest level (a, if c is nested in b and b is
 nested in a) should be sampled.

   
Correct:  Bootstrapping with mixed effects hurts my brain just to think 
about it, because if you are not careful, you effectively assume away 
the variance components structure, and that's rarely what you want to 
do.  The difference between simulate.lme and mcmcsamp is that the 
first produces Monte Carlo confidence intervals, while the latter 
produces Bayesian posterior intervals, so the interpretation is 
slightly different.  I do not recall having seen any direct comparison, 
so I'd be interested in the results, if you try both.  However, I'd be 
extremely surprised if the answers were very different.  If I did it 
myself and found substantive differences, I would suspect that I'd done 
something wrong with at least one of the two.  If I couldn't find any 
errors in my work, I'd start looking for help. 

  Hope this helps. 
  Spencer Graves
 Rick B.





__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] plot.POSIXct plot.POSIXlt

2006-10-28 Thread Spencer Graves
  Your example is not self contained, and I was unable to generate 
an example that produced the warning messages you've mentioned. 

  Have you tried 'options(warn=2)', then running your example?  This 
will turn the warning into an error.  Then you might be able to get 
something from 'traceback'.  I also encourage you to experiment with 
'debug'.  I've discussed how to do this in previous posts.  
'RSiteSearch("debug spencer")' produced 124 hits for me just now.  
Number 7 on that list looked like it might help you:  
http://finzi.psych.upenn.edu/R/Rhelp02a/archive/79251.html. 
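A toy illustration of the options(warn=2) pattern:

```r
# Sketch of the pattern described: promote warnings to errors so that
# traceback() (or tryCatch, as here) can localize where a warning arises.
options(warn = 2)                       # warnings now stop as errors
r <- tryCatch(as.numeric("abc"),        # toy call that merely warns
              error = function(e) "caught as an error")
options(warn = 0)                       # restore the default
r                                       # "caught as an error"
```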

  Hope this helps. 
  Spencer Graves

Patrick Giraudoux wrote:
 Hi,

 Just to signal that when I want to plot a POSIXct variable on x using 
 format within plot(), I get what I want on the plot but with a number of 
 warnings:

   plot(y ~ x, format = "%y-%m")
 Warning messages:
 1: "format" is not a graphical parameter in: plot.window(xlim, ylim, 
 log, asp, ...)
 2: "format" is not a graphical parameter in: plot.xy(xy, type, pch, lty, 
 col, bg, cex, lwd, ...)
 3: "format" is not a graphical parameter in: axis(side, at, labels, 
 tick, line, pos, outer, font, lty, lwd, 
 4: "format" is not a graphical parameter in: box(which = which, lty = 
 lty, ...)
 5: "format" is not a graphical parameter in: title(main, sub, xlab, 
 ylab, line, outer, ...)

 I suppose that format may not be at the right place in plot() and/or not 
 handled by the functions called from there, however the documentation 
 (?plot.POSIXct) seems to allow passing this argument:

  ...: Further arguments to be passed from or to other methods,
   typically graphical parameters or arguments of
   'plot.default'.  For the 'plot' methods, also 'format'.

 Any comment?

 Patrick
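One hedged workaround (my sketch, not from the thread): draw the time axis explicitly, so that 'format' is consumed by 'axis.POSIXct' and never reaches the low-level plotting functions that warn about it. The data here are simulated.

```r
# Simulated monthly data; suppress the default axis, then format it by hand.
x <- as.POSIXct("2006-01-01", tz = "GMT") + (0:11) * 30 * 86400
y <- rnorm(12)
plot(x, y, xaxt = "n")                    # no automatic time axis
axis.POSIXct(1, x = x, format = "%y-%m")  # 'format' applies only here
```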



__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Tukey-Kramer test

2006-10-26 Thread Spencer Graves
  RSiteSearch("Tukey-Kramer") produced 7 hits for me just now.  Have 
you looked at these?  If yes, please provide a commented, minimal, 
self-contained, reproducible example of something you've tried, 
explaining why it did not meet your needs, as suggested in the posting 
guide www.R-project.org/posting-guide.html. 
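For what it's worth, a hedged sketch: as I read '?TukeyHSD', it already applies the Tukey-Kramer adjustment when group sizes are unequal, so unbalanced data can go straight in. The data below are simulated, not Andrew's.

```r
# Simulated unbalanced one-way layout: groups of size 4, 6 and 8.
set.seed(1)
g <- factor(rep(c("a", "b", "c"), times = c(4, 6, 8)))
y <- rnorm(18, mean = as.integer(g))
fit <- aov(y ~ g)
thsd <- TukeyHSD(fit)
thsd$g   # one row per pairwise comparison, with adjusted intervals
```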

  Hope this helps. 
  Spencer Graves

A.R. Criswell wrote:
 Hello All,

 I found the TukeyHSD() function. Is there a Tukey-Kramer test for
 unbalanced data?

 Andrew



__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] linear mixed effects models with breakpoints

2006-10-23 Thread Spencer Graves
  Are you familiar with how to include the desired serial 
correlation structure in 'lme', ignoring the breakpoint question?  If 
no, please consult Pinheiro and Bates (2000) Mixed-Effects Models in S 
and S-Plus (Springer).  If yes, it should be a relatively easy matter to 
add that to the 'lmeChgPt' function. 
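A hedged sketch of that first step, ignoring breakpoints: a random intercept plus 'corCAR1' serial correlation, using the BodyWeight data shipped with nlme in place of Ewan's (unavailable) data.

```r
library(nlme)
# Random intercept per rat, continuous-time AR(1) errors within rat.
fm <- lme(weight ~ Time, data = BodyWeight,
          random = ~ 1 | Rat,
          correlation = corCAR1(form = ~ Time | Rat))
fm$modelStruct$corStruct   # estimated continuous-time AR(1) parameter
```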

  If you'd like further help from this listserve, please provide 
commented, minimal, self-contained, reproducible code, as suggested in 
the posting guide www.R-project.org/posting-guide.html. 

  Hope this helps. 
  Spencer Graves

Ewan Wakefield wrote:
 Hi folks

 I have some data to which I've been fitting linear mixed effects
 models. I am currently using a lme model in the nlme package, with terms
 for random effects due to repeated measures on individuals and the
 corCAR1 serial correlation structure. However, there is some suggestion
 in the data (and from theory) that a breakpoint (change point) model may
 be more appropriate. Scott, Norman and Berger's lmeChgPt model seems to
 go some way to fitting the requirements for such a model but does not
 incorporate, as I understand it, a serial correlation structure.

 Is anyone aware of a model fitting procedure that incorporates the
 correlation structures of the lme models in a breakpoint model?

 Kind regards,

 Ewan Wakefield



 British Antarctic Survey,
 High Cross,
 Madingley Road,
 Cambridge CB3 OET

 Tel. +44 (0) 1223 221215
 E-mail: [EMAIL PROTECTED]
 Website: antarctica.ac.uk

 --  
 This message (and any attachments) is for the recipient only...{{dropped}}



__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] MARS help?

2006-10-18 Thread Spencer Graves
Hi, Andy: 

  Thanks very much.  The 'strucchange' package (including its 
vignette) provide useful food for thought.  I haven't yet seen exactly 
what I want, but I also haven't explored more than the vignette, and I 
see other functions that may do what I want. 

  Thanks again. 
  Spencer Graves

Liaw, Andy wrote:
 Spencer,
  
 MARS fits splines, not disconnected lines.  Perhaps the strucchange 
 package has facility to fit your data better.
  
 Cheers,
 Andy

 
 *From:* [EMAIL PROTECTED] on behalf of Spencer Graves
 *Sent:* Tue 10/17/2006 11:43 PM
 *To:* R-help; Kurt Hornik
 *Subject:* [R] MARS help? [Broadcast]

   I'm trying to use mars{mda} to model functions that look fairly
 close to a sequence of straight line segments.  Unfortunately, 'mars'
 seems to totally miss the obvious places for the knots in the apparent
 first order spline model, and I wonder if someone can suggest a better
 way to do this.  The following example consists of a slight downward
 trend followed by a jump up after t1=4, followed by a more marked
 downward trend after t1=5:

 Dat0 <- cbind(t1=1:10,
x=c(1, 0, 0, 90, 99, 95, 90, 87, 80, 77))
 library(mda)
 fit0 <- mars(Dat0[, 1, drop=FALSE], Dat0[, 2],
  penalty=.001)
 plot(Dat0, type="l")
 lines(Dat0[, 1], fit0$fitted.values,
   lty=2, col="red")

   Are there 'mars' options I'm missing or other software I should be
 using?

   I've got thousands of traces crudely like this of different
 lengths, and I want an automated way of summarizing similar traces in
 terms of a fixed number of knots and associated slopes for each linear
 spline segment max(0, t1-t.knot).

   Thanks,
   Spencer Graves



 --
 Notice: This e-mail message, together with any attachments...{{dropped}}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] MARS help?

2006-10-17 Thread Spencer Graves
  I'm trying to use mars{mda} to model functions that look fairly 
close to a sequence of straight line segments.  Unfortunately, 'mars' 
seems to totally miss the obvious places for the knots in the apparent 
first order spline model, and I wonder if someone can suggest a better 
way to do this.  The following example consists of a slight downward 
trend followed by a jump up after t1=4, followed by a more marked 
downward trend after t1=5: 

Dat0 <- cbind(t1=1:10,
   x=c(1, 0, 0, 90, 99, 95, 90, 87, 80, 77))
library(mda)
fit0 <- mars(Dat0[, 1, drop=FALSE], Dat0[, 2],
 penalty=.001)
plot(Dat0, type="l")
lines(Dat0[, 1], fit0$fitted.values,
  lty=2, col="red")

  Are there 'mars' options I'm missing or other software I should be 
using? 

  I've got thousands of traces crudely like this of different 
lengths, and I want an automated way of summarizing similar traces in 
terms of a fixed number of knots and associated slopes for each linear 
spline segment max(0, t1-t.knot). 
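A hedged base-R sketch (my own, not 'mars' output) of the automated knot search described above: brute-force a pair of knots for a first-order linear spline on the example data, picking the pair with the smallest residual sum of squares.

```r
t1 <- 1:10
x  <- c(1, 0, 0, 90, 99, 95, 90, 87, 80, 77)
h  <- function(t, k) pmax(0, t - k)            # linear spline basis max(0, t-k)
rss <- function(k)                             # residual SS for knot pair k
  sum(resid(lm(x ~ t1 + h(t1, k[1]) + h(t1, k[2])))^2)
knots <- expand.grid(k1 = 2:8, k2 = 3:9)       # candidate knot pairs
knots <- knots[knots$k1 < knots$k2, ]
best <- knots[which.min(apply(knots, 1, rss)), ]
best                                           # best-fitting knot pair
```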

  Thanks,
  Spencer Graves

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Interval pls

2006-10-16 Thread Spencer Graves
  RSiteSearch("interval pls") produced 7 hits for me just now.  I 
didn't study all of them carefully, but I doubt that any of them has 
exactly what you want.  One could also search for pls with ordered 
factors, but I didn't try that. 

  However, I doubt if it would be excessively difficult to write a 
wrapper for appropriate functions in the 'pls' package to do what you 
want.  With the 'interval' variables as a potentially explanatory 
variable rather than a response variable, one could start by replacing 
ordered factors by contrasts, then adjusting the coding for intermediate 
levels iteratively until either (a) intermediate levels were collapsed 
because the model would otherwise reverse them or (b) the coefficients 
of nonlinear terms were all zero.  Something similar could be done if an 
'interval' variable was a response variable, with the goal of maximizing 
R2. 

  Something similar could be done using the 'sem' package, but the 
'sem' function seems to require a moment or covariance matrix as an 
input, which might make the wrapper somewhat more complex, I think.  
With 'sem' one could adjust the levels for an ordered response variable 
to minimize BIC, rather than maximizing R2.  This latter seems to me to 
be somewhat more satisfying and solid theoretically, but others may not 
agree. 

  Of course, whatever I did in this regard, I'd want to also 
accompany the effort with a more careful literature search. 

  Hope this helps. 
  Spencer Graves

Dirk De Becker wrote:
 Hey R-helpers,

 Does anybody know of an implementation of interval PLS in R?

 Thx,

 Dirk



__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] extracting nested variances from lme4 model

2006-10-16 Thread Spencer Graves
Dear Doug & Frank: 

  Thanks for the reply, Doug.  First a comment, then another 
question for you, Doug, if you have time: 

  1.  COMMENT:  Here's an ugly hack that would seem to answer 
Frank's question using 'lmer': 

(vt <- with(dtf, tapply(x, trth, sd)))
(vr <- vt[2]/vt[1])

mod1b <- lmer(x~trth+(1|rtr)+(I(vt[1]*(1+vr*trth))-1|cs),
  data=dtf)

  Now put this is a loop over a range of values for 'vr' and pick 
the one that maximizes 'logLik' (or minimizes AIC or BIC);  it should be 
fairly close if not identical to this initial estimate. 

  2.  ANOTHER QUESTION:  Can the desired model be fit using 'lme', 
with crossed random effects, and we want separate variance estimates for 
the random effect 'cs' for each level of the fixed effect 'trth'?   I 
previously suggested chapters 4 and 5 in Pinheiro and Bates (2000).  
However, even with the scripts ch04.R and ch05.R in 
~library\nlme\scripts, it might take me a while to figure out how to 
do it (and even answer the question of whether it can). 

  Thanks again. 
  Spencer Graves

Douglas Bates wrote:
 On 10/15/06, Douglas Bates [EMAIL PROTECTED] wrote:
 On 10/14/06, Spencer Graves [EMAIL PROTECTED] wrote:
You want to estimate a different 'cs' variance for each level of
  'trth', as indicated by the following summary from your 'fake data 
 set':
 
tapply(dtf$x, dtf$trth, sd)
 FALSE TRUE
  1.532251 8.378206
 
 Since var(x[trth]) > var(x[!trth]), I thought that the 
  following
   should produce this:
  
 mod1. <- lmer( x ~  (1|rtr)+ (trth|cs) , data=dtf)
 
Unfortunately, this generates an error for me:
 
  Error in lmer(x ~ (1 | rtr) + (trth | cs), data = dtf) :
  the leading minor of order 1 is not positive definite
 
I do not understand this error, and I don't have other ideas 
 about
  how to get around it using 'lmer'.

 Hmm.  It's an interesting problem.  I can tell you where the error
 message originates but I can't yet tell you why.

 The error message originates when Cholesky decompositions of the
 variance-covariance matrices for the random effects are generated at
 the initial estimates.  It looks as if one of the model matrices is
 not being formed correctly for some reason.  I will need to do more
 debugging.

 OK, I have worked out why the error occurs and will upload
 lme4_0.9975-4, which fixes this problem and also has some speedups
 when fitting generalized linear mixed models.  Just for the record, I
 had assumed that the cholmod_aat function in the CHOLMOD sparse matrix
 library (the function cholmod_aat calculates AA' from a sparse matrix
 A) returned a matrix in which the columns are sorted in increasing
 order of the rows.  That is not always correct but the situation is
 easily remedied.  (The typical way to sort the columns in sparse
 matrices is to take the transpose of the transpose, which had me
 confused when I first saw it in some of Tim Davis's code.  It turns
 out that a side effect of this operation is to sort the columns in
 increasing order of the rows.)

 With this patched lme4 package the model Spencer proposed can be fit
 but the variance-covariance matrix of the two-dimensional random
 effects for the cs factor is singular because of the design (the level
 of trth is constant within each level of cs).  I'm not sure how to
 specify the desired model in lmer.

 (fm2 - lmer(x ~(1|rtr) + (trth|cs), dtf))
 Linear mixed-effects model fit by REML
 Formula: x ~ (1 | rtr) + (trth | cs)
   Data: dtf
     AIC   BIC logLik MLdeviance REMLdeviance
   252.5 260.9 -121.2      244.8        242.5
  Random effects:
  Groups   Name        Variance Std.Dev. Corr
  cs       (Intercept)  1.9127  1.3830
           trthTRUE    24.1627  4.9156   1.000
  rtr      (Intercept)  1.9128  1.3830
  Residual             18.5278  4.3044
  number of obs: 40, groups: cs, 10; rtr, 4

 Fixed effects:
Estimate Std. Error t value
 (Intercept)  -0.2128 1.2723 -0.1673
 Warning message:
 nlminb returned message false convergence (8)
 in: `LMEoptimize-`(`*tmp*`, value = list(maxIter = 200, tolerance =
 1.49011611938477e-08,




  It seems to me that 'lme' in
  library(nlme) should also be able to handle this problem, but 'lme' is
  designed for nested factors, and your 'rtr' and 'cs' are crossed.  
 If it
  were my problem, I think I'd study ch. 4 and possibly ch. 5 of 
 Pinheiro
  and Bates (2000) Mixed-Effects Models in S and S-Plus (Springer) on
  'pdMat' and 'corrStruct' classes, because I believe those may support
  models with crossed random effects like in your example AND might also
  support estimating different variance components for different 
 levels of
  a fixed effect like 'trth' in your example.
 
If we are lucky, someone else might educate us both.
 
I'm sorry I couldn't answer your question, but I hope these
  comments might help in some way.
 
Spencer Graves
 
  Frank Samuelson wrote:
   I have a model:
   mod1-lmer( x ~  (1|rtr

Re: [R] split-plot analysis with lme()

2006-10-15 Thread Spencer Graves
  The problem in your example is that 'lme' doesn't know how to 
handle the Variety*nitro interaction when not all 12 combinations are 
present.  The error message "Singularity in backsolve" means that with 
data for only 11 combinations, which is what you have in your example, 
you can only estimate 11 linearly independent fixed-effect coefficients, 
not the 12 required by this model:  1 for intercept + (3-1) for Variety 
+ (4-1) for nitro + (3-1)*(4-1) for Variety*nitro = 12. 

  Since 'nitro' is a fixed effect only, you can get what you want by 
keeping it as a numeric factor and manually specifying the (at most 5, 
not 6) interaction contrasts  you want, something like the following: 

fit2. <- lme(yield ~ Variety+nitro+I(nitro^2)+I(nitro^3)
 +Variety:(nitro+I(nitro^2)), data=Oats,
 random=~1|Block/Variety,
subset=!(Variety == "Golden Rain" & nitro == 0))

  NOTE:  This gives us 4 degrees of freedom for the interaction.  
With all the data, we can estimate 6.  Therefore, there should be some 
way to get 5, but so far I haven't figured out an easy way to do that.  
Perhaps someone else will enlighten us both. 

  Even without a method for estimating an interaction term with 5 
degrees of freedom, I hope I've at least answered your basic question. 
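One hedged possibility (my suggestion, not from the thread) for using all the available interaction degrees of freedom: fit the 11 observed Variety-by-nitro cells as a single cell-means factor, so the main effects and interaction jointly absorb all 10 non-intercept degrees of freedom.

```r
library(nlme)
data(Oats)
# Drop the same cell as in the thread, leaving 11 observed combinations.
Oats2 <- subset(Oats, !(Variety == "Golden Rain" & nitro == 0))
Oats2$VN <- with(Oats2, interaction(Variety, nitro, drop = TRUE))
fit <- lme(yield ~ VN, data = Oats2, random = ~ 1 | Block/Variety)
length(fixef(fit))   # 11 coefficients: intercept + 10 cell contrasts
```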

  Best Wishes,
  Spencer Graves  

i.m.s.white wrote:
 Dear R-help,

 Why can't lme cope with an incomplete whole plot when analysing a split-plot
 experiment? For example:

 R : Copyright 2006, The R Foundation for Statistical Computing
 Version 2.3.1 (2006-06-01)

   
 library(nlme)
 attach(Oats)
 nitro <- ordered(nitro)
 fit <- lme(yield ~ Variety*nitro, random=~1|Block/Variety)
 anova(fit)
 
                numDF denDF   F-value p-value
 (Intercept)       1    45 245.14333  <.0001
 Variety           2    10   1.48534  0.2724
 nitro             3    45  37.68560  <.0001
 Variety:nitro     6    45   0.30282  0.9322

 # Excellent! However ---

   
 fit2 <- lme(yield ~ Variety*nitro, random=~1|Block/Variety, subset=
 + !(Variety == "Golden Rain" & nitro == 0))
 Error in MEEM(object, conLin, control$niterEM) : 
   Singularity in backsolve at level 0, block 1


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Gradient problem in nlm

2006-10-14 Thread Spencer Graves
  Because of the lack of a self-contained, reproducible example, I 
can only guess.  If it were my problem, I might try the following: 

  1.  Try 'nlm' with 'print.level=2'.  This should provide more 
detail about the circumstances under which it failed.  If that didn't 
provide adequate detail, I might modify my objective function to print 
more detailed trace information each time it's called. 

  2.  Compare the gradient with that computed numerically, e.g., 
using 'grad' in library(numDeriv), especially for combinations of values 
apparently tested by 'nlm'. 
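A hedged sketch of that gradient check on a toy objective (not Singyee's model), using a central finite difference in base R; 'numDeriv::grad' does the same job more carefully:

```r
f  <- function(th) sum(exp(th[1] + th[2] * (1:5)))            # toy objective
gf <- function(th) c(sum(exp(th[1] + th[2] * (1:5))),         # analytic d/dth[1]
                     sum((1:5) * exp(th[1] + th[2] * (1:5)))) # analytic d/dth[2]
num.grad <- function(f, th, eps = 1e-6)          # central finite difference
  sapply(seq_along(th), function(i) {
    e <- replace(numeric(length(th)), i, eps)
    (f(th + e) - f(th - e)) / (2 * eps)
  })
th0 <- c(0.1, -0.2)
max(abs(gf(th0) - num.grad(f, th0)))             # should be near zero
```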

  3.  Work through all the examples on the 'nlm' help page and 
demo(nlm), making sure I understood those in full detail.  Doing so 
might identify something I was doing wrong, etc. 

  4.  If I can't solve the problem after all of the above, I might 
try to develop the simplest, self-contained example I can find that 
still exhibits the problem.  This often leads me to the problem.  If it 
doesn't, I then have a simple, self-contained example that I can then 
post to this list;  including such an example on average tends to 
increase the speed and quality of responses (and sometimes even the 
quantity). 

  Hope this helps. 
  Spencer Graves

singyee ling wrote:
 Hello everyone!


 I am having some trouble supplying the gradient function to nlm in R for
 windows version 2.2.1.

 What follows are the R-code I use:

 fredcs39 <- function(a1,b1,b2,x){return(a1+exp(b1+b2*x))}
 loglikcs39 <- function(theta,len){
 value <- sum(mcs39[1:len]*fredcs39(theta[1],theta[2],theta[3],c(8:(7+len))) -
 pcs39[1:len] * log(fredcs39(theta[1],theta[2],theta[3],c(8:(7+len)))))
 a1 <- theta[1]; b1 <- theta[2]; b2 <- theta[3]
 df.a1 <- sum(-mcs39[1:len] + pcs39[1:len]/(a1+exp(b1+b2*c(8:(7+len)))))
 df.b1 <- sum( -mcs39[1:len] * exp(b1+b2*c(8:(7+len))) + (pcs39[1:len] *
 exp(b1+b2*c(8:(7+len))) ) /(a1+exp(b1+b2*c(8:(7+len)))))
 df.b2 <- sum(-mcs39[1:len] * exp(b1+b2*c(8:(7+len))) * c(8:(7+len))  +
 (pcs39[1:len] * exp(b1+b2*c(8:(7+len)))  * c(8:(7+len))  )
 /(a1+exp(b1+b2*c(8:(7+len)))))


 attr(value,"gradient") <- c(df.a1,df.b1,df.b2)
 return(value)
 }

 theta.start <- c(0.01 ,-1.20, -0.0005)
 outcs39 <- nlm(loglikcs39,theta.start,len=50)


 Error in nlm(loglikcs39, theta.start, len = 50) :
 probable coding error in analytic gradient


 Any light that can be shed on this would be highly appreciated.
 Many thanks

 Singyee Ling

   [[alternative HTML version deleted]]



__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] nlme_varcov matrix

2006-10-14 Thread Spencer Graves
  Do you have Pinheiro and Bates (2000) Mixed-Effects Models in S 
and S-Plus (Springer)?  That book should contain worked examples to 
answer your question.  If the library at the Patagonia National Center 
does not have a copy, I recommend you do what you can to encourage them 
to obtain a copy.  There are many, many applications in the study of 
biology and the management of aquatic resources for the types of 
analyses described in that book -- and if people in your institute are 
not routinely using those types of analyses, they may be making 
decisions using misleading analyses because of the failure to explicitly 
model important components of variance -- and you can quote me to your 
librarian if that would help. 

  Beyond that, have you reviewed 'help(package=nlme)', 
methods(class='lme'), and methods(class='nlme') -- plus 'str' of the 
output of an 'nlme' run. 
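A hedged addition: the standard extractors, shown here on the worked Loblolly example from '?nlme' rather than Gabriela's (unavailable) model; 'vcov' gives the fixed-effects variance-covariance matrix and 'getVarCov' the random-effects one.

```r
library(nlme)
# The asymptotic-growth example from the nlme help page.
fm <- nlme(height ~ SSasymp(age, Asym, R0, lrc), data = Loblolly,
           fixed = Asym + R0 + lrc ~ 1, random = Asym ~ 1,
           start = c(Asym = 103, R0 = -8.5, lrc = -3.3))
vcov(fm)        # 3 x 3 var-cov matrix of the fixed effects
getVarCov(fm)   # var-cov matrix of the random effect for Asym
```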

  If you'd like further help from this listserve, please provide 
commented, minimal, self-contained, reproducible code as suggested in 
the posting guide www.R-project.org/posting-guide.html. 

  Para servirle. 
  Spencer Graves

gabriela escati peñaloza wrote:
 Hi all!
 Is there a function that provides the VarCov matrix
 for nlme objects? How can I extract the matrix from a
 fitted nlme model?

 I would appreciate any guidance

 Regards

 Lic. Gabriela Escati Peñaloza
 Biología y Manejo de Recursos Acuáticos
 Centro Nacional Patagónico(CENPAT). 
 CONICET
 Bvd. Brown s/nº.
 (U9120ACV)Pto. Madryn 
 Chubut
 Argentina

 Tel: 54-2965/451301/451024/451375/45401 (Int:277)
 Fax: 54-29657451543


   
   
   


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Optim: Function definition

2006-10-14 Thread Spencer Graves
  Have you tried making contour plots of your objective function, 
e.g., using expand.grid and 'contour' or 'contourplot', as described in 
Venables and Ripley (2002) Modern Applied Statistics with S (Springer)? 
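A hedged sketch of that suggestion, gridding x[1] and x[2] with x[3] held fixed and plotting the objective surface ('fr' and 'obj' as in the post below):

```r
fr  <- c(0, 0.1, 0.2, 0.3)
obj <- function(x) {
  s <- sqrt(x[1]*(1-x[1])*x[2]*(1-x[2]))
  e <- c(x[1]*x[2] + x[3]*s,       (1-x[1])*x[2]     - x[3]*s,
         x[1]*(1-x[2]) - x[3]*s,   (1-x[1])*(1-x[2]) + x[3]*s)
  sum(ifelse(fr == 0, 0, (fr - e)^2 / fr))   # skip the zero-frequency cell
}
p <- seq(0.05, 0.95, by = 0.05)              # grid for x[1] and x[2]
z <- outer(p, p, Vectorize(function(a, b) obj(c(a, b, 0.1))))  # x[3] fixed
contour(p, p, z, xlab = "x[1]", ylab = "x[2]")
```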

  Hope this helps. 
  Spencer Graves

Serguei Kaniovski wrote:
 Hi all,

 I apply optim to the function obj, which minimizes the goodness of 
 fit statistic and obtains Pearson minimum chi-squared estimate for x[1], 
 x[2] and x[3]. The vector fr contains the four observed frequencies.

 Since fr[i] appears in the denominator, I would like to substitute 0 
 in the sum if fr[i]=0.

 I tried an ifelse condition which works, but in some cases the 
 solution seems rather odd. Is there anything wrong with the way second 
 objective function is coded?

 obj <- function(x){
 (fr[1]-x[1]*x[2]-x[3]*sqrt(x[1]*(1-x[1])*x[2]*(1-x[2])))^2/fr[1]+
 (fr[2]-(1-x[1])*x[2]+x[3]*sqrt(x[1]*(1-x[1])*x[2]*(1-x[2])))^2/fr[2]+
 (fr[3]-x[1]*(1-x[2])+x[3]*sqrt(x[1]*(1-x[1])*x[2]*(1-x[2])))^2/fr[3]+
 (fr[4]-(1-x[1])*(1-x[2])-x[3]*sqrt(x[1]*(1-x[1])*x[2]*(1-x[2])))^2/fr[4]
 }

 obj <- function(x){
 ifelse(fr[1]==0,0,(fr[1]-x[1]*x[2]-x[3]*sqrt(x[1]*(1-x[1])*x[2]*(1-x[2])))^2/fr[1])+
 ifelse(fr[2]==0,0,(fr[2]-(1-x[1])*x[2]+x[3]*sqrt(x[1]*(1-x[1])*x[2]*(1-x[2])))^2/fr[2])+
 ifelse(fr[3]==0,0,(fr[3]-x[1]*(1-x[2])+x[3]*sqrt(x[1]*(1-x[1])*x[2]*(1-x[2])))^2/fr[3])+
 ifelse(fr[4]==0,0,(fr[4]-(1-x[1])*(1-x[2])-x[3]*sqrt(x[1]*(1-x[1])*x[2]*(1-x[2])))^2/fr[4])
 }

 sval <- rep(0.1,3)
 fr <- c(0,0.1,0.2,0.3)

 optim(sval, obj, method="BFGS")$par

 Thank you,
 Serguei


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] extracting nested variances from lme4 model

2006-10-14 Thread Spencer Graves
  You want to estimate a different 'cs' variance for each level of 
'trth', as indicated by the following summary from your 'fake data set': 

  tapply(dtf$x, dtf$trth, sd)
   FALSE TRUE
1.532251 8.378206

  Since var(x[trth]) > var(x[!trth]), I thought that the following 
should produce this: 

  mod1. <- lmer( x ~  (1|rtr)+ (trth|cs) , data=dtf) 

  Unfortunately, this generates an error for me: 

Error in lmer(x ~ (1 | rtr) + (trth | cs), data = dtf) :
the leading minor of order 1 is not positive definite

  I do not understand this error, and I don't have other ideas about 
how to get around it using 'lmer'.  It seems to me that 'lme' in 
library(nlme) should also be able to handle this problem, but 'lme' is 
designed for nested factors, and your 'rtr' and 'cs' are crossed.  If it 
were my problem, I think I'd study ch. 4 and possibly ch. 5 of Pinheiro 
and Bates (2000) Mixed-Effects Models in S and S-Plus (Springer) on 
'pdMat' and 'corrStruct' classes, because I believe those may support 
models with crossed random effects like in your example AND might also 
support estimating different variance components for different levels of 
a fixed effect like 'trth' in your example. 

  If we are lucky, someone else might educate us both. 

  I'm sorry I couldn't answer your question, but I hope these 
comments might help in some way. 

  Spencer Graves

Frank Samuelson wrote:
 I have a model:
 mod1 <- lmer( x ~  (1|rtr)+ trth/(1|cs) , data=dtf)  #

 Here, cs and rtr are crossed random effects.
 cs 1-5 are of type TRUE, cs 6-10 are of type FALSE,
 so cs is nested in trth, which is fixed.
 So for cs I should get a fit for 1-5 and 6-10.

 This appears to be the case from the random effects:
   mean( ranef(mod1)$cs[[1]][1:5] )
 [1] -2.498002e-16
   var( ranef(mod1)$cs[[1]][1:5] )
 [1] 23.53083
   mean( ranef(mod1)$cs[[1]][6:10] )
 [1] 2.706169e-16
   var( ranef(mod1)$cs[[1]][6:10] )
 [1] 1.001065

 However VarCorr(mod1) gives me only
 a single variance estimate for the cases.
 How do I get the other variance estimate?

 Thanks for any help.

 -Frank



 My fake data set:
 ---
 data:
 $frame
   x rtr  trth cs
  1   -4.4964808   1  TRUE  1
  2   -0.6297254   1  TRUE  2
  3    1.8062857   1  TRUE  3
  4    2.7273275   1  TRUE  4
  5   -0.6297254   1  TRUE  5
  6   -1.3331767   1 FALSE  6
  7   -0.3055796   1 FALSE  7
  8    1.3331767   1 FALSE  8
  9    0.1574314   1 FALSE  9
  10  -0.1574314   1 FALSE 10
 11  -3.0958803   2  TRUE  1
 12  12.4601615   2  TRUE  2
 13  -5.2363532   2  TRUE  3
 14   1.5419810   2  TRUE  4
 15   7.7071005   2  TRUE  5
 16  -0.3854953   2 FALSE  6
 17   0.7673098   2 FALSE  7
 18   2.9485984   2 FALSE  8
 19   0.3854953   2 FALSE  9
 20  -0.3716558   2 FALSE 10
 21  -6.4139940   3  TRUE  1
 22  -3.8162344   3  TRUE  2
 23 -11.0518554   3  TRUE  3
 24   2.7462631   3  TRUE  4
 25  16.2543990   3  TRUE  5
 26  -1.7353668   3 FALSE  6
 27   1.3521369   3 FALSE  7
 28   1.3521369   3 FALSE  8
 29   0.2054067   3 FALSE  9
 30  -1.7446691   3 FALSE 10
 31  -6.7468356   4  TRUE  1
 32 -11.3228243   4  TRUE  2
 33 -18.0088554   4  TRUE  3
 34   4.6264891   4  TRUE  4
 35   9.2973991   4  TRUE  5
 36  -4.1122576   4 FALSE  6
 37  -0.5091994   4 FALSE  7
 38   1.2787085   4 FALSE  8
 39  -1.6867089   4 FALSE  9
 40  -0.5091994   4 FALSE 10



__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Nonlinear mixed effects model

2006-10-14 Thread Spencer Graves
  Have you read Pinheiro and Bates (2000) Mixed-Effects Models in S 
and S-Plus (Springer), especially the discussion of 'corStruct' classes 
in ch. 5 and the nonlinear mixed-effects models in Part II?  Bates has 
kept a small army of graduate students busy for over 20 years developing 
good ways to do things like this, and this book is probably the 
best-known product of that work. 
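A hedged illustration of the 'pdMat' machinery the book describes, on a linear example for brevity: 'pdDiag' constrains the intercept-slope covariance to zero, and 'pdBlocked' combines such blocks so that only chosen covariances are estimated, which is the kind of free manipulation Cam asks about.

```r
library(nlme)
# Orthodont example: random intercept and slope with their covariance fixed at 0.
fm <- lme(distance ~ age, data = Orthodont, random = pdDiag(~ age))
getVarCov(fm)   # diagonal 2 x 2: cov(intercept, slope) constrained to 0
```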

  Hope this helps. 
  Spencer Graves

Cam wrote:
 In nonlinear mixed effects models, SAS doesn't allow for free manipulation of
 the covariance matrix (you can only specify a type, and our type doesn't
 exist). Can R accomplish this? For example:

 Parameters:
 B1= Beta 1i
 B2= Beta 2i
 G1= Gamma i


 y = B1 -(B1 - B2) exp { - G1 time} + e

 the covariance matrix for 
 (B1 [( covB1? covB1B2   covB1G1
  B2~  covB2B1covB2? covB2G1
  G1)   covG1B1   covG1B2   covG1?   )]

 **If we want to specify covG1B1 and make everything else 0's, for example,
 what would the code look like? 



 Thanks for your time.


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] mixed models: correlation between fixed and random effects??

2006-10-14 Thread Spencer Graves
  I suspect you've observed an outcome that occurs once out of every 
2^4 = 16 times.  Have you tried Monte Carlo, e.g., using 'simulate.lme' 
in library(nlme) or 'mcmcsamp' in library(lme4)? 

  Hope this helps. 
  Spencer Graves

Bruno L. Giordano wrote:
 Hello,
 I built 4 mixed models using different data sets and standardized variables 
 as predictors.

 In all the models each of the fixed effects has an associated random effect
 (same predictor).

 What I find is that fixed effects with larger (absolute) standardized
 parameter estimates have also a higher estimate of the related random
 effect. In other words, the higher the average of the absolute value of the
 BLUPs for a given standardized parameter, the higher its variance.

 Is this a common situation or I am missing some additional normalization
 factor necessary to compare the different random effects?

 Thanks a lot,
 Bruno



 ~~
 Bruno L. Giordano, Ph.D.
 CIRMMT
 Schulich School of Music, McGill University
 555 Sherbrooke Street West
 Montréal, QC H3A 1E3
 Canada
 http://www.music.mcgill.ca/~bruno/



__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Once again: aov vs. lme/lmer

2006-10-14 Thread Spencer Graves
  I believe the short answer to your question is no:  I don't 
believe it's possible to get the exact same answer from 'lme' or 'lmer' 
as from 'aov' for your example. 

  Without a philosophical discussion, I can't tell you why.  I 
almost never use 'aov', because it tests everything using an 'F' test, 
whether or not the particular 'F' is really relevant to what we want to know.  
With 'lme' and 'lmer', I have more confidence that I know what I'm doing 
and that I'm testing the hypotheses that are likely to be of greatest 
interest for my problem.  The 'aov' algorithm was wonderful half a 
century ago, and was still the best procedure easily available to most 
people a quarter century ago.  We have better tools available today.  In 
case someone reading this reply wants more information on this, I 
suggest they start with the discussion of 'Conservative ANOVA tables in 
lmer' in the R Wiki. 
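For concreteness, a hedged sketch (my own, on the same simulated layout as Rafael's 'd') of the closest common mixed-model specification; as noted above, its tests will not reproduce aov()'s exactly, and the full error strata s, s:a and s:b would need something like (1|s) + (1|s:a) + (1|s:b) in 'lmer':

```r
library(nlme)
set.seed(42)
d <- data.frame(a = factor(rep(c(1, 2), c(10, 10))),
                b = factor(rep(rep(c(1, 2), c(5, 5)), 2)),
                s = factor(rep(1:5, 4)),
                v = rnorm(20))
fm <- lme(v ~ a * b, random = ~ 1 | s, data = d)  # subject intercept only
anova(fm)                                         # mixed-model F tests
```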

  Hope this helps. 
  Spencer Graves

Rafael Laboissiere wrote:
 First of all, I apologize for asking a question that has appeared
 recurrently in this mailing list.  However, I have googled for it, have
 looked at the mailing list archives, and also looked at Pinheiro  Bates book
 (although not very thoroughly, I must confess), to no avail.

 Here is the question: I am trying to obtain with lme or lmer the same exact
 numerical results (p-values) that I obtain with aov.  Consider the following
 data:

   d <- data.frame (a = factor (rep (c (1, 2), c(10, 10))),
b = factor (rep (rep (c (1, 2), c(5, 5)),2)),
s = factor (rep (1 : 5, 4)),
v = rnorm (20))

 Let us say that this comes from a repeated measures experiments where all
 five subjects (factor s) were tested in all combinations of the two fixed
 factors a and b.

 With aov, for a model where s, s:a, s:b and s:a:b are random, and a*b
 are fixed terms, I would use:

   aov (v ~ a*b + Error (s / (a*b)), data = d)

 Is there a way to get the same results using lme or lmer? How should I write
 my random argument in lme or the (...|...) terms in lmer?  Please notice
 that I am not interested in philosophical discussions about whether I am
 trying to do wrong things.  I would only like to know whether a given R
 function could be used in place of another R function.

 Thanks in advance for your help,





Re: [R] Helmert contrasts for repeated measures and split-plot expts

2006-10-13 Thread Spencer Graves
comments in line  

Roy Sanderson wrote:
 Dear R-help

 I have two separate experiments, one a repeated-measures design, the other
 a split-plot.  In a standard ANOVA I have usually undertaken a
 multiple-comparison test on a significant factor with e.g TukeyHSD, but as
 I understand it such a test is inappropriate for repeated measures or
 split-plot designs.

 Is it therefore sensible to use Helmert contrasts for either of these
 designs?  Whilst not providing all the pairwise comparisons of TukeyHSD,
 presumably the P-statistic for each Helmert contrast will indicate clearly
 whether that contrast is significant and should be retained in the model.
 (This seems to come with the disadvantage that the parameter values are
 harder to interpret than with Treatment contrasts.)  In the
 repeated-measures design the factor in question has three levels, whilst in
 the split-plot design it has four.
   
  You don't need to restrict yourself to Helmert vs. treatment 
contrasts:  You can use any set of contrasts that will provide 
estimates of (k-1) parameters for a factor with k levels and interpret 
the p values as you suggest.  I see two issues with doing this:  
correlation among parameter estimates and individual vs. group p values. 

CORRELATED PARAMETER ESTIMATES:  Helmert contrasts are orthogonal for a 
balanced design but will produce correlated parameter estimates with an 
unbalanced design.  This will generally increase the p values due to 
variance inflation created by the correlation.  If one or more 
correlations are too large, you may wish to try custom contrasts that 
produce parameter estimates that are essentially uncorrelated;  this 
should give you the smallest p value you could expect for that 
comparison.  If I were interested in, e.g., 2*k comparisons, I might run 
the same analysis several times with different contrasts, taking the p 
value for each comparison from an analysis in which the coefficient for 
that comparison had a low correlation with the remaining (k-2) 
coefficients for that k-level factor. 
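The orthogonality point is easy to verify numerically: for a balanced design 
the Helmert contrast columns are mutually orthogonal, which is what keeps the 
parameter estimates uncorrelated.  A quick check in base R:

```r
## Helmert contrasts for a 4-level factor: each column compares a level
## with the mean of the preceding levels.
H <- contr.helmert(4)
print(H)
## For a balanced design the columns are orthogonal: every off-diagonal
## element of t(H) %*% H is zero.
XtX <- crossprod(H)
print(XtX)
stopifnot(all(XtX[upper.tri(XtX)] == 0))
```
With an unbalanced design the effective cross-products are weighted by the 
cell counts, and this orthogonality, and hence the zero correlation, is lost.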

INDIVIDUAL VS. GROUP p VALUES:  In many but not all cases, under the 
null hypothesis of no effect, a p value will follow a uniform 
distribution.  Thus, if we compute 1,000 p values using a typical 
procedure when nothing is going on, we can expect roughly 50 of them to 
be less than 0.05 by chance alone.  The Bonferroni inequality suggests 
that if we do m comparisons, we should multiply the smallest p value by 
m to convert it to a family- or group-wise p value.  This is known to be 
conservative, and with more than (k-1) comparisons among k levels of a 
factor, it is extremely conservative.  In that case, I would be inclined 
to multiply the smallest p value by (k-1), even if I considered many 
more than (k-1) comparisons among the k levels.  I don't know a 
reference for doing this, and if I were going to do it for a 
publication, I might do some simulations to check it.  Perhaps someone 
else might enlighten us both on how sensible this might be. 
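The Bonferroni adjustment described above is built into 'p.adjust', which 
multiplies each p value by the number of comparisons m (capped at 1).  The 
p values below are made up for illustration:

```r
## m = 6 pairwise comparisons among k = 4 levels; Bonferroni multiplies
## each raw p value by m (capped at 1).
p.raw <- c(0.004, 0.020, 0.150, 0.300, 0.700, 0.900)
p.bonf <- p.adjust(p.raw, method = "bonferroni")
print(p.bonf)  # 0.024 0.120 0.900 1.000 1.000 1.000
## The less conservative alternative suggested above: multiply the
## smallest p value by (k - 1) instead of by m.
k <- 4
p.alt <- min(p.raw) * (k - 1)
print(p.alt)   # 0.012
```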

  Hope this helps. 
  Spencer Graves
 Many thanks in advance
 Roy
 
 ---
 Roy Sanderson
 Institute for Research on Environment and Sustainability
 Devonshire Building
 University of Newcastle
 Newcastle upon Tyne
 NE1 7RU
 United Kingdom

 Tel: +44 191 246 4835
 Fax: +44 191 246 4999

 http://www.ncl.ac.uk/environment/
 [EMAIL PROTECTED]





Re: [R] Plackett-Dale Model in R

2006-10-12 Thread Spencer Graves
  I'm not familiar with the Plackett-Dale model, and 
RSiteSearch("Plackett-Dale") produced nothing.  Google led me to an 
article on Plackett-Dale Inference to Study Inheritance 
(http://doclib.uhasselt.be/dspace/bitstream/1942/466/1/molg23.pdf#search=%22Plackett-Dale%22).
  
This article discusses parameter estimation to maximize a 
pseudo-likelihood for a Plackett-Dale distribution, defined in other 
references.  If you can write a function to compute the Plackett-Dale 
distribution, it should not be too difficult to maximize it (or minimize 
the negative of its logarithm).  From that you should be able to get 
Wald approximate confidence intervals, likelihood ratio tests, etc. 
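As a generic sketch of that recipe (using a simple normal likelihood as a 
stand-in, since the Plackett-Dale density itself is not written out here): 
minimize the negative log-likelihood with 'optim' and form Wald intervals 
from the inverted Hessian.

```r
set.seed(1)
x <- rnorm(200, mean = 5, sd = 2)
## Negative log-likelihood; theta = (mean, log(sd)) so sd stays positive.
negll <- function(theta) -sum(dnorm(x, theta[1], exp(theta[2]), log = TRUE))
fit <- optim(c(0, 0), negll, hessian = TRUE)
## Wald standard errors from the inverse of the observed information.
se <- sqrt(diag(solve(fit$hessian)))
## Approximate 95% Wald interval for the mean:
ci <- fit$par[1] + c(-1.96, 1.96) * se[1]
print(fit$par)
print(ci)
```
The same skeleton applies to a pseudo-likelihood: only the body of 'negll' 
changes.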

  However, unless it's available under some other name, it looks to 
me like it's not included in any current CRAN package. 

  I know this is not what you wanted, but I hope it helps. 
  Spencer Graves

Pryseley Assam wrote:
 Dear R users,

   Can someone inform me about a library/function in R that fits a 
 Plackett-Dale model ?

   Thanks in advance
   Pryseley

   
 -


   [[alternative HTML version deleted]]





Re: [R] warning message in nlm

2006-10-05 Thread Spencer Graves
  Without 'msc39', I can't say for sure, but I doubt if it's 
anything to worry about.  It looks like 'nlm' tests values for theta and 
len that produce either NA or Inf in computing 'loglikcs'.  Since you 
got an answer, it does not look to me like it's anything worth worrying 
about;  just make sure you are minimizing (-log(likelihood)), not 
maximizing it.  For more detail, you can run 'nlm' with print.level = 2 
rather than the default of 0 -- and make contour plots of loglikcs in an 
appropriate region of the optimum, as suggested in Venables and Ripley 
(2002) Modern Applied Statistics with S (Springer), discussing 
'expand.grid' and 'contour'. 
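The warnings arise because 'nlm' probes trial parameter values at which the 
objective returns NA or Inf, and substitutes a large finite value.  A common 
defensive sketch is to trap such values yourself; the objective 'f' below is 
a toy stand-in, not the poster's function:

```r
## Toy objective: negative log-likelihood of a gamma shape parameter.
## It returns NaN for non-positive trial values of theta.
f <- function(theta) -sum(dgamma(c(1, 2, 3), shape = theta, log = TRUE))
## Wrap it so non-finite trial values become a large penalty before
## nlm ever sees them, silencing the NA/Inf replacement warnings.
safe.f <- function(theta) {
  val <- f(theta)
  if (!is.finite(val)) 1e10 else val
}
out <- nlm(safe.f, p = 1.5)
print(out$estimate)
```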

  Hope this helps. 
  Spencer Graves

singyee ling wrote:
 Dear R-users,

 I am trying to find the MLEs for a loglikelihood function (loglikcs39) and
 tried using both optim and nlm.

 fredcs39 <- function(b1, b2, x) { exp(b1 + b2 * x) }
 loglikcs39 <- function(theta, len) {
   sum(mcs39[1:len] * fredcs39(theta[1], theta[2], 8:(7 + len)) -
       pcs39[1:len] * log(fredcs39(theta[1], theta[2], 8:(7 + len))))
 }
 theta.start <- c(0.1, 0.1)


 1. The output from using optim is as follow
 --

 optcs39 <- optim(theta.start, loglikcs39, len = 120, method = "BFGS")
   
 optcs39
 
 $par
 [1] -1.27795226 -0.03626846

 $value
 [1] 7470.551

 $counts
 function gradient
  133   23

 $convergence
 [1] 0

 $message
 NULL

 2. The output from using nlm is as follow
 ---

   
 outcs39 <- nlm(loglikcs39, theta.start, len = 120)
 
 Warning messages:
 1: NA/Inf replaced by maximum positive value
 2: NA/Inf replaced by maximum positive value
 3: NA/Inf replaced by maximum positive value
 4: NA/Inf replaced by maximum positive value
 5: NA/Inf replaced by maximum positive value
 6: NA/Inf replaced by maximum positive value
 7: NA/Inf replaced by maximum positive value
   
 outcs39
 
 $minimum
 [1] 7470.551

 $estimate
 [1] -1.27817854 -0.03626027

 $gradient
 [1] -8.933577e-06 -1.460512e-04

 $code
 [1] 1

 $iterations
 [1] 40


 As you can see, the values obtained from using both functions are very
 similar.  But what puzzles me is the warning message that I got from using
 nlm. Could anyone please shed some light on how this warning message came
 about and whether it is a cause for concern?


 Many thanks in advance for any advice!

 singyee

   [[alternative HTML version deleted]]





Re: [R] treatment effect at specific time point within mixed effects model

2006-10-04 Thread Spencer Graves
  Consider the following modification of your example: 

fm1a = lme(z ~ (factor(Time)-1)*drug, data = data.grp,
random = list(Patient = ~ 1) )

summary(fm1a)
snip
                          Value Std.Error DF    t-value p-value
factor(Time)1        -0.6238472 0.7170161 10 -0.8700602  0.4047
factor(Time)2        -1.0155283 0.7170161 10 -1.4163256  0.1871
factor(Time)3         0.1446512 0.7170161 10  0.2017405  0.8442
factor(Time)4         0.7751736 0.7170161 10  1.0811105  0.3050
factor(Time)5         0.1566588 0.7170161 10  0.2184871  0.8314
factor(Time)6         0.0616839 0.7170161 10  0.0860286  0.9331
drugP                 1.2781723 1.0140139  3  1.2605077  0.2966
factor(Time)2:drugP   0.4034690 1.4340322 10  0.2813528  0.7842
factor(Time)3:drugP  -0.6754441 1.4340322 10 -0.4710104  0.6477
factor(Time)4:drugP  -1.8149720 1.4340322 10 -1.2656424  0.2343
factor(Time)5:drugP  -0.6416580 1.4340322 10 -0.4474502  0.6641
factor(Time)6:drugP  -2.1396105 1.4340322 10 -1.4920240  0.1666

  Does this answer your question? 
  Hope this helps. 
  Spencer Graves

Afshartous, David wrote:
  
 All,

 The code below is for a pseudo dataset of repeated measures on patients
 where there is also a treatment factor called drug.  Time is treated
 as categorical.  

 What code is necessary to test for a treatment effect at a single time
 point,
 e.g., time = 3?   Does the answer change if the design is a crossover
 design,
 i.e., each patient received drug and placebo?

 Finally, what would be a good response to someone that suggests to do a
 simple t-test (paired in crossover case) instead of the test above
 within a mixed model?

 thanks!
 dave



 z = rnorm(24, mean=0, sd=1)
 time = rep(1:6, 4)
 Patient = rep(1:4, each = 6)
 drug = factor(rep(c("I", "P"), each = 6, times = 2)) ## P = placebo, I =
 Ibuprofen
 dat.new = data.frame(time, drug, z, Patient)
 data.grp = groupedData(z ~ time | Patient, data = dat.new)
 fm1 = lme(z ~ factor(time) + drug + factor(time):drug, data = data.grp,
 random = list(Patient = ~ 1) )





Re: [R] variance-covariance structure of random effects in lme

2006-10-03 Thread Spencer Graves
  I actually see two violations of the compound symmetry 
assumptions in the Oats example numbers you provide below.  You 
mentioned the fact that the 3 different numbers in 
cor(random.effects(f4OatsB)) are all different, when compound symmetry 
would require them to all be the same.  In addition, note that in 
VarCorr(fm4OatsB), Corr does not equal sigma1^2/(sigma1^2+sigma.e^2), as 
suggested  by the theory. 
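That second check is simple arithmetic on the VarCorr output quoted below; 
here I take sigma1^2 to be the printed Variety variance and sigma.e^2 to be 
the residual variance (my reading of that output):

```r
## Values from the VarCorr(fm4OatsB) output quoted in the question below:
sigma1.sq  <- 331.5271   # Variety variance within Block
sigma.e.sq <- 165.5585   # residual variance
rho.theory <- sigma1.sq / (sigma1.sq + sigma.e.sq)
print(rho.theory)   # about 0.667, not the 0.635 reported as Corr
```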

  One might naively expect the algorithm to constrain the 
parameter estimates to meet this compound symmetry assumption.  I don't 
know why the algorithm does not produce that, but it doesn't bother me 
much that it doesn't, because the numbers are close, especially since 
this data set includes only 3 varieties and 6 blocks, producing 6 
estimated random effects for each variety. 

  Someone more knowledgeable may provide more detailed comments. 

  Hope this helps. 
  Spencer Graves

Mi, Deming wrote:
 Dear R users,
 I have a question about the patterned variance-covariance structure for the 
 random effects in linear mixed effect model.
 I am reading section 4.2.2 of Mixed-Effects Models in S and S-Plus by Jose 
 Pinheiro and Douglas Bates.
 There is an example of defining a compound symmetry variance-covariance 
 structure for the random effects in a
 split-plot experiment on varieties of oats. I ran the codes from the book and 
 extracted the variance and correlation
 components:
   
 library(nlme)
 data(Oats)
 fm4OatsB <- lme(yield ~ nitro, data = Oats, 
 random = list(Block = pdCompSymm(~ Variety - 1)))
 VarCorr(fm4OatsB)
 
 Block = pdCompSymm(Variety - 1) 
Variance StdDev   Corr   
 VarietyGolden Rain 331.5271 18.20788
 VarietyMarvellous  331.5271 18.20788 0.635  
 VarietyVictory 331.5271 18.20788 0.635 0.635
 Residual   165.5585 12.86695  
  
 This is a compound symmetry variance-covariance structure.  I then tried to 
 find out the standard deviation and
 correlation matrix of the BLUPs predictors of the random effects and wish all 
 three standard deviations would be close
 to 18.20788 and the correlation structure would be consistent with a compound 
 symmetry structure.
  
  
   
 sd(random.effects(fm4OatsB))
 
 VarietyGolden Rain  VarietyMarvellous VarietyVictory 
   16.01352   15.17026   19.83877 
   
 cor(random.effects(fm4OatsB))
 
VarietyGolden Rain VarietyMarvellous VarietyVictory
 VarietyGolden Rain  1.000 0.6489084  0.8848020
 VarietyMarvellous   0.6489084 1.000  0.6343395
 VarietyVictory  0.8848020 0.6343395  1.000

 The correlation structure is far from a compound symmetry structure, and the 
 standard deviations of the three random effects are
 all different from 18.20788.  On the contrary, the result is more like the 
 one from a general positive-definite
 variance-covariance structure.  
 Can anyone tell me why I did not see a compound symmetry structure from the 
 BLUPs predictors of the random effects or if
 I am doing something wrong?
 Thank you!
 Deming Mi
 [EMAIL PROTECTED]


   [[alternative HTML version deleted]]





Re: [R] variance functions in glmmPQL or glm?

2006-10-01 Thread Spencer Graves
  What are you trying to do with varIdent with logistic regression 
with variance components?  Have you considered the weights argument in 
glm or glmmPQL?  It sounds to me like you are trying to estimate 'size' 
for a binomial distribution, and I have trouble visualizing a context 
where that would be reasonable that would NOT be handled by the random 
effects portion of glmmPQL{MASS} or lmer{lme4}. 
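As an illustration of the weights route mentioned above (a generic sketch on 
simulated data, unrelated to the poster's application): for grouped binomial 
data, 'glm' accepts the known group sizes either through a two-column 
response or through 'weights' with a proportion response, and the two fits 
are equivalent.

```r
set.seed(42)
## Grouped binomial data: n[i] trials per group, success probability
## depending on a covariate x.
x <- seq(-2, 2, length.out = 10)
n <- rep(20, 10)
y <- rbinom(10, size = n, prob = plogis(0.5 + 1.2 * x))
## Two equivalent specifications of the same logistic regression:
fit1 <- glm(cbind(y, n - y) ~ x, family = binomial)
fit2 <- glm(y / n ~ x, family = binomial, weights = n)
print(coef(fit1))
stopifnot(all.equal(coef(fit1), coef(fit2)))
```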

  If you'd like more help from this listserve, please provide 
commented, minimal, self-contained, reproducible code, as suggested in 
the posting guide www.R-project.org/posting-guide.html. 

  Hope this helps, even if it doesn't answer your question. 
  Spencer Graves

Gretchen wrote:
 Hello R users-
 I am new to R, and tried searching the archives and literature for an answer
 to this - please be patient if I missed something obvious.

 I am fitting a logistic regression model, and would like to include variance
 functions (specifically the varIdent function).  I cannot figure out how to
 do this either in glmmPQL (or something similar) for the model with random
 effects, or in glm for the model without random effects.  Is it possible to
 fit a varIdent function in a generalized linear model?  If so, what are the
 appropriate packages/functions to use?

 Any help would be appreciated. 
 Thank you,
 Gretchen Anderson

  

 M.Sc. Candidate

 Dept. of Fisheries and Wildlife

 Michigan State University

 13 Natural Resources Building

 East Lansing, MI 48824

 Phone: (517) 353-0731

 E-Mail: [EMAIL PROTECTED]





Re: [R] Need help to estimate the Coef matrices in mAr

2006-09-30 Thread Spencer Graves
  I'm sorry, but I can't follow what you are asking.  If you'd like 
more help, please provide commented, minimal, self-contained, 
reproducible code, as suggested in the posting guide 
www.R-project.org/posting-guide.html.  Please include a minimal 
data set, e.g., 10 simulated observations on 2 variables, with comments 
describing what you tried and what you don't understand about it. 
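For concreteness, a minimal simulated data set of the kind requested could be 
as small as this (purely illustrative):

```r
set.seed(7)
## 10 simulated observations on 2 variables -- small enough to print in
## full in a post, yet enough for others to run the code on.
toy <- data.frame(y1 = rnorm(10), y2 = rnorm(10))
print(toy)
```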

  Hope this helps. 
  Spencer Graves

Arun Kumar Saha wrote:

 Dear Spencer,

  

 Thank you very much for your attention on my problem. According to 
 your advice I did some homework on this problem, but unfortunately I 
 could not solve my problem.

  

  

 Suppose I have a dataset of length 300 with 2 variables. And I want to 
 fit a VAR model of order 2 to it.

  

 I went through the function mAr.est and came to understand that 'K' 
 is a matrix with (300-2) rows and 7 columns: the first column consists 
 only of 1s, the next two columns consist of lagged values of the two 
 variables at lag 2, the next two at lag 1, and the last two at lag 0.

  

 Next, a 7 x 7 matrix is appended to K. Its diagonal elements are 
 the square roots of the column sums of squares of K, and the rest of 
 the elements are 0.

  

 I feel that this matrix, the one appended to K, is the key matrix for any 
 type of modification that you prescribed. Therefore, for experimental 
 purposes, I put NA against one of its off-diagonal elements, but I got 
 an error.

  

 However, I cannot understand why they put such values in the diagonal and 
 off-diagonal elements of that matrix.

  

 Can you suggest a more specific solution?

  

  

 Thanks and regards,

 Arun



 On 9/4/06, *Spencer Graves* [EMAIL PROTECTED] wrote:

   Have you tried 'RSiteSearch("multivariate autoregression",
 "functions")'?  This produced 14 hits for me just now, the first of
 which mentions a package 'MSBVAR'.  Have you looked at that?

   If that failed, I don't think it would be too hard to modify
 'mAr.est' to do what you want.  If it were my problem, I might make a local
 copy of the function, then add an argument accepting a 2- or
 3-dimensional array with numbers for the AR coefficients to be fixed
 and NAs
 for the coefficients to be estimated.  Then I'd use 'debug' to walk through the
 function
 line by line until I figured out how to modify the function to do
 what I
 wanted.  I haven't checked all the details, so I don't know for
 sure if
 this would work, but the function contains a line 'R =
 qr.R(qr(rbind(K, diag(scale))), complete = TRUE)' which I would start by decomposing,
 possibly starting as follows:

   Z <- rbind(K, diag(scale))

 I'd figure out how the different columns of Z relate to my
 problem, then
 modify it appropriately to get what I wanted.

   Another alternative would be to program it from scratch using
 something like 'optim' to minimize the sum of squares of residuals
 over
 the free parameters in my AR matrices.   I'm confident I could
 make this
 work, even if I somehow could not get it with either of the
 other two.
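The from-scratch route described in that paragraph might look like the 
following toy sketch: a bivariate VAR(1) fitted by least squares on simulated 
white noise, with the hypothetical constraint A[1,2] = 0 and NAs marking the 
free coefficients.  All names here are illustrative, not from mAr.

```r
set.seed(3)
## Toy bivariate VAR(1): x_t = A x_{t-1} + noise, fitted by minimizing
## the sum of squared residuals over the free entries of A only.
n <- 120
Y <- matrix(rnorm(2 * n), n, 2)
Afix <- matrix(c(NA, NA, 0, NA), 2, 2)  # A[1,2] fixed at 0; NA = free
free <- is.na(Afix)
ssr <- function(par) {
  A <- Afix
  A[free] <- par
  resid <- Y[-1, ] - Y[-n, ] %*% t(A)
  sum(resid^2)
}
est <- optim(rep(0, sum(free)), ssr)
A.hat <- Afix
A.hat[free] <- est$par
print(A.hat)  # estimated AR matrix with the [1,2] entry held at 0
```
Since the simulated data are white noise, the free estimates should all be 
near zero; the point is only the mechanics of fixing one coefficient.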

   There may be something else  better, e.g., a Kalman filter
 representation, but I can't think how to do that off the top if my
 head.

   Hope this helps.
   Spencer Graves

 Arun Kumar Saha wrote:
  Dear R users,
 
  I am using mAr package to fit a Vector autoregressive model to
 my data. But
  here I want to put some predetermined values for some elements in
  coefficient matrix that mAr.est going to estimate. For example
 if p=3 then I
  want to put A3[1,3] = 0 and keep rest of the elements of
 coefficient
  matrices to be determined by mAr.est.
 
  Can anyone please tell me how can I do that?
 
  Sincerely yours,
  Arun
 
[[alternative HTML version deleted]]
 
 




 -- 
 Arun Kumar Saha, M.Sc.[C.U.]
 S T A T I S T I C I A N[Analyst]
 RISK  MANAGEMENT  DIVISION
 Transgraph Consulting [www.transgraph.com]
 Hyderabad, INDIA
 Contact #  Home: (91-033) 25558038
 Office: (91-040) 30685012 Ext. 17
   FAX: (91-040) 55755003
Mobile: 919989122010
 E-Mail: [EMAIL PROTECTED]

Re: [R] plotting all subgroups with augPred

2006-09-29 Thread Spencer Graves
  You want an 'augPred' plot with multiple lines per panel, from 
multiple factors in the model?  Because you provided such a simple, 
self-contained example, I can offer much more substantive comments than 
I might have otherwise.  First the bad news:  I don't see an easy way to 
get what you want.  Good news:  I think I can outline for you a 
moderately simple but still not trivial way to get it -- though I 
haven't completed the exercise myself. 

  The 'augPred' documentation includes an example with two lines per 
panel, but one of the lines is just the 'fixed' model, which is the same 
in all frames.  That's not what you want. 

  To dig deeper, I typed 'augPred' at a command prompt:  It's a one 
line function consisting of 'UseMethod("augPred")'.  This pushed me to 
do the following: 

  methods(augPred)
[1] augPred.gls*    augPred.lme*    augPred.lmList*

  This led me to try getAnywhere(augPred.lme), which listed out 
the function.  I decided I wanted to walk through the function line by 
line, so I tried the following: 

  debug(augPred.lme)
Error: object augPred.lme not found

  Since 'augPred' is in library(nlme), I refined this as follows: 

  debug(nlme:::augPred.lme)

  This worked.  Next I tried to execute your command 
'augPred(fm1Pixel)', which put me into 'augPred.lme'.  From there, one 
can walk through the function line by line, looking at what they do, 
etc.  Later, you can execute your own modification to that code 
outside of a call to 'augPred'.  If you get 'object ... not found', try 
adding 'nlme:::' as a prefix to '...'.  If you do this, you will find 
that 'augPred' basically does three things: 

  1.  Creates a data.frame 'value' containing the explanatory 
variables for which predictions are needed.  You need to add a column 
for 'Side' to 'value';  I don't see a way to do this in the function call. 

  2.  Call 'pred - predict(...)' to get the numbers required for 
the lines. 

  3.  Reorganize things so 'plot.augPred' knows what to do. 

  After you get 'pred' with all the numbers you need, you can plot 
them any way you want. 
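The three-step outline above can be sketched directly for the Pixel example 
from the question, bypassing 'augPred' entirely.  The grid size and variable 
roles below are my own choices, and the fit is guarded in case 'nlme' is 
unavailable:

```r
if (requireNamespace("nlme", quietly = TRUE)) {
  library(nlme)
  fm2Pixel <- lme(pixel ~ day + I(day^2) + Side, data = Pixel,
                  random = list(Dog = ~ 1))
  ## Step 1: a data frame of explanatory variables, including a column
  ## for Side (the step augPred does not do for us).
  newdat <- expand.grid(day = seq(min(Pixel$day), max(Pixel$day), length = 25),
                        Side = levels(Pixel$Side),
                        Dog = levels(Pixel$Dog))
  ## Step 2: within-Dog (level = 1) predictions.
  newdat$pred <- predict(fm2Pixel, newdata = newdat, level = 1)
  ## Step 3: plot any way you like, e.g. two lines per Dog panel:
  ## lattice::xyplot(pred ~ day | Dog, groups = Side, data = newdat, type = "l")
}
```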

  Hope this helps. 
  Spencer Graves

Afshartous, David wrote:
 All,

 I have a question RE plotting the prediction lines of a random effects
 model via augPred.  I'll illustrate via the Pixel dataset within the
 nlme package: 

 library(nlme)
 attach(Pixel)
 fm1Pixel = lme(pixel ~ day + I(day^2), data = Pixel, random = list(Dog =
 ~ 1))
 plot(augPred(fm1Pixel))   ### 10 fitted lines since there are 10 dogs

 fm2Pixel = update(fm1Pixel, . ~ . + Side)
 plot(augPred(fm2Pixel))## also produces 10 fitted lines

 For the second plot, shouldn't we have 2 fitted lines per dog, one for
 each level
 of the fixed effect Side?  

 1) How does one plot insure that this is plotted accordingly?

 2) How does one plot say only the first 5 dogs?


 Thanks!
 Dave





Re: [R] EBAM ERROR

2006-09-29 Thread Spencer Graves
  I haven't seen a reply to this post, so I will offer a couple of 
comments.  Unfortunately, the information provided is not sufficient for 
me to answer your questions.  If you'd still like help from this 
listserve, please provide commented, minimal, self-contained, 
reproducible code, as suggested in the posting guide 
www.R-project.org/posting-guide.html.  However, have you tried 
www.bioconductor.org?  I believe they also have a listserve, and the 
people who follow that list should be more familiar with analysis of 
genetic data than this more general listserve. 

  Also, I suggest you send your email 'From' an address more 
revealing of who you are.  Some of the most knowledgeable and frequent 
contributors to this listserve rarely if ever answer questions from 
anonymous email addresses like [EMAIL PROTECTED]. 

  Hope this helps. 
  Spencer Graves

Learning R wrote:
 Dear RUsers,

 I am new to R. I am learning how to use R. I am a PC user and run R on
 windows. I would appreciate if some one could guide me on a few questions I
 have:

 1) I have 4 cel files (2 replicates for NORM and Disease resp). I have been
 able to run siggenes on this dataset where I have 4 labels in the class file
 groupsnhi.cl  op- (0,0,1,1) and my data has been read into datrmanhi after
 performing rma. When I run these commands here I receive these errors:

   
 plot(samnhi.out,seq(0.1,0.6,0.1))
 identify(samnhi.out,ll=FALSE)
 
 warning: no point within 0.25 inches
 warning: no point within 0.25 inches
 warning: no point within 0.25 inches

 Why does this happen.

 2) Now I am trying to run EBAM: and when I type I get an error

   find.out <- find.a0(datrmanhi, groupsnhi.cl, rand = 123)
 Loading required package: affy
 Loading required package: affyio
 EBAM Analysis for the two class unpaired case.

 Warning: There are 1 genes which have variance Zero or no non-missing
 values.
  The d-value of these genes is set to NA.


 The following object(s) are masked _by_ .GlobalEnv :

  n


 The following object(s) are masked from mat.repeat ( position 5 ) :

  center log.bino n p success x1 x2 x3 x4 x5

 Error in optim(rep(0, 6), neglogLik.repeat, method = "BFGS") :
 non-finite finite-difference value [1]
 In addition: Warning message:
 collapsing to unique 'x' values in: approx(Z, Z.norm, z, rule = 2)


 -

 I have also tried :
   
  find.out <- find.a0(exprs2, c(1, 1, 1, 0, 0, 0), rand = 123)
 
 EBAM Analysis for the two class unpaired case.

 Warning: There are 1 genes which have variance Zero or no non-missing
 values.
  The d-value of these genes is set to NA.


 The following object(s) are masked _by_ .GlobalEnv :

  n


 The following object(s) are masked from mat.repeat ( position 3 ) :

  center log.bino n p success x1 x2 x3 x4 x5


 The following object(s) are masked from mat.repeat ( position 6 ) :

  center log.bino n p success x1 x2 x3 x4 x5

 Error in optim(rep(0, 6), neglogLik.repeat, method = "BFGS") :
 non-finite finite-difference value [1]
 In addition: Warning message:
 collapsing to unique 'x' values in: approx(Z, Z.norm, z, rule = 2)


 I would greatly appreciate any solutions and help to solve this problem.

 Thank you,
 Appreciate your time.
 Regards,
 Lolita

   [[alternative HTML version deleted]]





Re: [R] Formula aruguments with NLS and model.frame()

2006-09-29 Thread Spencer Graves
  I haven't seen any replies to this post, so I will offer a couple 
of comments. 

  First, your example is entirely too complicated and not self 
contained for me to say much in the limited time I have for this.  I 
suggest you start by splitting your problem into several steps.  For 
example, will 'garchFit' allow 'formula.mean' to be something other than 
'~arma(p, q)', where p and q are nonnegative integers?  If no, I suggest 
you start by trying to produce your own modification to 'garchFit' so it 
will accept something like 'formula.mean=~z+arma(p, q)';  I suggest you 
give your local copy a different name like 'garchFitZ'.  Second, I 
suggest you cut your example data down to a minimum, e.g., 9 or 20 
observations, just enough so the algorithm won't die for some reason 
that would not occur with a larger data set but small enough so you can 
quickly print out and study every object the 'garchFit' algorithm 
produces. 

  Second, I suggest you use 'debug' to walk through your local 
version of 'garchFit' line by line.  I've found this to be a very 
powerful way to learn what happens in the internal environment of a 
function. 

  If you get stuck trying this, please submit another post including 
commented, minimal, self-contained, reproducible code, as suggested in 
the posting guide 'www.R-project.org/posting-guide.html'. 
  Hope this helps. 
  Spencer Graves




Joe W. Byers wrote:
 I could use some help understanding how nls parses the formula argument
 to a model.frame and estimates the model.  I am trying to utilize the
 functionality of the nls formula argument to modify garchFit() to handle
 other variables in the mean equation besides just an arma(u,v)
 specification.

 My nonlinear model is
   y <- nls(t ~ a*sin(w*2*pi/365*id + p) + b*id + int, data = t1,
     start = list(w = .5, a = .1, p = .5, b = init.y$coef[2], int = init.y$coef[1]),
     control = list(maxiter = 100, minFactor = 1e-18))
 where t is change in daily temperatures, id is just a time trend and the
 a*sin is a one year fourier series.

 I have tried to debug the nls code using the following code
 t1 <- data.frame(t = as.vector(x), id = index(x))
 data = t1;
 formula <- as.formula(t ~ a * sin(w * 2 * pi/365 * id + p) + b * id + int);
   varNames <- all.vars(formula)
   algorithm <- 'default';
   mf <- match.call(definition = nls, expand.dots = FALSE,
   call('nls', formula, data = parent.frame(), start, control = nls.control(),
   algorithm = 'default', trace = FALSE,
   subset, weights, na.action, model = FALSE, lower = -Inf,
   upper = Inf));
   mWeights <- FALSE;  # missing(weights)
   start = list(w = .5, a = .1, p = .5, b = init.y$coef[2], int = init.y$coef[1]);
   pnames <- names(start);
 varNames <- varNames[is.na(match(varNames, pnames, nomatch = NA))]

   varIndex <- sapply(varNames,
   function(varName, data, respLength) {
   length(eval(as.name(varName), data)) %% respLength == 0},
 data, length(eval(formula[[2]], data))
   );
   mf$formula <- as.formula(paste("~", paste(varNames[varIndex],
   collapse = "+")), env = environment(formula));
   mf$start <- NULL; mf$control <- NULL; mf$algorithm <- NULL;
   mf$trace <- NULL; mf$model <- NULL;
   mf$lower <- NULL; mf$upper <- NULL;
   mf[[1]] <- as.name("model.frame");
   mf <- evalq(mf, data);
   n <- nrow(mf)
   mf <- as.list(mf);
   wts <- if (!mWeights)
   model.weights(mf)
   else rep(1, n)
   if (any(wts < 0 | is.na(wts)))
   stop("missing or negative weights not allowed")

   m <- switch(algorithm,
   plinear = nlsModel.plinear(formula, mf, start, wts),
   port = nlsModel(formula, mf, start, wts, upper),
   nlsModel(formula, mf, start, wts));

 I am struggling with the environment issues associated with performing
 these operations.  I did not include the data because it is 9000 
 observations of temperature data.  If anyone would like the data, I can 
 provide it or a subset in a csv file.


 thank you
 Joe

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




Re: [R] time series simulation

2006-09-29 Thread Spencer Graves
  First, I'd write down a model for how your stochastic process 
relates to independent, normal observations with mean 0 and standard 
deviation 1.  You want a lognormal series, so I'd start by generating a 
normal series and then compute 'exp' of that.  If you'd like more help 
from this listserve, please provide commented, minimal, self-contained, 
reproducible code, as suggested in the posting guide 
www.R-project.org/posting-guide.html. 
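
	  A minimal sketch of that idea (the drift and volatility numbers 
here are arbitrary illustrations, not values from the question):

```r
# Lognormal-type series: cumulate normal increments with a drift
# (the growth rate), then exponentiate.
set.seed(1)
n     <- 250
drift <- 0.001                   # growth rate; smaller for the slower series
z     <- rnorm(n, mean = 0, sd = 0.02)
x     <- exp(cumsum(drift + z))  # positive, lognormal at each time point
```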

  Hope this helps. 
  Spencer Graves

march wrote:
 Hi everybody
 I'm trying to simulate a stochastic process in R. I would like to consider
 n lognormal time series. The first time series has a growth rate lower than
 the second, and so on; the initial time of the first series is lower than
 the initial time of the second, and so on. In the long run the series have
 the same value. Do you have any idea about running such a process?
 Another question: how can I reduce the domain of a random variable?
 Thanks
 March
   




Re: [R] Simulation of a Dirichlet Process.

2006-09-29 Thread Spencer Graves
  I haven't seen any replies to this post, and I don't know tutors 
in NYC.  However, I just got 79 hits for 'RSiteSearch("dirichlet")' and 
12 for 'RSiteSearch("dirichlet MCMC")'.  If you would like further help 
from this listserve, I suggest you review the results of 
'RSiteSearch("dirichlet MCMC")' and try something.  If it is not what you 
want, please convert your attempt into commented, minimal, self-contained, 
reproducible code and send that to this listserve, as suggested in the 
posting guide www.R-project.org/posting-guide.html. 

  Hope this helps. 
  Spencer Graves

[EMAIL PROTECTED] wrote:
 I'm just getting started with R, having done a lot of original work on
 modeling, and am exploring simulation by MCMC. I want to simulate the 
 prior and the posterior distribution of a Dirichlet process by MCMC.
 Is there anyone in NYC that might be a good
 tutor for me?





Re: [R] ISO8601 week-of-year to date

2006-09-29 Thread Spencer Graves
  What have you tried?  Are you familiar with the 'zoo' vignette and 
the time-date articles in R News?  Have you tried 'RSiteSearch'? 

  If you'd like more help from this listserve, please submit another 
post that includes commented, minimal, self-contained, reproducible code 
describing something you've tried and did not find adequate (as 
suggested in the posting guide 'www.R-project.org/posting-guide.html'). 
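
	  For what it's worth, the ISO 8601 rule that week 1 always 
contains January 4 makes this a short base-R computation.  A sketch 
('iso.week.date' is a made-up name, not an existing function):

```r
# Monday of ISO week 1 is the Monday of the week containing Jan 4;
# from there, ISO (year, week, day) is simple date arithmetic.
iso.week.date <- function(year, week, day) {
  jan4 <- as.Date(sprintf("%d-01-04", year))
  wd   <- as.integer(format(jan4, "%u"))   # weekday, 1 = Monday
  week1.monday <- jan4 - (wd - 1)
  week1.monday + 7 * (week - 1) + (day - 1)
}
iso.week.date(2006, 38, 1)   # "2006-09-18", as in the question
```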

  Hope this helps. 
  Spencer Graves

Ott-Siim Toomet wrote:
 Hi,

 Is there any way to convert ISO 8601 weeks to Gregorian dates?  Something
 like

 coverttodate(year=2006, week=38, day=1)
 # Sept 18, 2006

 Thanks in advance,
 Ott





Re: [R] Power analysis for repeated measures ANCOVA

2006-09-24 Thread Spencer Graves
  I haven't seen a reply to this post, and you may have moved beyond 
this question by now, but I will nevertheless offer one brief question:  
Have you looked at 'simulate.lme'?  This might simplify coding your 
power analysis. 
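
	  The same simulation idea in miniature, for a plain two-sample 
comparison rather than the repeated-measures model (all numbers here are 
illustrative; 'simulate.lme' automates the analogous loop for 'lme' fits):

```r
# Monte Carlo power: simulate data under the assumed effect, refit,
# and count how often the test rejects at alpha = 0.05.
set.seed(1)
power.sim <- function(nsim = 500, n = 30, delta = 0.5) {
  rejections <- replicate(nsim, {
    y1 <- rnorm(n)                 # control group
    y2 <- rnorm(n, mean = delta)   # treatment group with assumed effect
    t.test(y1, y2)$p.value < 0.05
  })
  mean(rejections)                 # estimated power
}
power.sim()
```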

  Hope this helps. 
  Spencer Graves

Steve Su wrote:
 Dear All,

  

 I wonder if anyone has written a code for power analysis with repeated
 measures ANCOVA? I am aware of the following reference:

  

 Frison L and Pocock SJ: Repeated Measures in Clinical Trials: analysis
 using mean summary statistics and its implications for design in
 Statistics in Medicine, 1992, 11:1685-1704.

 Statistics in Medicine

  

 It will probably be fairly easy to code this up or even work on this
 from scratch, but it would be good if people have already done this and
 can share their code.

  

 Thanks.

  

 Steve.


   [[alternative HTML version deleted]]





Re: [R] nls

2006-09-23 Thread Spencer Graves
  I used 'debug' to walk through your example line by line, and I 
found that the error message was misleading.  By making 
as.vector(semivariance) and as.vector(h) columns of a data.frame, I got 
it to work.  My revised code appears below. 

  Thanks for providing a self-contained and relatively simple 
example.  Without it, I could have done little more than suggest 
'traceback' (which provided zero information when I tried it) and 'debug'. 

  Hope this helps. 
  Spencer Graves

fit.gaus <- function(coordinates, values, guess.c0, guess.c1, guess.a)
{
 long <- rep(coordinates[,1], each=length(coordinates[,1]))
 lag.long <- t(matrix(long, nrow=length(coordinates[,1]), byrow=TRUE))
 dif.long <- (lag.long - t(lag.long))^2
 lat <- rep(coordinates[,2], each=length(coordinates[,2]))
 lag.lat <- t(matrix(lat, nrow=length(coordinates[,2]), byrow=TRUE))
 dif.lat <- (lag.lat - t(lag.lat))^2
 h <- sqrt(dif.long + dif.lat)

 if (length(values[1,]) > 1)
   {
 y.m <- apply(values, 1, sum, na.rm=TRUE)
 y.m <- as.matrix(y.m)
 y.mod <- (1/length(values[1,]))*(y.m)
 }
 else
   {
 y.mod <- as.matrix(values)
 }

 semi <- rep(y.mod, each=length(y.mod))
 mat1 <- t(matrix(semi, nrow=length(y.mod), byrow=TRUE))
 mat2 <- t(mat1)
 semivariance <- (1/2)*(mat1 - mat2)^2

#model <- semivariance ~ c0 + c1*(1 - exp(-(h^2)/a^2))
 DF <- data.frame(smvar=as.vector(semivariance),
  h.=as.vector(h))
 mdl <- smvar ~ c0 + c1*(1 - exp(-(h.^2)/a^2))

#parameters <- nls(model, start =
#list(c0=guess.c0, c1=guess.c1, a=guess.a), trace=TRUE)
 parameters <- nls(mdl, start =
 list(c0=guess.c0, c1=guess.c1, a=guess.a), trace=TRUE,
  data=DF)
 results <- summary(parameters)
 print(results)
}


mzabalazo ngwenya wrote:
 Hello everyone !

 I am trying to write a short program to estimate  semivariogram
 parameters. But I keep running into a problem when using the nls
 function.

 Could you please shed some light. I have put a sample of one of the
 codes and ran a short example so you see what I mean.

 -



 fit.gaus <- function(coordinates, values, guess.c0, guess.c1, guess.a)
 {
  long <- rep(coordinates[,1], each=length(coordinates[,1]))
  lag.long <- t(matrix(long, nrow=length(coordinates[,1]), byrow=TRUE))
  dif.long <- (lag.long - t(lag.long))^2
  lat <- rep(coordinates[,2], each=length(coordinates[,2]))
  lag.lat <- t(matrix(lat, nrow=length(coordinates[,2]), byrow=TRUE))
  dif.lat <- (lag.lat - t(lag.lat))^2
  h <- sqrt(dif.long + dif.lat)

  if (length(values[1,]) > 1)
    {
  y.m <- apply(values, 1, sum, na.rm=TRUE)
  y.m <- as.matrix(y.m)
  y.mod <- (1/length(values[1,]))*(y.m)
  }
  else
    {
  y.mod <- as.matrix(values)
  }

 semi <- rep(y.mod, each=length(y.mod))
 mat1 <- t(matrix(semi, nrow=length(y.mod), byrow=TRUE))
 mat2 <- t(mat1)
 semivariance <- (1/2)*(mat1 - mat2)^2

 model <- semivariance ~ c0 + c1*(1 - exp(-(h^2)/a^2))
 parameters <- nls(model, start =
 list(c0=guess.c0, c1=guess.c1, a=guess.a), trace=TRUE)
 results <- summary(parameters)
 print(results)
 }
 --

   
  don <- matrix(c(2,3,9,6,5,2,7,9,5,3),5,2)
  don
       [,1] [,2]
  [1,]    2    2
  [2,]    3    7
  [3,]    9    9
  [4,]    6    5
  [5,]    5    3
  data <- matrix(c(3,4,2,4,6))
  data
       [,1]
  [1,]    3
  [2,]    4
  [3,]    2
  [4,]    4
  [5,]    6
  fit.gaus(don,data,2,3,5)
            [,1]     [,2]     [,3]     [,4]     [,5]
  [1,] 0.000000 5.099020 9.899495 5.000000 3.162278
  [2,] 5.099020 0.000000 6.324555 3.605551 4.472136
  [3,] 9.899495 6.324555 0.000000 5.000000 7.211103
  [4,] 5.000000 3.605551 5.000000 0.000000 2.236068
  [5,] 3.162278 4.472136 7.211103 2.236068 0.000000
  178.9113 :  2 3 5 
  Error in qr.qty(QR, resid) : 'qr' and 'y' must have the same number of
  rows
   




