Re: [R] The error in R while using bugs.R function

2005-05-17 Thread Prof Brian Ripley
On Mon, 16 May 2005, Li, Jia wrote:
I followed the instructions on Dr. Gelman's web site to install all of the
files that bugs.R needs, but when I try to run the schools example
posted there in R, I got an error: couldn't find function "bugs".
What's wrong?
I suggest you ask Dr Gelman.  There is a function bugs in package
R2WinBUGS: perhaps you don't have that installed or that package loaded?
--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK            Fax:  +44 1865 272595
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] NA erase your data trick

2005-05-17 Thread Daniel Nordlund


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 On Behalf Of Anders Schwartz Corr
 Sent: Monday, May 16, 2005 10:38 PM
 To: r-help@stat.math.ethz.ch
 Subject: [R] NA erase your data trick
 
 
 Oops,
 
 I just erased all my data using this gizmo that I thought would replace -9
 with NA.
 
 A) Can I get my tcn5 back?

I don't think there is any going back.

 
 B) How do I do it right next time, I learned my lesson, I'll never do it
 again, I promise!
 

How about something like

x[x == -9] <- NA

Dan Nordlund
Bothell, WA

 Anders Corr
 
  for(i in 1:dim(tcn5)[2]){ ##for the number of columns
 + for(n in 1:dim(tcn5)[1]){ ##for the number of rows
 + tcn5[is.na(tcn5[n,i]) | tcn5[n,i] == -9] <- NA
 +
 + }
 + }
 
 


Re: [R] The error in R while using bugs.R function

2005-05-17 Thread Duncan Murdoch
Li, Jia wrote:
Dear R users, 
 
I followed the instructions on Dr. Gelman's web site to install all
of the files that bugs.R needs, but when I try to run the schools example posted there in R, I got an error: couldn't find function "bugs". What's wrong?
It sounds as though you missed an instruction, or he did.  I'm guessing 
you didn't run library() to load the package.

Generally when a contributed package doesn't work, you should ask the 
maintainer for help.

Duncan Murdoch


Re: [R] NA erase your data trick

2005-05-17 Thread Uwe Ligges
Anders Schwartz Corr wrote:
Oops,
I just erased all my data using this gizmo that I thought would replace -9
with NA.
A) Can I get my tcn5 back?
The same way you got it the first time. There is no undo.

B) How do I do it right next time, I learned my lesson, I'll never do it
again, I promise!
By vectorization:
   tcn5[tcn5 == -9] <- NA
Uwe Ligges

Anders Corr

for(i in 1:dim(tcn5)[2]){ ##for the number of columns
+ for(n in 1:dim(tcn5)[1]){ ##for the number of rows
+ tcn5[is.na(tcn5[n,i]) | tcn5[n,i] == -9] <- NA
+
+ }
+ }


Re: [R] NA erase your data trick

2005-05-17 Thread Duncan Murdoch
Anders Schwartz Corr wrote:
Oops,
I just erased all my data using this gizmo that I thought would replace -9
with NA.
A) Can I get my tcn5 back?
Not if you don't have it backed up somewhere else.
I wouldn't recommend keeping your only copy of anything in an R 
workspace.  It's too easy to accidentally delete or overwrite it.  Keep 
the original in a file.

B) How do I do it right next time, I learned my lesson, I'll never do it
again, I promise!
Anders Corr

for(i in 1:dim(tcn5)[2]){ ##for the number of columns
+ for(n in 1:dim(tcn5)[1]){ ##for the number of rows
+ tcn5[is.na(tcn5[n,i]) | tcn5[n,i] == -9] <- NA
For some values of i and n, this last line simplifies to
tcn5[TRUE] <- NA
which is why you lost your data.
You want to (a) think in vectors, or (b) use an if statement:
(a) Replace your whole series of statements with
tcn5[is.na(tcn5) | tcn5 == -9] <- NA
or
(b) Replace just the last line above with
  if (is.na(tcn5[n,i]) | tcn5[n,i] == -9) tcn5[n,i] <- NA
I'd choose (a); it's a lot cleaner and will run faster.
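A minimal self-contained illustration of the failure mode and of option (a), using a hypothetical 3 x 2 stand-in for tcn5:

```r
# Hypothetical stand-in for tcn5: one -9 code and one genuine NA.
tcn5 <- data.frame(a = c(1, -9, 3), b = c(NA, 5, 6))

# Inside the loop the condition collapses to a single logical value, so
# tcn5[TRUE] selects every column and the whole object is overwritten:
tcn5[TRUE] <- NA
all(is.na(tcn5))                      # everything is gone

# Option (a), the vectorized form, only touches the offending cells:
tcn5 <- data.frame(a = c(1, -9, 3), b = c(NA, 5, 6))
tcn5[is.na(tcn5) | tcn5 == -9] <- NA
tcn5$a                                # the rest of the data survives
```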
Duncan Murdoch


Re: [R] The error in R while using bugs.R function

2005-05-17 Thread Uwe Ligges
Li, Jia wrote:
Dear R users, 
 
I followed the instructions on Dr. Gelman's web site to install all
of the files that bugs.R needs, but when I try to run the schools example posted there in R, I got an error: couldn't find function "bugs". What's wrong?
Have you forgotten to source() Andrew's bugs.R file?
Anyway, you might want to give the CRAN package R2WinBUGS a try, which 
is based on Andrew's code known as bugs.R.
There is also a developer version BRugs (an interface to OpenBUGS) 
available at http://www.statistik.uni-dortmund.de/~ligges/BRugs/ (will 
move to CRAN very soon now).

Uwe Ligges



 
Thanks,
 
Jia



Re: [R] NA erase your data trick

2005-05-17 Thread Petr Pikal
Hi

Maybe

tcn5[tcn5 == -9] <- NA

if tcn5 is a matrix


 mat <- matrix(rnorm(100), 10, 10)
 mat[5, 6:7] <- -9
 mat[mat == -9] <- NA

Read some intro on data manipulation; it helps you avoid 
thinking in loops.

Cheers
Petr

On 17 May 2005 at 1:37, Anders Schwartz Corr wrote:

 
 Oops,
 
 I just erased all my data using this gizmo that I thought would
 replace -9 with NA.
 
 A) Can I get my tcn5 back?
 
 B) How do I do it right next time, I learned my lesson, I'll never do
 it again, I promise!
 
 Anders Corr
 
  for(i in 1:dim(tcn5)[2]){ ##for the number of columns
 + for(n in 1:dim(tcn5)[1]){ ##for the number of rows
 + tcn5[is.na(tcn5[n,i]) | tcn5[n,i] == -9] <- NA
 +
 + }
 + }
 
 

Petr Pikal
[EMAIL PROTECTED]



Re: [R] A question about bugs.R: functions for running WinBUGs from R

2005-05-17 Thread Uwe Ligges
Li, Jia wrote:
Dear R users,
 
I've found bugs.R: the functions for running WinBUGS from R that were
written by Dr. Andrew Gelman, a professor at Columbia University.
bugs.R would be very useful for me, and I think many of you know it
as well. I followed the instructions on Dr. Gelman's web site to install all
of the files that bugs.R needs, but when I try to run the schools example
posted there in R, I got an error. 
 
Would you please help me out? I have been stuck on this for a while and am
really frustrated now.
 
The program and the error follow.
 
Thanks a lot in advance!
Why do you post twice?
See my former message, you forgot to source() bugs.R.
BTW: Since this code comes from Andrew Gelman's website, why do you 
think posting to R-help is the appropriate way to get help? R has a 
standardized mechanism for distributing code, called packages.
And the appropriate package, R2WinBUGS, which you are looking for, has been 
made available - thanks to Andrew's effort in writing the original code 
and thanks to the efforts of two others who generalized the code and 
packaged it for you.

Uwe Ligges

Jia

_
# R code for entering the data and fitting the Bugs model for 8
schools
# analysis from Section 5.5 of Bayesian Data Analysis.
# To run, the Bugs model must be in the file schools.txt in your
working
# directory and you must load in the functions in the bugs.R file (see
# http://www.stat.columbia.edu/~gelman/bugsR/).
J <- 8
y <- c(28,8,-3,7,-1,1,18,12)
sigma.y <- c(15,10,16,11,9,11,10,18)
schools.data <- list ("J", "y", "sigma.y")
schools.inits <- function()
+   list (theta=rnorm(J,0,1), mu.theta=rnorm(1,0,100),
+ sigma.theta=runif(1,0,100))
schools.parameters <- c("theta", "mu.theta", "sigma.theta")
#run in winbugs14
schools.sim <- bugs (schools.data, schools.inits, schools.parameters,
"schools.bug", n.chains=3, n.iter=1000, version=1.4)
Error: couldn't find function "bugs"




Re: [R] NA erase your data trick

2005-05-17 Thread Detlef Steuer
On Tue, 17 May 2005 08:33:00 +0200
Uwe Ligges [EMAIL PROTECTED] wrote:

 Anders Schwartz Corr wrote:
  Oops,
  
  I just erased all my data using this gizmo that I thought would replace -9
  with NA.
  
  A) Can I get my tcn5 back?
 
 As you got it the first time. There is nothing like undo.

If you're lucky it still lives inside an old .RData that was not overwritten 
when you left the depressing R session.

Detlef

 
 
  B) How do I do it right next time, I learned my lesson, I'll never do it
  again, I promise!
 
 By vectorization:
 
 tcn5[tcn5 == -9] <- NA
 
 Uwe Ligges
 
 
 
  Anders Corr
  
  
 for(i in 1:dim(tcn5)[2]){ ##for the number of columns
  
  + for(n in 1:dim(tcn5)[1]){ ##for the number of rows
  + tcn5[is.na(tcn5[n,i]) | tcn5[n,i] == -9] <- NA
  +
  + }
  + }
  
  



Re: [R] NA erase your data trick

2005-05-17 Thread Prof Brian Ripley
On Tue, 17 May 2005, Uwe Ligges wrote:
Anders Schwartz Corr wrote:
Oops,
I just erased all my data using this gizmo that I thought would replace -9
with NA.
A) Can I get my tcn5 back?
As you got it the first time. There is nothing like undo.

B) How do I do it right next time, I learned my lesson, I'll never do it
again, I promise!
By vectorization:
  tcn5[tcn5 == -9] <- NA
That will work if tcn5 contains NAs, but only because NA indices on the 
lhs are now ignored for matrices (if tcn5 is a matrix, which seems 
unstated) -- this used not to be the case.  I would prefer

tcn5[tcn5 %in% -9] <- NA
Using %in% rather than == in computed indices is a good habit to acquire: 
it also makes things like

tcn5[tcn5 %in% c(-9, -99)] <- NA
work as expected.
If tcn5 is a data frame, you have to do this column-by-column, as in
tcn5[] <- lapply(tcn5, function(x) {x[x %in% -9] <- NA; x})
or by a logical index matrix, which is harder to construct.
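The difference between == and %in% is easy to see on a small vector (hypothetical example):

```r
x <- c(1, NA, -9)

x == -9      # FALSE    NA  TRUE  -- the NA propagates into the index
x %in% -9    # FALSE FALSE  TRUE  -- an NA is simply "not in" the set

# and %in% handles several missing-value codes at once:
x %in% c(-9, -99)
```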

for(i in 1:dim(tcn5)[2]){ ##for the number of columns
+ for(n in 1:dim(tcn5)[1]){ ##for the number of rows
+ tcn5[is.na(tcn5[n,i]) | tcn5[n,i] == -9] <- NA
+
+ }
+ }
--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK            Fax:  +44 1865 272595


RE: [R] parsing speed

2005-05-17 Thread Martin Maechler
 BertG == Berton Gunter [EMAIL PROTECTED]
 on Mon, 16 May 2005 15:20:01 -0700 writes:

BertG (just my additional $.02) ... and as a general rule
BertG (subject to numerous exceptions, caveats, etc.)

BertG 1) it is programming and debugging time that most
BertG impacts overall program execution time; 2) this is
BertG most strongly impacted by code readability and size
BertG (the smaller the better); 3) both of which are
BertG enhanced by modular construction and reuseability,
BertG which argues for avoiding inline code and using
BertG separate functions.

BertG These days, i would argue that most of the time it is
BertG program clarity and correctness (they are related)
BertG that is the important issue, not execution speed.

BertG ... again, subject to exceptions and caveats, etc.

Yes indeed; very good points very well put!

Just to say it again: 

  We strongly recommend not to inline your code, but rather
  program modularly, i.e. call small `utility' functions.

If execution time ever becomes crucial for your problem
(not often), the chances are considerable that the time is
not spent there [[ but you have to measure! - use Rprof() ! ]]
and if it *was* spent there, then you have your bottleneck in one
simple function that you could start optimizing... even a good
reason for not inlining that code.
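A minimal sketch of such a measurement with Rprof(); the two toy functions are made up for illustration:

```r
# Two toy functions; suppose we suspect one of them is the bottleneck.
slow_part <- function(n) { s <- 0; for (i in seq_len(n)) s <- s + log(i); s }
fast_part <- function(n) sum(sqrt(seq_len(n)))

Rprof("profile.out")            # start sampling the call stack
for (k in 1:20) { slow_part(2e5); fast_part(2e5) }
Rprof(NULL)                     # stop profiling
summaryRprof("profile.out")     # time spent, broken down by function
```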

Martin Maechler, ETH Zurich

BertG -- Bert Gunter Genentech Non-Clinical Statistics
BertG South San Francisco, CA
 
BertG The business of the statistician is to catalyze the
BertG scientific learning process.  - George E. P. Box
 
 

 -Original Message- From:
 [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of
 Duncan Murdoch Sent: Monday, May 16, 2005 3:09 PM To:
 [EMAIL PROTECTED] Cc: r-help Subject: Re: [R]
 parsing speed
 
 Federico Calboli wrote:
  Hi everyone,
  
  I have a question on parsing speed.
  
  I have two functions:
  
  F1 and F2
  
  As things are now, F2 calls F1 internally:
  
  F2 = function(x){
    if (something == 1){
      y = F1(x)
    }
    if (something == 2){
      do whatever
    }
  }
  
  *Assuming there could be some difference*, is it faster
 to use the code  as written above or should I actually
 write the statements of F1 to make  the parsing faster?
 
 The parsing only happens once when you define the
 functions, and is (almost always) a negligible part of
 total execution time.  I think you're really worried
 about execution time.  You'll probably get more execution
 time with a separate function because function calls take
 time.
 
 However, my guess is that putting F1 inline won't make
 enough difference to notice.
 
 Duncan



[R] Vuong test

2005-05-17 Thread Stéphanie PAYET
Hi,

I have two questions. First, I'd like to compare a ZINB model to a negative
binomial model with the Vuong test, but I can't find how to perform it from
the zicounts package. Does a program exist to do it?
Second, I'd like to know in which cases we have to use a double hurdle model
instead of a zero-inflated model.

Many thanks,

Stéphanie Payet

REES France
Réseau d'Evaluation en Economie de la Santé
28, rue d'Assas
75006 PARIS
Tél. +33 (0)1 44 39 16 90
Fax +33 (0)1 44 39 16 92
Mèl. [EMAIL PROTECTED]
Site Internet : http://www.rees-france.com



Re: [R] cluster results using fanny

2005-05-17 Thread TEMPL Matthias
 Barbara Diaz wrote:
  Hi,
  
  I am using fanny and I have strange results. I am wondering if 
  someone out there can help me understand why this happens.
  
  First of all, in most of my tries it gives me a result in which each 
  object has equal membership in all clusters. I have read that that 
  means the clustering is entirely fuzzy. Looking at the graphics it 
  is really difficult to understand how objects with so different scores 
  for the variables have the same membership for all the clusters.

Hi Barbara,

I think there is a problem with fanny when you have standardised data.
For example:
library(mvoutlier)
library(cluster)
data(chorizon)
a <- fanny(chorizon[,101:110], 4)
b <- fanny(scale(chorizon[,101:110]), 4)
a$mem # is ok, but
b$mem # has the same memberships everywhere

It is better to use the function cmeans in package e1071, which gives
correct memberships!

Best,
Matthias







Re: [R] Vuong test

2005-05-17 Thread Seyed Reza Jafarzadeh
Hi Stéphanie,

The Vuong test can be done in Stata
(http://www.stata.com/support/faqs/stat/vuong.html), but I am also
looking for its code in R. In addition to zicounts, Dr. Simon
Jackman (http://pscl.stanford.edu/) has provided the code for fitting
the zero-inflated (http://pscl.stanford.edu/zeroinfl.r) and hurdle
(http://pscl.stanford.edu/hurdle.r) count models.

Reza
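Until a packaged implementation appears, the statistic itself is small enough to sketch by hand. This assumes you can extract per-observation log-likelihood vectors ll1 and ll2 from the two fitted models; the function name vuong_stat is made up for illustration:

```r
# Vuong (1989) test for non-nested models: with m_i = log f1(y_i) - log f2(y_i),
# sqrt(n) * mean(m) / sd(m) is asymptotically standard normal under the
# null hypothesis that the two models are equivalent.
vuong_stat <- function(ll1, ll2) {
  m <- ll1 - ll2
  n <- length(m)
  v <- sqrt(n) * mean(m) / sd(m)
  c(statistic = v, p.value = 2 * pnorm(-abs(v)))
}

# toy usage with simulated log-likelihood values:
set.seed(1)
vuong_stat(rnorm(100, mean = 0.1), rnorm(100))
```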



On 5/17/05, Stéphanie PAYET [EMAIL PROTECTED] wrote:
 Hi,
 
 I have two questions. First, I'd like to compare a ZINB model to a negative
 binomial model with the Vuong test, but I can't find how to perform it from
 the zicounts package. Does a program exist to do it?
 Second, I'd like to know in which cases we have to use a double hurdle model
 instead of a zero-inflated model.
 
 Many thanks,
 
 Stéphanie Payet
 
 REES France
 Réseau d'Evaluation en Economie de la Santé
 28, rue d'Assas
 75006 PARIS
 Tél. +33 (0)1 44 39 16 90
 Fax +33 (0)1 44 39 16 92
 Mèl. [EMAIL PROTECTED]
 Site Internet : http://www.rees-france.com
 




Re: [R] a problem sourcing a file using chdir=TRUE

2005-05-17 Thread Prof Brian Ripley
This and some related problems should be fixed in tomorrow's R-patched 
snapshot.

On Mon, 16 May 2005, Luca Scrucca wrote:
Dear R-users,
I used to give commands such as:
source(file="~/path/to/file.R", chdir=TRUE)
but with the latest v. 2.1.0 it does not seem to work anymore.
I tried to figure out what was going on, and it seems that the string,
for which
class(file)
[1] "character"
is changed to
class(file)
[1] "file"       "connection"
when the connection is opened by
file <- file(file, "r", encoding = encoding)
But this forces the following if statement
if (chdir && is.character(file) && (path <- dirname(file)) != ".")
  { owd <- getwd()
    on.exit(setwd(owd))
    setwd(path)
  }
to be FALSE, and then no change of the current directory is done.
Is this the desired behavior, or is some bug fix required?
--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK            Fax:  +44 1865 272595


Re: [R] parsing speed

2005-05-17 Thread Jan T. Kim
On Tue, May 17, 2005 at 09:50:20AM +0200, Martin Maechler wrote:
  BertG == Berton Gunter [EMAIL PROTECTED]
  on Mon, 16 May 2005 15:20:01 -0700 writes:
 
 BertG (just my additional $.02) ... and as a general rule
 BertG (subject to numerous exceptions, caveats, etc.)
 
 BertG 1) it is programming and debugging time that most
 BertG impacts overall program execution time; 2) this is
 BertG most strongly impacted by code readability and size
 BertG (the smaller the better); 3) both of which are
 BertG enhanced by modular construction and reuseability,
 BertG which argues for avoiding inline code and using
 BertG separate functions.
 
 BertG These days, i would argue that most of the time it is
 BertG program clarity and correctness (they are related)
 BertG that is the important issue, not execution speed.
 
 BertG ... again, subject to exceptions and caveats, etc.
 
 Yes indeed; very good points very well put!
 
 Just to say it again: 
 
   We strongly recommend not to inline your code, but rather
   program modularly, i.e. call small `utility' functions.

Generally, I fully agree -- modular coding is good, not only in R.
However, with regard to execution time, modularisation that involves
passing of large amounts of data (100 x 1000 data frames etc.) can
cause problems.

Best regards, Jan
-- 
 +- Jan T. Kim ---+
 |*NEW*email: [EMAIL PROTECTED]   |
 |*NEW*WWW:   http://www.cmp.uea.ac.uk/people/jtk |
 *-=  hierarchical systems are for files, not for humans  =-*



Re: [R] adjusted p-values with TukeyHSD?

2005-05-17 Thread Christoph Buser
Dear Christoph

You can use the multcomp package. Please have a look at the
following example:

library(multcomp)

The first two lines were already proposed by Erin Hodgess:

summary(fm1 - aov(breaks ~ wool + tension, data = warpbreaks))
TukeyHSD(fm1, "tension", ordered = TRUE)

Tukey multiple comparisons of means
95% family-wise confidence level
factor levels have been ordered
 
Fit: aov(formula = breaks ~ wool + tension, data = warpbreaks)

$tension
         diff        lwr      upr
M-H  4.72 -4.6311985 14.07564
L-H 14.72  5.3688015 24.07564
L-M 10.00  0.6465793 19.35342
 

By using the functions simtest or simint you can get the
p-values, too:

summary(simtest(breaks ~ wool + tension, data = warpbreaks, whichf = "tension",
type = "Tukey"))

 Simultaneous tests: Tukey contrasts 

Call: 
simtest.formula(formula = breaks ~ wool + tension, data = warpbreaks, 
whichf = "tension", type = "Tukey")

 Tukey contrasts for factor tension, covariable:  wool 

Contrast matrix:
                  tensionL tensionM tensionH
tensionM-tensionL 0 0   -1  1  0
tensionH-tensionL 0 0   -1  0  1
tensionH-tensionM 0 0    0 -1  1


Absolute Error Tolerance:  0.001 

Coefficients:
  Estimate t value Std.Err. p raw p Bonf p adj
tensionH-tensionL  -14.722  -3.802    3.872 0.000  0.001 0.001
tensionM-tensionL  -10.000  -2.582    3.872 0.013  0.026 0.024
tensionH-tensionM   -4.722  -1.219    3.872 0.228  0.228 0.228



or if you prefer to get the confidence intervals, too, you can
use:

summary(simint(breaks ~ wool + tension, data = warpbreaks, whichf = "tension",
type = "Tukey"))

Simultaneous 95% confidence intervals: Tukey contrasts

Call: 
simint.formula(formula = breaks ~ wool + tension, data = warpbreaks, 
whichf = "tension", type = "Tukey")

 Tukey contrasts for factor tension, covariable:  wool 

Contrast matrix:
                  tensionL tensionM tensionH
tensionM-tensionL 0 0   -1  1  0
tensionH-tensionL 0 0   -1  0  1
tensionH-tensionM 0 0    0 -1  1

Absolute Error Tolerance:  0.001 

 95 % quantile:  2.415 

Coefficients:
  Estimate   2.5 % 97.5 % t value Std.Err. p raw p Bonf p adj
tensionM-tensionL  -10.000 -19.352 -0.648  -2.582    3.872 0.013  0.038 0.034
tensionH-tensionL  -14.722 -24.074 -5.370  -3.802    3.872 0.000  0.001 0.001
tensionH-tensionM   -4.722 -14.074  4.630  -1.219    3.872 0.228  0.685 0.447

-
Please be careful: The resulting confidence intervals in
simint are not associated with the p-values from 'simtest' as it
is described in the help page of the two functions.
-

I did not have time to check the differences between the functions or
read the references given on the help page.
If you are interested, you can check those to
find out which one you prefer.

Best regards,

Christoph Buser

--
Christoph Buser [EMAIL PROTECTED]
Seminar fuer Statistik, LEO C13
ETH (Federal Inst. Technology)  8092 Zurich  SWITZERLAND
phone: x-41-44-632-4673 fax: 632-1228
http://stat.ethz.ch/~buser/
--


Christoph Strehblow writes:
  hi list,
  
  i have to ask you again, having tried and searched for several days...
  
  i want to do a TukeyHSD after an Anova, and want to get the adjusted  
  p-values after the Tukey Correction.
  i found the p.adjust function, but it can only correct for holm,  
  hochberg, bonferroni, but not Tukey.
  
  Is it not possible to get adjusted p-values after Tukey correction?
  
  sorry, if this is an often-answered question, but i didn't find it in  
  the list archive...
  
  thx a lot, list, Chris
  
  
  Christoph Strehblow, MD
  Department of Rheumatology, Diabetes and Endocrinology
  Wilhelminenspital, Vienna, Austria
  [EMAIL PROTECTED]
  



Re: [R] parsing speed

2005-05-17 Thread Barry Rowlingson
Jan T. Kim wrote:
Generally, I fully agree -- modular coding is good, not only in R.
However, with regard to execution time, modularisation that involves
passing of large amounts of data (100 x 1000 data frames etc.) can
cause problems.
 I've just tried a few simple examples of throwing biggish (3000x3000) 
matrices around and haven't encountered any pathological behaviour yet. 
I tried modifying the matrices within the functions, tried looping a few 
thousand times to estimate the matrix-passing overhead, and in most 
cases the modular version ran pretty much as fast as - or occasionally 
faster than - the inline version. There was some variability in CPU time 
taken, probably due to garbage collection.

 Does anyone have a simple example where passing large data sets causes 
a huge increase in CPU time? I think R is pretty smart with its 
parameter passing these days - anyone who thinks it's still like Splus 
version 2.3 should update their brains to the 21st century.
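A quick sketch of the sort of comparison described above (sizes reduced so it runs quickly; exact timings will of course vary by machine):

```r
# Compare calling through a function with doing the work inline.
m <- matrix(rnorm(1000 * 1000), 1000)

col_total <- function(x) colSums(x)      # "modular" version

t_inline  <- system.time(for (i in 1:50) colSums(m))
t_modular <- system.time(for (i in 1:50) col_total(m))

t_inline["elapsed"]
t_modular["elapsed"]
# Both compute the same thing; the unmodified matrix is passed
# without being copied, so the overhead is just the call itself.
stopifnot(identical(colSums(m), col_total(m)))
```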

Baz


[R] setting value arg of pdSymm() in nlme

2005-05-17 Thread William Valdar
Dear All,
I wish to model random effects that have known between-group covariance 
structure using the lme() function from library nlme. However, I have yet 
to get even a simple example to work. No doubt this is because I am 
confusing my syntax, but I would appreciate any guidance as to how. I have 
studied Pinheiro & Bates carefully (though it's always possible I've 
missed something), the few posts mentioning pdSymm (some of which suggest 
lme is suboptimal here anyway) and ?pdSymm (which has only a trivial 
example, see later) but have not yet found a successful example of syntax 
for this particular problem.

I am using the pdSymm class to specify a positive definite matrix 
corresponding to the covariance structure of a random batch effect, and 
passing this to lme() through the random= argument. To do this, I must 
set the value= argument of pdSymm.

Consider the following simple and self-contained example:
library(nlme)
# make response and batch data
batch.names <- c("A", "B", "C")
data.df <- data.frame(
response = rnorm(100),
batch = factor(sample(batch.names, 100, replace=T))
)
# make covariance matrix for batch
batch.mat <- matrix(c(1, .5, .2, .5, 1, .3, .2, .3, 1), ncol=3)
colnames(batch.mat) <- batch.names
rownames(batch.mat) <- batch.names
# fit batch as a simple random intercept
lme(response ~ 1, data=data.df, random=~1|batch)
# ...works fine
# do the same using pdSymm notation
lme(response ~ 1, data=data.df,
random=list( batch=pdSymm(form=~1) )
)
# ...works fine also
# specify cov structure using value arg
lme(response ~ 1, data=data.df,
random=list( batch=pdSymm(
value=batch.mat,
form=~1,
nam=batch.names)
)
)
# throws error below
---snip---
Error in Names<-.pdMat(`*tmp*`, value = "(Intercept)") :
Length of names should be 3
traceback()
7: stop(paste("Length of names should be", length(dn)))
6: Names<-.pdMat(`*tmp*`, value = "(Intercept)")
5: Names<-(`*tmp*`, value = "(Intercept)")
4: Names<-.reStruct(`*tmp*`, value = list(batch = "(Intercept)"))
3: Names<-(`*tmp*`, value = list(batch = "(Intercept)"))
2: lme.formula(response ~ 1, data = data.df, random = list(batch = 
pdSymm(value = batch.mat,
   form = ~1, nam = batch.names)))
1: lme(response ~ 1, data = data.df, random = list(batch = pdSymm(value = 
batch.mat,
   form = ~1, nam = batch.names)))
---snip---

The length of batch.names is 3, so I find this error enigmatic. Note that 
I had to specify all three of value, form and nam, otherwise I got 
missing-argument errors. Also note that doing

 pdSymm(value=batch.mat, form=~1, nam=batch.names)
on the command line, like the similar invocation described on ?pdSymm, 
works fine also. It's just lme() that doesn't like it.

Can anybody show me what I should be doing instead? Some successful code 
will greatly clarify the issue. (My version details are below). Also, I 
notice the pdMat scheme is absent from lme() in lme4. Is this 
functionality deprecated in lme4 and excluded from lmer?

Many thanks,
William
Version details: running R 2.1.0 on windows XP, using nlme 3.1-57.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Dr William Valdar   ++44 (0)1865 287 717
Wellcome Trust Centre   [EMAIL PROTECTED]
for Human Genetics, Oxford  www.well.ox.ac.uk/~valdar


Re: [R] adjusted p-values with TukeyHSD?

2005-05-17 Thread Christoph Strehblow
Hi!
Thanks a lot, works as advertised. If I use "Tukey", it even gives  
raw, Bonferroni- and Tukey-corrected p-values!

Thx for the help,
Christoph Strehblow, MD
Department of Rheumatology, Diabetes and Endocrinology
Wilhelminenspital, Vienna, Austria
[EMAIL PROTECTED]
Am 17.05.2005 um 13:23 schrieb Christoph Buser:
Dear Christoph
You can use the multcomp package. Please have a look at the
following example:
library(multcomp)
The first two lines were already proposed by Erin Hodgess:
summary(fm1 - aov(breaks ~ wool + tension, data = warpbreaks))
TukeyHSD(fm1, tension, ordered = TRUE)
Tukey multiple comparisons of means
95% family-wise confidence level
factor levels have been ordered
Fit: aov(formula = breaks ~ wool + tension, data = warpbreaks)
$tension
 difflwr  upr
M-H  4.72 -4.6311985 14.07564
L-H 14.72  5.3688015 24.07564
L-M 10.00  0.6465793 19.35342
By using the functions simtest or simint you can get the
p-values, too:
summary(simtest(breaks ~ wool + tension, data = warpbreaks,  
whichf=tension,
type = Tukey))

 Simultaneous tests: Tukey contrasts
Call:
simtest.formula(formula = breaks ~ wool + tension, data = warpbreaks,
whichf = tension, type = Tukey)
 Tukey contrasts for factor tension, covariable:  wool
Contrast matrix:
  tensionL tensionM tensionH
tensionM-tensionL 0 0   -110
tensionH-tensionL 0 0   -101
tensionH-tensionM 0 00   -11
Absolute Error Tolerance:  0.001
Coefficients:
  Estimate t value Std.Err. p raw p Bonf p adj
tensionH-tensionL  -14.722  -3.8023.872 0.000  0.001 0.001
tensionM-tensionL  -10.000  -2.5823.872 0.013  0.026 0.024
tensionH-tensionM   -4.722  -1.2193.872 0.228  0.228 0.228

or if you prefer to get the confidence intervals, too, you can
use:
summary(simint(breaks ~ wool + tension, data = warpbreaks,
               whichf = "tension", type = "Tukey"))

Simultaneous 95% confidence intervals: Tukey contrasts
Call:
simint.formula(formula = breaks ~ wool + tension, data = warpbreaks,
    whichf = "tension", type = "Tukey")
 Tukey contrasts for factor tension, covariable:  wool
Contrast matrix:
                      tensionL tensionM tensionH
tensionM-tensionL 0 0       -1        1        0
tensionH-tensionL 0 0       -1        0        1
tensionH-tensionM 0 0        0       -1        1
Absolute Error Tolerance:  0.001
 95 % quantile:  2.415
Coefficients:
                  Estimate   2.5 % 97.5 % t value Std.Err. p raw p Bonf p adj
tensionM-tensionL  -10.000 -19.352 -0.648  -2.582    3.872 0.013  0.038 0.034
tensionH-tensionL  -14.722 -24.074 -5.370  -3.802    3.872 0.000  0.001 0.001
tensionH-tensionM   -4.722 -14.074  4.630  -1.219    3.872 0.228  0.685 0.447

-
Please be careful: the confidence intervals returned by
simint are not associated with the p-values from 'simtest', as
described in the help pages of the two functions.
-
I did not have time to check the differences between the functions or
to read the references given on the help page.
If you are interested in the functions, you can check those to
find out which one you prefer.
Best regards,
Christoph Buser
--
Christoph Buser [EMAIL PROTECTED]
Seminar fuer Statistik, LEO C13
ETH (Federal Inst. Technology)8092 Zurich SWITZERLAND
phone: x-41-44-632-4673fax: 632-1228
http://stat.ethz.ch/~buser/
--
Christoph Strehblow writes:
hi list,
i have to ask you again, having tried and searched for several  
days...

i want to do a TukeyHSD after an ANOVA, and want to get the adjusted
p-values after the Tukey Correction.
i found the p.adjust function, but it can only correct for holm,
hochberg, bonferroni, but not Tukey.
Is it not possible to get adjusted p-values after Tukey correction?
sorry, if this is an often-answered question, but i didn't find it in
the list archive...
thx a lot, list, Chris
Christoph Strehblow, MD
Department of Rheumatology, Diabetes and Endocrinology
Wilhelminenspital, Vienna, Austria
[EMAIL PROTECTED]
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting- 
guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] install.packages parameters

2005-05-17 Thread Liaw, Andy
Caveat: I know next to nothing about Mac...

That said, my guess is that you installed R from binary, rather than
building from source.  In that case the compilers and flags, etc., are
configured to the machine that the binary is built on.  You can look in
$RHOME/etc/Makeconf to see the settings, and see if changing them helps.
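For what it's worth, the settings Andy mentions can be inspected from within R itself. This is a hedged sketch (not from the original post); the variable names are the usual ones found in etc/Makeconf:

```r
## Hedged sketch: list the compiler-related settings R was configured with.
## R.home() points at $RHOME; etc/Makeconf holds CC, F77, flags, etc.
makeconf <- file.path(R.home(), "etc", "Makeconf")
grep("^(CC|CFLAGS|F77|FFLAGS|CPPFLAGS|FLIBS)", readLines(makeconf), value = TRUE)
```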

Andy

 From: [EMAIL PROTECTED]
 
 Hello.
 
 R is having some trouble installing a package because it 
 passed arguments to 
 gcc which were non-existent directories and files.  It also 
 didn't find 
 g77, although it's in a directory in my $PATH;  I tricked it 
 by making a 
 sym link in /usr/bin.
 
 What file does R get these parameters from?  
 I've looked for the parameters in the package source, the 
 install.packages 
 help pages, and the R preferences menu, all to no avail.
 
 I am running R 2.1.0 on Mac 10.3.8, and three days ago I installed a
 different package from source where installation involved gcc without
 any problems, and nothing has changed since then.  The packages I'm
 trying to install are Joe Schafer's mix, norm, and cat.
 
 Thanks,
 
 Janet
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html
 
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] adjusted p-values with TukeyHSD?

2005-05-17 Thread Sander Oom
Hi Chris and Chris,
I was keeping my eye on this thread as I have also been discovering 
multiple comparisons recently. Your instructions are very clear! Thanks.

Now I would love to see an R boffin write a nifty function to produce a 
graphical representation of the multiple comparison, like this one:

http://www.theses.ulaval.ca/2003/21026/21026024.jpg
Should not be too difficult. [Anyone up for the challenge?]
I came across more multiple comparison info here:
http://www.agr.kuleuven.ac.be/vakken/statisticsbyR/ANOVAbyRr/multiplecomp.htm
Cheers,
Sander.
Christoph Buser wrote:
Dear Christoph
You can use the multcomp package. Please have a look at the
following example:
library(multcomp)
The first two lines were already proposed by Erin Hodgess:
summary(fm1 <- aov(breaks ~ wool + tension, data = warpbreaks))
TukeyHSD(fm1, "tension", ordered = TRUE)
Tukey multiple comparisons of means
95% family-wise confidence level
factor levels have been ordered
 
Fit: aov(formula = breaks ~ wool + tension, data = warpbreaks)

$tension
         diff        lwr      upr
M-H  4.72 -4.6311985 14.07564
L-H 14.72  5.3688015 24.07564
L-M 10.00  0.6465793 19.35342
 

By using the functions simtest or simint you can get the
p-values, too:
summary(simtest(breaks ~ wool + tension, data = warpbreaks,
                whichf = "tension", type = "Tukey"))
	 Simultaneous tests: Tukey contrasts 

Call: 
simtest.formula(formula = breaks ~ wool + tension, data = warpbreaks,
    whichf = "tension", type = "Tukey")

	 Tukey contrasts for factor tension, covariable:  wool 

Contrast matrix:
                      tensionL tensionM tensionH
tensionM-tensionL 0 0       -1        1        0
tensionH-tensionL 0 0       -1        0        1
tensionH-tensionM 0 0        0       -1        1
Absolute Error Tolerance:  0.001 

Coefficients:
                  Estimate t value Std.Err. p raw p Bonf p adj
tensionH-tensionL  -14.722  -3.802    3.872 0.000  0.001 0.001
tensionM-tensionL  -10.000  -2.582    3.872 0.013  0.026 0.024
tensionH-tensionM   -4.722  -1.219    3.872 0.228  0.228 0.228

or if you prefer to get the confidence intervals, too, you can
use:
summary(simint(breaks ~ wool + tension, data = warpbreaks,
               whichf = "tension", type = "Tukey"))
Simultaneous 95% confidence intervals: Tukey contrasts
Call: 
simint.formula(formula = breaks ~ wool + tension, data = warpbreaks,
    whichf = "tension", type = "Tukey")

	 Tukey contrasts for factor tension, covariable:  wool 

Contrast matrix:
                      tensionL tensionM tensionH
tensionM-tensionL 0 0       -1        1        0
tensionH-tensionL 0 0       -1        0        1
tensionH-tensionM 0 0        0       -1        1
Absolute Error Tolerance:  0.001 

 95 % quantile:  2.415 

Coefficients:
                  Estimate   2.5 % 97.5 % t value Std.Err. p raw p Bonf p adj
tensionM-tensionL  -10.000 -19.352 -0.648  -2.582    3.872 0.013  0.038 0.034
tensionH-tensionL  -14.722 -24.074 -5.370  -3.802    3.872 0.000  0.001 0.001
tensionH-tensionM   -4.722 -14.074  4.630  -1.219    3.872 0.228  0.685 0.447
-
Please be careful: the confidence intervals returned by
simint are not associated with the p-values from 'simtest', as
described in the help pages of the two functions.
-
I did not have time to check the differences between the functions or
to read the references given on the help page.
If you are interested in the functions, you can check those to
find out which one you prefer.
Best regards,
Christoph Buser
--
Christoph Buser [EMAIL PROTECTED]
Seminar fuer Statistik, LEO C13
ETH (Federal Inst. Technology)  8092 Zurich  SWITZERLAND
phone: x-41-44-632-4673 fax: 632-1228
http://stat.ethz.ch/~buser/
--
Christoph Strehblow writes:
  hi list,
  
  i have to ask you again, having tried and searched for several days...
  
  i want to do a TukeyHSD after an Anova, and want to get the adjusted  
  p-values after the Tukey Correction.
  i found the p.adjust function, but it can only correct for holm,  
  hochberg, bonferroni, but not Tukey.
  
  Is it not possible to get adjusted p-values after Tukey correction?
  
  sorry, if this is an often-answered question, but i didn't find it in
  the list archive...
  
  thx a lot, list, Chris
  
  
  Christoph Strehblow, MD
  Department of Rheumatology, Diabetes and Endocrinology
  Wilhelminenspital, Vienna, Austria
  [EMAIL PROTECTED]
  
  __
  R-help@stat.math.ethz.ch mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

[R] Finding the right number of clusters

2005-05-17 Thread Philip Bermingham
SAS has something called the cubic criterion cutoff for finding the 
most appropriate number of clusters.  Does R have anything that would 
replicate that? I've been searching the lists and can't seem to find 
anything that would point me in the right direction.

Thanks in advance,
Philip Bermingham
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Finding the right number of clusters

2005-05-17 Thread Romain Francois
Le 17.05.2005 14:42, Philip Bermingham a écrit :
SAS has something called the cubic criterion cutoff for finding the 
most appropriate number of clusters.  Does R have anything that would 
replicate that? I've been searching the lists and can't seem to find 
anything that would point me in the right direction.

Thank in advance,
Philip Bermingham
Hello,
Package fpc has a function cluster.stats with a lot of criteria, like
G2, G3, etc.
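A hedged sketch of how that might be used to compare partition sizes (the G2/G3 switches and result fields are assumptions based on fpc's cluster.stats interface; check ?cluster.stats in your version):

```r
library(fpc)

## Hypothetical example: score kmeans partitions for several values of k.
set.seed(1)
x <- rbind(matrix(rnorm(100, mean = 0), ncol = 2),
           matrix(rnorm(100, mean = 4), ncol = 2))
d <- dist(x)

for (k in 2:6) {
  cl <- kmeans(x, centers = k)$cluster
  st <- cluster.stats(d, cl, G2 = TRUE, G3 = TRUE)
  cat("k =", k, " G2 =", st$g2, " G3 =", st$g3,
      " avg.silwidth =", st$avg.silwidth, "\n")
}
```

The k that optimizes the chosen criterion is then a candidate for the number of clusters.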

Romain
--
 ~ 
~~  Romain FRANCOIS - http://addictedtor.free.fr ~~
Etudiant  ISUP - CS3 - Industrie et Services   
~~http://www.isup.cicrp.jussieu.fr/  ~~
   Stagiaire INRIA Futurs - Equipe SELECT  
~~   http://www.inria.fr/recherche/equipes/select.fr.html~~
 ~ 
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] parsing speed

2005-05-17 Thread Liaw, Andy
 From: Barry Rowlingson
 
 Jan T. Kim wrote:
 
  Generally, I fully agree -- modular coding is good, not only in R.
  However, with regard to execution time, modularisation that involves
  passing of large amounts of data (100 x 1000 data frames etc.) can
  cause problems.
 
   I've just tried a few simple examples of throwing biggish 
 (3000x3000) 
 matrices around and haven't encountered any pathological 
 behaviour yet. 
 I tried modifying the matrices within the functions, tried 
 looping a few 
 thousand times to estimate the matrix passing overhead, and in most 
 cases the modular version ran pretty much as fast as - or 
 occasionally 
 faster than - the inline version. There was some variability 
 in CPU time 
 taken, probably due to garbage collection.
 
   Does anyone have a simple example where passing large data 
 sets causes 
 a huge increase in CPU time? I think R is pretty smart with its 
 parameter passing these days - anyone who thinks it's still like Splus 
 version 2.3 should update their brains to the 21st Century.

I think one example of this is using the formula interface to fit models on
large data sets, especially those with tons of variables.  Some model
fitting functions have the default interface f(x, y, ...), along with a
formula method f(formula, data, ...).  If x has lots of variables (say over
1000), using the formula interface can take several times longer than
calling the raw interface directly.
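A rough way to see the effect Andy describes (a hypothetical benchmark, not from the original post; lm.fit is the raw least-squares workhorse that lm calls after building the model frame):

```r
## Hypothetical benchmark: formula interface vs. raw interface for lm.
set.seed(1)
n <- 2000; p <- 500
X <- matrix(rnorm(n * p), n, p)
y <- rnorm(n)
dat <- data.frame(y = y, X)

system.time(lm(y ~ ., data = dat))    # builds terms and a model frame first
system.time(lm.fit(cbind(1, X), y))   # skips the formula machinery entirely
```

With many columns, most of the extra time in the first call goes into constructing the terms object and model matrix, not into the fit itself.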

Andy
 
 Baz
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html
 
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] Survey of moving window statistical functions - still l ooking for fast mad function

2005-05-17 Thread Tuszynski, Jaroslaw W.
On 9 Oct 2004, Brian D. Ripley wrote: 

 On Fri, 8 Oct 2004, Tuszynski, Jaroslaw W. wrote:
  Finally a question: I still need to get a moving-window mad function 
 faster; my runmad function is not that much faster than the apply/embed 
 combo that I used before, and this is where my code spends most 
 of its time. I need something like runmed but for the mad function. Any
suggestions?

 Write your own C-level implementation, as runmed and most of the other
fast functions you cite are.

I did as suggested and just released runmean, runmax, runmin, runmad and
runquantile functions as part of caMassClass package.
Thanks for the suggestion; now my code runs in 20 min. instead of overnight.
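For readers without the package, the slow apply/embed idiom mentioned above looks roughly like this (a sketch for comparison; the compiled runmad in caMassClass should be much faster):

```r
## Slow reference implementation of a running mad over windows of width k,
## using the apply/embed combination mentioned in the post.
runmad_slow <- function(x, k) apply(embed(x, k), 1, mad)

x <- rnorm(1000)
head(runmad_slow(x, 21))   # one mad value per window of 21 points
```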

Jarek
\===

 Jarek Tuszynski, PhD.   o / \ 
 Science Applications International Corporation  \__,|  
 (703) 676-4192  \
 [EMAIL PROTECTED] `\

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] installing R on Irix

2005-05-17 Thread =?ISO-8859-2?Q?Cserh=E1ti_M=E1ty=E1s?=
Hello everyone, I need some help here.

The problem is I was trying to install R on my Irix system, with little 
success: I got the following ugly error messages: watch out:


begin installing recommended package mgcv
Cannot create directory : No such file or directory
* Installing *source* package 'mgcv' ...
** libs
gmake[3]: Entering directory `/tmp/R.INSTALL.13709658/mgcv/src'
gcc -I/usr/home/csmatyi/programs/R/R-2.1.0/include  -
I/usr/local/include -g -O2 -c gcv.c -o gcv.o
gcc -I/usr/home/csmatyi/programs/R/R-2.1.0/include  -
I/usr/local/include -g -O2 -c magic.c -o magic.o
gcc -I/usr/home/csmatyi/programs/R/R-2.1.0/include  -
I/usr/local/include -g -O2 -c mat.c -o mat.o
gcc -I/usr/home/csmatyi/programs/R/R-2.1.0/include  -
I/usr/local/include -g -O2 -c matrix.c -o matrix.o
as: Error: /var/tmp/ccAomyDd.s, line 23679: register expected
  dmtc1 244($sp),$f0
as: Error: /var/tmp/ccAomyDd.s, line 23679: Undefined symbol: 244
gmake[3]: *** [matrix.o] Error 1
gmake[3]: Leaving directory `/tmp/R.INSTALL.13709658/mgcv/src'
ERROR: compilation failed for package 'mgcv'
** Removing '/usr/home/csmatyi/programs/R/R-2.1.0/library/mgcv'
gmake[2]: *** [mgcv.ts] Error 1
gmake[2]: Leaving directory `/usr/home/csmatyi/programs/R/R-
2.1.0/src/library/Recommended'
gmake[1]: *** [recommended-packages] Error 2
gmake[1]: Leaving directory `/usr/home/csmatyi/programs/R/R-
2.1.0/src/library/Recommended'
gmake: *** [stamp-recommended] Error 2


What could the problem be here?

Thanks, Matthew C.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] adjusted p-values with TukeyHSD?

2005-05-17 Thread Sander Oom
Shame I cannot get hold of Hsu and Peruggia (1994) just now. 
I am quite curious to see what their graphs look like. Would you be able 
to give an example in R? ;-)

The graph I put forward is typically used by ecologists to summarize 
data. It comes down to a simple means plot with error bars. Significant 
differences of multiple comparisons are then added using the letters a, 
b, c etc. If two bars have the same letter, they are not significantly 
different. It can become quite complicated when mean one is different 
from mean three but not from mean two and mean two is different from 
mean three but not mean one. You then get: a, ab, c for mean one, two 
and three respectively.

Of course what is often used does not constitute the best way of doing it.
Sander.

Liaw, Andy wrote:
From: Sander Oom
Hi Chris and Chris,
I was keeping my eye on this thread as I have also been discovering 
multiple comparisons recently. Your instructions are very 
clear! Thanks.
One thing to note, though:  Multcomp does not do Dunnett's or 
Tukey's multiple comparisons per se.  Those names in multcomp 
refer to the contrasts being used (comparison to a control for 
Dunnett and all pairwise comparison for Tukey).  The actual 
methods used are as described in the references of the help
pages.

 
Now I would love to see an R boffin write a nifty function to 
produce a 
graphical representation of the multiple comparison, like this one:

http://www.theses.ulaval.ca/2003/21026/21026024.jpg
Should not be too difficult. [Anyone up for the challenge?]
I beg to differ:  That's probably as bad a way as one can use to 
graphically show multiple comparison.  The shaded bars serve no 
purpose.

Two alternatives that I'm aware of are 

- Multiple comparison circles, due to John Sall, and not 
  surprisingly, implemented in JMP and SAS/Insight.  See:
 
http://support.sas.com/documentation/onlinedoc/v7/whatsnew/insight/sect4.htm

- The mean-mean display proposed by Hsu and Peruggia:
  Hsu, J. C. and M. Peruggia (1994). 
  Graphical representations of Tukey's multiple comparison method.
  Journal of Computational and Graphical Statistics 3, 143-161

Andy
 
I came across more multiple comparison info here:
http://www.agr.kuleuven.ac.be/vakken/statisticsbyR/ANOVAbyRr/multiplecomp.htm
Cheers,
Sander.
Christoph Buser wrote:
Dear Christoph
You can use the multcomp package. Please have a look at the
following example:
library(multcomp)
The first two lines were already proposed by Erin Hodgess:
summary(fm1 <- aov(breaks ~ wool + tension, data = warpbreaks))
TukeyHSD(fm1, "tension", ordered = TRUE)
   Tukey multiple comparisons of means
   95% family-wise confidence level
   factor levels have been ordered
Fit: aov(formula = breaks ~ wool + tension, data = warpbreaks)
$tension
         diff        lwr      upr
M-H  4.72 -4.6311985 14.07564
L-H 14.72  5.3688015 24.07564
L-M 10.00  0.6465793 19.35342
By using the functions simtest or simint you can get the
p-values, too:
summary(simtest(breaks ~ wool + tension, data = warpbreaks,
                whichf = "tension", type = "Tukey"))
	 Simultaneous tests: Tukey contrasts 

Call: 
simtest.formula(formula = breaks ~ wool + tension, data = warpbreaks,
    whichf = "tension", type = "Tukey")
	 Tukey contrasts for factor tension, covariable:  wool 

Contrast matrix:
                      tensionL tensionM tensionH
tensionM-tensionL 0 0       -1        1        0
tensionH-tensionL 0 0       -1        0        1
tensionH-tensionM 0 0        0       -1        1
Absolute Error Tolerance:  0.001 

Coefficients:
                  Estimate t value Std.Err. p raw p Bonf p adj
tensionH-tensionL  -14.722  -3.802    3.872 0.000  0.001 0.001
tensionM-tensionL  -10.000  -2.582    3.872 0.013  0.026 0.024
tensionH-tensionM   -4.722  -1.219    3.872 0.228  0.228 0.228

or if you prefer to get the confidence intervals, too, you can
use:
summary(simint(breaks ~ wool + tension, data = warpbreaks,
               whichf = "tension", type = "Tukey"))
Simultaneous 95% confidence intervals: Tukey contrasts
Call: 
simint.formula(formula = breaks ~ wool + tension, data = warpbreaks,
    whichf = "tension", type = "Tukey")
	 Tukey contrasts for factor tension, covariable:  wool 

Contrast matrix:
                      tensionL tensionM tensionH
tensionM-tensionL 0 0       -1        1        0
tensionH-tensionL 0 0       -1        0        1
tensionH-tensionM 0 0        0       -1        1
Absolute Error Tolerance:  0.001 

95 % quantile:  2.415 

Coefficients:
                  Estimate   2.5 % 97.5 % t value Std.Err. p raw p Bonf p adj
tensionM-tensionL  -10.000 -19.352 -0.648  -2.582    3.872 0.013  0.038 0.034
tensionH-tensionL  -14.722 -24.074 -5.370  -3.802    3.872 0.000  0.001 0.001
tensionH-tensionM   -4.722 -14.074  4.630  -1.219    3.872 0.228  0.685 0.447
-
Please be careful: The resulting confidence intervals in
simint are not associated with the p-values 

Re: [R] installing R on Irix

2005-05-17 Thread Peter Dalgaard
Cserháti Mátyás [EMAIL PROTECTED] writes:

 Hello everyone, I need some help here.
 
 The problem is I was trying to install R on my Irix system, with little 
 success: I got the following ugly error messages: watch out:
 
 
 begin installing recommended package mgcv
 Cannot create directory : No such file or directory
 * Installing *source* package 'mgcv' ...
 ** libs
 gmake[3]: Entering directory `/tmp/R.INSTALL.13709658/mgcv/src'
 gcc -I/usr/home/csmatyi/programs/R/R-2.1.0/include  -
 I/usr/local/include -g -O2 -c gcv.c -o gcv.o
 gcc -I/usr/home/csmatyi/programs/R/R-2.1.0/include  -
 I/usr/local/include -g -O2 -c magic.c -o magic.o
 gcc -I/usr/home/csmatyi/programs/R/R-2.1.0/include  -
 I/usr/local/include -g -O2 -c mat.c -o mat.o
 gcc -I/usr/home/csmatyi/programs/R/R-2.1.0/include  -
 I/usr/local/include -g -O2 -c matrix.c -o matrix.o
 as: Error: /var/tmp/ccAomyDd.s, line 23679: register expected
   dmtc1 244($sp),$f0
 as: Error: /var/tmp/ccAomyDd.s, line 23679: Undefined symbol: 244
 gmake[3]: *** [matrix.o] Error 1
 gmake[3]: Leaving directory `/tmp/R.INSTALL.13709658/mgcv/src'
 ERROR: compilation failed for package 'mgcv'
 ** Removing '/usr/home/csmatyi/programs/R/R-2.1.0/library/mgcv'
 gmake[2]: *** [mgcv.ts] Error 1
 gmake[2]: Leaving directory `/usr/home/csmatyi/programs/R/R-
 2.1.0/src/library/Recommended'
 gmake[1]: *** [recommended-packages] Error 2
 gmake[1]: Leaving directory `/usr/home/csmatyi/programs/R/R-
 2.1.0/src/library/Recommended'
 gmake: *** [stamp-recommended] Error 2
 
 
 What could the problem be here?

Offhand: You seem to be using the system assembler as on the output
of gcc. Sometimes one needs the GNU assembler (gas) in which case
you likely need to install the GNU binutils. If that is the case, it
is quite surprising that you got that far, but stranger things have
happened... 

-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] setting value arg of pdSymm() in nlme

2005-05-17 Thread William Valdar
Hi,
I'm afraid that I don't understand what you are trying to do.  With a
formula of ~ 1 the pdSymm generator creates a 1x1 variance-covariance
matrix, which you are initializing to a 3x3 matrix.
Oh... I had a feeling I was doing something wrong there.
What is batch.mat supposed to represent?
I would like to use batch.mat to specify a correlation structure for the 
batches A, B and C. Specifically, I wish to work out the contribution to 
the variance of the batch random effect, given that I know some pairs of 
batches (eg, A and B) are going to be more similar than other pairs (eg, A 
and C) and how similar they are likely to be. My (mis?)reading of PB p165 
suggested this may be possible by turning some of the pdIdents into 
pdSymms.

I was hoping to use this test example as a prelude to using genetic 
relationship data to impose a correlation structure on a subject-level or 
family-level effect. I understand from an earlier post (Jarrod Hadfield, 
2003) that lme is not really optimized for this, but I would nonetheless 
like to evaluate to what extent it can do it anyway. The example used in 
that post was extreme: each case was effectively a different realization 
of a random effect where the correlation between levels is known. My needs 
are simpler: in the example I gave above, batches A, B and C might 
represent three families related by different degrees.

Yes, although I think you mean lmer in lme4.  Because the lmer function
allows multiple nested or non-nested grouping factors, the need for the
pdMat classes is eliminated (or greatly reduced) and the code can be
simplified considerably.  There is an article in the 2005/1 issue of R
News describing the use of lmer.
Thanks for pointing this out as I had missed this article. I know lmer is 
under development and am very interested in what it can do. Having found 
lme/lmer useful for more standard problems I would like to use it for 
genetic-type analysis where possible rather than resort to a different 
language (eg SAS) or specialized proprietary software (eg, ASREML). 
However, I understand that may not be possible.

William
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Dr William Valdar   ++44 (0)1865 287 717
Wellcome Trust Centre   [EMAIL PROTECTED]
for Human Genetics, Oxford  www.well.ox.ac.uk/~valdar
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Combinations with two part column

2005-05-17 Thread Sofyan Iyan
Dear R-helpers,
I am a beginner using R.
This is the first question in this list.
My question: is it possible to make combinations with a two-part column?
If I have the numbers 1,2,3,4,5,6,7,8, I need a result something like below:

1,2,3,4,5 6,7,8
1,2,3,4,7 5,6,8
2,3,4,5,6 1,7,8
1,2,3,6,7 4,5,8
1,2,3,4,8 5,6,7
3,4,6,7,8 1,2,5


I  would be very happy if anyone could help me.
Best regards, 

Sofyan

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Problem in lme longitudinal model

2005-05-17 Thread Bernardo Rangel Tura
Hi R-masters!
I am trying to model heart disease mortality in my country with an lme
model like this:

m1.lme <- lme(log(rdeath) ~ age*year, random = ~age|year, data = dados)
where: rdeath is rate of mortality per 10 person per age and year
   age: age of death (22 27 32 37 42 47 52 57 62 67 72 77 82)
   year: year of death (1980:2002)
I don't have a problem fitting the model, but in the residual analysis I
have one problem.

If I type acf(m1.lme$residuals), the graph shows 4 plots with a sine curve
(an example is attached), and I get this variogram:

 Variogram(m1.lme)
      variog dist n.pairs
1  0.3939065    1     276
2  0.6486452    2     253
3  0.9679870    3     230
4  1.2765094    4     207
5  1.4158147    5     184
6  1.4264685    6     161
7  1.4048608    7     138
8  1.1902613    8     115
9  0.9619330    9      92
10 0.7037427   10      69
11 0.8591257   11      46
12 1.2215657   12      23
Well, I need help solving this problem.
Can somebody help me?
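One direction that is sometimes tried for structured residual autocorrelation in lme (a hedged sketch, assuming the poster's objects dados, rdeath, age, and year; not a verified fix for these data) is to add a within-group correlation structure and compare fits:

```r
library(nlme)

## Hedged sketch: refit with an AR(1) residual correlation within year,
## then compare the two models and re-check the normalized residual ACF.
m1 <- lme(log(rdeath) ~ age * year, random = ~ age | year, data = dados)
m2 <- update(m1, correlation = corAR1(form = ~ 1 | year))

anova(m1, m2)                                        # does AR(1) help?
plot(ACF(m2, resType = "normalized"), alpha = 0.05)  # residual ACF again
```

A sinusoidal ACF can also indicate a missing systematic term (e.g. a nonlinear age trend), so adding polynomial or spline terms in age is another option to try.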
Thanks in advance
Bernardo Rangel Tura, MD, MSc
National Institute of Cardiology Laranjeiras
Rio de Janeiro Brazil 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

RE: [R] Omitting NAs in aggregate.ts()

2005-05-17 Thread Waichler, Scott R
  I have a time series vector (not necessarily ts class) that has NAs in it.
  How can I omit the NAs when using aggregate.ts() to compute a function
  on each window?  If there is at least one non-NA value in each window,
  I'd like to proceed with evaluating the function; otherwise, I would
  like NA returned.  I'm not wedded to aggregate.ts and don't need ts
  class if there is another fast, vectorized way to handle this.  Here
  is what I am trying to do, with the invalid na.rm thrown in:

  as.vector(aggregate.ts(x, ndeltat=24, FUN=min, na.rm=F))


 I don't know is the short answer, but if I had the data I
 might have tried: mymin <- function(x) min(x, na.rm = TRUE);
 as.vector(aggregate.ts(x, ndeltat=24, FUN=mymin))


Thanks for the suggestions, and my apologies for not providing an
example and error messages.  Using a custom function for FUN in
aggregate.ts() is the way to go.  Here is a complete solution to my
problem:

 e <- c(2.3, 4.5, 6.2, 1.8)
 f <- c(2.3, NA, NA, NA)

 mymin <- function(x) {
+   if (length(x[!is.na(x)]) > 0) return(min(x, na.rm = TRUE))
+   else return(NA)
+ }

 as.vector(aggregate.ts(e, ndeltat=2, FUN=mymin))
[1] 2.3 1.8
 as.vector(aggregate.ts(f, ndeltat=2, FUN=mymin))
[1] 2.3  NA
   

Scott Waichler
Pacific Northwest National Laboratory
[EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] simple question, i hope

2005-05-17 Thread BJ
How do you output a list to a text file without the extra line numbers 
and stuff?

I have a list b, and tried
zz <- textConnection("captest.txt", "w")
sink(zz)
b
sink()
close(zz)
but that isn't what i want, because i get [[1]]
   [1] "a"
etc. Is there a simple way to do the R equivalent of this perl code?
open(OUT, ">out.txt");
print OUT @b;
close OUT;
Thank you for your help. I tried poring over the documentation for
this, but couldn't find what I was looking for. ~Erithid
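For the record, the usual idioms for this (a sketch; writeLines and cat both write plain text without the [[1]] decorations that print() adds):

```r
b <- list("a", "b", "c")              # a small example list

## one element per line, no list indices printed
writeLines(unlist(b), con = "out.txt")

## or, closest to perl's  print OUT @b;
cat(unlist(b), file = "out.txt", sep = "\n")
```

unlist() flattens the list to a character vector first; sink() and textConnection() are unnecessary for simply writing values to a file.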

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Combinations with two part column

2005-05-17 Thread Gabor Grothendieck
On 5/17/05, Sofyan Iyan [EMAIL PROTECTED] wrote:
 Dear R-helpers,
 I am a beginner using R.
 This is the first question in this list.
 My question: is it possible to make combinations with a two-part column?
 If I have the numbers 1,2,3,4,5,6,7,8, I need a result something like below:
 
 1,2,3,4,5 6,7,8
 1,2,3,4,7 5,6,8
 2,3,4,5,6 1,7,8
 1,2,3,6,7 4,5,8
 1,2,3,4,8 5,6,7
 3,4,6,7,8 1,2,5
 
 

Try this:

library(gtools)
t(apply(combinations(8,5), 1, function(x) c(x, setdiff(1:8, x))))

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] predict() question

2005-05-17 Thread Weiwei Shi
Hi, there:
Following yesterday's question (I had a new level for a categorical
variable occur in the validation dataset and predict() complained about
it; I wrote some Python code to solve the problem), I am just
curious about some details of the mechanism:

I believe rpart follows CART, and for a categorical variable the
splitting criterion should be something like:
is it A or not?
   --yes, go to left branch
   --no, go to right

So, when you predict, if you have a new level C, for example,
predict() should not complain about the occurrence of C (of
course, if there are many C's in the validation set, it should complain).
Maybe for robustness, predict() has to check first whether there is a new
level or not.

I am not sure if my understanding is right or not; please advise!

Thanks,

-- 
Weiwei Shi, Ph.D

Did you always know?
No, I did not. But I believed...
---Matrix III

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Sweave and paths

2005-05-17 Thread Bill Rising
Is there some way to encourage \SweaveInput{foo} to find foo in a  
subdirectory of a file tree? Something along the lines of the  
behavior of list.files(stuff, recursive=TRUE). This would be very  
helpful for calling small modular files, such as solution sets and the  
like.

I couldn't see anything in the documentation, and I looked in the  
source code, but it seems that SweaveReadFiledoc() wants to look only  
in the directory which contains foo.

Still, I'm guessing that I'm missing something. Any tips would be  
much appreciated.

Bill
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Error message glmmPQL

2005-05-17 Thread Pedro Torres Saavedra
Hi,

I'm fitting a model for two-level nested binary data with the glmmPQL function, but I get 
this error message: Error in solve.default(estimates[dimE[1] - (p:1), dimE
[2] - (p:1), drop = FALSE]) : system is computationally singular: reciprocal 
condition number = 1.14416e-018.

How can I solve this problem? Or is the problem the data?

Thanks,


Pedro A. Torres S.
Graduate Student - Statistics
Department of Mathematics
University of Puerto Rico at Mayaguez
(1 787) 832-4040 X 2537

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Combinations with two part column

2005-05-17 Thread Sofyan Iyan
Thanks for your quick answer.
May I extend my question?
How do I format each row of the result with commas?
 library(gtools)
 comb8.5 <- t(apply(combinations(8,5), 1, function(x) c(x, setdiff(1:8, x))))
 comb8.5[,1:5]
      [,1] [,2] [,3] [,4] [,5]
 [1,]    1    2    3    4    5
 [2,]    1    2    3    4    6
 [3,]    1    2    3    4    7
 [4,]    1    2    3    4    8
 [5,]    1    2    3    5    6
 [6,]    1    2    3    5    7
...

I mean like:
1,2,3,4,5
1,2,3,4,6
1,2,3,4,7
1,2,3,4,8
1,2,3,5,6
1,2,3,5,7

 comb8.5[, 6:8]
     [,1] [,2] [,3]
[1,]    6    7    8
[2,]    5    7    8
[3,]    5    6    8
[4,]    5    6    7
[5,]    4    7    8
[6,]    4    6    8
...

and this one like:
 6,7,8
 5,7,8
 5,6,8
 5,6,7
 4,7,8
 4,6,8

Best,
Sofyan


On 5/17/05, Gabor Grothendieck [EMAIL PROTECTED] wrote:
 On 5/17/05, Sofyan Iyan [EMAIL PROTECTED] wrote:
  Dear R-helpers,
  I am a beginner using R.
  This is the first question in this list.
  My question: is it possible to make combinations split into two groups of columns?
  If I have the numbers 1,2,3,4,5,6,7,8, I need a result something like below:
 
  1,2,3,4,5 6,7,8
  1,2,3,4,7 5,6,8
  2,3,4,5,6 1,7,8
  1,2,3,6,7 4,5,8
  1,2,3,4,8 5,6,7
  3,4,6,7,8 1,2,5
  
 
 
 Try this:
 
 library(gtools)
 t(apply(combinations(8, 5), 1, function(x) c(x, setdiff(1:8, x))))


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Combinations with two part column

2005-05-17 Thread Gabor Grothendieck
Try write.table:

write.table(comb8.5[, 1:5], sep = ",", row.names = FALSE, col.names = FALSE)
write.table(comb8.5[, 6:8], sep = ",", row.names = FALSE, col.names = FALSE)
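
An alternative sketch that builds the comma-separated strings directly with paste(); base combn() is used here in place of gtools::combinations() only so the snippet runs without extra packages:

```r
# Same comb8.5 as in the gtools example, built with base combn():
# each row is a 5-element combination followed by its 3-element complement.
comb8.5 <- t(apply(t(combn(8, 5)), 1, function(x) c(x, setdiff(1:8, x))))
left  <- apply(comb8.5[, 1:5], 1, paste, collapse = ",")
right <- apply(comb8.5[, 6:8], 1, paste, collapse = ",")
head(left)   # "1,2,3,4,5" "1,2,3,4,6" ...
head(right)  # "6,7,8" "5,7,8" ...
```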


On 5/17/05, Sofyan Iyan [EMAIL PROTECTED] wrote:
 Thanks for your quick answer.
 Could I extend my question?
 How do I get each row printed with the values separated by commas?
  library(gtools)
  comb8.5 <- t(apply(combinations(8, 5), 1, function(x) c(x, setdiff(1:8, x))))
  comb8.5[, 1:5]
      [,1] [,2] [,3] [,4] [,5]
 [1,]    1    2    3    4    5
 [2,]    1    2    3    4    6
 [3,]    1    2    3    4    7
 [4,]    1    2    3    4    8
 [5,]    1    2    3    5    6
 [6,]    1    2    3    5    7
 ...
 
 I mean like:
1,2,3,4,5
1,2,3,4,6
1,2,3,4,7
1,2,3,4,8
1,2,3,5,6
1,2,3,5,7
 
  comb8.5[, 6:8]
      [,1] [,2] [,3]
 [1,]    6    7    8
 [2,]    5    7    8
 [3,]    5    6    8
 [4,]    5    6    7
 [5,]    4    7    8
 [6,]    4    6    8
 ...
 
 and this one like:
 6,7,8
 5,7,8
 5,6,8
 5,6,7
 4,7,8
 4,6,8
 
 Best,
 Sofyan
 
 
 On 5/17/05, Gabor Grothendieck [EMAIL PROTECTED] wrote:
  On 5/17/05, Sofyan Iyan [EMAIL PROTECTED] wrote:
   Dear R-helpers,
   I am a beginner using R.
   This is the first question in this list.
   My question: is it possible to make combinations split into two groups of columns?
   If I have the numbers 1,2,3,4,5,6,7,8, I need a result something like 
   below:
  
   1,2,3,4,5 6,7,8
   1,2,3,4,7 5,6,8
   2,3,4,5,6 1,7,8
   1,2,3,6,7 4,5,8
   1,2,3,4,8 5,6,7
   3,4,6,7,8 1,2,5
   
  
 
  Try this:
 
  library(gtools)
  t(apply(combinations(8, 5), 1, function(x) c(x, setdiff(1:8, x))))
 
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] cumsum on chron objects

2005-05-17 Thread Sebastian Luque
Hi,

Is there some alternative to cumsum for chron objects? I have data frames
that contain some chron objects that look like this:

DateTime
13/10/03 12:30:35
NA
NA
NA
15/10/03 16:30:05
NA
NA
...


and I've been trying to replace the NA's so that a date/time sequence is
created starting with the preceding available value. Because the number of
rows with NA's following each available date/time is unknown, I've split
the data frame using:

splitdf <- split(df, as.factor(df$DateTime))

so that I can later use lapply to work on each block of data. I thought
I could use cumsum and set the NA's to the desired interval to create the
date/time sequence starting with the first row. However, this function is
not defined for chron objects. Does anybody know of alternatives to create
such a sequence?

Thanks in advance,
-- 
Sebastian P. Luque

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Hmisc/Design and problem with cgroup

2005-05-17 Thread Aric Gregson
Hello,

I am trying to use the following to output a table to latex:

cohortbyagesummary <- by(data.frame(age, ethnicity), cohort, summary)

w <- latex.default(cohortbyagesummary, 
caption="Five Number Age Summaries by Cohort",
label="agesummarybycohort", 
cgroup=c('hello','goodbye','hello'),
colheads=c("Age", "Ethnicity"),
extracolheads=c('hello','goodbye'), # demonstration of subheadings
greek=TRUE,
ctable=TRUE)

I am not able to get the major column headings of cgroup to work. I
receive the error:
Object "cline" not found

I do not see in the examples or documentation that you must specify
cline or anything. 

Any suggestions? Thanks very much,

aric

R 2.0.1
Design 2.0-11
Hmisc  3.0-5
OS X 10.3.9

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] cumsum on chron objects

2005-05-17 Thread Gabor Grothendieck
On 5/17/05, Gabor Grothendieck [EMAIL PROTECTED] wrote:
 On 5/17/05, Sebastian Luque [EMAIL PROTECTED] wrote:
  Hi,
 
  Is there some alternative to cumsum for chron objects? I have data frames
  that contain some chron objects that look like this:
 
  DateTime
  13/10/03 12:30:35
  NA
  NA
  NA
  15/10/03 16:30:05
  NA
  NA
  ...
 
  and I've been trying to replace the NA's so that a date/time sequence is
  created starting with the preceding available value. Because the number of
  rows with NA's following each available date/time is unknown, I've split
  the data frame using:
 
  splitdf <- split(df, as.factor(df$DateTime))
 
  so that I can later use lapply to work on each block of data. I thought
  I could use cumsum and set the NA's to the desired interval to create the
  date/time sequence starting with the first row. However, this function is
  not defined for chron objects. Does anybody know of alternatives to create
  such a sequence?
 
 
 The 'zoo' package has na.locf which stands for Last Occurrence Carried
 Forward, which is what I believe you want.
 
 First let us create some test data, x:
 
  library(chron); library(zoo)
  x <- chron(c(1.5, 2, NA, NA, 4, NA))
  x
 [1] (01/02/70 12:00:00) (01/03/70 00:00:00) (NA NA)
 [4] (NA NA) (01/05/70 00:00:00) (NA NA)
 
  # na.locf is intended for zoo objects but we can convert
  # the chron object to zoo, apply na.locf and convert back:
 
  chron(as.vector(na.locf(zoo(as.vector(x)))))
 [1] (01/02/70 12:00:00) (01/03/70 00:00:00) (01/03/70 00:00:00)
 [4] (01/03/70 00:00:00) (01/05/70 00:00:00) (01/05/70 00:00:00)
 

Just to reply to my own post, it can actually be done even more
simply:

chron(na.locf(as.vector(x)))

Also in re-reading my post, I think the O in locf stands for observation 
rather than occurrence.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Combinations with two part column

2005-05-17 Thread Sofyan Iyan
Dear Gabor Grothendieck and James Holtman,

Thank you for giving me so much of your time to solve my problem.  
Many thanks and best regards,
Sofyan

On 5/17/05, Gabor Grothendieck [EMAIL PROTECTED] wrote:
 Try write.table:
 
 write.table(comb8.5[,1:5], sep = ,, row.names = FALSE, col.names = FALSE)
 write.table(comb8.5[,6:8], sep = ,, row.names = FALSE, col.names = FALSE)
 
 
 On 5/17/05, Sofyan Iyan [EMAIL PROTECTED] wrote:
  Thanks for your quick answer.
  Could I extend my question?
  How do I get each row printed with the values separated by commas?
   library(gtools)
   comb8.5 <- t(apply(combinations(8, 5), 1, function(x) c(x, setdiff(1:8, x))))
   comb8.5[, 1:5]
       [,1] [,2] [,3] [,4] [,5]
  [1,]    1    2    3    4    5
  [2,]    1    2    3    4    6
  [3,]    1    2    3    4    7
  [4,]    1    2    3    4    8
  [5,]    1    2    3    5    6
  [6,]    1    2    3    5    7
  ...
 
  I mean like:
 1,2,3,4,5
 1,2,3,4,6
 1,2,3,4,7
 1,2,3,4,8
 1,2,3,5,6
 1,2,3,5,7
 
   comb8.5[, 6:8]
       [,1] [,2] [,3]
  [1,]    6    7    8
  [2,]    5    7    8
  [3,]    5    6    8
  [4,]    5    6    7
  [5,]    4    7    8
  [6,]    4    6    8
  ...
 
  and this one like:
  6,7,8
  5,7,8
  5,6,8
  5,6,7
  4,7,8
  4,6,8
 
  Best,
  Sofyan
 
 
  On 5/17/05, Gabor Grothendieck [EMAIL PROTECTED] wrote:
   On 5/17/05, Sofyan Iyan [EMAIL PROTECTED] wrote:
Dear R-helpers,
I am a beginner using R.
This is the first question in this list.
My question: is it possible to make combinations split into two groups 
of columns?
If I have the numbers 1,2,3,4,5,6,7,8, I need a result something like 
below:
   
1,2,3,4,5 6,7,8
1,2,3,4,7 5,6,8
2,3,4,5,6 1,7,8
1,2,3,6,7 4,5,8
1,2,3,4,8 5,6,7
3,4,6,7,8 1,2,5

   
  
   Try this:
  
   library(gtools)
   t(apply(combinations(8, 5), 1, function(x) c(x, setdiff(1:8, x))))
  
 
  __
  R-help@stat.math.ethz.ch mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide! 
  http://www.R-project.org/posting-guide.html
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] cumsum on chron objects

2005-05-17 Thread Gabor Grothendieck
On 5/17/05, Sebastian Luque [EMAIL PROTECTED] wrote:
 Hi,
 
 Is there some alternative to cumsum for chron objects? I have data frames
 that contain some chron objects that look like this:
 
 DateTime
 13/10/03 12:30:35
 NA
 NA
 NA
 15/10/03 16:30:05
 NA
 NA
 ...
 
 and I've been trying to replace the NA's so that a date/time sequence is
 created starting with the preceding available value. Because the number of
 rows with NA's following each available date/time is unknown, I've split
 the data frame using:
 
 splitdf <- split(df, as.factor(df$DateTime))
 
 so that I can later use lapply to work on each block of data. I thought
 I could use cumsum and set the NA's to the desired interval to create the
 date/time sequence starting with the first row. However, this function is
 not defined for chron objects. Does anybody know of alternatives to create
 such a sequence?
 


The 'zoo' package has na.locf which stands for Last Occurrence Carried
Forward, which is what I believe you want.   

First let us create some test data, x:

 library(chron); library(zoo)
 x <- chron(c(1.5, 2, NA, NA, 4, NA))
 x
[1] (01/02/70 12:00:00) (01/03/70 00:00:00) (NA NA)
[4] (NA NA) (01/05/70 00:00:00) (NA NA)


 # na.locf is intended for zoo objects but we can convert
 # the chron object to zoo, apply na.locf and convert back:

 chron(as.vector(na.locf(zoo(as.vector(x)))))
[1] (01/02/70 12:00:00) (01/03/70 00:00:00) (01/03/70 00:00:00)
[4] (01/03/70 00:00:00) (01/05/70 00:00:00) (01/05/70 00:00:00)

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Centered overall title with layout()

2005-05-17 Thread Stephen D. Weigand
Dear Pierre,
On May 15, 2005, at 6:36 PM, Lapointe, Pierre wrote:
Hello,
I would like to have a centered overall title for a graphics page 
using the
layout() function.

Example, using this function:
z <- layout(matrix(c(1:6), 3, 2, byrow = TRUE))
layout.show(6)
I'd like to get this:
  Centered Overall Title

|   |   |
|   |   |
|   |   |
|   |   |
|   |   |

|   |   |
|   |   |
|   |   |
|   |   |
|   |   |

|   |   |
|   |   |
|   |   |
|   |   |
|   |   |

I really want to use layout(), not par(mfrow())
Thanks
Pierre Lapointe
Does mtext give you what you want? E.g.,
par(oma = c(0, 0, 3, 0))
z <- layout(matrix(c(1:6), 3, 2, byrow = TRUE))
layout.show(6)
mtext("Centered Overall Title", side = 3, line = 1, outer = TRUE)
Hope this helps,
Stephen
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] cumsum on chron objects

2005-05-17 Thread Sebastian Luque
Hello Gabor,

Thanks for your reply. na.locf would replace the NA's with the most recent
non-NA, so it wouldn't create a sequence of chron dates/times (via
as.vector, as in your example). To expand my original example:


 On 5/17/05, Sebastian Luque [EMAIL PROTECTED] wrote:

[...]

 DateTime
 13/10/03 12:30:35
 NA
 NA
 NA
 15/10/03 16:30:05
 NA
 NA
 ...

I thought one could replace the NA's by the desired interval, say 1 day,
so if the above chron object was named nachron, one could do:

nachron[is.na(nachron)] <- 1

and, for simplicity, applying on each block separately:

cumsum(nachron)

would give:

DateTime
13/10/03 12:30:35
14/10/03 12:30:35
15/10/03 12:30:35
16/10/03 12:30:35

for the first block, and:

DateTime
15/10/03 16:30:05
16/10/03 16:30:05
17/10/03 16:30:05
...

for the second one. Since there are not too many blocks I may end up doing
it in Excel, but it would be nice to know how to do it in R!

Cheers and thank you,
-- 
Sebastian P. Luque
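
For the record, a base-R sketch of the fill itself (the helper name fill_seq is made up for this example, and it assumes the vector starts with a non-NA value). Since chron objects are numeric days underneath, adding an offset of whole intervals to the last observed value produces the desired sequence; the demonstration below uses plain numerics, but the same arithmetic works on a chron vector:

```r
# For each run of NAs, carry the last observed value forward and add
# one interval (default 1 day) per step; assumes x[1] is not NA.
fill_seq <- function(x, interval = 1) {
  blocks <- cumsum(!is.na(x))                  # block id: which start value applies
  start  <- x[!is.na(x)][blocks]               # last observed value, carried forward
  offset <- ave(rep(1, length(x)), blocks, FUN = seq_along) - 1  # steps since start
  start + offset * interval
}
fill_seq(c(10.5, NA, NA, 12.2, NA))  # 10.5 11.5 12.5 12.2 13.2
```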

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html