Re: [R] fitting extreme value distribution

2005-07-28 Thread Vito Ricci
Hi,
For fitting, have you seen ?fgev in the evd package,
?gevFit in fExtremes, or ?gev in the evir package?


Regards,
Vito


kunnavrv at slu.edu wrote:

hi,
  the rgev function gives me random deviates, and I have a
data set which I am fitting to an EVD. Is there a way I can
plot both the observed and the ideal EVD on the same plot?
thank you
Rangesh
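
A minimal sketch of how the evd route could answer the plotting question
(dat is a placeholder for the poster's real data; the fitted parameters
are read from fit$estimate):

library(evd)                      # provides rgev(), dgev() and fgev()
dat <- rgev(200, loc = 0, scale = 1, shape = 0.2)  # stands in for the real data
fit <- fgev(dat)                  # maximum-likelihood GEV fit
co  <- fit$estimate               # named vector: loc, scale, shape
hist(dat, freq = FALSE, main = "Observed data and fitted GEV density")
curve(dgev(x, loc = co["loc"], scale = co["scale"], shape = co["shape"]),
      add = TRUE, lwd = 2)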


Becoming builders of solutions

The business of the statistician is to catalyze 
the scientific learning process.  
George E. P. Box

Statistical thinking will one day be as necessary for efficient citizenship as 
the ability to read and write
H. G. Wells

Top 10 reasons to become a Statistician

 1. Deviation is considered normal
 2. We feel complete and sufficient
 3. We are 'mean' lovers
 4. Statisticians do it discretely and continuously
 5. We are right 95% of the time
 6. We can legally comment on someone's posterior distribution
 7. We may not be normal, but we are transformable
 8. We never have to say we are certain
 9. We are honestly significantly different
10. No one wants our jobs


Visit the portal http://www.modugno.it/
and in particular the section on Palese:
http://www.modugno.it/archivio/palesesanto_spirito/



Re: [R] Unexpected behavior in recode{car}

2005-07-28 Thread Mulholland, Tom
require( car )
set.seed(12345)
nn <- sample( c( 2, 4 ), size=50, replace=TRUE )
rr <- recode( nn, "2='TWO';4='FOUR'" )
table( rr, exclude=NULL )
ss <- recode( nn, "2='Num2';4='Num4'" )  # Doesn't work as expected
table( ss, exclude=NULL )
ss <- recode( nn, "2='Num2';4='Num4'", TRUE )  # ?
table( ss, exclude=NULL )
tt <- recode( nn, "2='TWO'; 4='Num4'" )
table( tt, exclude=NULL )
uu <- recode( nn, "2='Num2'; 4='FOUR'" )
table( uu, exclude=NULL )

I looked at the code and found it too difficult to immediately decipher. So 
does making the result a factor cause any real problems?

I noticed that the same response happens with any letter string followed by a number:
recode( nn, "2='Num2'; 4='abc5'" )

Tom

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of D. Dailey
 Sent: Thursday, 28 July 2005 11:45 AM
 To: r-help@stat.math.ethz.ch
 Subject: [R] Unexpected behavior in recode{car}
 
 
 Thanks to the R creators for such a great statistical system. 
 Thanks to
 the R help list, I have (finally) gotten far enough in R to have a
 question I hope to be worth posting.
 
 I'm using the recode function from John Fox's car package and have
 encountered some unexpected behavior.
 
 Consider the following example:
 
 ## Begin cut-and-paste example
 require( car )
 set.seed(12345)
  nn <- sample( c( 2, 4 ), size=50, replace=TRUE )
  rr <- recode( nn, "2='TWO';4='FOUR'" )
  table( rr, exclude=NULL )
  ss <- recode( nn, "2='Num2';4='Num4'" )  # Doesn't work as expected
  table( ss, exclude=NULL )
  tt <- recode( nn, "2='TWO'; 4='Num4'" )
  table( tt, exclude=NULL )
  uu <- recode( nn, "2='Num2'; 4='FOUR'" )
  table( uu, exclude=NULL )
 ## End cut-and-paste example
 
  All but the recoding to ss work as expected: I get a character vector
  with 23 instances of either "FOUR" or "Num4" and 27 instances of "TWO"
  or "Num2".
 
 But for the ss line, wherein all the strings to be assigned contain a
 digit, the resulting vector contains all NAs. Using str(), I note that
 ss is a numeric vector.
 
 Is there a tidy way (using recode) to recode numeric values into
 character strings, all of which contain a digit? I have a 
 workaround for
 my current project, but it would be nice to be able to use mixed
 alphanumeric strings in this context.
 
 Thanks in advance for any insight you can give into this question.
 
 Using R 2.1.1 (downloaded binary) on Windows XP Pro, car 
 version 1.0-17
 (installed from CRAN via Windows GUI). Complete version information
 below:
 
  > version
           _
 platform i386-pc-mingw32
 arch i386
 os   mingw32
 system   i386, mingw32
 status
 major2
 minor1.1
 year 2005
 month06
 day  20
 language R
 
  > t(t( installed.packages()['car',] ))
    [,1]
 Package  car
 LibPath  C:/Programs/R/rw2011/library
 Version  1.0-17
 Priority NA
 Bundle   NA
 Contains NA
  Depends  R (>= 1.9.0)
 Suggests MASS, nnet, leaps
 Imports  NA
 Built2.1.0
 
 
 I subscribe to the help list in digest form, so would appreciate being
 copied directly in addition to seeing responses sent to the list.
 
 David Dailey
 Shoreline, Washington, USA
 [EMAIL PROTECTED]
 


Re: [R] gamma distribution

2005-07-28 Thread Uwe Ligges
Answering both messages here:


1. [EMAIL PROTECTED] wrote:
  Hi I appreciate your response. This is what I observed..taking
  the log transform of the raw gamma does change the p value of
  the test. That is what I am importing into excel (the p - values)

Well, so you made a mistake! And I still do not know why anybody really
wants to import data into Excel if the data is already in R.

For me, the results are identical (and there is no reason why not).


  and then calculating the power of the test (both raw and
  transformed).
 
  can you tell me what exactly your code is doing?

See below.


2. [EMAIL PROTECTED] wrote:
 Hi
 I ran your code. I think it should give me the number of p-values below the 0.05
 significance level (that's what I could understand from your code), but after
 running your code there is neither any error that shows up nor any value that
 the console displays.

You are right about what the code I sent does:

erg <- replicate(1000, {
    x <- rgamma(10, 2.5, scale = 10)
    y <- rgamma(10, 2.5, scale = 10)
    wilcox.test(x, y, var.equal = FALSE)$p.value
})
sum(erg < 0.05) # 45


and it works for me. It results in a random number close to 50, hopefully.

Since both points above seem very strange on your machine: which
version of R are you using? We assume the most recent one, which is R-2.1.1.

Uwe Ligges



Re: [R] gamma distribution

2005-07-28 Thread pantd
Thanks for your response. BTW, I am calculating the power of the Wilcoxon test. I
divide the total number of rejections by the number of simulations. So for 1000
simulations at the 0.05 significance level, if the number of rejections is 50, then
the power will be 50/1000 = 0.05. That's why I'm importing the p-values into Excel.

Is my approach correct?

Thanks and regards,
-dev




Re: [R] gamma distribution

2005-07-28 Thread Uwe Ligges
[EMAIL PROTECTED] wrote:

 Thanks for your response. BTW, I am calculating the power of the Wilcoxon
 test. I divide the total number of rejections by the number of simulations.
 So for 1000 simulations at the 0.05 significance level, if the number of
 rejections is 50, then the power will be 50/1000 = 0.05. That's why I'm
 importing the p-values into Excel.

No, since H1 is NOT true in your case (and the power is undefined under H0).
In this case it is an estimator for the alpha error, but not the power.
You might want to reread a basic textbook on testing theory.

BTW: Why do you think R cannot calculate 50/1000 and Excel does better?

 Is my approach correct?

No.

Uwe Ligges
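
A minimal sketch of the distinction Uwe draws; the shift of 25 used for
H1 is an arbitrary illustration, not a value from the thread:

pval <- function(delta) replicate(1000, {
    x <- rgamma(10, 2.5, scale = 10)
    y <- rgamma(10, 2.5, scale = 10) + delta   # delta = 0 means H0 is true
    wilcox.test(x, y)$p.value
})
mean(pval(0)  < 0.05)   # rejection rate under H0: estimates alpha (about 0.05)
mean(pval(25) < 0.05)   # rejection rate under a true H1: estimates the power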





Re: [R] gamma distribution

2005-07-28 Thread Christoph Buser
Hi,

coming back once more to the question of why you don't get identical
p-values for the untransformed and the transformed data:

I ran your script below, and I always get two identical tests per
loop. In your text you talk about the first 1000 values being the
untransformed and the next 1000 values the transformed.

But did you consider that in each loop there is a test for the
untransformed and one for the transformed data, so the tests are
printed alternating?
This might be the reason why you did not get equal results.

Hope this helps,

Christoph

--
Christoph Buser [EMAIL PROTECTED]
Seminar fuer Statistik, LEO C13
ETH (Federal Inst. Technology)  8092 Zurich  SWITZERLAND
phone: x-41-44-632-4673 fax: 632-1228
http://stat.ethz.ch/~buser/
--
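
A minimal reworking of the loop quoted below (a sketch, not from the
thread): collecting the two p-value streams in separate vectors avoids
parsing alternating print() output, and shows directly why the two
streams must agree:

p.raw <- p.log <- numeric(1000)
for (i in 1:1000) {
    x <- rgamma(10, 2.5, scale = 10)
    y <- rgamma(10, 2.5, scale = 10)
    p.raw[i] <- wilcox.test(x, y)$p.value
    p.log[i] <- wilcox.test(log(x), log(y))$p.value
}
all.equal(p.raw, p.log)  # TRUE: log() is monotone and the test uses only ranks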


[EMAIL PROTECTED] writes:
  Hi R Users
  
  This is code I wrote, and I just want to confirm whether the first 1000
  values are the raw gamma (z) and the next 1000 values the transformed
  gamma (k), or not. Once I import into Excel I get 2000 rows, and the
  p-values beyond 1000 don't look that good; they are very high.
  
  
  --
  sink("a1.txt")
  
  for (i in 1:1000)
  {
  x <- rgamma(10, 2.5, scale = 10)
  y <- rgamma(10, 2.5, scale = 10)
  z <- wilcox.test(x, y, var.equal = FALSE)
  print(z)
  x1 <- log(x)
  y1 <- log(y)
  k <- wilcox.test(x1, y1, var.equal = FALSE)
  print(k)
  }
  
  ---
  any suggestions are welcome
  
  thanks
  
  -devarshi
  


Re: [R] error message running R2WinBUGS

2005-07-28 Thread Uwe Ligges
This is a question related to WinBUGS, not to R nor to the R2WinBUGS 
interface. Please ask on the appropriate lists/forums related to 
WinBUGS (with plain WinBUGS code, rather than R's interface code).


BTW, let's answer your question anyway:

You have specified x[1,28] to be 127, but it is from a Binomial 
distribution with n=100 (hence impossible!).
And that's what the WinBUGS (NOT R!) error message tells you.

Uwe Ligges


qi zhang wrote:

 *Dear R-user,
 *
 I try to run Winbugs from R using bugs function in R2WinBUGS.My model works 
 well in Winbugs except that I can't get DIC. Since I don't need DIC, when I 
 try to run Winbugs from R , I set DIC=FALSE. My model is as following:
 model {
 for (i in 1:N) {
 for (j in 1:T) {
 x[i, j] ~ dbin(p[i, j], n[i])
 # hierarchical prior
 p[i, j] ~ dbeta(alpha[i, j], beta[i, j])
 alpha[i, j] <- pi[j]*(1-theta[i])/theta[i]
 beta[i, j] <- (1-pi[j])*(1-theta[i])/theta[i]
 }
 }
 
 for (j in 1:T) {
 pi[j] ~ dbeta(0.1,0.1)I(0.1,0.9)
 }
 
 for (i in 1:N) {
 theta[i] ~ dbeta(2,10)
 }
 }
 
 And my R code is as follows:
 
 initials1 <- list(theta=c(0.2,0.01), pi=rbeta(50, 0.1, 0.1)*0.8+0.1,
 p=matrix(rbeta(100, 2, 10)*0.8+0.1, nr=2, nc=50, byrow=TRUE))
 
 inits <- list(initials1, initials1)
 
 data <- list(N=2, T=50, n=c(100,150), x = structure(.Data = c(
 3, 47, 8, 19, 87, 69, 2, 4, 75, 24, 16, 81,
 10, 78, 87, 44, 17, 56, 23, 75, 55, 85, 84, 69,
 6, 36, 8, 90, 47, 6, 87, 61, 49, 57, 28, 56,
 31, 54, 75, 79, 67, 38, 28, 13, 89, 63, 32, 68,
 70, 7,
 24, 95, 8, 14, 127, 134, 7, 8, 133, 40, 76, 126,
 0, 132, 120, 137, 9, 91, 3, 130, 18, 80, 134, 95,
 12, 34, 19, 111, 34, 25, 127, 79, 132, 84, 72, 134,
 67, 44, 95, 69, 80, 51, 57, 12, 138, 137, 64, 80,
 130, 58), .Dim = c(2, 50)))
 
 # run WinBUGS
 library(R2WinBUGS)
 
 structure <- bugs(data, inits,
 model.file="S:/Common/zhangqi/ebayes/for R/model.txt",
 parameters=c("p","pi","theta"), n.chains=2, n.iter=2000, n.burnin=500,
 debug=TRUE, DIC=FALSE, bugs.directory="c:/Program Files/WinBUGS14/",
 working.directory = NULL)
 
 Even though I changed the initial values several times, I keep getting the
 following error message in WinBUGS:
  
 display(log)
 
 check("S:/Common/zhangqi/ebayes/for R/model.txt")
 
 model is syntactically correct
 
 data("C:/Program Files/R/rw2011/data.txt")
 
 data loaded
 
 compile(2)
 
 model compiled
 
 inits(1, "C:/Program Files/R/rw2011/inits1.txt")
 
 value of binomial x[1,28] must be between zero and order of x[1,28]
 
 I really have no clue about what I can do to fix it.
 
 Could any of you please take a look at my problem; I truly appreciate any
 suggestions.
 
  Qi Zhang
 
 


Re: [R] odesolve/lsoda differences on Windows and Mac

2005-07-28 Thread Thomas Petzoldt
On 27 Jul 2005, Peter Dalgaard wrote:

 One thought: Integrating across input pulses is a known source of
 turbulence in lsoda. You might have better luck integrating over
 intervals in which the input function is continuous.

 Tweaking the lsoda tolerances is another thing to try.

Yes, that's also our experience. While I am usually successful
playing with the tolerances or the interpolation rule of external
pulses, some of our students use the fixed-step rk4 algorithm and some
others wrote their own integrators in R.

I have heard that several people had plans to provide alternative ODE
integrators for R, but I currently do not know the state of these
projects. It would be nice if they would post this to the list in order to
avoid duplicated work.
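
A sketch of the piecewise strategy Peter describes, with a toy one-state
model (the model, pulse times and pulse size are all placeholders, not
from the thread):

library(odesolve)
decay <- function(t, y, p) list(-p["k"] * y)   # toy model: exponential decay
pulses <- c(0, 50, 100, 150)                   # times at which the input jumps
y <- c(x = 1)
out <- NULL
for (i in 1:(length(pulses) - 1)) {
    tt  <- seq(pulses[i], pulses[i + 1], by = 1)
    seg <- lsoda(y, tt, decay, parms = c(k = 0.1))
    y   <- seg[nrow(seg), -1] + 1              # apply the pulse at the boundary
    out <- rbind(out, seg)                     # lsoda never steps across a jump
}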


 I haven't seen lsoda fail like that, but it's not too surprising that
 marginal cases show platform dependency (i.e. the integrator just
 fails on Mac and barely succeeds on PC).


Aha, I see. One should be careful when publishing examples that produce
marginal cases, as the common user would expect R to be
platform independent.

Thomas P.



[R] [R-pkgs] New version of Rpad

2005-07-28 Thread Short, Tom
Announcing release 0.9.6 of Rpad. This version provides bug fixes and
some improved HTML handling. This is also the first widespread release
that supports Rpad as an installed package within R.

Rpad is an interactive, web-based analysis system. Rpad pages are
interactive workbook-type sheets based on R. Rpad is an analysis
package, a web-page designer, and a GUI designer all wrapped in
one. Rpad makes it easy to develop powerful data analysis applications
that can be shared with others (most likely on an intranet).

Rpad is a new type of application, like Google's Gmail, where the
browser page is dynamically updated with R calculations (Ajax is the
latest buzzword for this).

Rpad can be installed in two ways: (1) local package and (2) server
style. The server version uses Apache (or other web server) to serve
up web pages and act as an R calculation engine. In the local version,
Rpad is installed as an R package, and it uses a built-in mini web
server to serve Rpad pages to your local browser.

You can try demos of Rpad at: http://www.rpad.org/Rpad

Here are two basic demos that show how it's used:

http://www.rpad.org/Rpad/Example1.Rpad
http://www.rpad.org/Rpad/InputExamples.Rpad

Here are two of the fancier demos that show off interactivity using
Rpad (in these examples, the R code is hidden by default):

http://www.rpad.org/Rpad/mapdemo.Rpad
http://www.rpad.org/Rpad/SearchRKeywords.Rpad

Rpad can be downloaded from CRAN or from www.rpad.org.

- Tom

Tom Short
EPRI Solutions, Inc.




[R] How to save the object of linear regression in file and load it later

2005-07-28 Thread Bahoo
Hi,

I am using locfit for regression.
After I do 
out = locfit(...),
I want to save 'out' in a file and load it later for
prediction.

How should I do it?  Thanks!



Re: [R] gamma distribution

2005-07-28 Thread Christoph Buser
Hi 

As Uwe mentioned, be careful about the difference between the
significance level alpha and the power of a test.

To do power calculations you have to specify an alternative
hypothesis H_A. Suppose, e.g., you have two populations you want to
compare, and we assume that they are normally distributed (equal
unknown variance for simplicity). We are interested in whether there is
a difference in the means and want to use the t-test.
Our null hypothesis H_0: there is no difference in the means.

To do a power calculation for our test, we first have to specify
an alternative H_A: the mean difference is 1 (unit).
Now, for a fixed number of observations, we can calculate the power
of our test, which in this case is the probability that our test is
significant given that the true unknown difference is 1 (i.e. H_A is
true). If I repeat the test many times (always taking samples with a
mean difference of 1), the number of significant tests divided by the
total number of tests is an estimate of the power.
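
For the normal-theory setting just described, the calculation is
available directly; a sketch (n = 20 per group and sd = 1 are assumed
only for illustration):

power.t.test(n = 20, delta = 1, sd = 1, sig.level = 0.05)
## the same quantity by simulation, mirroring the description above:
mean(replicate(2000, t.test(rnorm(20), rnorm(20, mean = 1))$p.value < 0.05))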


In your case the situation is a little more complicated: you still
need to specify an alternative hypothesis.
In one of your first examples you draw samples from two gamma
distributions with different shape parameters and the same
scale. But by varying the shape parameter, the two distributions
differ not only in their mean but also in their form.

I got an email from Prof. Ripley in which he explained, in detail and
very precisely, some examples of tests and what they are testing. It
was in addition to the first posts about t-tests and the Wilcoxon test.
I attach the email below and recommend reading it carefully. It
might be helpful for you, too.

Regards,

Christoph Buser

--
Christoph Buser [EMAIL PROTECTED]
Seminar fuer Statistik, LEO C13
ETH (Federal Inst. Technology)  8092 Zurich  SWITZERLAND
phone: x-41-44-632-4673 fax: 632-1228
http://stat.ethz.ch/~buser/
--
 


From: Prof Brian Ripley [EMAIL PROTECTED]
To: Christoph Buser [EMAIL PROTECTED]
cc: Liaw, Andy [EMAIL PROTECTED]
Subject: Re: [R] Alternatives to t-tests (was Code Verification)
Date: Thu, 21 Jul 2005 10:33:28 +0100 (BST)

I believe there is rather more to this than Christoph's account.  The
Wilcoxon test is not testing the same null hypothesis as the t-test, and
that may very well matter in practice, as it does in the example given.

The (default in R) Welch t-test tests a difference in means between two 
samples, not necessarily of the same variance or shape.  A difference in 
means is simple to understand, and is unambiguously defined at least if 
the distributions have means, even for real-life long-tailed 
distributions.  Inference from the t-test is quite accurate even a long 
way from normality and from equality of the shapes of the two 
distributions, except in very small sample sizes.  (I point my beginning 
students at the simulation study in `The Statistical Sleuth' by Ramsey and 
Schafer, stressing that the unequal-variance t-test ought to be the 
default choice as it is in R.  So I get them to redo the simulations.)

The Wilcoxon test tests a shift in location between two samples from 
distributions of the same shape differing only by location.  Having the 
same shape is part of the null hypothesis, and so is an assumption that 
needs to be verified if you want to conclude there is a difference in 
location (e.g. in means).  Even if you assume symmetric distributions (so 
the location is unambiguously defined) the level of the test depends on 
the shapes, tending to reject equality of location in the presence of 
difference of shape.  So you really are testing equality of distribution, 
both location and shape, with power concentrated on location-shift 
alternatives.

Given samples from a gamma(shape=2) and gamma(shape=20) distributions, we 
know what the t-test is testing (equality of means).  What is the Wilcoxon 
test testing?  Something hard to describe and less interesting, I believe.

BTW, I don't see the value of the gamma simulation, as this
simultaneously changes mean and shape between the samples.  How about
checking while holding the mean the same:

n <- 1000
z1 <- z2 <- numeric(n)
for (i in 1:n) {
   x <- rgamma(40, 2.5, 0.1)
   y <- rgamma(40, 10, 0.1*10/2.5)
   z1[i] <- t.test(x, y)$p.value
   z2[i] <- wilcox.test(x, y)$p.value
}
## Level
1 - sum(z1 > 0.05)/1000  ## 0.049
1 - sum(z2 > 0.05)/1000  ## 0.15

? -- the Wilcoxon test is shown to be a poor test of equality of means. 
Christoph's simulation shows that it is able to use difference in shape as 
well as location in the test of these two distributions, whereas the 
t-test is designed only to use the difference in means.  Why compare the 
power of two tests testing different null hypotheses?

I would say a very good reason to use a t-test is if you are actually 
interested in the hypothesis it tests 






[R] Minimum-chi-square estimation

2005-07-28 Thread Dragos Ilie
Is there any R package that does point estimation by using the
minimum-chi-square method?

Regards,
Dragos
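
The method is also easy to code directly; a sketch with made-up grouped
Poisson counts (none of this is from the thread):

obs <- c(12, 25, 30, 18, 10, 5)                 # made-up counts for x = 0..5
n   <- sum(obs)
chisq <- function(lambda) {
    p <- dpois(0:5, lambda)
    p <- p / sum(p)                             # expected cell probabilities
    sum((obs - n * p)^2 / (n * p))              # Pearson chi-square statistic
}
optimize(chisq, interval = c(0.1, 10))$minimum  # minimum-chi-square estimate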



Re: [R] How to save the object of linear regression in file and load it later

2005-07-28 Thread Sean O'Riordain
Hi Bahoo!

I've found the R/Rpad Reference Card quite good at helping me find
this sort of information; towards the bottom of the first
column of the first page it says:

save(file,...) saves the specified objects (...) in the XDR platform-
independent binary format

If I then say ?save, it tells me that ?load is what you're looking for
at a later date...

cheers!
Sean
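
A sketch of the save()/load() round trip Sean points to, applied to a
locfit fit (the model call and file name are illustrative only):

library(locfit)
out <- locfit(NOx ~ lp(E), data = ethanol)   # schematic fit on a shipped dataset
save(out, file = "locfit-model.RData")       # XDR, platform-independent format

## later, possibly in a fresh session:
library(locfit)                              # the object's class needs its package
load("locfit-model.RData")                   # restores 'out' under its old name
predict(out, newdata = data.frame(E = c(0.7, 0.9)))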






[R] Help: how to specify the current active window by mouse

2005-07-28 Thread Shengzhe Wu
Hello,

When I use windows() to open several windows, e.g. 4 windows (device
2, device 3, device 4 and device 5), I normally use dev.set() to
specify the current active window. Is there a way to specify the
active window by mouse click? So that when I click device 3, device 3
becomes active (shown in the device header), when I click device 4,
device 4 becomes active, and so on.

Thank you,
Shengzhe



Re: [R] CSV file and date. Dates are read as factors!

2005-07-28 Thread Frank E Harrell Jr
John Sorkin wrote:
 I am using read.csv to read a CSV file (produced by saving an Excel file
 as a CSV file). The columns containing dates are being read as factors.
 Because of this, I cannot compute follow-up time, i.e.
 Followup <- postDate - preDate. I would appreciate any suggestion that would
 help me read the dates as dates and thus allow me to calculate follow-up
 time.
 Thanks
 John

library(Hmisc)
?csv.get   (see datevars argument)

Frank
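
A base-R alternative is also possible; a sketch, assuming the file name
and an m/d/Y date format for the poster's columns:

dat <- read.csv("study.csv", as.is = TRUE)   # keep the date columns as character
dat$preDate  <- as.Date(dat$preDate,  format = "%m/%d/%Y")
dat$postDate <- as.Date(dat$postDate, format = "%m/%d/%Y")
Followup <- dat$postDate - dat$preDate       # a 'difftime' in days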

 
 John Sorkin M.D., Ph.D.
 Chief, Biostatistics and Informatics
 Baltimore VA Medical Center GRECC and
 University of Maryland School of Medicine Claude Pepper OAIC
 
 University of Maryland School of Medicine
 Division of Gerontology
 Baltimore VA Medical Center
 10 North Greene Street
 GRECC (BT/18/GR)
 Baltimore, MD 21201-1524
 
 410-605-7119 
 -- NOTE NEW EMAIL ADDRESS:
 [EMAIL PROTECTED]
 
 


-- 
Frank E Harrell Jr   Professor and Chair   School of Medicine
  Department of Biostatistics   Vanderbilt University



[R] stl()

2005-07-28 Thread Sebastian Leuzinger
Hello, has anyone got an idea of how to use stl() so that the remainder
eventually becomes white noise? I have used stl repeatedly, but there is
autocorrelation in the remainder that I can't get rid of.
OS: Linux SuSE 9.3

Sebastian Leuzinger
Institute of Botany, University of Basel
Schönbeinstr. 6 CH-4056 Basel
ph0041 (0) 61 2673511
fax   0041 (0) 61 2673504
email [EMAIL PROTECTED] 
web   http://pages.unibas.ch/botschoen/leuzinger
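
For reference, a sketch of one way to inspect the remainder for leftover
autocorrelation (co2 is a built-in monthly series standing in for the
real data):

fit <- stl(co2, s.window = "periodic")
rem <- fit$time.series[, "remainder"]
acf(rem)                                     # any autocorrelation left?
Box.test(rem, lag = 12, type = "Ljung-Box")  # portmanteau test on the remainder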



Re: [R] Help: how to specify the current active window by mouse

2005-07-28 Thread Duncan Murdoch
On 7/28/2005 7:26 AM, Shengzhe Wu wrote:
 Hello,
 
 When I use windows() to open several windows, e.g. 4 windows (device
 2, device 3, device 4 and device 5), I normally use dev.set() to
 specify the current active window. Is there a way to specify the
 active window by mouse click? So that when I click device 3, device 3
 becomes active (shown in the device header), when I click device 4,
 device 4 becomes active, and so on.

No, there's nothing built in to do that, though you could probably write 
something using the tcltk package to do it.

The logic is that being active means being the target of plot output, 
and essentially all plot output comes from executed commands, not from 
mouse clicks.  Since you're executing commands already, why not put in 
one more?

Duncan Murdoch
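
For reference, a short sketch of the command-driven switching Duncan
describes:

windows(); windows()   # opens devices 2 and 3; dev.cur() reports the active one
dev.set(3)             # device 3 becomes the target of plot output
plot(rnorm(10))        # drawn on device 3
dev.set(2)             # switch the target back to device 2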



Re: [R] CSV file and date. Dates are read as factors!

2005-07-28 Thread roger bos
Working with dates is not easy (for me at least).  I always manage to
get it done, but the code is somewhat messy.  I have not tried using
the Hmisc package as Frank suggested, but I will show you my code as
an alternative:

w <- unclass((as.Date(as.character(dataMat$fy1_period_end_date),
format="%m/%d/%Y") - as.Date(datec[i], format="%m/%d/%Y"))/365)

w is the time (in years, given the division by 365) between the two
dates.  You can see that I had to unclass() the first date vector.  I
read my files in CSV as well, so I am sure something similar can be
made to work for you.

HTH,

Roger








[R] conversion from SAS

2005-07-28 Thread alessandro carletti
Hi, I wonder if anybody could help me convert
this simple SAS program into R.
(I'm still trying to do it myself!)

PROC IMPORT OUT= WORK.CHLA_italian
DATAFILE= "C:\Documents and
Settings\carleal\My
Documents\REBECCA\stat\sas\Allnutrients.xls"
DBMS=EXCEL2000 REPLACE;
 GETNAMES=YES;
RUN;
data chla_italian;
   set chla_italian;
   year=year(datepart(date));
   month=month(datepart(date));
   run;

proc sort data=chla_italian; by station; run;
/* Check bloom for seasonal cycle outliers */
data sort_dataset;
   set chla_italian;
   chla=chl_a;
   dayno=date-mdy(1,1,year)+1;
   cos1=cos(2*3.14*dayno/365);
   sin1=sin(2*3.14*dayno/365);
   cos2=cos(4*3.14*dayno/365);
   sin2=sin(4*3.14*dayno/365);
   cos3=cos(6*3.14*dayno/365);
   sin3=sin(6*3.14*dayno/365);
   cos4=cos(8*3.14*dayno/365);
   sin4=sin(8*3.14*dayno/365);
   bloom=0;
   w_chla=1/chla/chla;
run;
ODS listing close;
%macro sort_event(cut_off,last=0);
/*proc glm data=sort_dataset;
   class year;
   model logchla=year cos1 sin1 cos2 sin2 cos3 sin3
cos4 sin4 /solution;
   by station;
   where bloom=0;
   output out=chla_res predicted=pred student=studres
cookd=cookd rstudent=rstudent u95=u95;
   lsmeans year / at (cos1 sin1 cos2 sin2 cos3 sin3
cos4 sin4)=(0 0 0 0 0 0 0 0);
   ODS output ParameterEstimates=parmest
LSmeans=lsmeans;
run;*/
proc glm data=sort_dataset;
   class year month;
   model chla=/solution;
   by station;
   weight w_chla;
   where bloom=0;
   output out=chla_res predicted=pred student=studres
cookd=cookd 
daynumber-data$date-mdy(1,1,y)+1 
rstudent=rstudent ucl=ucl lcl=lcl / alpha=0.02;
*   lsmeans year / at (cos1 sin1)=(0 0);
*   ODS output ParameterEstimates=parmest
LSmeans=lsmeans;
run;
quit;
data sort_dataset;
   set chla_res;
   increase=ucl-pred;
   if chla>ucl then bloom=1;
   w_chla=1/(50+5*pred*pred);
   %if &last=0 %then %do; drop ucl lcl cookd rstudent
studres pred; %end;
run;
%mend sort_event;
ODS listing;
%sort_event(2.0);
%sort_event(2.0);
%sort_event(2.0);
%sort_event(2.0);
%sort_event(2.0);
%sort_event(2.0);
%sort_event(2.0);
%sort_event(2.0);
%sort_event(2.0);
%sort_event(2.0);
%sort_event(2.0);
%sort_event(2.0);
%sort_event(2.0);
%sort_event(2.0);
%sort_event(2.0,last=1);
/* Combine bloom information with all chlorophyll
values */
data chla_sep;
   merge sort_dataset(keep=station date bloom pred ucl
lcl) chla_italian (rename=(chl_a=chla));
   by station date;
*   lcl=exp(lcl);
*   ucl=exp(ucl);
*   pred=exp(pred);
   if bloom=. then bloom=1;
   if bloom=0 then chla1=chla; else chla1=.;
   if bloom=1 then chla2=chla; else chla2=.;
run;
 
symbol1 i=none value=plus color=red;
symbol2 i=none value=plus color=green;
symbol3 i=join value=none line=1 color=black;
axis1 logbase=10; axis1;
proc gplot data=chla_sep;
   plot chla2*date=1 chla1*date=2 (ucl pred
lcl)*date=3 /overlay vaxis=axis1;
   by station;
run;
quit;
proc glm data=chla_sep;
   class station year month;
   model salinity temperature Transparency__m_
Nitrate__mmol_l_1_ Phosphate__mmol_l_1_
Silicate__mmol_l_1_=bloom month/solution;
   by station;
run;
quit;

Thanks
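
As a starting point (a sketch, not a full translation; exporting the
Excel sheet to CSV and the date format are assumptions), the first data
step might look like this in R:

chla <- read.csv("Allnutrients.csv")                  # SAS: PROC IMPORT
chla$date  <- as.Date(chla$date, format = "%m/%d/%Y")
chla$year  <- as.numeric(format(chla$date, "%Y"))     # year(datepart(date))
chla$month <- as.numeric(format(chla$date, "%m"))
chla$dayno <- as.numeric(chla$date -
                as.Date(paste(chla$year, 1, 1, sep = "-"))) + 1
for (k in 1:4) {                                      # cos1..cos4, sin1..sin4
    chla[[paste("cos", k, sep = "")]] <- cos(2 * k * 3.14 * chla$dayno / 365)
    chla[[paste("sin", k, sep = "")]] <- sin(2 * k * 3.14 * chla$dayno / 365)
}
chla <- chla[order(chla$station), ]                   # proc sort; by station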



Re: [R] odesolve/lsoda differences on Windows and Mac

2005-07-28 Thread Setzer . Woodrow
I've been talking offline with Hank Stevens about this; note that in
the example he quotes, he set hmin = 0.1, and the quoted error message
says that the step size had reached hmin without convergence.  I believe
that he intended to set hmax (because of the pulsed input).  Then Peter
Dalgaard's explanation would make sense: the PPC platform just needs a
smaller step size than the PC does to achieve convergence.

R. Woodrow Setzer, Jr.
National Center for Computational Toxicology
US Environmental Protection Agency
Mail Drop B305-03/US EPA/RTP, NC 27711
Ph: (919) 541-0128Fax: (919) 541-4284



From: Martin Henry H. Stevens [EMAIL PROTECTED]
To: 'R-Help' r-help@stat.math.ethz.ch
Cc: Thomas Petzoldt [EMAIL PROTECTED], Woodrow Setzer/RTP/USEPA/[EMAIL PROTECTED]
Date: 07/27/2005 12:36 PM
Subject: odesolve/lsoda differences on Windows and Mac

Hi -
I am getting different results when I run the numerical integrator
function lsoda (odesolve package) on a Mac and a PC. I am trying to
simulating a system of 10 ODE's with two exogenous pulsed inputs to the
system, and have had reasonably good success with many model parameter
sets. Under some parameter sets, however, the simulations fail on the
Mac (see error message below). The same parameter sets, however, appear
to run fine for our computational technician on his PC, generating
apparently very  reasonable data.

Our tech is successfully  running
Dell Latitude D810, Windows XP Pro (Service Pack 2), 1Gb
RAM.  RGUI 2.1.1

I am running:
  R Version 2.1.1  (2005-06-20) on a
Mac OS 10.3.9
   Machine Model:Power Mac G5
   CPU Type: PowerPC 970  (2.2)
   Number Of CPUs: 2
   CPU Speed:2 GHz
   L2 Cache (per CPU): 512 KB
   Memory: 1.5 GB
   Bus Speed:1 GHz
   Boot ROM Version:   5.0.7f0
   Serial Number:XB3472Q1NVS

My error message:
> system.time(
+ outAc2 <- as.data.frame(lsoda(xstart, times, pondamph, parms,
tcrit=170*730, hmin=.1))
+ )
[1] 0.02 0.01 0.04 0.00 0.00
Warning messages:
1: lsoda--  at t (=r1) and step size h (=r2), the
2:   corrector convergence failed repeatedly
3:   or with abs(h) = hmin
4: Returning early from lsoda.  Results are accurate, as far as they go

Thanks for any input.

Hank Stevens



Dr. Martin Henry H. Stevens, Assistant Professor
338 Pearson Hall
Botany Department
Miami University
Oxford, OH 45056

Office: (513) 529-4206
Lab: (513) 529-4262
FAX: (513) 529-4243
http://www.cas.muohio.edu/botany/bot/henry.html
http://www.muohio.edu/ecology/
http://www.muohio.edu/botany/
E Pluribus Unum



Re: [R] Unexpected behavior in recode{car}

2005-07-28 Thread John Fox
Dear Tom and David,

The source of the problem isn't hard to see if you trace the execution of
recode() via debug(): The test for whether the result can be coerced to
numeric is faulty. I've fixed the bug and will upload a new version of car
to CRAN shortly. In the meantime, you can use 

ss <- recode(nn, "2='Num2'; 4='Num4'", as.factor=TRUE)
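
If a plain character vector rather than a factor is wanted, converting
the factor result back should also work (a sketch, untested against the
patched version):

ss <- as.character(recode(nn, "2='Num2'; 4='Num4'", as.factor=TRUE))
table(ss, exclude=NULL)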

Thanks for bringing this bug to my attention.

John


John Fox
Department of Sociology
McMaster University
Hamilton, Ontario
Canada L8S 4M4
905-525-9140x23604
http://socserv.mcmaster.ca/jfox 
 



Re: [R] odesolve/lsoda differences on Windows and Mac

2005-07-28 Thread Setzer . Woodrow
I will have added most of the solvers from the ODE package ODEPACK by
Alan Hindmarsh (LLNL) to odesolve this year.  These are all solvers in
the lsode family.
I would also like to add solver(s) for DAEs, like daskr (Brown,
Hindmarsh, Petzold), but that may take a bit longer.
If other folks are planning to contribute solvers, I would be happy to
discuss including them in odesolve, so that there would be a single main
package for solving ODEs.

R. Woodrow Setzer, Jr.
National Center for Computational Toxicology
US Environmental Protection Agency
Mail Drop B305-03/US EPA/RTP, NC 27711
Ph: (919) 541-0128Fax: (919) 541-4284





[R] Anova's in R

2005-07-28 Thread Frith Jarrad
Hello.

I am looking for some help using ANOVAs in RGui.

My experiment, i.e. data, has a fixed burning treatment (Factor A) with 2
levels, unburnt/burnt.
Nested within each level of Factor A are 2 random sites (Factor B).
All sites are crossed with a fixed temperature treatment (Factor C) with 2
levels, 0 degrees C / 2 degrees C, with many replicates of these temperature
treatments randomly located at each site.

I am trying the following:

aov(dependent.variable ~ burning*temperature*site + Error(replicate),
    data=dataset)

and variations on that; however, I can't get it quite right: the F ratios
are not correct. I imagine this is a fairly common experimental design in
ecology, and would ask that anyone who has some advice please reply to this
email.

Thank-you,
Frith



[R] Cochran-Armitage-trend-test

2005-07-28 Thread amelie2000
Hi!

I am searching for the Cochran-Armitage trend test. Is it included in an
R package?

Thank you!

--
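
One option in base R is stats::prop.trend.test(), which performs the
chi-squared test for trend in proportions (essentially the two-sided
Cochran-Armitage test); a sketch using the data from its help page:

smokers  <- c(83, 90, 129, 70)    # events per ordered group
patients <- c(86, 93, 136, 82)    # trials per group
prop.trend.test(smokers, patients, score = 1:4)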



Re: [R] CUSUM SQUARED structural breaks approach?

2005-07-28 Thread Rick Ram
Hi all,

I have not looked at this CUSUM-squared issue since the emails at the
beginning of the year, but am looking at it again. For those who are
interested, the following paper gives critical values for n > 60, in
addition to the ones in Durbin (1969):

Edgerton, David & Wells, Curt, 1994. "Critical Values for the Cusumsq
Statistic in Medium and Large Sized Samples"
(http://ideas.repec.org/a/bla/obuest/v56y1994i3p355-65.html),
Oxford Bulletin of Economics and Statistics
(http://ideas.repec.org/s/bla/obuest.html),
Blackwell Publishing, vol. 56(3), pages 355-65.

All the best,
R.


On 12/01/05, Achim Zeileis [EMAIL PROTECTED] wrote: 
 
 On Tue, 11 Jan 2005 19:33:41 +0000, Rick Ram wrote:
 
  Groundwork for the choice of break method in my specific application
  has already been done - otherwise I would need to rework the wheel
  (make a horribly detailed comparison of performance of break
  approaches in context of modelling post break)
 
   If it interests you, Pesaran & Timmermann (2002) compared CUSUM squared,
   Bai-Perron and a time-varying approach to detect singular previous
   breaks in reverse-ordered financial time series so as to update a
   forecasting model.
 
 Yes, I know that paper. And if I recall correctly they are mainly
 interested in modelling the time period after the last break. For this,
 the reverse ordered recursive CUSUM approach works because they
 essentially look back in time to see when their predictions break down.
 And for their application looking for variance changes also makes sense.
 The approach is surely valid and sound in this context...but it might be
 possible to do something better (but I would have to look much closer at
 the particular application to have an idea what might be a way to go).
 
  This works fine i.e. the plot looks correct. The problem is how to
  appropriately normalise these to rescale them to what the CUSUM
  squared procedure expects (this looks to be a different and more
  complicated procedure than the normalisation used for the basic
  CUSUM). I am from an IT background and am slightly illiterate in
  terms of math notation... guidance from anyone would be appreciated
 
 I just had a brief glance at BDE75, page 154, Section 2.4. If I
 haven't missed anything important on reading it very quickly, you just
 need to do something like the following (a reproducible example, based
 on data from strucchange, using a notation similar to BDE's):
 
  ## load GermanM1 data and model
  library(strucchange)
  data(GermanM1)
  M1.model <- dm ~ dy2 + dR + dR1 + dp + ecm.res + season
  
  ## compute squared recursive residuals
  w2 <- recresid(M1.model, data = GermanM1)^2
  ## compute CUSUM of squares process
  sr <- ts(cumsum(c(0, w2))/sum(w2), end = end(GermanM1$dm), freq = 12)
  ## the border (r-k)/(T-k)
  border <- ts(seq(0, 1, length = length(sr)),
  start = start(sr), freq = 12)
  
  ## nice plot
  plot(sr, xaxs = "i", yaxs = "i", main = "CUSUM of Squares")
  lines(border, col = grey(0.5))
  lines(0.4 + border, col = grey(0.5))
  lines(-0.4 + border, col = grey(0.5))
 
 Instead of 0.4 you would have to use the appropriate critical values
 from Durbin (1969) if my reading of the paper is correct.
 
 hth,
 Z
 
  Does anyone know if this represents some commonly performed type of
  normalisation than exists in another function??
 
  I will hunt out the 1969 paper for the critical values but prior to
  doing this I am a bit confused as to how they will
  implemented/interpreted... the CUSUM squared plot does/should run
  diagonally up from left to right and there are two straight lines that
  one would put around this from the critical values. Hence, a
  different interpretation/implementation of confidence levels than in
  other contexts. I realise this is not just a R thing but a problem
  with my theoretical background.
 
 
  Thanks for detailed reply!
 
  Rick.
 
 
  
   But depending on the model and hypothesis you want to test, another
   technique than CUSUM of squares might be more appropriate and also
   available in strucchange.
 
  
   hth,
   Z
  
Any help or pointers about where to look would be more than
appreciated! Hopefully I have just missed obvious something in
the package...
   
Many thanks,
   
Rick R.
   
   
  
 




[R] replace matrix values with names from a dataframe

2005-07-28 Thread jhainm
Hi,

I am looking for a way to replace matrix values with names from a dataframe.

Let me do this by example: I have a dataframe:

> data
  city.name
1    munich
2     paris
3     tokio
4    london
5    boston

each city name corresponds to only one index number (there is only one
observation for each city). After doing some matching I end up with a matrix
that looks something like this:

 > X
      [,1] [,2]
 [1,]    2    4
 [2,]    5    1
 [3,]    5    3
 [4,]   12  217
 [5,]   16   13

Here the numbers in the matrix are the index numbers from my original dataset;
each row is a matched pair (so e.g. the first row tells me that obs. number 2
(i.e. Paris) was matched to obs. number 4 (i.e. London)).

Now I am looking for a quick way to transform the index numbers back to city
names, so that at the end I have a matrix that looks something like this:

 > X.transformed
      [,1]     [,2]
 [1,] "paris"  "london"
 [2,] "boston" "munich"
 [3,] "boston" "tokio"
 [4,] "12"     "217"
 [5,] "16"     "13"

etc. So instead of the index number, the matrix should contain the names that
corresponds to it. In my real data, I have many many names and replacing each
value by hand would take too long. Any help is highly appreciated.

Thank you.

Regards,
Jens



[R] catching errors in a loop

2005-07-28 Thread Anders Bjørgesæter
Hello

I can't figure out how to handle errors in R. I have a loop, e.g.

for (i in 2:n) {
    ...
    fit <- nls(model, start = list(...))
    # if any type of error occurs, write i to a text file
    ...
}

I put try() around the nls expression, and this lets me run through the
loop without R stopping (which I want, because each loop takes some time,
so I do not want it to stop), but I also want to capture the variable when
an error occurs.

Appreciate any help

/Anders
- - - - - - - - - - - - - - - - - - -
I tried to use:
** options(error=write(variable.names(matrix[i]), file="..\\error.txt",
append = TRUE)), hoping this would make R write to the text file every
time an error occurred (but this made R write all i's in the loop to the
text file).
** tryCatch(nls(model, start=list(...)), finally = write(...)) also writes
to a text file, but not necessarily when there is an error.
** "if (Parameter > x) errorM=9 else errorM=0" works, but I want to capture
any type of error.
- - - - - - - - - - - - - - - - -
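
A minimal sketch of the usual try()/inherits() pattern (the model
formula, data and start values are placeholders):

for (i in 2:n) {
    fit <- try(nls(y ~ a * exp(b * x), data = dat[[i]],
                   start = list(a = 1, b = 0.1)), silent = TRUE)
    if (inherits(fit, "try-error")) {
        write(i, file = "error.txt", append = TRUE)  # record the failing index
        next
    }
    ## ... use fit ...
}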



Re: [R] odesolve/lsoda differences on Windows and Mac UPDATE

2005-07-28 Thread Martin Henry H. Stevens
To all:
After talking to Woody Setzer offline, I ran my problem scripts
again. I am embarrassed to say that they worked fine for all previously
intransigent parameter sets. I compared the results to those of my
Windows buddy, and they are essentially identical, with an average
absolute difference at each time point of 1.3e-06. I am planning to go
back and try to understand what went wrong before. I never used hmin
(mistakenly of course, instead of hmax) UNTIL a run with default lsoda
argument values failed. Now the default values work! Thus the mistaken
use of hmin isn't the entire answer.
Thank you for the time and interest, and I apologize for troubling you. 
I will get back to the list if I can ever repeat the problem or if I 
can figure out what I did wrong.

Best Regards,
Hank



Dr. Martin Henry H. Stevens, Assistant Professor
338 Pearson Hall
Botany Department
Miami University
Oxford, OH 45056

Office: (513) 529-4206
Lab: (513) 529-4262
FAX: (513) 529-4243
http://www.cas.muohio.edu/botany/bot/henry.html
http://www.muohio.edu/ecology/
http://www.muohio.edu/botany/
E Pluribus Unum



Re: [R] CSV file and date. Dates are read as factors!

2005-07-28 Thread Jorge de la Vega Gongora
Use the chron package. Before importing the data into R from the CSV file,
convert the dates to numeric format. Dates are just a sequence from a
starting point. I use the following to work with dates, assuming you have a
column in your CSV file with header "date":

options(chron.origin=c(month=12, day=31, year=1899))
x <- read.csv("../bla/bla.csv")
x$date <- chron(x$date, format="y-m-d")

Or, if your CSV file has labels like 02/03/2005, then replace the last
line with:

x$date <- chron(as.character(x$date), format="m/d/y")

Then you can use your date field to do date operations.

Hope this is useful.

 On 7/27/05, John Sorkin [EMAIL PROTECTED] wrote: 
 
 I am using read.csv to read a CSV file (produced by saving an Excel file
 as a CSV file). The columns containing dates are being read as factors.
 Because of this, I can not compute follow-up time, i.e.
 Followup <- postDate - preDate. I would appreciate any suggestion that would
 help me read the dates as dates and thus allow me to calculate follow-up
 time.
 Thanks
 John
 
 John Sorkin M.D., Ph.D.
 Chief, Biostatistics and Informatics
 Baltimore VA Medical Center GRECC and
 University of Maryland School of Medicine Claude Pepper OAIC
 
 University of Maryland School of Medicine
 Division of Gerontology
 Baltimore VA Medical Center
 10 North Greene Street
 GRECC (BT/18/GR)
 Baltimore, MD 21201-1524
 
 410-605-7119
 -- NOTE NEW EMAIL ADDRESS:
 [EMAIL PROTECTED]
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] replace matrix values with names from a dataframe

2005-07-28 Thread Dimitris Rizopoulos
maybe something like this could be helpful

city.name <- c("munich", "paris", "tokio", "london", "boston")
X <- cbind(c(2, 5, 5), c(4, 1, 3))

matrix(city.name[X], ncol = 2)
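
For index values with no matching name (like the 12 and 217 in the 
poster's fourth row), city.name[X] returns NA. A hedged follow-up sketch 
that falls back to the original index in those cells:

nm <- matrix(city.name[X], ncol = 2)
ifelse(is.na(nm), X, nm)   # keeps the number where no name exists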


I hope it helps.

Best,
Dimitris


Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven

Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/16/336899
Fax: +32/16/337015
Web: http://www.med.kuleuven.be/biostat/
 http://www.student.kuleuven.be/~m0390867/dimitris.htm



- Original Message - 
From: [EMAIL PROTECTED]
To: r-help@stat.math.ethz.ch
Sent: Thursday, July 28, 2005 3:52 PM
Subject: [R] replace matrix values with names from a dataframe


 Hi,

 I am looking for a way to replace matrix values with names from a 
 dataframe.

 Let me do this by example: I have a dataframe:

 data
   city.name
 1    munich
 2     paris
 3     tokio
 4    london
 5    boston

 each city name corresponds to only one index number (there is only 
 one
 observation for each city). After doing some matching I end up with 
 a matrix
 that looks something like this:

 X
       [,1] [,2]
  [1,]    2    4
  [2,]    5    1
  [3,]    5    3
  [4,]   12  217
  [5,]   16   13

 Here the numbers in the matrix are the index numbers from my 
 original dataset,
 each row is a matched pair (so e.g. the first row tells me that obs. 
 number 2
 (i.e. Paris) was matched to obs number 4 (i.e. London)).

 Now I am looking for a quick way to transform the index numbers back 
 to city
 names, so that at the end I have a matrix that looks something like 
 this:

 X.transformed
        [,1]    [,2]
  [1,]  paris   london
  [2,]  boston  munich
  [3,]  boston  tokio
  [4,]  12      217
  [5,]  16      13

 etc. So instead of the index number, the matrix should contain the 
 names that
 corresponds to it. In my real data, I have many many names and 
 replacing each
 value by hand would take too long. Any help is highly appreciated.

 Thank you.

 Regards,
 Jens

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] conversion from SAS

2005-07-28 Thread Frank E Harrell Jr
alessandro carletti wrote:
 Hi, I wonder if anybody could help me in converting
 this easy SAS program into R.
 (I'm still trying to do that!)

Converting that program into R will be very feasible and the solution 
will be far more elegant than SAS.  But I think you are expecting other 
people to do your work.

Frank

 
 PROC IMPORT OUT= WORK.CHLA_italian 
 DATAFILE= C:\Documents and
 Settings\carleal\My
 Documents\REBECCA\stat\sas\Allnutrients.xls 
 DBMS=EXCEL2000 REPLACE;
  GETNAMES=YES;
 RUN;
 data chla_italian;
set chla_italian;
year=year(datepart(date));
month=month(datepart(date));
run;
 
 proc sort data=chla_italian; by station; run;
 /* Check bloom for seasonal cycle outliers */
 data sort_dataset;
set chla_italian;
chla=chl_a;
dayno=date-mdy(1,1,year)+1;
cos1=cos(2*3.14*dayno/365);
sin1=sin(2*3.14*dayno/365);
cos2=cos(4*3.14*dayno/365);
sin2=sin(4*3.14*dayno/365);
cos3=cos(6*3.14*dayno/365);
sin3=sin(6*3.14*dayno/365);
cos4=cos(8*3.14*dayno/365);
sin4=sin(8*3.14*dayno/365);
bloom=0;
w_chla=1/chla/chla;
 run;
 ODS listing close;
 %macro sort_event(cut_off,last=0);
 /*proc glm data=sort_dataset;
class year;
model logchla=year cos1 sin1 cos2 sin2 cos3 sin3
 cos4 sin4 /solution;
by station;
where bloom=0;
output out=chla_res predicted=pred student=studres
 cookd=cookd rstudent=rstudent u95=u95;
lsmeans year / at (cos1 sin1 cos2 sin2 cos3 sin3
 cos4 sin4)=(0 0 0 0 0 0 0 0);
ODS output ParameterEstimates=parmest
 LSmeans=lsmeans;
 run;*/
 proc glm data=sort_dataset;
class year month;
model chla=/solution;
by station;
weight w_chla;
where bloom=0;
output out=chla_res predicted=pred student=studres
 cookd=cookd 
 rstudent=rstudent ucl=ucl lcl=lcl / alpha=0.02;
 *   lsmeans year / at (cos1 sin1)=(0 0);
 *   ODS output ParameterEstimates=parmest
 LSmeans=lsmeans;
 run;
 quit;
 data sort_dataset;
set chla_res;
increase=ucl-pred;
 if chla>UCL then bloom=1;
w_chla=1/(50+5*pred*pred);
 %if &last=0 %then %do; drop ucl lcl cookd rstudent
 studres pred; %end;
 run;
 %mend sort_event;
 ODS listing;
 %sort_event(2.0);
 %sort_event(2.0);
 %sort_event(2.0);
 %sort_event(2.0);
 %sort_event(2.0);
 %sort_event(2.0);
 %sort_event(2.0);
 %sort_event(2.0);
 %sort_event(2.0);
 %sort_event(2.0);
 %sort_event(2.0);
 %sort_event(2.0);
 %sort_event(2.0);
 %sort_event(2.0);
 %sort_event(2.0,last=1);
 /* Combine bloom information with all chlorophyll
 values */
 data chla_sep;
merge sort_dataset(keep=station date bloom pred ucl
 lcl) chla_italian (rename=(chl_a=chla));
by station date;
 *   lcl=exp(lcl);
 *   ucl=exp(ucl);
 *   pred=exp(pred);
if bloom=. then bloom=1;
if bloom=0 then chla1=chla; else chla1=.;
if bloom=1 then chla2=chla; else chla2=.;
 run;
  
 symbol1 i=none value=plus color=red;
 symbol2 i=none value=plus color=green;
 symbol3 i=join value=none line=1 color=black;
 axis1 logbase=10; axis1;
 proc gplot data=chla_sep;
plot chla2*date=1 chla1*date=2 (ucl pred
 lcl)*date=3 /overlay vaxis=axis1;
by station;
 run;
 quit;
 proc glm data=chla_sep;
class station year month;
model salinity temperature Transparency__m_
 Nitrate__mmol_l_1_ Phosphate__mmol_l_1_
 Silicate__mmol_l_1_=bloom month/solution;
by station;
 run;
 quit;
 
 Thanks
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
 


-- 
Frank E Harrell Jr   Professor and Chair   School of Medicine
  Department of Biostatistics   Vanderbilt University

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] catching errors in a loop

2005-07-28 Thread Uwe Ligges
Anders Bjørgesæter wrote:
 Hello
 
 I can't figure out how to handle errors in R. I have a loop, e.g.
 
 for (i in 2:n) {
 .
  fit <- nls(model, start=list…)
  if any type of error occurs, write i to a text file
 .
 }
 
 I put "try" around the nls expression and this let me run through the 
 loop without R stopping (which I want, because each iteration takes some 
 time, so I do not want it to stop), but I also want to capture the 
 variable when an error occurs.

Right idea:

   fit <- try(nls(model, ...))
   if(inherits(fit, "try-error"))
 write(i, file="hello.txt")
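
A self-contained sketch of the whole pattern (toy data and a made-up 
exponential model, not Anders' actual one), logging the failing 
iteration numbers to a file:

set.seed(1)
x <- 1:20
for (i in 1:5) {
  y <- 2 * exp(0.1 * x) + rnorm(20, sd = i)   # noisier each pass
  fit <- try(nls(y ~ a * exp(b * x), start = list(a = 1, b = 1)))
  if (inherits(fit, "try-error"))
    write(i, file = "failed_runs.txt", append = TRUE)  # hypothetical file
}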

Uwe Ligges





 
 Appreciate any help
 
 /Anders
 - - - - - - - - - - - - - - - - - - -
 I tried to use:
 **"options(error=write(variable.names(matrix[i]), 
 file="..\\error.txt", append = TRUE))", hoping this made R write to the text 
 file every time an error occurred (but this made R write all i's in the 
 loop to the text file).
 **tryCatch(fit <- nls(model, start=list…), finally=write(…)) also writes to a 
 text file, but not necessarily when there is an error.
 **"if (Parameter > x) errorM=9 else errorM=0" works but I want to capture any 
 type of error.
 - - - - - - - - - - - - - - - - -
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Cochran-Armitage-trend-test

2005-07-28 Thread Liaw, Andy
Simply doing RSiteSearch("Armitage") would have given you:
http://finzi.psych.upenn.edu/R/Rhelp02a/archive/20396.html
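
That archived thread aside, base R also has prop.trend.test(), which 
computes the chi-squared test for trend in proportions, essentially 
the Cochran-Armitage test; a toy sketch with made-up counts:

events <- c(10, 15, 25)   # successes per ordered group (made up)
totals <- c(50, 50, 50)   # group sizes (made up)
prop.trend.test(events, totals, score = 1:3)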

Andy

 From: [EMAIL PROTECTED]
 
 Hi!
 
 I am searching for the Cochran-Armitage-trend-test. Is it 
 included in an
 R-package? 
 
 Thank you!
 
 --
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html
 
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] CSV file and date. Dates are read as factors!

2005-07-28 Thread Don MacQueen
It's really pretty simple.

First, if you supply as.is=TRUE to read.csv() [or read.table()] then 
your dates will be read as character strings, not factors. That saves 
the step of converting them from factor to character.

Then, use as.Date() to convert the date columns to objects of class 
Date. You will have to specify the format, if your dates are not in 
the default format.

   tmp <- as.Date('2002-5-1')
   as.Date(Sys.time()) - tmp
Time difference of 1184 days

If your dates include times, then use as.POSIXct() instead of as.Date().

   tmp <- as.POSIXct('2002-5-1 13:21')
   Sys.time() - tmp
Time difference of 1183.746 days

If you don't want to use as.is, perhaps because you have other 
columns that you *want* to have as factors, then either supply 
colClasses to read.csv, or else just use format() to convert the 
factors to character.

as.Date(format(your_date_column))

As an aside, you might save yourself some time by using read.xls() 
from the gdata package.

And of course, there's always the ugly work-around. In your Excel, 
create new columns in which the dates are formatted as numbers, 
presumably as the number of days since whatever Excel uses for its 
origin. Then, in R, you can simply subtract the numbers. If you have 
date-time values in Excel, this might be a little trickier.

-Don

At 9:28 PM -0400 7/27/05, John Sorkin wrote:
I am using read.csv to read a CSV file (produced by saving an Excel file
as a CSV file). The columns containing dates are being read as factors.
Because of this, I can not compute follow-up time, i.e.
Followup <- postDate - preDate. I would appreciate any suggestion that would
help me read the dates as dates and thus allow me to calculate follow-up
time.
Thanks
John

John Sorkin M.D., Ph.D.
Chief, Biostatistics and Informatics
Baltimore VA Medical Center GRECC and
University of Maryland School of Medicine Claude Pepper OAIC

University of Maryland School of Medicine
Division of Gerontology
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524

410-605-7119
-- NOTE NEW EMAIL ADDRESS:
[EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


-- 
--
Don MacQueen
Environmental Protection Department
Lawrence Livermore National Laboratory
Livermore, CA, USA

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Cochran-Armitage-trend-test

2005-07-28 Thread Vito Ricci
Hi,
see:
http://finzi.psych.upenn.edu/R/Rhelp02a/archive/20396.html

Regards,
Vito



[EMAIL PROTECTED] wrote:

Hi!

I am searching for the Cochran-Armitage-trend-test. Is
it included in an
R-package? 

Thank you!



Diventare costruttori di soluzioni
Became solutions' constructors

The business of the statistician is to catalyze 
the scientific learning process.  
George E. P. Box

Statistical thinking will one day be as necessary for efficient citizenship as 
the ability to read and write
H. G. Wells

Top 10 reasons to become a Statistician

 1. Deviation is considered normal
 2. We feel complete and sufficient
 3. We are 'mean' lovers
 4. Statisticians do it discretely and continuously
 5. We are right 95% of the time
 6. We can legally comment on someone's posterior distribution
 7. We may not be normal, but we are transformable
 8. We never have to say we are certain
 9. We are honestly significantly different
10. No one wants our jobs


Visitate il portale http://www.modugno.it/
e in particolare la sezione su Palese  
http://www.modugno.it/archivio/palesesanto_spirito/

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] stl()

2005-07-28 Thread Vito Ricci
Hi,

maybe the residuals are autocorrelated; you could use
ARIMA models. See arima() in R to fit an ARIMA model.
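
A minimal sketch (using the built-in co2 series, not your data) of 
checking the remainder and then fitting a low-order ARIMA to it:

fit.stl <- stl(co2, s.window = "periodic")
r <- fit.stl$time.series[, "remainder"]
acf(r)                        # any autocorrelation left?
arima(r, order = c(1, 0, 0))  # e.g. an AR(1) for the remainder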
Regards,
Vito

Sebastian.Leuzinger at unibas.ch wrote:

Hello, anyone got an idea on how to use stl() so that
the remainder eventually 
becomes white noise? i used stl repeatedly but there
is autocorrelation in 
the remainder that i can't get rid of. 
os: linux suse9.3

Sebastian Leuzinger
Institute of Botany, University of Basel
Schönbeinstr. 6 CH-4056 Basel
ph0041 (0) 61 2673511
fax   0041 (0) 61 2673504
email Sebastian.Leuzinger at unibas.ch 
web   http://pages.unibas.ch/botschoen/leuzinger


Diventare costruttori di soluzioni
Became solutions' constructors

The business of the statistician is to catalyze 
the scientific learning process.  
George E. P. Box

Statistical thinking will one day be as necessary for efficient citizenship as 
the ability to read and write
H. G. Wells

Top 10 reasons to become a Statistician

 1. Deviation is considered normal
 2. We feel complete and sufficient
 3. We are 'mean' lovers
 4. Statisticians do it discretely and continuously
 5. We are right 95% of the time
 6. We can legally comment on someone's posterior distribution
 7. We may not be normal, but we are transformable
 8. We never have to say we are certain
 9. We are honestly significantly different
10. No one wants our jobs


Visitate il portale http://www.modugno.it/
e in particolare la sezione su Palese  
http://www.modugno.it/archivio/palesesanto_spirito/

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] CSV file and date. Dates are read as factors!

2005-07-28 Thread Peter Dalgaard
Don MacQueen [EMAIL PROTECTED] writes:

 It's really pretty simple.
 
 First, if you supply as.is=TRUE to read.csv() [or read.table()] then 
 your dates will be read as character strings, not factors. That saves 
 the step of converting them from factor to character.
 
 Then, use as.Date() to convert the date columns to objects of class 
 Date. You will have to specify the format, if your dates are not in 
 the default format.
 
    tmp <- as.Date('2002-5-1')
    as.Date(Sys.time()) - tmp
 Time difference of 1184 days
 
 If your dates include times, then use as.POSIXct() instead of as.Date().
 
    tmp <- as.POSIXct('2002-5-1 13:21')
    Sys.time() - tmp
 Time difference of 1183.746 days
 
 If you don't want to use as.is, perhaps because you have other 
 columns that you *want* to have as factors, then either supply 
 colClasses to read.csv, or else just use format() to convert the 
 factors to character.
 
 as.Date(format(your_date_column))

 Actually, you can forget about the as.is stuff from 2.1.1 onwards
since as.Date works happily with factors:

 as.Date.factor
function (x, ...)
as.Date(as.character(x), ...)

(previous versions forgot to pass the ... arguments so it only worked
there if the standard format was used.) I suspect that as.character()
is preferable to format() - there could be issues with padding.

However, you can apply as.is selectively on columns: It can be a
logical vector or a vector of indices (numeric or character).  
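
For example (hypothetical column names), keeping the other columns as 
factors while reading just the date columns as character and converting:

x <- read.csv("bla.csv", as.is = c("preDate", "postDate"))
x$preDate  <- as.Date(x$preDate,  format = "%m/%d/%Y")
x$postDate <- as.Date(x$postDate, format = "%m/%d/%Y")
x$postDate - x$preDate   # follow-up time as a difftime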
 
 As an aside, you might save yourself some time by using read.xls() 
 from the gdata package.
 
 And of course, there's always the ugly work-around. In your Excel, 
 create new columns in which the dates are formatted as numbers, 
 presumably as the number of days since whatever Excel uses for its 
 origin. Then, in R, you can simply subtract the numbers. If you have 
 date-time values in Excel, this might be a little trickier.
 
 -Don
 
 At 9:28 PM -0400 7/27/05, John Sorkin wrote:
 I am using read.csv to read a CSV file (produced by saving an Excel file
 as a CSV file). The columns containing dates are being read as factors.
 Because of this, I can not compute follow-up time, i.e.
 Followup <- postDate - preDate. I would appreciate any suggestion that would
 help me read the dates as dates and thus allow me to calculate follow-up
 time.
 Thanks
 John
 
 John Sorkin M.D., Ph.D.
 Chief, Biostatistics and Informatics
 Baltimore VA Medical Center GRECC and
 University of Maryland School of Medicine Claude Pepper OAIC
 
 University of Maryland School of Medicine
 Division of Gerontology
 Baltimore VA Medical Center
 10 North Greene Street
 GRECC (BT/18/GR)
 Baltimore, MD 21201-1524
 
 410-605-7119
 --- NOTE NEW EMAIL ADDRESS:
 [EMAIL PROTECTED]
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
 
 
 -- 
 --
 Don MacQueen
 Environmental Protection Department
 Lawrence Livermore National Laboratory
 Livermore, CA, USA
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
 

-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Cochran-Armitage-trend-test

2005-07-28 Thread Eric Lecoutre
Hi there,

I often do receive some mails about this piece of code regarding
Cochran-Armitage or Mantel Chi square.
The archived mail does unfortunately lack some pieces of code (function
scores).
I copy there all my raw code that I did implement to mimic SAS PROC FREQ
statistics regarding the analysis of contingency tables. Whoever is
interested to take it and rework it a little bit (for example redefining
outputs so that they suits a htest object) is welcome.

Best wishes,

Eric


-



# R functions to provides statistics on contingency tables
# Mimics SAS PROC FREQ outputs
# Implementation is the one described in SAS PROC FREQ manual

# Eric Lecoutre [EMAIL PROTECTED]

# Feel free to use / modify / document / add to a package



# UTILITARY FUNCTIONS
#


print.ordtest=function(l,...)
{
 tmp=matrix(c(l$estimate,l$ASE),nrow=1)
 dimnames(tmp)=list(l$name,c("Estimate","ASE"))
 print(round(tmp,4),...)
}


compADPQ=function(x)
{
nr=nrow(x)
nc=ncol(x)
Aij=matrix(0,nrow=nr,ncol=nc)
Dij=matrix(0,nrow=nr,ncol=nc)
for (i in 1:nr) {
for (j in 1:nc) {

Aij[i,j]=sum(x[1:i,1:j])+sum(x[i:nr,j:nc])-sum(x[i,])-sum(x[,j])

Dij[i,j]=sum(x[i:nr,1:j])+sum(x[1:i,j:nc])-sum(x[i,])-sum(x[,j])
}}
P=sum(x*Aij)
Q=sum(x*Dij)
return(list(Aij=Aij,Dij=Dij,P=P,Q=Q))
}


scores=function(x,MARGIN=1,method="table",...)
{
# MARGIN
#   1 - row
#   2 - columns

# Methods for ranks are
#
# x - default
# rank
# ridit
# modridit

if (method=="table")
{
if (is.null(dimnames(x))) return(1:(dim(x)[MARGIN]))
else {
options(warn=-1)
if
(sum(is.na(as.numeric(dimnames(x)[[MARGIN]])))>0)
{
out=(1:(dim(x)[MARGIN]))
}
else
{
out=(as.numeric(dimnames(x)[[MARGIN]]))
}
options(warn=0)
}
}
else{
### method is a rank one
Ndim=dim(x)[MARGIN]
OTHERMARGIN=3-MARGIN

ranks=c(0,cumsum(apply(x,MARGIN,sum)))[1:Ndim]+(apply(x,MARGIN,sum)+1)/2
if (method=="ranks") out=ranks
if (method=="ridit") out=ranks/(sum(x))
if (method=="modridit") out=ranks/(sum(x)+1)
}

return(out)
}


# FUNCTIONS
#

tablegamma=function(x)
{
# Statistic
tmp=compADPQ(x)
P=tmp$P
Q=tmp$Q
gamma=(P-Q)/(P+Q)
# ASE
Aij=tmp$Aij
Dij=tmp$Dij
tmp1=4/(P+Q)^2
tmp2=sqrt(sum((Q*Aij - P*Dij)^2 * x))
gamma.ASE=tmp1*tmp2
# Test  
var0=(4/(P+Q)^2) * (sum(x*(Aij-Dij)^2) - ((P-Q)^2)/sum(x))
tb=gamma/sqrt(var0)
p.value=2*(1-pnorm(tb))
# Output

out=list(estimate=gamma,ASE=gamma.ASE,statistic=tb,p.value=p.value,
name="Gamma",bornes=c(-1,1))
class(out)="ordtest"
return(out)
}
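
A quick usage sketch for the functions above (made-up 2x3 counts):

tab <- matrix(c(10, 20, 15, 25, 5, 30), nrow = 2)
tablegamma(tab)   # prints estimate and ASE via print.ordtest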


tabletauc=function(x)
{
tmp=compADPQ(x)
P=tmp$P
Q=tmp$Q
m=min(dim(x))
n=sum(x)
# statistic

tauc=(m*(P-Q))/(n^2*(m-1))
# ASE   
Aij=tmp$Aij
Dij=tmp$Dij
dij=Aij-Dij
tmp1=2*m/((m-1)*n^2)
tmp2= sum(x * dij^2) - (P-Q)^2/n
ASE=tmp1*sqrt(tmp2)

# Test  
tb=tauc/ASE
p.value=2*(1-pnorm(tb))
# Output

out=list(estimate=tauc,ASE=ASE,statistic=tb,p.value=p.value,
name="Kendall's tau-c",bornes=c(-1,1))
class(out)="ordtest"
return(out)
}

tabletaub=function(x)
{
# Statistic
tmp=compADPQ(x)
P=tmp$P
Q=tmp$Q
n=sum(x)
wr=n^2 - sum(apply(x,1,sum)^2)
wc=n^2 - sum(apply(x,2,sum)^2)
taub=(P-Q)/sqrt(wr*wc)  
# ASE
Aij=tmp$Aij
Dij=tmp$Dij
w=sqrt(wr*wc)
dij=Aij-Dij
nidot=apply(x,1,sum)
ndotj=apply(x,2,sum)
n=sum(x)
vij=outer(nidot,ndotj, FUN=function(a,b) return(a*wc+b*wr))
tmp1=1/(w^2)
tmp2= sum(x*(2*w*dij + taub*vij)^2)
tmp3=n^3*taub^2*(wr+wc)^2
tmp4=sqrt(tmp2-tmp3)
taub.ASE=tmp1*tmp4
# Test
var0=4/(wr*wc) * (sum(x*(Aij-Dij)^2) - (P-Q)^2/n)
tb=taub/sqrt(var0)
p.value=2*(1-pnorm(tb))
# Output

out=list(estimate=taub,ASE=taub.ASE,statistic=tb,p.value=p.value,
name="Kendall's tau-b",bornes=c(-1,1))
class(out)="ordtest"
return(out)
}

tablesomersD=function(x,dep=2)
{
# dep: which dimension stands for the dependant variable
# 1 - ROWS
# 2 - COLS
# Statistic
if (dep==1) x=t(x)
tmp=compADPQ(x)
P=tmp$P
Q=tmp$Q
n=sum(x)
 

[R] Help: how to specify the current active window by mouse

2005-07-28 Thread Shengzhe Wu
Hi Duncan,

Thanks for your reply. I will try to use tcltk to do that.

Sincerely,
Shengzhe

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Running Internet Explorer from Withing R

2005-07-28 Thread Walter R. Paczkowski
Good morning,

Is it possible to open an html file using IE but from within R?  I wrote a 
small function to generate tables in html but I'd like to write another 
function to call IE and open the html file.

Thanks,

Walt Paczkowski



Walter R. Paczkowski, Ph.D.
Data Analytics Corp.
44 Hamilton Lane
Plainsboro, NJ  08536
(V) 609-936-8999
(F) 609-936-3733

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Running Internet Explorer from Withing R

2005-07-28 Thread Peter Dalgaard
Walter R. Paczkowski [EMAIL PROTECTED] writes:

 Good morning,
 
 Is it possible to open an html file using IE but from within R? I
 wrote a small function to generate tables in html but I'd like to
 write another function to call IE and open the html file.

browseURL() (if it exists on Windows)

-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Running Internet Explorer from Withing R

2005-07-28 Thread ronggui
see ?browseURL 



=== 2005-07-29 00:08:00 you wrote: ===

Good morning,

Is it possible to open an html file using IE but from within R?  I wrote a 
small function to generate tables in html but I'd like to write another 
function to call IE and open the html file.

Thanks,

Walt Paczkowski



Walter R. Paczkowski, Ph.D.
Data Analytics Corp.
44 Hamilton Lane
Plainsboro, NJ  08536
(V) 609-936-8999
(F) 609-936-3733

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

= = = = = = = = = = = = = = = = = = = =



 

2005-07-29

--
Deparment of Sociology
Fudan University

Blog:http://sociology.yculblog.com

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

[R] problem with an IF statement?

2005-07-28 Thread tom wright
Can somebody please take a look at this and tell me whats going wrong?
It seems to be parsing wrongly around the 'if' statement and gives me a
directory listing.
Thanks in advance
Tom
N.B. datan is an invented dataset


xvals <- c(1,0.4,0.2)
datan <- data.frame(s1=c(3,4,5),s2=c(5,5,5),s3=c(21,55,34),s4=c(5,3,2))

datan$sint <- NA
datan$sgrad <- NA
for(icount in 1:dim(datan)[1]) {
    yvals <- c(datan[icount,4],datan[icount,3],datan[icount,2])
    if((is.na(yvals[1]) + is.na(yvals[2]) + is.na(yvals[3])) < 2) {
        g <- lm(yvals~xvals)
        datan$sint[icount] <- g$coef[1]
        datan$sgrad[icount] <- g$coef[2]
    }
}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Running Internet Explorer from Withing R

2005-07-28 Thread Mike Waters
Walt,

As Peter said - browseURL(), which does work on Windows (well XP SP2 for
definite).

For example, to open a google search window:

browseURL("http://www.google.co.uk")

Regards,

Mike

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Walter 
 R. Paczkowski
 Sent: 28 July 2005 17:08
 To: r-help@stat.math.ethz.ch
 Subject: [R] Running Internet Explorer from Withing R
 
 Good morning,
 
 Is it possible to open an html file using IE but from within 
 R?  I wrote a small function to generate tables in html but 
 I'd like to write another function to call IE and open the html file.
 
 Thanks,
 
 Walt Paczkowski
 
 
 
 Walter R. Paczkowski, Ph.D.
 Data Analytics Corp.
 44 Hamilton Lane
 Plainsboro, NJ  08536
 (V) 609-936-8999
 (F) 609-936-3733
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] problem with an IF statement?

2005-07-28 Thread Thomas Lumley
On Thu, 28 Jul 2005, tom wright wrote:

 Can somebody please take a look at this and tell me whats going wrong?
 It seems to be parsing wrongly around the 'if' statement and gives me a
 directory listing.

I think the code that you posted is not quite the code you were 
using.

The problem is that your code had tab characters in it.  The readline 
support in R interprets a tab character as a request to complete a file 
name.  If it can't do that unambiguously it will show the directory 
listing in response to a second tab.

Your email program has turned the tabs into spaces, which fixes the 
problem.  Your text editor probably has a way to do this.

Incidentally, this happens only on certain platforms, which is why the 
posting guide suggests you should say what system you are using.

-thomas


 Thanks in advance
 Tom
 N.B. datan is an invented dataset


 xvals <- c(1,0.4,0.2)
 datan <- data.frame(s1=c(3,4,5),s2=c(5,5,5),s3=c(21,55,34),s4=c(5,3,2))

 datan$sint <- NA
 datan$sgrad <- NA
 for(icount in 1:dim(datan)[1]) {
   yvals <- c(datan[icount,4],datan[icount,3],datan[icount,2])
   if((is.na(yvals[1]) + is.na(yvals[2]) + is.na(yvals[3])) < 2) {
   g <- lm(yvals~xvals)
   datan$sint[icount] <- g$coef[1]
   datan$sgrad[icount] <- g$coef[2]
   }
 }

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Thomas Lumley   Assoc. Professor, Biostatistics
[EMAIL PROTECTED]   University of Washington, Seattle

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] problem with an IF statement?

2005-07-28 Thread Peter Dalgaard
tom wright [EMAIL PROTECTED] writes:

 Can somebody please take a look at this and tell me whats going wrong?
 It seems to be parsing wrongly around the 'if' statement and gives me a
 directory listing.
 Thanks in advance
 Tom
 N.B. datan is an invented dataset
 
 
 xvals <- c(1,0.4,0.2)
 datan <- data.frame(s1=c(3,4,5),s2=c(5,5,5),s3=c(21,55,34),s4=c(5,3,2))
 
 datan$sint <- NA
 datan$sgrad <- NA
 for(icount in 1:dim(datan)[1]) {
   yvals <- c(datan[icount,4],datan[icount,3],datan[icount,2])
   if((is.na(yvals[1]) + is.na(yvals[2]) + is.na(yvals[3])) < 2) {
   g <- lm(yvals~xvals)
   datan$sint[icount] <- g$coef[1]
   datan$sgrad[icount] <- g$coef[2]
   }
 }


Did you copy+paste it? The TABs will behave as if you pressed the TAB
key... 

-p

-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] NA handling with lm

2005-07-28 Thread Andreas Cordes
Hi,
I have a problem that is hopefully easily solvable, but I don't find the 
clue in the documentation. I am examining a linear model. One of the 
variables has NA values. Even though na.action=na.omit, I get NA as 
results for this variable. Can I use lm in such a case to get estimates? 
Or do I have to do some form of imputation before doing so?
Here is the call and the results, hope you can help.
Best regards,
Andreas
-
lm(formula = ESSIK ~ ALTER + as.factor(S2) + as.factor(S15A) +
as.factor(S8) + as.factor(LAND) + as.factor(S18B) + as.factor(BERUF) +
as.factor(KIRCHE) + as.factor(H_EINKOM) + as.factor(PARTNERS),
na.action = na.omit)

Residuals:
 Min   1Q   Median   3Q  Max
-17.0675  -2.0151   0.4267   2.7644   9.7333

Coefficients: (2 not defined because of singularities)
  Estimate Std. Error t value Pr(>|t|)   
(Intercept)  23.755915   1.844110  12.882  < 2e-16 ***
[...]
as.factor(BERUF)7-1.236836   0.701323  -1.764 0.077942 . 
as.factor(KIRCHE)1   -0.811751   0.237699  -3.415 0.000649 ***
as.factor(H_EINKOM)2NA NA  NA   NA   
as.factor(H_EINKOM)3NA NA  NA   NA   
as.factor(PARTNERS)1  2.057070   0.342546   6.005 2.23e-09 ***

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] matrix form

2005-07-28 Thread Hathaikan Chootrakool

I am a new user. I was wondering how to define a collection of data in
matrix form. This is a part of my data; there are 26 studies and 3
treatments:

    Arm No  Study no.  Treatment  Num(r)  Total(n)
 1       1          1          1       2        43
 2       2          1          2       9        42
 3       3          1          3      13        41
 4       4          2          1      12        68
 5       5          2          2      13        73
 6       6          2          3      13        72
 7       7          3          1       4        20
 8       8          3          3       4        16
 9       9          4          1      20       116
 10     10          4          3      30       111

I would like to use a matrix indexed as [study no., treatment]; how can I
define code for such a matrix?

Can anyone help me? Thank you very much.

Hathaikan
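
One hedged sketch (illustrative object names, built from the rows shown 
above): a matrix indexed by [study, treatment] can be filled with matrix 
subscripting via cbind():

d <- data.frame(study = c(1,1,1,2,2,2,3,3,4,4),
                trt   = c(1,2,3,1,2,3,1,3,1,3),
                r     = c(2,9,13,12,13,13,4,4,20,30))
m <- matrix(NA, nrow = max(d$study), ncol = max(d$trt))
m[cbind(d$study, d$trt)] <- d$r   # m[study, treatment] holds Num(r)
m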

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] problem with an IF statement?

2005-07-28 Thread Peter Dalgaard
Thomas Lumley [EMAIL PROTECTED] writes:

 On Thu, 28 Jul 2005, tom wright wrote:
 
  Can somebody please take a look at this and tell me whats going wrong?
  It seems to be parsing wrongly around the 'if' statement and gives me a
  directory listing.
 
 I think the code that you posted is not quite the code you were 
 using.
 
 The problem is that your code had tab characters in it.  The readline 
 support in R interprets a tab character as a request to complete a file 
 name.  If it can't do that unambiguously it will show the directory 
 listing in response to a second tab.
 
 Your email program has turned the tabs into spaces, which fixes the 
 problem.  Your text editor probably has a way to do this.

I did see the TABs and they're still present for me in the cited code
below, so I suppose it is Thomas' system which occasionally converts
them to spaces...
 
 Incidentally, this happens only on certain platforms, which is why the 
 posting guide suggests you should say what system you are using.
 
   -thomas
 
 
  Thanks in advance
  Tom
  N.B. datan is an invented dataset
 
 
  xvals <- c(1,0.4,0.2)
  datan <- data.frame(s1=c(3,4,5),s2=c(5,5,5),s3=c(21,55,34),s4=c(5,3,2))
 
  datan$sint <- NA
  datan$sgrad <- NA
  for(icount in 1:dim(datan)[1]) {
  yvals <- c(datan[icount,4],datan[icount,3],datan[icount,2])
  if((is.na(yvals[1]) + is.na(yvals[2]) + is.na(yvals[3])) < 2) {
  g <- lm(yvals~xvals)
  datan$sint[icount] <- g$coef[1]
  datan$sgrad[icount] <- g$coef[2]
  }
  }
 
  __
  R-help@stat.math.ethz.ch mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide! 
  http://www.R-project.org/posting-guide.html
 
 
 Thomas Lumley Assoc. Professor, Biostatistics
 [EMAIL PROTECTED] University of Washington, Seattle
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
 

-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] NA handling with lm

2005-07-28 Thread Petr Pikal
Hallo

On 28 Jul 2005 at 18:44, Andreas Cordes wrote:

 Hi,
 I have a problem that is hopefully easily solvable, but I don't find
 the clue in the documentation. I am examining a linear model. One of
 the variables has NA values. Even though na.action=na.omit, I get NA
 as results for this variable. Can I use lm in such a case to get
 estimates? Or do I have to do some form of imputation before doing so?
 Here is the call and the results, hope you can help. Best regards,
 Andreas
 -----
 lm(formula = ESSIK ~ ALTER + as.factor(S2) +
 as.factor(S15A) +
 as.factor(S8) + as.factor(LAND) + as.factor(S18B) +
 as.factor(BERUF) + as.factor(KIRCHE) + as.factor(H_EINKOM) +
 as.factor(PARTNERS), na.action = na.omit)
 
 Residuals:
  Min   1Q   Median   3Q  Max
 -17.0675  -2.0151   0.4267   2.7644   9.7333
 
 Coefficients: (2 not defined because of singularities)

Problem is not in NA handling but that some of your coeficients 
can be represented as linear combination of other coeficients. You 
have to omit them.
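
A tiny illustration (toy data, not your model) of the same phenomenon, 
and of alias(), which reports which terms are linear combinations of 
the others:

set.seed(1)
x1 <- rnorm(30); x2 <- rnorm(30); x3 <- x1 + x2   # x3 is redundant
y  <- 1 + x1 - x2 + rnorm(30)
fit <- lm(y ~ x1 + x2 + x3)
coef(fit)    # the coefficient for x3 comes back NA
alias(fit)   # shows x3 = x1 + x2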

HTH
Petr


   Estimate Std. Error t value Pr(>|t|)   
 (Intercept)  23.755915   1.844110  12.882  < 2e-16 ***
 [...]
 as.factor(BERUF)7-1.236836   0.701323  -1.764 0.077942 . 
 as.factor(KIRCHE)1   -0.811751   0.237699  -3.415 0.000649 ***
 as.factor(H_EINKOM)2NA NA  NA   NA   
 as.factor(H_EINKOM)3NA NA  NA   NA   
 as.factor(PARTNERS)1  2.057070   0.342546   6.005 2.23e-09 ***
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide!
 http://www.R-project.org/posting-guide.html

Petr Pikal
[EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Forcing coefficents in lm(), recursive residuals, etc.

2005-07-28 Thread Rick Ram
Hello all,
 Does anyone know how to constrain/force specific coefficients when running 
lm()? 
 I need to run recresid() {strucchange package} on the residuals of 
forecast.lm, but forecast.lm's coefficients must be determined by 
parameter.estimation.lm 
 I could estimate forecast.lm without lm() and use some other kind of 
optimisation, but recresid() requires an object with class lm. recresid() 
allows you to specify a formula, rather than an lm object, but it looks like 
coefficients are estimated this way too and can't be forced. 
 Here is a bit of code to compensate for my poor explanation: 
 # Estimate the coefficients of model
parameter.estimation.lm = lm(formula = y ~ x1 + x2, data = 
estimation.dataset)
# How do I force the coefficients in forecast.lm to the coeff estimation 
from parameter.estimation.lm??
 forecast.lm = lm(formula = y ~ x1 + x2, data = forecast.dataset)
# Because I need recursive residuals from the application of the 
coefficients from parameter.estimation.lm to a different dataset
recresid(forecast.lm)
 Thanks in advance guys,
 R.

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R Reference Card (especially useful for Newbies)

2005-07-28 Thread Rick Ram
This is a very useful resource. I also wandered around the rest of the site 
when I found this. Rpad itself looks like a fantastic tool. 

On 27/07/05, Berton Gunter [EMAIL PROTECTED] wrote: 
 
 
 Newbies (and others!) may find useful the R Reference Card made available 
 by
 Tom Short and Rpad at http://www.rpad.org/Rpad/Rpad-refcard.pdf or through
 the Contributed link on CRAN (where some other reference cards are also
 linked). It categorizes and organizes a bunch of R's basic, most used
 functions so that they can be easily found. For example, paste() is under
 the Strings heading and expand.grid() is under Data Creation. For
 newbies struggling to find the right R function as well as veterans who
 can't quite remember the function name, it's very handy.
 
 -- Bert Gunter
 Genentech Non-Clinical Statistics
 South San Francisco, CA
 
 The business of the statistician is to catalyze the scientific learning
 process. - George E. P. Box
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Running Internet Explorer from Withing R

2005-07-28 Thread roger bos
This might be slightly off topic, but Rpad() is a good library that
displays tables in HTML format in a browser very well.  You can also
use it to easily make a gui for your program/code.

HTH,

Roger

On 7/28/05, Walter R. Paczkowski [EMAIL PROTECTED] wrote:
 Good morning,
 
 Is it possible to open an html file using IE but from within R?  I wrote a 
 small function to generate tables in html but I'd like to write another 
 function to call IE and open the html file.
 
 Thanks,
 
 Walt Paczkowski
 
 
 
 Walter R. Paczkowski, Ph.D.
 Data Analytics Corp.
 44 Hamilton Lane
 Plainsboro, NJ  08536
 (V) 609-936-8999
 (F) 609-936-3733
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] lattice/ grid.layout/ multiple graphs per page

2005-07-28 Thread Deepayan Sarkar
On 7/27/05, McClatchie, Sam (PIRSA-SARDI)
[EMAIL PROTECTED] wrote:
 Background:
 OS: Linux Mandrake 10.1
 release: R 2.0.0
 editor: GNU Emacs 21.3.2
 front-end: ESS 5.2.3
 -
 Colleagues
 
 I have a set of lattice plots, and want to plot 4 of them on the page.
 I am having trouble with the layout.
 
 grid.newpage()
 pushViewport(viewport(layout = grid.layout(2,2)))
 pushViewport(viewport(layout.pos.col = 1, layout.pos.row = 1))
 working trellis graph code here
 pushViewport(viewport(layout.pos.col = 1, layout.pos.row = 2))
 working trellis graph code here
 pushViewport(viewport(layout.pos.col = 2, layout.pos.row = 1))
 working trellis graph code here
 pushViewport(viewport(layout.pos.col = 2, layout.pos.row = 2))
 
 I'm obviously doing something wrong since my graphs still print one per
 page?

Depends on the details of your 'working trellis code'. Are you using
print() explicitly with 'newpage=FALSE'? See ?print.trellis for
details.
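
For instance, a minimal sketch (toy plot, not your code) that puts four 
lattice plots on one page purely with print.trellis() and its 'split' 
argument:

library(lattice)
p <- xyplot(rnorm(10) ~ 1:10)
print(p, split = c(1, 1, 2, 2), more = TRUE)
print(p, split = c(2, 1, 2, 2), more = TRUE)
print(p, split = c(1, 2, 2, 2), more = TRUE)
print(p, split = c(2, 2, 2, 2), more = FALSE)

With your grid.layout approach, the key is print(p, newpage = FALSE) 
after pushing each viewport.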

Deepayan

 Would you be able to provide advice?
 
 Thanks
 
 Sam
 
 Sam McClatchie,
 Biological oceanography
 South Australian Aquatic Sciences Centre
 PO Box 120, Henley Beach 5022
 Adelaide, South Australia
 email [EMAIL PROTECTED]
 Telephone: (61-8) 8207 5448
 FAX: (61-8) 8207 5481
 Research home page http://www.members.iinet.net.au/~s.mcclatchie/
 
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] CUSUM SQUARED structural breaks approach?

2005-07-28 Thread Rick Ram
Sorry guys, resending this - none of my posts have gone through
because HTML emails were not being delivered... sending this
plaintext now!

On 28/07/05, Rick Ram [EMAIL PROTECTED] wrote:
 Hi all,
  
 I have not looked at this CUSUM SQUARED issue since the emails at the
 beginning of the year but am looking at it again.  For those who are
 interested, the following paper gives critical values for n > 60 in addition
 to the ones in Durbin 1969.  
  
 Edgerton, David  Wells, Curt, 1994. Critical Values for the Cusumsq
 Statistic in Medium and Large Sized Samples, Oxford Bulletin of Economics
 and Statistics, Blackwell Publishing, vol. 56(3), pages 355-65. 
  
  
 All the best,
  
 R.
 
 On 12/01/05, Achim Zeileis [EMAIL PROTECTED] wrote: 
  On Tue, 11 Jan 2005 19:33:41 + Rick Ram wrote:
  
   Groundwork for the choice of break method in my specific application 
   has already been done - otherwise I would need to rework the wheel
   (make a horribly detailed comparison of performance of break
   approaches in context of modelling post break)
  
  If it interests you, Pesaran & Timmerman 2002 compared CUSUM Squared, 
   BaiPerron and a time varying approach to detect singular previous
   breaks in reverse ordered financial time series so as to update a
   forecasting model.
  
  Yes, I know that paper. And if I recall correctly they are mainly 
  interested in modelling the time period after the last break. For this,
  the reverse ordered recursive CUSUM approach works because they
  essentially look back in time to see when their predictions break down.
  And for their application looking for variance changes also makes sense.
  The approach is surely valid and sound in this context...but it might be
  possible to do something better (but I would have to look much closer at 
  the particular application to have an idea what might be a way to go).
  
   This works fine i.e. the plot looks correct.  The problem is how to
   appropriately normalise these to rescale them to what the CUSUM 
   squared procedure expects (this looks to be a different and more
   complicated procedure than the normalisation used for the basic
   CUSUM).  I am from an IT background and am slightly illiterate in
   terms of math notation... guidance from anyone would be appreciated
  
  I just had a brief glance at BDE75, page 154, Section 2.4. If I
  haven't missed anything important on reading it very quickly, you just
  need to do something like the following (a reproducible example, based
  on data from strucchange, using a notation similar to BDE's):
  
  ## load GermanM1 data and model
  library(strucchange)
  data(GermanM1)
  M1.model <- dm ~ dy2 + dR + dR1 + dp + ecm.res + season
  
  ## compute squared recursive residuals
  w2 <- recresid(M1.model, data = GermanM1)^2
  ## compute CUSUM of squares process
  sr <- ts(cumsum(c(0, w2))/sum(w2), end = end(GermanM1$dm), freq = 12) 
  ## the border (r-k)/(T-k)
  border <- ts(seq(0, 1, length = length(sr)),
  start = start(sr), freq = 12)
  
  ## nice plot
  plot(sr, xaxs = "i", yaxs = "i", main = "CUSUM of Squares") 
  lines(border, col = grey(0.5))
  lines(0.4 + border, col = grey(0.5))
  lines(- 0.4 + border, col = grey(0.5))
  
  Instead of 0.4 you would have to use the appropriate critical values
  from Durbin (1969) if my reading of the paper is correct. 
  
  hth,
  Z
  
   Does anyone know if this represents some commonly performed type of
   normalisation than exists in another function??
  
   I will hunt out the 1969 paper for the critical values but prior to 
   doing this I am a bit confused as to how they will
   implemented/interpreted... the CUSUM squared plot does/should run
   diagonally up from left to right and there are two straight lines that
   one would put around this from the critical values.  Hence, a 
   different interpretation/implementation of confidence levels than in
   other contexts.  I realise this is not just a R thing but a problem
   with my theoretical background.
  
  
   Thanks for detailed reply! 
  
   Rick.
  
  
   
But depending on the model and hypothesis you want to test, another
technique than CUSUM of squares might be more appropriate and also
available in strucchange. 
  
   
hth,
Z
   
 Any help or pointers about where to look would be more than
 appreciated!  Hopefully I have just missed obvious something in 
 the package...

 Many thanks,

 Rick R.

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html

   
  
  
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] using integrate with optimize nested in the integration

2005-07-28 Thread Gregory Gentlemen
Hi guys
I'm having a problem getting R to numerically integrate a function, 
say f(bhat)*optimize(G(bhat)), over bhat. I'd like to integrate this over 
some finite range, so that as we integrate over bhat, optimize would return 
a different optimum.
 
For instance consider this simple example for which I cannot get R to return 
the desired result:
 
f <- function(bhat) exp(bhat)
g <- function(bhat) optimize(f,c(0,15),maximum=TRUE)$maximum*bhat
integrate(g,lower=0,upper=5)
which returns:
187.499393759 with absolute error < 2.1e-12

However this is an approximation of 15*(5^2/2 - 0)=187.5, not what I intended 
to get. It's not identifying that f is a function of bhat ... any advice or 
ways I can get integrate to not treat this as a constant?

Any help is appreciated.
 
Gregory Gentlemen

__



[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] using integrate with optimize nested in the integration

2005-07-28 Thread Huntsinger, Reid
In your example, f is a function, but
optimize(f,c(0,15),maximum=TRUE)$maximum is just a number (the point at
which f reaches its maximum value). I'm not sure what you want, but if you
had say

f <- function(x,y) x^3 + y*x + 1

and defined

g <- function(y) optimize(f,c(0,5),maximum=TRUE,y)$maximum

then g(t) is the x-value at which the function f(x,t) (over x in (0,5))
reaches its maximum for this fixed t. That could then be integrated. 

Reid Huntsinger

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Gregory Gentlemen
Sent: Thursday, July 28, 2005 3:58 PM
To: r-help@stat.math.ethz.ch
Subject: [R] using integrate with optimize nested in the integration


Hi guys
I'm having a problem getting R to numerically integrate a function, 
say f(bhat)*optimize(G(bhat)), over bhat. I'd like to integrate this
over some finite range, so that as we integrate over bhat, optimize
would return a different optimum.
 
For instance consider this simple example for which I cannot get R to return
the desired result:
 
f <- function(bhat) exp(bhat)
g <- function(bhat) optimize(f,c(0,15),maximum=TRUE)$maximum*bhat
integrate(g,lower=0,upper=5)
which returns:
187.499393759 with absolute error < 2.1e-12

However this is an approximation of 15*(5^2/2 - 0)=187.5, not what I
intended to get. It's not identifying that f is a function of bhat ...
any advice or ways I can get integrate to not treat this as a constant?

Any help is appreciated.
 
Gregory Gentlemen

__



[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] using integrate with optimize nested in the integration

2005-07-28 Thread Sundar Dorai-Raj


Gregory Gentlemen wrote:
 Hi guys
 I'm having a problem getting R to numerically integrate a function, 
 say f(bhat)*optimize(G(bhat)), over bhat. I'd like to integrate this 
 over some finite range, so that as we integrate over bhat, optimize would 
 return a different optimum.
  
 For instance consider this simple example for which I cannot get R to return 
 the desired result:
  
 f <- function(bhat) exp(bhat)
 g <- function(bhat) optimize(f,c(0,15),maximum=TRUE)$maximum*bhat
 integrate(g,lower=0,upper=5)
 which returns:
 187.499393759 with absolute error < 2.1e-12
 
 However this is an approximation of 15*(5^2/2 - 0)=187.5, not what I 
 intended to get. It's not identifying that f is a function of bhat ... any 
 advice or ways I can get integrate to not treat this as a constant?
 
 Any help is appreciated.
  
 Gregory Gentlemen
 

Hi, Gregory,

Your example is not very useful here since the result of the optimize 
call is constant.

I think you want something more like:

f <- function(bhat, x) {
   exp(bhat * x)
}
g <- function(bhat) {
   o <- vector("numeric", length(bhat))
   for(i in seq(along = bhat))
     o[i] <- optimize(f, c(0,15), maximum = TRUE, x = bhat[i])$maximum
   bhat * o
}
integrate(g, lower = 0, upper = 5)

Because of the way integrate works, g is called with a vector of bhat 
values and must return a vector of the same length.
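
An equivalent, slightly more compact way to get that vectorization (a 
sketch of the same idea, using sapply()):

g <- function(bhat)
  bhat * sapply(bhat, function(b)
    optimize(f, c(0, 15), maximum = TRUE, x = b)$maximum)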

HTH,

--sundar

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] using integrate with optimize nested in the integration

2005-07-28 Thread Gregory Gentlemen
Thanks for the prompt reply.
You're right, that was a weak example.
Consider this one though:
 
f <- function(n,x) (x-2.5)^2*n
g <- function(y) optimize(f,c(0,15), maximum=TRUE, x=y)$maximum*y
 
then if you try:
integrate(g,lower=0,upper=5)
it produces:
Error in optimize(f, c(0, 15), maximum = TRUE, x = y) : 
invalid function value in 'optimize'

Any ideas? 
My problem is a similar but more complex function in which the optimization 
depends on the value of the variable of integration.

Huntsinger, Reid [EMAIL PROTECTED] wrote:
In your example, f is a function, but
optimize(f,c(0,15),maximum=TRUE)$maximum is just a number (the point at
which f reaches its maximum value). I'm not sure what you want, but if you
had say

f <- function(x,y) x^3 + y*x + 1

and defined

g <- function(y) optimize(f,c(0,5),maximum=TRUE,y)$maximum

then g(t) is the x-value at which the function f(x,t) (over x in (0,5))
reaches its maximum for this fixed t. That could then be integrated. 

Reid Huntsinger

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] using integrate with optimize nested in the integration

2005-07-28 Thread Sundar Dorai-Raj
Again, not a good example, since f is linear in n so the max will always 
be at 15.

Try this:

f <- function(x, n) -(x - 2.5 * n)^2 # max is at 2.5*n
g <- function(n) {
  o <- vector("numeric", length(n))
  for(i in seq(along = n))
    o[i] <- optimize(f, c(0, 15), maximum = TRUE, n = n[i])$maximum
  n * o
}
integrate(g, lower = 0, upper = 5)

# int_0^5 (2.5 * n^2) dn = 2.5/3 * 5^3 = 104.1667

--sundar


Gregory Gentlemen wrote:
 Thanks for the prompt reply.
 You're right, that was a weak example.
 Consider this one though:
  
 f <- function(n,x) (x-2.5)^2*n
 g <- function(y) optimize(f,c(0,15), maximum=TRUE, x=y)$maximum*y
  
 then if you try:
 integrate(g,lower=0,upper=5)
 it produces:
 Error in optimize(f, c(0, 15), maximum = TRUE, x = y) : 
 invalid function value in 'optimize'
 
 Any ideas? 
 My problem is a similar but more complex function in which the optimization 
 depends on the value of the variable of integration.
 

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Displaying p-values in latex

2005-07-28 Thread Juned Siddique
Hi. I want to create a table in latex with regression coefficients and their
corresponding p-values. My code is below. How do I format the p-values so
that values less than 0.0001 are formatted as <.0001 rather than just rounded
to 0? Thank you.

model <- lm(y~x1+x2)

output <- summary(model)
output <- as.matrix(coefficients(output))
output <- format.df(output, cdec=c(2,2,2,4))

latex(output, longtable=TRUE, file="C:/model.tex")

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] using integrate with optimize nested in the integration

2005-07-28 Thread Huntsinger, Reid
Integrate needs a vectorized function: integrate() hands g the whole vector
of abscissae at once, so inside g, optimize() sees a vector x and f returns
a vector where a single value is expected, hence the "invalid function
value" error. For your example,
 
> integrate(function(x) sapply(x,g), lower=0, upper=5)
187.4994 with absolute error < 2.1e-12
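
Applied to your second example, the same wrapper runs (a quick sketch,
untested; as Sundar noted, the maximizer there is still the constant 15):

f <- function(n,x) (x-2.5)^2*n
g <- function(y) optimize(f, c(0,15), maximum=TRUE, x=y)$maximum * y
integrate(function(x) sapply(x, g), lower=0, upper=5)
## should again come out near 15 * 5^2/2 = 187.5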

Reid Huntsinger
-Original Message-
From: Gregory Gentlemen [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 28, 2005 4:37 PM
To: Huntsinger, Reid; r-help@stat.math.ethz.ch
Subject: RE: [R] using integrate with optimize nested in the integration


Thanks for the prompt reply.
You're right, that was a weak example.
Consider this one though:
 
f <- function(n,x) (x-2.5)^2*n
g <- function(y) optimize(f,c(0,15), maximum=TRUE, x=y)$maximum*y
 
then if you try:
integrate(g,lower=0,upper=5)
it produces:
Error in optimize(f, c(0, 15), maximum = TRUE, x = y) : 
invalid function value in 'optimize'

Any ideas? 
My problem is a similar but more complex function in which the optimization
depends on the value of the variable of integration.


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Identify function with matplot

2005-07-28 Thread Hari Iyer
Hello
Is there a way to use the identify() function with matplot()?
Thanks.
Hari

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Anova's in R

2005-07-28 Thread Simon Blomberg
If I understand you correctly, this looks like a split-plot design.

The anova is:

aov(response~burning*temperature+Error(site), data=dataset)

Alternatively in lme:

lme(response~burning*temperature, random=~1|site, data=dataset)
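
Note that this assumes the site factor is coded with unique levels across the
burning treatments; if site labels are reused within each level of burning, I
believe you need to nest the error term explicitly, e.g.:

aov(response ~ burning * temperature + Error(burning/site), data = dataset)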

At 03:09 PM 29/07/2005, you wrote:
Hello.

I am looking for some help using anova's in RGui.

My experiment (i.e., my data) has a fixed burning treatment (Factor A) with 2
levels, unburnt/burnt.
Nested within each level of Factor A are 2 random sites (Factor B).
All sites are crossed with a fixed temperature treatment (Factor C) with 2
levels, 0 degrees C / 2 degrees C, with many replicates of these temperature
treatments randomly located at each site.

I am trying the following
aov(dependent.variable~burning*temperature*site+Error(replicate),data=dataset)
and variations on that; however, I can't get it quite right: the F ratios are
not correct. I imagine this is a fairly common experimental design in
ecology, and would ask that anyone who has some advice please reply to this
email.

Thank-you,
Frith

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Simon Blomberg, B.Sc.(Hons.), Ph.D, M.App.Stat.
Centre for Resource and Environmental Studies
The Australian National University
Canberra ACT 0200
Australia
T: +61 2 6125 7800 email: Simon.Blomberg_at_anu.edu.au
F: +61 2 6125 0757
CRICOS Provider # 00120C

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Displaying p-values in latex

2005-07-28 Thread Marc Schwartz
On Thu, 2005-07-28 at 14:00 -0700, Juned Siddique wrote:
 Hi. I want to create a table in latex with regression coefficients and their
 corresponding p-values. My code is below. How do I format the p-values so
 that values less than 0.0001 are formatted as <.0001 rather than just rounded
 to 0? Thank you.
 
 model <- lm(y~x1+x2)
 
 output <- summary(model)
 output <- as.matrix(coefficients(output))
 output <- format.df(output, cdec=c(2,2,2,4))
 
 latex(output, longtable=TRUE, file="C:/model.tex")

I have not used Frank's latex() function, so there may be a setting that
helps with this, but something along the lines of:

# Using one example from ?lm:

ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
group <- gl(2, 10, 20, labels=c("Ctl","Trt"))
weight <- c(ctl, trt)
lm.D9 <- lm(weight ~ group)

output <- summary(lm.D9)
output <- as.matrix(coefficients(output))
output <- format.df(output, rdec = c(2, 2, 2, 4))

# Here is the line to re-format the p values
# Note that column alignment on the decimal may be problematic here 
# depending upon the specifications for column justifications. 
# If the column is right justified, it should be easier, since we 
# are forcing four decimal places.

output[, 4] <- ifelse(as.numeric(output[, 4]) < 0.0001, 
  "$<$0.0001", sprintf("%6.4f", as.numeric(output[, 4])))


> output
            Estimate Std. Error t value Pr(>|t|)   
(Intercept) 5.03     0.22       22.85   $<$0.0001
groupTrt    -0.37    0.3114     -1.19   0.2490   
attr(,"col.just")
[1] "r" "r" "r" "r"


Note the use of $ to indicate math mode for the symbols. Presumably
Frank has a similar approach when the object passed to latex() is a
model, since the heading for the p value column should end up being
something like:

   Pr($>$$|$t$|$)

and the minus signs in the second line should similarly be surrounded:

   $-$0.37

I have cc:d Frank here for his clarifications.
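
As an aside, base R's format.pval() can do this sort of capping as well,
though it prints the threshold in scientific notation, so you would still
post-process for LaTeX (a sketch, untested with latex()):

pv <- c(2e-07, 0.2490)
format.pval(pv, digits = 4, eps = 1e-4)
## should give something like: "<1e-04" "0.249"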

HTH,

Marc Schwartz

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] lattice/ grid.layout/ multiple graphs per page

2005-07-28 Thread McClatchie, Sam (PIRSA-SARDI)
Background:
OS: Linux Mandrake 10.1
release: R 2.0.0
editor: GNU Emacs 21.3.2
front-end: ESS 5.2.3
-

-Original Message-
From: Deepayan Sarkar [mailto:[EMAIL PROTECTED]
Sent: Friday, 29 July 2005 3:44 AM
To: McClatchie, Sam (PIRSA-SARDI)
Cc: R-Help-Request (E-mail)
Subject: Re: [R] lattice/ grid.layout/ multiple graphs per page

Depends on the details of your 'working trellis code'. Are you using
print() explicitly with 'newpage=FALSE'? See ?print.trellis for
details.

Deepayan

-
Sorry Deepayan, I should have included the full code originally.
Here it is, with your suggestion for using the newpage=FALSE option to
print(). I still have not got it right. 

egg.and.larvae.vs.intermittency.conditioned.plots <-
  function()
{
  library(lattice)
  library(grid)

  n <- layout(matrix(c(1,2,3,4), 2, 2))
  # grid.newpage()
  # pushViewport(viewport(layout = grid.layout(2, 2)))

  # pushViewport(viewport(layout.pos.col=1, layout.pos.row=1))

  # trellis.device(postscript,
  #   file="../figures/egg.and.larvae.vs.intermittency.conditioned.plots.2001.eggs.ps",
  #   horizontal=FALSE, color=TRUE)

  trellis.par.set(par.xlab.text = list(cex = 1.5))
  trellis.par.set(par.ylab.text = list(cex = 1.5))
  trellis.par.set(axis.text = list(cex = 1.2))

  d.2001 <-
    mn.ts.e.zoo.vol.vol.filt.anchovy.egg.larv.mn.fluor.wire.intermittency.2001
  d.2002 <-
    mn.ts.e.zoo.vol.vol.filt.anchovy.egg.larv.mn.fluor.wire.intermittency.2002
  # browser()

  attach(d.2001)
  # intermittency.blocks <- cut(percent.unstable, breaks=c(0,10,20,30,40,50))
  intermittency.blocks <- cut(percent.unstable,
                              breaks=c(0,5,10,15,20,25,30,35,40,45,50))

  ### trellis plots, 2001 data
  int <- matrix(c(0,50,45,100,90,200), ncol=2, byrow=TRUE)
  egg.counts <- shingle(d.2001$eggs2.Pilch.Eggs, intervals = int)
  out1 <- bwplot(eggs2.Pilch.Eggs ~ intermittency.blocks | egg.counts,
                 data = d.2001,
                 xlab = "intermittency (% of water column unstable)",
                 ylab = "eggs m^2",
                 main = "2001",
                 ylim = c(0,200),
                 layout = c(1,3),
                 auto.key = TRUE)
  print(out1, newpage = FALSE)
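  ## NB (untested): base graphics layout() affects only traditional graphics,
  ## not grid/lattice output, so it will not arrange trellis plots; something
  ## like print(out1, split = c(1, 1, 2, 2), more = TRUE) may be what is
  ## needed to put this panel into a 2 x 2 grid (see ?print.trellis).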
  par(par.old)
  # graphics.off()

  # x11()
  # trellis.device(postscript,
  #   file="../figures/egg.and.larvae.vs.intermittency.conditioned.plots.2001.larvae.ps",
  #   horizontal=FALSE, color=TRUE)

  # pushViewport(viewport(layout.pos.col=1, layout.pos.row=2))
  trellis.par.set(par.xlab.text = list(cex = 1.5))
  trellis.par.set(par.ylab.text = list(cex = 1.5))
  trellis.par.set(axis.text = list(cex = 1.2))

  int <- matrix(c(0,50,45,100,90,200), ncol=2, byrow=TRUE)
  larval.counts <- shingle(d.2001$eggs2.Pilch.Larv, intervals = int)
  out2 <- bwplot(eggs2.Pilch.Larv ~ intermittency.blocks | larval.counts,
                 data = d.2001,
                 xlab = "intermittency (% of water column unstable)",
                 ylab = expression(paste("larvae ", m^{-2})),
                 main = "2001",
                 ylim = c(0,200),
                 layout = c(1,3),
                 auto.key = TRUE)
  detach(d.2001)
  print(out2, newpage = FALSE)
  # graphics.off()

  ### trellis plots, 2002 data
  attach(d.2002)
  # x11()
  # trellis.device(postscript,
  #   file="../figures/egg.and.larvae.vs.intermittency.conditioned.plots.2002.eggs.ps",
  #   horizontal=FALSE, color=TRUE)

  # pushViewport(viewport(layout.pos.col=2, layout.pos.row=1))
  trellis.par.set(par.xlab.text = list(cex = 1.5))
  trellis.par.set(par.ylab.text = list(cex = 1.5))
  trellis.par.set(axis.text = list(cex = 1.2))

  int <- matrix(c(0,50,45,100,90,200), ncol=2, byrow=TRUE)
  egg.counts <- shingle(d.2002$eggs2.Pilch.Eggs, intervals = int)
  out3 <- bwplot(eggs2.Pilch.Eggs ~ intermittency.blocks | egg.counts,
                 data = d.2002,
                 xlab = "intermittency (% of water column unstable)",
                 ylab = "eggs m^2",
                 main = "2002",
                 ylim = c(0,200),
                 layout = c(1,3),
                 auto.key = TRUE)

  par(par.old)
  # graphics.off()
  print(out3, newpage = FALSE)
  # x11()
  # trellis.device(postscript,
  #   file="../figures/egg.and.larvae.vs.intermittency.conditioned.plots.2002.larvae.ps",
  #   horizontal=FALSE, color=TRUE)

  # pushViewport(viewport(layout.pos.col=2, layout.pos.row=2))
  trellis.par.set(par.xlab.text = list(cex = 1.5))
  trellis.par.set(par.ylab.text = list(cex = 1.5))
  trellis.par.set(axis.text = list(cex = 1.2))
  

Re: [R] gamma distribution

2005-07-28 Thread pantd
Hi Christoph and Uwe. Thanks for your time and guidance.
I deeply appreciate it.


-dev


Quoting Christoph Buser [EMAIL PROTECTED]:

 Hi

 As Uwe mentioned, be careful about the difference between the
 significance level alpha and the power of a test.

 To do power calculations you should specify an alternative
 hypothesis H_A. E.g., suppose you have two populations you want to
 compare, and we assume that they are normally distributed (with equal
 unknown variance, for simplicity). We are interested in whether there is
 a difference in the means and want to use the t-test.
 Our null hypothesis H_0: there is no difference in the means.

 To do a power calculation for our test, we first have to specify
 an alternative H_A: the mean difference is 1 (unit).
 Now, for a fixed number of observations, we can calculate the power
 of our test, which in that case is the probability that our test is
 significant if the true unknown difference is 1 (i.e., H_A is true).
 Concretely: if I repeat the test many times (always taking samples
 with a mean difference of 1), the number of significant tests
 divided by the total number of tests is an estimate of the power.
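
 In R, this calculation can be done directly with power.t.test (a small
 sketch, assuming sd = 1 and 20 observations per group):

 power.t.test(n = 20, delta = 1, sd = 1, sig.level = 0.05)
 ## returns the power of the two-sample t-test to detect a mean
 ## difference of 1 with 20 observations in each group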


 In your case the situation is a little bit more complicated. You
 need to specify an alternative hypothesis.
 In one of your first examples you draw samples from two gamma
 distributions with different shape parameters and the same
 scale. But by varying the shape parameter, the two distributions
 differ not only in their mean but also in their form.

 I got an email from Prof. Ripley in which he explained, in
 detail and very precisely, some examples of tests and what they
 are testing. It was in addition to the first posts about the t-test
 and the Wilcoxon test.
 I attach the email below and recommend reading it carefully. It
 might be helpful for you, too.

 Regards,

 Christoph Buser

 --
 Christoph Buser [EMAIL PROTECTED]
 Seminar fuer Statistik, LEO C13
 ETH (Federal Inst. Technology)8092 Zurich  SWITZERLAND
 phone: x-41-44-632-4673   fax: 632-1228
 http://stat.ethz.ch/~buser/
 --

 

 From: Prof Brian Ripley [EMAIL PROTECTED]
 To: Christoph Buser [EMAIL PROTECTED]
 cc: Liaw, Andy [EMAIL PROTECTED]
 Subject: Re: [R] Alternatives to t-tests (was Code Verification)
 Date: Thu, 21 Jul 2005 10:33:28 +0100 (BST)

 I believe there is rather more to this than Christoph's account.  The
 Wilcoxon test is not testing the same null hypothesis as the t-test, and
 that may very well matter in practice and it does in the example given.

 The (default in R) Welch t-test tests a difference in means between two
 samples, not necessarily of the same variance or shape.  A difference in
 means is simple to understand, and is unambiguously defined at least if
 the distributions have means, even for real-life long-tailed
 distributions.  Inference from the t-test is quite accurate even a long
 way from normality and from equality of the shapes of the two
 distributions, except in very small sample sizes.  (I point my beginning
 students at the simulation study in `The Statistical Sleuth' by Ramsey and
 Schafer, stressing that the unequal-variance t-test ought to be the
 default choice as it is in R.  So I get them to redo the simulations.)

 The Wilcoxon test tests a shift in location between two samples from
 distributions of the same shape differing only by location.  Having the
 same shape is part of the null hypothesis, and so is an assumption that
 needs to be verified if you want to conclude there is a difference in
 location (e.g. in means).  Even if you assume symmetric distributions (so
 the location is unambiguously defined) the level of the test depends on
 the shapes, tending to reject equality of location in the presence of
 difference of shape.  So you really are testing equality of distribution,
 both location and shape, with power concentrated on location-shift
 alternatives.

 Given samples from gamma(shape=2) and gamma(shape=20) distributions, we
 know what the t-test is testing (equality of means).  What is the Wilcoxon
 test testing?  Something hard to describe and less interesting, I believe.

 BTW, I don't see the value of the gamma simulation as this
 simultaneously changes mean and shape between the samples.  How about
 checking holding the mean the same:

 n <- 1000
 z1 <- z2 <- numeric(n)
 for (i in 1:n) {
    x <- rgamma(40, 2.5, 0.1)
    y <- rgamma(40, 10, 0.1*10/2.5)
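    ## note: both samples have mean shape/rate = 25, so the null
    ## hypothesis of equal means holds here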
    z1[i] <- t.test(x, y)$p.value
    z2[i] <- wilcox.test(x, y)$p.value
 }
 ## Level
 1 - sum(z1 > 0.05)/1000  ## 0.049
 1 - sum(z2 > 0.05)/1000  ## 0.15

 ? -- the Wilcoxon test is shown to be a poor test of equality of means.
 Christoph's simulation shows that it is able to use difference in shape as
 well as location in the test of these two distributions, whereas the
 t-test is designed only to use the 

Re: [R] Displaying p-values in latex

2005-07-28 Thread Frank E Harrell Jr
Marc Schwartz wrote:
 On Thu, 2005-07-28 at 14:00 -0700, Juned Siddique wrote:
 
Hi. I want to create a table in latex with regression coefficients and their
corresponding p-values. My code is below. How do I format the p-values so
that values less than 0.0001 are formatted as <.0001 rather than just rounded
to 0? Thank you.

model <- lm(y~x1+x2)

output <- summary(model)
output <- as.matrix(coefficients(output))
output <- format.df(output, cdec=c(2,2,2,4))

latex(output, longtable=TRUE, file="C:/model.tex")
 
 
 
 
 

I've only implemented a fully automatic latex function for anova output, 
which handles P-value printing as requested.  Look at anova.Design for 
the code. -Frank

-- 
Frank E Harrell Jr   Professor and Chair   School of Medicine
  Department of Biostatistics   Vanderbilt University

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html