Re: [R] file reading problem unique to windows. Err info: Error in file(file, ifelse(append, "a", "w")): cannot open the connection

2010-11-25 Thread Prof Brian Ripley
We don't have any of the information asked for in the posting guide, 
such as your version of R or a reproducible example.


But please try R-patched, since this might be the issue fixed there:

• download.file() could leave the destination file open if the URL
  was not able to be opened.  (PR#14414)

(If you had followed the posting guide you would have tried R-patched 
before posting.)
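
Independent of the R-patched fix, a defensive version of such a loop (a
sketch only; the URL list, file names and the "processing" step are invented
for illustration) closes its connection and removes the temporary file on
every iteration, so handles cannot accumulate:

urls <- paste("http://example.com/page", 1:5000, ".html", sep = "")  # made-up URLs
out  <- "results.txt"                                                # made-up output file
for (u in urls) {
  tmp <- tempfile()
  ok  <- try(download.file(u, destfile = tmp, quiet = TRUE), silent = TRUE)
  if (!inherits(ok, "try-error") && ok == 0) {
    con   <- file(tmp, open = "r")
    lines <- readLines(con)
    close(con)                              # close the connection before the next pass
    info  <- lines[grepl("title", lines)]   # placeholder processing step
    cat(info, file = out, sep = "\n", append = TRUE)
  }
  unlink(tmp)                               # remove the temporary file in any case
}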



On Wed, 24 Nov 2010, Yong Wang wrote:


Dear List

I asked this question before, got some tips but can't get it solved.


Where?  You didn't give a reference, and it would have helped the 
helpers.



It is clear now that this problem only occurs when run on Windows (I
tested it on Windows XP); nothing goes wrong at all when run on Unix.
Unfortunately, sometimes I have to run it on Windows, so
I would appreciate any suggestion on how to circumvent this problem when run
on Windows.
Below is the problem description I submitted before.

#

I am running a loop that downloads web pages and saves the HTML to a
temporary file (using download.file()),
then reads it in (using readLines) for processing;
finally it writes the useful info from each processed page to a unique file.

The problem is that once the loop runs up to somewhere near 5000 iterations, it
throws an error like the one below and won't go further.


Error in file(file, ifelse(append, "a", "w")) :
cannot open the connection
-

In the meantime, a request for a new connection won't succeed; for
example, a request for the help page of file() triggers the error below

---
?file
Error in gzfile(file, "rb") : cannot open the connection
In addition: Warning message:
In gzfile(file, "rb") :
cannot open compressed file
'C:/PROGRA~1/R/R-211~1.1/library/stats/help/aliases.rds', probable
reason 'Too many open files'
---

I am not sure whether the problem is too many connections left unclosed, since
I close the file connection after each readLines().
Checking with showConnections(all=TRUE) does not show excessive
connections, and closeAllConnections() does not help.

Can anyone help me with this?


Any answer highly appreciated.

yong

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK                     Fax:  +44 1865 272595
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] how to find a row index in a matrix or a data frame ?

2010-11-25 Thread Luedde, Mirko
Dear all, 

this looks pretty much a standard problem, but I couldn't find a
satisfying and understandable solution.

(A) Given a data frame (or matrix), e.g.

  x <- data.frame(A=c(1, 2, 2), B=c(4, 5, 5))

and a row of this data frame, e.g. 

  r <- c(2, 5)

I need to find one row index i (or all such indices) such that r
is at the i-th row in x, that is, the expression 

  all(x[i,]==as.list(r)) 

evaluates to TRUE.  I cannot evaluate an expression like

  x[x[,1]==2 & x[,2]==5,]

because I do not know in advance how many columns x will have.

Basically, thus, I'm looking for an equivalent of vectorfind in
Scilab.

(B) Which would be the most appropriate data type for x, matrix or
data frame or another type?  

(C) What will be better, searching for rows or searching for columns?

Thank you for your help!

Best, Mirko

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to find a row index in a matrix or a data frame ?

2010-11-25 Thread Eric Lecoutre
Mirko,

Here is a solution - I am sure other R mentors would find a more efficient
one


> x <- data.frame(A=c(1, 2, 2), B=c(4, 5, 5))

> x <- rbind(x,x)

> r <- c(2,5)

> (rowfind=apply(x,1,FUN=function(row,add){isTRUE(all.equal(row,add,check.attributes
= FALSE))},add=r))

    1     2     3     4     5     6
FALSE  TRUE  TRUE FALSE  TRUE  TRUE

> (rowfindindex=(1:nrow(x))[rowfind])

[1] 2 3 5 6



Hope this helps,


Eric

2010/11/25 Luedde, Mirko mirko.lue...@sap.com

 Dear all,

 this looks pretty much a standard problem, but I couldn't find a
 satisfying and understandable solution.

 (A) Given a data frame (or matrix), e.g.

  x <- data.frame(A=c(1, 2, 2), B=c(4, 5, 5))

and a row of this data frame, e.g.

  r <- c(2, 5)

I need to find one row index i (or all such indices) such that r
is at the i-th row in x, that is, the expression

  all(x[i,]==as.list(r))

evaluates to TRUE.  I cannot evaluate an expression like

  x[x[,1]==2 & x[,2]==5,]

because I do not know in advance how many columns x will have.

Basically, thus, I'm looking for an equivalent of vectorfind in
Scilab.

 (B) Which would be the most appropriate data type for x, matrix or
data frame or another type?

 (C) What will be better, searching for rows or searching for columns?

 Thank you for your help!

 Best, Mirko

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Eric Lecoutre
Consultant - Business & Decision
Business Intelligence & Customer Intelligence

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Problem with plotting diagnostics - Error in object$coefficients : $ operator is invalid for atomic vectors

2010-11-25 Thread Manderscheid Katharina
This problem seems to exist only in R 2.12.0 but not in R 2.11.1.
Any ideas? A bug?


--
dr. katharina manderscheid

soziologisches seminar
universität luzern

kasernenplatz 3
6000 luzern 7

tel. ++41 41 228 4657

web: http://www.unilu.ch/deu/dr.-katharina-manderscheid_346380.aspx

-----Original Message-----
From: Duncan Murdoch [mailto:murdoch.dun...@gmail.com] 
Sent: Wednesday, 17 November 2010 16:33
To: Manderscheid Katharina
Cc: 'r-help@r-project.org'
Subject: Re: [R] Problem with plotting diagnostics - Error in 
object$coefficients : $ operator is invalid for atomic vectors

On 17/11/2010 10:28 AM, Manderscheid Katharina wrote:
 hi all,
 after fitting a multiple linear regression
 model <- lm(y ~ a + b + c + d)
 i wanted to plot diagnostics
 plot(model)
 but get the error message
 Error in object$coefficients : $ operator is invalid for atomic vectors.
 which does not make a lot of sense, since there is no $ - i am working with 
 an attached dataset.
 can anyone help, please??
 thanks a lot,
 kat


I just tried those lines (with fake data for a,b,c,d and y) and got no error 
message.  I was using R 2.12.0.
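
For reference, the kind of check described above might look like this (the
fake data below are invented here, not taken from the original post):

set.seed(1)
a <- rnorm(50); b <- rnorm(50); c <- rnorm(50); d <- rnorm(50)
y <- 1 + a - b + 0.5 * c + rnorm(50)
model <- lm(y ~ a + b + c + d)
par(mfrow = c(2, 2))
plot(model)    # the four standard diagnostic plots; no error expected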

I think you need to show us a reproducible example, and the
sessionInfo() to go with it, to help with this.

Duncan Murdoch

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to find a row index in a matrix or a data frame ?

2010-11-25 Thread Henrik Bengtsson
rows <- which(apply(mapply(x, r, FUN="=="), MARGIN=1, FUN=all));

/H

On Thu, Nov 25, 2010 at 1:59 AM, Luedde, Mirko mirko.lue...@sap.com wrote:
 Dear all,

 this looks pretty much a standard problem, but I couldn't find a
 satisfying and understandable solution.

 (A) Given a data frame (or matrix), e.g.

      x <- data.frame(A=c(1, 2, 2), B=c(4, 5, 5))

    and a row of this data frame, e.g.

      r <- c(2, 5)

    I need to find one row index i (or all such indices) such that r
    is at the i-th row in x, that is, the expression

      all(x[i,]==as.list(r))

    evaluates to TRUE.  I cannot evaluate an expression like

      x[x[,1]==2 & x[,2]==5,]

    because I do not know in advance how many columns x will have.

    Basically, thus, I'm looking for an equivalent of vectorfind in
    Scilab.

 (B) Which would be the most appropriate data type for x, matrix or
    data frame or another type?

 (C) What will be better, searching for rows or searching for columns?

 Thank you for your help!

 Best, Mirko

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Running R from SAS

2010-11-25 Thread Eric Lecoutre
Ziad,

Also you didn't mention which version of SAS and which modules.
In case you are working with 9.2 and have IML at your disposal, you can have a
look at
http://support.sas.com/documentation/cdl/en/imlsstat/63545/HTML/default/viewer.htm#statr_toc.htm
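
If SAS is only being used to drive repeated runs with different seeds,
another option (a sketch; the seed values below are invented, and the script
path is the one mentioned in the question) is to loop over the seeds from
within R itself:

seeds <- c(101, 202, 303)               # made-up seed values
for (s in seeds) {
  set.seed(s)
  source("c:/temp/r/program.r")         # placeholder for the analysis script
}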

Eric



2010/11/24 ziad.elmou...@tnsglobal.com

 Hello All,

 I am interested in running an R program with several random seeds.  One
 approach is to launch the program from SAS.  The recommended approach is to
 use the X command as shown below:
 OPTIONS XWAIT XSYNC;

 X r.exe --no-save --quiet < c:\temp\r\program.r >
 c:\temp\r\program.log;

 However, this does not seem to work for me.  Does anyone know how to launch
 an R program from SAS?  Thank you in advance.

 Ziad Elmously

 ziad.elmou...@tnsglobal.com




[[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Eric Lecoutre
Consultant - Business & Decision
Business Intelligence & Customer Intelligence

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to find a row index in a matrix or a data frame ?

2010-11-25 Thread Luedde, Mirko
Hi all, thank you for the quick help. 

Any ideas on (better) efficiency 
(with other data types)?

Best, Mirko 

-----Original Message-----
From: henrik.bengts...@gmail.com [mailto:henrik.bengts...@gmail.com] On Behalf
Of Henrik Bengtsson
Sent: Thursday, 25 November 2010 11:25
To: Luedde, Mirko
Cc: r-help@r-project.org
Subject: Re: [R] how to find a row index in a matrix or a data frame ?

rows <- which(apply(mapply(x, r, FUN="=="), MARGIN=1, FUN=all));

/H

On Thu, Nov 25, 2010 at 1:59 AM, Luedde, Mirko mirko.lue...@sap.com wrote:
 Dear all,

 this looks pretty much a standard problem, but I couldn't find a
 satisfying and understandable solution.

 (A) Given a data frame (or matrix), e.g.

      x <- data.frame(A=c(1, 2, 2), B=c(4, 5, 5))

    and a row of this data frame, e.g.

      r <- c(2, 5)

    I need to find one row index i (or all such indices) such that r
    is at the i-th row in x, that is, the expression

      all(x[i,]==as.list(r))

    evaluates to TRUE.  I cannot evaluate an expression like

      x[x[,1]==2 & x[,2]==5,]

    because I do not know in advance how many columns x will have.

    Basically, thus, I'm looking for an equivalent of vectorfind in
    Scilab.

 (B) Which would be the most appropriate data type for x, matrix or
    data frame or another type?

 (C) What will be better, searching for rows or searching for columns?

 Thank you for your help!

 Best, Mirko

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to find a row index in a matrix or a data frame ?

2010-11-25 Thread Henrique Dallazuanna
Try this:

which((apply(x, 1, toString) %in% toString(r)))

On Thu, Nov 25, 2010 at 7:59 AM, Luedde, Mirko mirko.lue...@sap.com wrote:

 Dear all,

 this looks pretty much a standard problem, but I couldn't find a
 satisfying and understandable solution.

 (A) Given a data frame (or matrix), e.g.

  x <- data.frame(A=c(1, 2, 2), B=c(4, 5, 5))

and a row of this data frame, e.g.

  r <- c(2, 5)

I need to find one row index i (or all such indices) such that r
is at the i-th row in x, that is, the expression

  all(x[i,]==as.list(r))

evaluates to TRUE.  I cannot evaluate an expression like

  x[x[,1]==2 & x[,2]==5,]

because I do not know in advance how many columns x will have.

Basically, thus, I'm looking for an equivalent of vectorfind in
Scilab.

 (B) Which would be the most appropriate data type for x, matrix or
data frame or another type?

 (C) What will be better, searching for rows or searching for columns?

 Thank you for your help!

 Best, Mirko

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40" S 49° 16' 22" O

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Help in improving the style of R plots

2010-11-25 Thread Duncan Murdoch

On 25/11/2010 5:38 AM, pilchat wrote:


Dear all,

I am running an MCMC on my data, and I want to plot the results of the
simulation. I want to dedicate a page in the PS file to each element in
my sample, and in each page I need to plot 8 quadrants.

In the attachment you find my first attempt (just the first page). Here are
my troubles (I'm sorry if they are stupid, but I am a newbie with R):
-) I need to substitute the Greek letters wherever you find the full
spelling (I also need to substitute Surface with the Greek Sigma letter)


See ?plotmath.  The general idea is that you would use

xlab = expression(alpha)

instead of

xlab = "alpha"




-) in the top plots I need to write variable = median +- std deviation
(units) in a more readable way: some characters overlap and I want to
round the numbers to the second decimal digit. This is how I have
been trying so far: text(min(mchain[1,]), max(ndensity$y), substitute(
Log(Surface)[med] == nmed %+-% nstd, list(nmed=nmed, nstd=nstd)), pos=4)


You have some strange font problems. I would guess that you produced the 
plots by copying from the screen or another device:  don't do that, it 
gets the spacing wrong.  Use pdf() or postscript() to open the device 
and plot directly to it in the desired final size.




-) plot (c): ticks, annotations and labels overlap, but they shouldn't.


Same problem as above.


Here is the code:
persp(d,col=fcol,zlim=zlim,theta=theta,phi=phi,zlab="density",xlab=xlab,ylab=ylab,main="",sub="(c)",ticktype="detailed")

-) plot (e) and (f): I need to cancel plot (f) and place plot (e) at the
center of the bottom row


The layout() function might help with this.
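
For instance, a minimal sketch (panel labels and data invented here) that
puts two plots in the top row and one centred in the bottom row:

layout(matrix(c(1, 1, 2, 2,
                0, 3, 3, 0), nrow = 2, byrow = TRUE))  # 0 = empty cell
plot(rnorm(10), main = "(a)")
plot(rnorm(10), main = "(b)")
plot(rnorm(10), main = "(e)")   # centred under the two plots above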


-) when I close the R session, the content of the PS file disappears.
How can I retain the graphics output after closing R?


I think you are misunderstanding what's going on.  R deletes temporary 
files when it closes, but you wouldn't normally create a plot in a 
temporary file.


Duncan Murdoch



Can you help me in getting what I wish? Any help is greatly appreciated.

Thanks a lot

Gaetano



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] \Sweaveopts error

2010-11-25 Thread John Maindonald
I have a file 4lmetc.Rnw, intended for inclusion in a LaTeX document,
that starts:

\SweaveOpts{engine=R, keep.source=TRUE}
\SweaveOpts{eps=FALSE, prefix.string=snArt/4lmetc}

The attempt to process the file through Sweave generates the error:

> Sweave("4lmetc")
Writing to file 4lmetc.tex
Processing code chunks ...
 1 : keep.source term verbatim
Error in file(srcfile$filename, open = "rt", encoding = encoding) : 
  cannot open the connection
In addition: Warning message:
In file(srcfile$filename, open = "rt", encoding = encoding) :
  cannot open file '4lmetc': No such file or directory

The same file processes through Stangle() without problems.
If I comment out the \SweaveOpts lines, there is no problem,
except that I do not get the options that I want.

This processed fine in R-2.11.1

 sessionInfo()
R version 2.12.0 (2010-10-15)
Platform: x86_64-apple-darwin9.8.0/x86_64 (64-bit)

locale:
[1] C

attached base packages:
[1] stats graphics  grDevices utils datasets  grid  methods  
[8] base 

other attached packages:
[1] lattice_0.19-13 DAAG_1.02   randomForest_4.5-36
[4] rpart_3.1-46MASS_7.3-8  reshape_0.8.3  
[7] plyr_1.2.1  proto_0.3-8

loaded via a namespace (and not attached):
[1] ggplot2_0.8.8   latticeExtra_0.6-14

Is there a workaround?

John Maindonald email: john.maindon...@anu.edu.au
phone : +61 2 (6125)3473fax  : +61 2(6125)5549
Centre for Mathematics & Its Applications, Room 1194,
John Dedman Mathematical Sciences Building (Building 27)
Australian National University, Canberra ACT 0200.
http://www.maths.anu.edu.au/~johnm

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Generalized linear models with categorical data

2010-11-25 Thread Diogo B. Provete
I have a question about using a GLZ with categorical x categorical data.
Below there is a data set; I want to know the influence of the treatments (CONT,
LPS2H and LPS24H) on the categories of pigmentation of the right testis of
an amphibian. From these data, I used the function glm with the binomial family
(logit). But from the result (see below) it is not possible to know the influence
of the three treatments on the categories, only of the treatments as a whole.
Could someone help me?

Thanks in advance


 td=read.table(file.choose(), h=T)
 td
   Tratamento Categoria
1CONT Cat.2
2CONT Cat.2
3CONT Cat.2
4CONT Cat.2
5CONT Cat.3
6CONT Cat.3
7CONT Cat.3
8CONT Cat.3
9CONT Cat.3
10   CONT Cat.3
11  LPS2h Cat.2
12  LPS2h Cat.2
13  LPS2h Cat.2
14  LPS2h Cat.3
15  LPS2h Cat.3
16  LPS2h Cat.3
17  LPS2h Cat.3
18  LPS2h Cat.3
19  LPS2h Cat.3
20  LPS2h Cat.3
21 LPS24h Cat.1
22 LPS24h Cat.1
23 LPS24h Cat.1
24 LPS24h Cat.1
25 LPS24h Cat.1
26 LPS24h Cat.2
27 LPS24h Cat.2
28 LPS24h Cat.2
29 LPS24h Cat.2
30 LPS24h Cat.2

 mod.test=glm(Categoria~Tratamento, family=binomial(logit), data=td)

 mod.test

Call:  glm(formula = Categoria ~ Tratamento, family = binomial(logit),
data = td)

Coefficients:
 (Intercept)  TratamentoLPS24h   TratamentoLPS2h
   2.057e+01-2.057e+01 1.558e-08

Degrees of Freedom: 29 Total (i.e. Null);  27 Residual
Null Deviance:27.03
Residual Deviance: 13.86 AIC: 19.86

> anova(mod.test, test="Chisq")
Analysis of Deviance Table

Model: binomial, link: logit

Response: Categoria

Terms added sequentially (first to last)


           Df Deviance Resid. Df Resid. Dev P(>|Chi|)
NULL  29 27.034
Tratamento  2   13.17127 13.863  0.001380 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
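
For per-treatment Wald tests, rather than the overall factor test from
anova(), one possibility (a sketch that assumes the mod.test object fitted
above; whether a two-category binomial fit suits a three-category response
is a separate question) is:

summary(mod.test)$coefficients  # estimate, std. error, z value and p-value
                                # for each treatment contrast vs. the CONT baseline
# A different baseline can be used by releveling and refitting, e.g.
# td$Tratamento <- relevel(td$Tratamento, ref = "LPS2h")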


-- 
Atenciosamente,
*Diogo Borges Provete*

==
Biólogo
Mestre em Biologia Animal (UNESP)
Laboratório de Ecologia Animal
Departamento de Zoologia e Botânica
Instituto de Biociências, Letras e Ciências Exatas
Universidade Estadual Paulista - UNESP
São José do Rio Preto-SP
Brazil

Rua Cristóvão Colombo, 2265
Jardim Nazareth -  15054-000

*Skype*: diogoprovete
*MSN*: diogop...@yahoo.com.br

*Personal web page https://sites.google.com/site/diogoprovetepage/*

Traduza conosco:
American Journal Experts http://www.journalexperts.com/br/
D-Lang  Soluções linguisticashttp://www.d-lang.com.br/site/sitept/index.htm


==




-- 
Atenciosamente,
*Diogo Borges Provete*

==
Biólogo
Mestre em Biologia Animal (UNESP)
Laboratório de Ecologia Animal
Departamento de Zoologia e Botânica
Instituto de Biociências, Letras e Ciências Exatas
Universidade Estadual Paulista - UNESP
São José do Rio Preto-SP
Brazil

Rua Cristóvão Colombo, 2265
Jardim Nazareth -  15054-000

*Skype*: diogoprovete
*MSN*: diogop...@yahoo.com.br

*Personal web page https://sites.google.com/site/diogoprovetepage/*

Traduza conosco:
American Journal Experts http://www.journalexperts.com/br/
D-Lang  Soluções linguisticashttp://www.d-lang.com.br/site/sitept/index.htm


==

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] \Sweaveopts error

2010-11-25 Thread Duncan Murdoch

On 25/11/2010 6:34 AM, John Maindonald wrote:

I have a file 4lmetc.Rnw, intended for inclusion in a LaTeX document,
that starts:


I think this may have been fixed in the patched version.  Could you give 
it a try to confirm?  If not, please send me a simplified version of the 
file, and I'll see what's going wrong.


Duncan Murdoch




\SweaveOpts{engine=R, keep.source=TRUE}
\SweaveOpts{eps=FALSE, prefix.string=snArt/4lmetc}

The attempt to process the file through Sweave generates the error:


Sweave("4lmetc")

Writing to file 4lmetc.tex
Processing code chunks ...
  1 : keep.source term verbatim
Error in file(srcfile$filename, open = "rt", encoding = encoding) :
   cannot open the connection
In addition: Warning message:
In file(srcfile$filename, open = "rt", encoding = encoding) :
   cannot open file '4lmetc': No such file or directory

The same file processes through Stangle() without problems.
If I comment out the \SweaveOpts lines, there is no problem,
except that I do not get the options that I want.

This processed fine in R-2.11.1


sessionInfo()

R version 2.12.0 (2010-10-15)
Platform: x86_64-apple-darwin9.8.0/x86_64 (64-bit)

locale:
[1] C

attached base packages:
[1] stats graphics  grDevices utils datasets  grid  methods
[8] base

other attached packages:
[1] lattice_0.19-13 DAAG_1.02   randomForest_4.5-36
[4] rpart_3.1-46MASS_7.3-8  reshape_0.8.3
[7] plyr_1.2.1  proto_0.3-8

loaded via a namespace (and not attached):
[1] ggplot2_0.8.8   latticeExtra_0.6-14

Is there a workaround?

John Maindonald email: john.maindon...@anu.edu.au
phone : +61 2 (6125)3473fax  : +61 2(6125)5549
Centre for Mathematics & Its Applications, Room 1194,
John Dedman Mathematical Sciences Building (Building 27)
Australian National University, Canberra ACT 0200.
http://www.maths.anu.edu.au/~johnm

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] RODBC

2010-11-25 Thread Jorge Nieves


Hi,



I am running the RODBC examples from the help guide. I am trying to
UPDATE a table in an Access database but I am getting an error.


library(RODBC)

library(termstrc)



path = getwd()
setwd(getwd())


dbName = "data.mdb"
pathdbname = paste(path, "/", dbName, sep="")


accesChannel = odbcConnectAccess(pathdbname, uid = "", pwd = "")
sqlSave(accesChannel, USArrests, rownames = "state", addPK=TRUE)
sqlFetch(accesChannel, "USArrests", rownames = "state") # get the lot

foo <- cbind(state=row.names(USArrests), USArrests)[1:3, c(1,3)]
foo[1:3,2] <- 

sqlUpdate(accesChannel, foo, "USArrests")


The sqlSave and sqlFetch command seem to work fine.


 foo

 state Assault

Alabama Alabama

Alaska   Alaska

Arizona Arizona

> sqlUpdate(accesChannel, foo, "USArrests")

Error in sqlUpdate(accesChannel, foo, "USArrests") :

 cannot update 'USArrests' without unique column





I am using R 2.12.0 (2010-10-15)
and Microsoft Access 2003.


Furthermore, sqlColumns(accesChannel, "USArrests") returns the
following information:



> sqlColumns(accesChannel, "USArrests")

  TABLE_CAT TABLE_SCHEM
TABLE_NAME

1 C:\\ARTIFICALDESKTOP\\CurrentDownloads\\termstrc\\dataNA
USArrests

2 C:\\ARTIFICALDESKTOP\\CurrentDownloads\\termstrc\\dataNA
USArrests

3 C:\\ARTIFICALDESKTOP\\CurrentDownloads\\termstrc\\dataNA
USArrests

4 C:\\ARTIFICALDESKTOP\\CurrentDownloads\\termstrc\\dataNA
USArrests

5 C:\\ARTIFICALDESKTOP\\CurrentDownloads\\termstrc\\dataNA
USArrests

 COLUMN_NAME DATA_TYPE TYPE_NAME COLUMN_SIZE BUFFER_LENGTH
DECIMAL_DIGITS

1   state12   VARCHAR 255   510
NA

2  Murder 8DOUBLE  53 8
NA

3 Assault 4   INTEGER  10 4
0

4UrbanPop 4   INTEGER  10 4
0

5Rape 8DOUBLE  53 8
NA

 NUM_PREC_RADIX NULLABLE REMARKS COLUMN_DEF SQL_DATA_TYPE
SQL_DATETIME_SUB

1 NA1NA   NA12
NA

2  21NA   NA 8
NA

3 101NA   NA 4
NA

4 101NA   NA 4
NA

5  21NA   NA 8
NA

 CHAR_OCTET_LENGTH ORDINAL_POSITION IS_NULLABLE ORDINAL

1   5101 YES   1

2NA2 YES   2

3NA3 YES   3

4NA4 YES   4

5NA5 YES   5




Any ideas as to what I might have missed?
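
One thing that may be worth trying (a sketch, on the assumption that the
RODBC 'index' argument applies to this table) is to tell sqlUpdate()
explicitly which column identifies the rows:

sqlUpdate(accesChannel, foo, "USArrests", index = "state")  # "state" as the key column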

Thanks,

Jorge



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Is there an equivalent to predict(..., type=linear) of a Proportional hazard model for a Cox model instead?

2010-11-25 Thread Ben Rhelp
I managed to achieve similar results with a Cox model as follows, but I don't
really understand why we have to take the inverse of the linear prediction with
the Cox model, and why we do not need to divide by the number of days in the
year anymore.

Am I getting a similar result out of pure luck?

thanks in advance,

Ben

# MASS example with the proportional hazard model
par(mfrow = c(1, 2));
(aids.ps <- survreg(Surv(survtime + 0.9, status) ~ state + T.categ + 
pspline(age, df=6), data = Aidsp))

zz <- predict(aids.ps, data.frame(state = factor(rep("NSW", 83), levels = 
levels(Aidsp$state)),
T.categ = factor(rep("hs", 83), levels = levels(Aidsp$T.categ)), age = 
0:82), se = T, type = "linear")
plot(0:82, exp(zz$fit)/365.25, type = "l", ylim = c(0, 2), xlab = "age", ylab = 
"expected lifetime (years)")
lines(0:82, exp(zz$fit+1.96*zz$se.fit)/365.25, lty = 3, col = 2)
lines(0:82, exp(zz$fit-1.96*zz$se.fit)/365.25, lty = 3, col = 2)
rug(Aidsp$age + runif(length(Aidsp$age), -0.5, 0.5), ticksize = 0.015)


# same example but with a Cox model instead
(aids.pscp <- coxph(Surv(survtime + 0.9, status) ~ state + T.categ + 
pspline(age, df=6), data = Aidsp))
zzcp <- predict(aids.pscp, data.frame(state = factor(rep("NSW", 83), levels = 
levels(Aidsp$state)),
T.categ = factor(rep("hs", 83), levels = levels(Aidsp$T.categ)), age = 
0:82), se = T, type = "lp")
plot(0:82, 1/exp(zzcp$fit), type = "l", ylim = c(0, 2), xlab = "age", ylab = 
"expected lifetime (years)")
lines(0:82, 1/exp(zzcp$fit+1.96*zzcp$se.fit), lty = 3, col = 2)
lines(0:82, 1/exp(zzcp$fit-1.96*zzcp$se.fit), lty = 3, col = 2)
rug(Aidsp$age + runif(length(Aidsp$age), -0.5, 0.5), ticksize = 0.015)




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] looking for the RMySQL package for R 2.12.0 under XP

2010-11-25 Thread PtitBleu

Thank you for all your answers.

I've looked at RODBC, but it seems that you have to declare the database with
the administration tools before using it. With RMySQL, you can do it
directly in the R script, which is very convenient for me as I regularly
work with new databases.

For those who have already used Rtools:
- are administration rights required (to install or/and to use it)?
- after the installation, is install.packages('RMySQL', type='source') the
only thing to do, or are more computer skills needed?

I will have a look at the R-sig-DB list if I can't do it alone.

Again, thanks for your help and advice.
Ptit Bleu  
-- 
View this message in context: 
http://r.789695.n4.nabble.com/looking-for-the-RMySQL-package-for-R-2-12-0-under-XP-tp3057537p3058891.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] difficulty setting the random = argument to lme()

2010-11-25 Thread Robert Kinley
My small brain is having trouble getting to grips with lme()

I wonder if anyone can help me correctly set the   random = argument
to lme() for this kind of setup with  (I think) 9  variance/covariance
components ...

                         Study.1      Study.2    ...    Study.10
 Treatment.A:   subject:  1  2  3      4  5  6    etc.   28 29 30

 Treatment.B:   subject: 31 32 33     34 35 36           58 59 60

   A variable is measured at 2 fixed sites (A and B) on each subject

so we have fixed effects :-

 between-Treatments 
 between-sites (A and B)
 Treatment*site interaction


and we have random effects :-

 study effects at site A 
 study effects at site B
 correlation between site A and site B study effects 

 study*treatment interaction effects at site A 
 study*treatment interaction effects at site B
 correlation between site A and B study*treatment interaction effects 
 
 residual (between-subject) effects at site A 
 residual (between-subject) effects at site B
 correlation between site A and B residuals (between-subject) effects 

My problem is formulating the  random = argument to give estimates
of all 9 random components ...

Hope someone can help ...

  Robert Kinley 
 



Study:  Pos tissue VC, Neg tissue VC, Pos/Neg tissue 
correlation
Study*Group:Pos tissue VC, Neg tissue VC, Pos/Neg tissue 
correlation
Residual (animal):  Pos tissue VC, Neg tissue VC, Pos/Neg tissue 
correlation
 
 

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] t-stat for the coefficients of an ARIMA model

2010-11-25 Thread Samuel Le
Dear all,



I am fitting a time series using the following command:

Ts.arima <- arima(x, c(2,1,2)), where x is a time series.



What the function returns is perfectly fine, but I was wondering if I could
access the t-statistics of the coefficients I got from the arima function.



Any help would be greatly appreciated.
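
One common approach (a sketch, using simulated data rather than your x) is
to divide each coefficient by its standard error taken from the estimated
variance matrix stored in the fit:

x <- arima.sim(list(order = c(2, 1, 2), ar = c(0.5, -0.3), ma = c(0.4, 0.2)), n = 200)
Ts.arima <- arima(x, c(2, 1, 2))
tstat <- coef(Ts.arima) / sqrt(diag(Ts.arima$var.coef))  # approximate t (z) statistics
tstat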



Samuel


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to find a row index in a matrix or a data frame ?

2010-11-25 Thread Gabor Grothendieck
On Thu, Nov 25, 2010 at 4:59 AM, Luedde, Mirko mirko.lue...@sap.com wrote:
 Dear all,

 this looks pretty much a standard problem, but I couldn't find a
 satisfying and understandable solution.

 (A) Given a data frame (or matrix), e.g.

      x <- data.frame(A=c(1, 2, 2), B=c(4, 5, 5))

    and a row of this data frame, e.g.

      r <- c(2, 5)

    I need to find one row index i (or all such indices) such that r
    is at the i-th row in x, that is, the expression

      all(x[i,]==as.list(r))

    evaluates to TRUE.  I cannot evaluate an expression like

      x[x[,1]==2 & x[,2]==5,]

    because I do not know in advance how many columns x will have.

    Basically, thus, I'm looking for an equivalent of vectorfind in
    Scilab.

 (B) Which would be the most appropriate data type for x, matrix or
    data frame or another type?

 (C) What will be better, searching for rows or searching for columns?


Try this:

   which(apply(t(x) == r, 2, all))

or this:

    which(colSums((t(x) != r) > 0) == 0)

-- 
Statistics & Software Consulting
GKX Group, GKX Associates Inc.
tel: 1-877-GKX-GROUP
email: ggrothendieck at gmail.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] txtProgressBar strange behavior in R 2.12.0

2010-11-25 Thread Prof Brian Ripley

I've seen it, but not reliably enough to reproduce.

The issue is that there isn't really support for '\r' (CR) in Rgui, 
and so it was sometimes getting junk characters.


However, I think that this is probably fixed in R-patched >= r53662, 
so please try an update in a day or two.


On Mon, 22 Nov 2010, Viechtbauer Wolfgang (STAT) wrote:

I believe nobody has responded so far, so maybe this is not a 
widespread issue. However, I have also encountered this since 
upgrading to R 2.12.0 (Windows 7, 64-bit). In my simulations where I 
use txtProgressBar(), the problem usually disappears after the bar 
has progressed a certain amount, but it's quite strange 
nonetheless. The characters that appear are gibberish and include 
some Asian symbols. Here is a screenshot:


http://www.wvbauer.com/screenshot.jpg

sessionInfo():

R version 2.12.0 (2010-10-15)
Platform: x86_64-pc-mingw32/x64 (64-bit)

locale:
[1] LC_COLLATE=English_United States.1252
[2] LC_CTYPE=English_United States.1252
[3] LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C
[5] LC_TIME=English_United States.1252

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

No idea what could be causing this.

Best,

--
Wolfgang Viechtbauer
Department of Psychiatry and Neuropsychology
School for Mental Health and Neuroscience
Maastricht University, P.O. Box 616
6200 MD Maastricht, The Netherlands
Tel: +31 (43) 368-5248
Fax: +31 (43) 368-8689
Web: http://www.wvbauer.com



-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On Behalf Of Jalla
Sent: Saturday, November 20, 2010 23:49
To: r-help@r-project.org
Subject: [R] txtProgressBar strange behavior in R 2.12.0


Hi,
I am running R 2.12.0 (windows).

example(txtProgressBar)

gives me some funny screen output with all kinds of special characters
appearing and disappearing. It's happening on two different machines since
version 2.12.0. Is this a known issue?

Best,
Jalla


--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK                     Fax:  +44 1865 272595

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Lifting Wavelet Transform

2010-11-25 Thread Mike Marchywka

(I'm top posting for clarity; hotmail decided not to quote the original
text and I'm not going to even try to outguess it LOL) 
 
I have a non-definitive negative answer: I found two
packages, but it isn't obvious that either exposes anything
related to lifting. I haven't worked with these lately,
and so I am not sure whether I missed obvious synonyms etc.
 
http://cran.r-project.org/web/packages/wavelets/index.html
 
http://cran.r-project.org/web/packages/waveslim/index.html

Normally I would have the documentation available for grepping for
various terms, but a quick scan of the PDFs didn't show anything related
to lifting. AFAIK, the wavelet designs are limited to fixed types/lengths,
there is no explicit method for deriving new ones, and the WT implementation
is not obvious on a quick read, but it looks like the filters just take
coefficients and a time series and output an MRA.


http://en.wikipedia.org/wiki/Lifting_scheme
 
The Daubechies & Sweldens paper seems to be available
online etc. 



Date: Wed, 24 Nov 2010 20:46:30 -0800
From: assaed...@yahoo.com
To: r-help@r-project.org
Subject: [R] Lifting Wavelet Transform


Hi R users

Thanks in advance

Is lifting wavelet transform implemented in R? If so, which package or codes 
can be used for performing that?



Thanks





[[alternative HTML version deleted]]


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.  
  
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Filled contour plot showing labeled isolines?

2010-11-25 Thread Fiona Berryman


jt306 wrote:
 
 Is it possible to create a contour plot with the isolines labeled.  I know
 you can do this with Matlab.  Argh!
 
 I tried creating a filled contour plot, then using par(new=T), followed by
 overlaying the contour plot on top.  However, the placement of the filled
 contour plot and the contour plot do not align correctly. Any suggestions
 would be appreciated.
 
 Thanks,
 Jon
 
 
 
 

Jon,

I don't use filled.contour but image and then contour.

Try this.

image(volcano)
contour(volcano,add=TRUE)

Is this what you want?

Regards,

Fiona
-- 
View this message in context: 
http://r.789695.n4.nabble.com/Filled-contour-plot-showing-labeled-isolines-tp3056437p3058974.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Filled contour plot showing labeled isolines?

2010-11-25 Thread S Ellison
If you use filled.contour and then call contour() as part of the function
supplied as the plot.axes parameter, you can correctly overlay contours on
the colour contour plot.
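
For example, with the built-in volcano data (a sketch along those lines):

filled.contour(volcano, plot.axes = {
  axis(1); axis(2)
  contour(volcano, add = TRUE, lwd = 0.5)   # labelled isolines on top of the fill
})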

Steve e  
-Original Message- 
From: Fiona Berryman f.berry...@wlv.ac.uk 
To:  r-help@r-project.org 
 
Sent: 11/25/2010 13:57:06 
Subject: Re: [R] Filled contour plot showing labeled isolines? 
 
 
 
jt306 wrote: 
  
 Is it possible to create a contour plot with the isolines labeled.  I
know 
 you can do this with Matlab.  Argh! 
  
 I tried creating a filled contour plot, then using par(new=T),
followed by 
 overlaying the contour plot on top.  However, the placement of the
filled 
 contour plot and the contour plot do not align correctly. Any
suggestions 
 would be appreciated. 
  
 Thanks, 
 Jon 
  
  
  
  
 
Jon, 
 
I don't use filled.contour but image and then contour. 
 
Try this. 
 
image(volcano) 
contour(volcano,add=TRUE) 
 
Is this what you want? 
 
Regards, 
 
Fiona 
--  
View this message in context:
http://r.789695.n4.nabble.com/Filled-contour-plot-showing-labeled-isolines-tp3056437p3058974.html

Sent from the R help mailing list archive at Nabble.com. 
 
__ 
R-help@r-project.org mailing list 
https://stat.ethz.ch/mailman/listinfo/r-help 
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html 
and provide commented, minimal, self-contained, reproducible code. 

***
This email and any attachments are confidential. Any use...{{dropped:8}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] What to do if geoRglm results showed that a non-spacial model fits better?

2010-11-25 Thread Jimmy Martina

Hi R-folks:
 
Working in geoRglm, it shows me, according to the AIC criterion, that the 
non-spatial model describes the process in a better way. It's the first time 
that I'm facing this.
 
These are my results:
OP2003Seppos.AICnonsp-OP2003Seppos.AICsp
#[1] -4

(OP2003Seppos.lf0.p <- exp(OP2003Seppos.lf0$beta)/(1+exp(OP2003Seppos.lf0$beta))) 
#P non spatial
#[1] 0.9717596

(OP2003Seppos.lf.p <- exp(OP2003Seppos.lf$beta)/(1+exp(OP2003Seppos.lf$beta)))
  #P spatial
#[1] 0.9717596

This must have an important influence on the kriging, because it shows the
following:
 
OP2003Sepposbin.krig <- glsm.krige(OP2003Seppos.tune,loc=OP2003Seppospro.pred.grid,bor=OP2003Sepposbor)
#glsm.krige: Prediction for a generalised linear spatial model 
#There were 50 or more warnings (use warnings() to see them)
# warnings()
#Warning messages:
#1: In asympvar(kpl.result$predict, messages = FALSE) ... :
#  value of argument lag.max is not suffiently long
#2: In asympvar(kpl.result$predict, messages = FALSE) ... :
#  value of argument lag.max is not suffiently long

I don't understand the warnings. Help me, please.
  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Is there an equivalent to predict(..., type=linear) of a Proportional hazard model for a Cox model instead?

2010-11-25 Thread David Winsemius


On Nov 25, 2010, at 7:27 AM, Ben Rhelp wrote:

I manage to achieve similar results with a Cox model as follows but  
I don't
really understand why we have to take the inverse of the linear  
prediction with

the Cox model


Different parameterization. You can find expanded answer(s) in the  
archives and in the documentation of survreg.distributions.




and why we do not need to divide by the number of days in the year
anymore?


Here I'm guessing (since you don't offer enough evidence to confirm)  
that the difference is in the time scales used in your Aidsp$survtime  
versus some other example to which you are comparing .


 data(Aidsp)
Warning message:
In data(Aidsp) : data set 'Aidsp' not found



Am I getting a similar result out of pure luck?

thanks in advance,

Ben

# MASS example with the proportional hazard model
par(mfrow = c(1, 2));
(aids.ps <- survreg(Surv(survtime + 0.9, status) ~ state + T.categ +
pspline(age, df=6), data = Aidsp))

zz <- predict(aids.ps, data.frame(state = factor(rep("NSW", 83), levels =
levels(Aidsp$state)),
   T.categ = factor(rep("hs", 83), levels = levels(Aidsp$T.categ)), age =
0:82), se = T, type = "linear")
plot(0:82, exp(zz$fit)/365.25, type = "l", ylim = c(0, 2), xlab = "age", ylab =
"expected lifetime (years)")
lines(0:82, exp(zz$fit+1.96*zz$se.fit)/365.25, lty = 3, col = 2)
lines(0:82, exp(zz$fit-1.96*zz$se.fit)/365.25, lty = 3, col = 2)
rug(Aidsp$age + runif(length(Aidsp$age), -0.5, 0.5), ticksize = 0.015)


# same example but with a Cox model instead
(aids.pscp <- coxph(Surv(survtime + 0.9, status) ~ state + T.categ +
pspline(age, df=6), data = Aidsp))
zzcp <- predict(aids.pscp, data.frame(state = factor(rep("NSW", 83), levels =
levels(Aidsp$state)),
   T.categ = factor(rep("hs", 83), levels = levels(Aidsp$T.categ)), age =
0:82), se = T, type = "lp")
plot(0:82, 1/exp(zzcp$fit), type = "l", ylim = c(0, 2), xlab = "age", ylab =
"expected lifetime (years)")
lines(0:82, 1/exp(zzcp$fit+1.96*zzcp$se.fit), lty = 3, col = 2)
lines(0:82, 1/exp(zzcp$fit-1.96*zzcp$se.fit), lty = 3, col = 2)
rug(Aidsp$age + runif(length(Aidsp$age), -0.5, 0.5), ticksize = 0.015)




David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] overlap cdf plots and add colors and etc

2010-11-25 Thread Peter Ehlers

On 2010-11-24 22:24, Roslina Zakaria wrote:

Hi Jorge,

I tried but still it does not work.  Thank you for your time.


Jorge's code works perfectly well.

If you prefer lines() over plot(, add = TRUE),
then use lines():

 plot( ecdf(rnorm(15, sd=3)), verticals=TRUE, col="black")
 lines(ecdf(rnorm(11, sd=3)), verticals=TRUE, col="red")
 legend("topright", legend = c("observed","fitted"),
   col = c("black","red"), pch=c(NA,NA), lty = c(1, 1),
   lwd=c(3,3), bty="n", pt.cex=2)

(Note that topright is probably the worst choice you
can make. And why set pt.cex when you don't have points??)

The key thing is that you can't specify your two
colours in the first plot() call. At that point,
R has no idea that you want to add to the plot.
(I understand that Greg Snow is working on a
mind-reading package, but so far it can probably
read only his mind, not yours.)

Here is a (very brief) reminder of how to post:

1. Use an informative subject line.
   You've done that. Good!

2. *Never* use the phrase "it does not work".
   That is meaningless.
   Be specific about your problem.

3. Provide *reproducible* code.
   We don't have your 'datobs' or 'gam_sum_gen'.

4. Try to make the code *minimal*.
   It's not likely that anyone cares what labels you
   use for your axes/title (unless that's the problem).
   And nobody wants to see reams of data; use rnorm(),
   etc, or built-in data if possible.

5. Figure out how to set your mail program to send
   plain text.

I know the above (and more) is in the posting guide,
but it seems that nobody wants to read that quite
brief document.

Peter Ehlers




From: Jorge Ivan Velezjorgeivanve...@gmail.com

Cc: r-help@r-project.org
Sent: Thu, November 25, 2010 4:46:37 PM
Subject: Re: [R] overlap cdf plots and add colors and etc

Hi Roslina,


Try

par(mar=c(4,4,2,1.2),oma=c(0,0,0,0),xaxs="i", yaxs="i")
plot(ecdf(rnorm(100)))
plot(ecdf(rnorm(100)), add = TRUE, col = 2)

HTH,
Jorge


On Thu, Nov 25, 2010 at 12:18 AM, Roslina Zakaria  wrote:

Hi r-users,


I would like to overlap 2 ecdf plots.

I tried this below and it gives me two plots of ecdf but just both just in
black.

par(mar=c(4,4,2,1.2),oma=c(0,0,0,0),xaxs="i", yaxs="i")
plot(ecdf(datobs))
lines(ecdf(gam_sum_gen))

Then I try to add colors etc and also the legend but fail.

par(mar=c(4,4,2,1.2),oma=c(0,0,0,0),xaxs="i", yaxs="i")
plot(ecdf(datobs),main ="CDF of the sum for winter
season-Hume",cex.axis=1.2,xlab="Rainfall (mm)", pch =
pch,verticals=TRUE,col=c("black","red"), lty=c(1,1),ylab="Cumulative
probability", xlim=c(0,800),lwd=1)
lines(ecdf(gam_sum_gen))
legend("topright", legend = c("observed","fitted"),
col = c("black","red"), pch=c(NA,NA), lty = c(1, 1),
lwd=c(3,3),bty="n", pt.cex=2)
box()

I'm not sure why it doesn't show up at all.

Thank you for any help given.



[[alternative HTML version deleted]]




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to change value of y axis from log relative Hazard to relative Hazard

2010-11-25 Thread David Winsemius


On Nov 25, 2010, at 2:00 AM, jsntxt wrote:



http://r.789695.n4.nabble.com/file/n3058505/file.csv file.csv
Hi, Rusers
   I have a problem making an rcspline.plot with the Hmisc package.
   My data are in the uploaded attachment.
   My programme is as follows:
library(Hmisc)
A <- read.csv("file.csv", header=TRUE)
attach(A)
rcspline.plot(factor, Time, model="cox", xrange=c(0,3), ylim=c(-1,2),
  event=event, nk=4, knots=c(0.8,1.0,1.5,2.0), showknots=TRUE,
  plotcl=FALSE, statloc="none", subset=SEX==2, lty=2)
   The plot can be made, but its y axis represents the value of the log
relative hazard; is there any method to change the value of the y axis from
the log relative hazard to the relative hazard?


I have never used that function. For the task you are facing, Harrell  
provides tools in the form of Predict with the fun argument set to exp, and  
the usual plotting functions in rms.


--

David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Applying function to elements of matrices in a list

2010-11-25 Thread statmobile

Hello R-help,

Please cc me on all responses, as I only receive summary emails from 
this list.


I'm wondering if anybody has any tips on how to accomplish this 
efficiently.  I have a list of matrices, and I'm trying to get the mean 
of the [i,j]'th element of each matrix in a list.


So if I have a list of matrices, say

x <- list(a=matrix(rnorm(4),nrow=2),b=matrix(rnorm(4),nrow=2))

How would I get a 2x2 matrix, where the i,j'th element would be the mean 
across the list of each of the i,j'th elements in the list?  That 
is, where the [1,2] element would be the average of a[1,2] and b[1,2].


Of course my list and matrices are much larger, and I was hoping there 
would be some trick with lapply that I may be missing here.


Thanks,
Brian

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Applying function to elements of matrices in a list

2010-11-25 Thread Dimitris Rizopoulos

try this:

Reduce("+", x) / length(x)


Best,
Dimitris


On 11/25/2010 3:42 PM, statmobile wrote:

Hello R-help,

Please cc me on all responses, as I only receive summary emails from
this list.

I'm wondering if anybody has any tips on how to accomplish this
efficiently. I have a list of matrices, and I'm trying to get the mean
of the [i,j]'th element of each matrix in a list.

So if I have a list of matrices, say

x <- list(a=matrix(rnorm(4),nrow=2),b=matrix(rnorm(4),nrow=2))

How would I get a 2x2 matrix, where the i,j'th element would be the mean
across the the list of each of the i,j'th elements in the list? That is,
where the [1,2] element would be the average of a[1,2] and b[1,2].

Of course my list and matrices are much larger, and I was hoping there
would be some trick with lapply that I may be missing here.

Thanks,
Brian

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



--
Dimitris Rizopoulos
Assistant Professor
Department of Biostatistics
Erasmus University Medical Center

Address: PO Box 2040, 3000 CA Rotterdam, the Netherlands
Tel: +31/(0)10/7043478
Fax: +31/(0)10/7043014
Web: http://www.erasmusmc.nl/biostatistiek/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Applying function to elements of matrices in a list

2010-11-25 Thread statmobile

On 11/25/2010 09:44 AM, Dimitris Rizopoulos wrote:

try this:

Reduce("+", x) / length(x)


Thanks Dimitris, that's very slick, I was unaware of this Reduce function.

The issue is that I actually wanted to do a trimmed mean, and if things 
prove possible even the median.


Is there a way to apply a generic function in the manner I described?
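
One general pattern (a sketch, not from the thread) is to stack the matrices
into a 3-d array and then apply() the function over the first two margins:

x   <- list(a = matrix(rnorm(4), nrow = 2), b = matrix(rnorm(4), nrow = 2))
arr <- array(unlist(x), dim = c(dim(x[[1]]), length(x)))  # 2 x 2 x n array
apply(arr, c(1, 2), median)             # element-wise median
apply(arr, c(1, 2), mean, trim = 0.1)   # element-wise trimmed mean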

Thanks,
Brian




Best,
Dimitris


On 11/25/2010 3:42 PM, statmobile wrote:

Hello R-help,

Please cc me on all responses, as I only receive summary emails from
this list.

I'm wondering if anybody has any tips on how to accomplish this
efficiently. I have a list of matrices, and I'm trying to get the mean
of the [i,j]'th element of each matrix in a list.

So if I have a list of matrices, say

x <- list(a=matrix(rnorm(4),nrow=2),b=matrix(rnorm(4),nrow=2))

How would I get a 2x2 matrix, where the i,j'th element would be the mean
across the the list of each of the i,j'th elements in the list? That is,
where the [1,2] element would be the average of a[1,2] and b[1,2].

Of course my list and matrices are much larger, and I was hoping there
would be some trick with lapply that I may be missing here.

Thanks,
Brian

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.





__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Is there an equivalent to predict(..., type=linear) of a Proportional hazard model for a Cox model instead?

2010-11-25 Thread Ben Rhelp
Hi David,

Thank you for your reply. See below for more information.


 From: David Winsemius 
 
 
 On Nov 25, 2010, at 7:27 AM, Ben Rhelp wrote:
 
  I manage to  achieve similar results with a Cox model as follows but I don't
  really  understand why we have to take the inverse of the linear prediction 
with
   the Cox model
 
 Different parameterization. You can find expanded answer(s)  in the archives 
and in the documentation of  survreg.distributions.
 

I understand (i think) the difference in model structures between a Cox (coxph) 
and Proportional hazard model (survreg). 


 
  and why we do not need to divide by the  number of days in the year
  anymore?
 
 Here I'm guessing (since you  don't offer enough evidence to confirm) that 
 the 
difference is in the time  scales used in your Aidsp$survtime versus some 
other 
example to which you are  comparing .

Both models are run from the same data, so I am not expecting any differences 
in 
time scales. 

To get similar results, I need to actually run the following equations:
expected_lifetime_in_years = exp(fit)/365.25   --- Linear prediction of the Proportional hazard model
expected_lifetime_in_years = 1/exp(fit)        --- Linear prediction of the Cox model
where fit comes from the linear prediction of each model, respectively.

Actually, in the code below, I re-run the models and predictions based on a 
yearly sampling time (instead of daily).
Again, to get similar results, I now need to run the following equations:
expected_lifetime_in_years = exp(fit)          --- Linear prediction of the Proportional hazard model
expected_lifetime_in_years = 1/exp(fit)        --- Linear prediction of the Cox model

I think I understand the logic behind the results of the proportional hazard 
model, but not for the prediction of the Cox model.

Thank you for your help. I hope this is not too stupid a hole in my logic.

Here is the self contained R code to produce the charts:

library(MASS);
library(survival);

#Same data but parametric fit
make.aidsp <- function(){
   cutoff <- 10043 # July 1987 in julian days
   btime <- pmin(cutoff, Aids2$death) - pmin(cutoff, Aids2$diag)
   atime <- pmax(cutoff, Aids2$death) - pmax(cutoff, Aids2$diag)
   survtime <- btime + 0.5*atime
   status <- as.numeric(Aids2$status)
   data.frame(survtime, status = status - 1, state = Aids2$state,
   T.categ = Aids2$T.categ, age = Aids2$age, sex = Aids2$sex)
}
Aidsp <- make.aidsp()

# MASS example with the proportional hazard model
par(mfrow = c(1, 2));
(aids.ps <- survreg(Surv(survtime + 0.9, status) ~ state + T.categ + 
pspline(age, df=6), data = Aidsp))
zz <- predict(aids.ps, data.frame(state = factor(rep("NSW", 83), levels = 
levels(Aidsp$state)),
T.categ = factor(rep("hs", 83), levels = levels(Aidsp$T.categ)), age = 
0:82), se = T, type = "linear")
plot(0:82, exp(zz$fit)/365.25, type = "l", ylim = c(0, 2), xlab = "age", ylab = 
"expected lifetime (years)")
lines(0:82, exp(zz$fit+1.96*zz$se.fit)/365.25, lty = 3, col = 2)
lines(0:82, exp(zz$fit-1.96*zz$se.fit)/365.25, lty = 3, col = 2)
rug(Aidsp$age + runif(length(Aidsp$age), -0.5, 0.5), ticksize = 0.015)


# Same example but with a Cox model instead
(aids.pscp - coxph(Surv(survtime + 0.9, status) ~ state + T.categ + 
pspline(age, df=6), data = Aidsp))
zzcp - predict(aids.pscp, data.frame(state = factor(rep(NSW, 83), levels = 
levels(Aidsp$state)),
T.categ = factor(rep(hs, 83), levels = levels(Aidsp$T.categ)), age = 
0:82), se = T, type = lp)
plot(0:82, 1/exp(zzcp$fit), type = l, ylim = c(0, 2), xlab = age, ylab = 
expected lifetime (years))
lines(0:82, 1/exp(zzcp$fit+1.96*zzcp$se.fit), lty = 3, col = 2)
lines(0:82, 1/exp(zzcp$fit-1.96*zzcp$se.fit), lty = 3, col = 2)
rug(Aidsp$age + runif(length(Aidsp$age), -0.5, 0.5), ticksize = 0.015)


# Change the sampling time from daily to yearly
par(mfrow = c(1, 1));
(aids.ps - survreg(Surv((survtime + 0.9)/365.25, status) ~ state + T.categ + 
pspline(age, df=6), data = Aidsp))
zz - predict(aids.ps, data.frame(state = factor(rep(NSW, 83), levels = 
levels(Aidsp$state)),
T.categ = factor(rep(hs, 83), levels = levels(Aidsp$T.categ)), age = 
0:82), se = T, type = linear)
plot(0:82, exp(zz$fit), type = l, ylim = c(0, 2), xlab = age, ylab = 
expected lifetime (years))

(aids.pscp - coxph(Surv((survtime + 0.9)/365.25, status) ~ state + T.categ + 
pspline(age, df=6), data = Aidsp))
zzcp - predict(aids.pscp, data.frame(state = factor(rep(NSW, 83), levels = 
levels(Aidsp$state)),
T.categ = factor(rep(hs, 83), levels = levels(Aidsp$T.categ)), age = 
0:82), se = T, type = lp)
lines(0:82, 1/exp(zzcp$fit),col=4)





__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Help on running regression by grouping firms

2010-11-25 Thread Dennis Murphy
Hi:

You could try something like this:

For illustration, I'll use a data frame that was presented in a recent post
to the ggplot2 group. The poster wanted regressions by individual, but you
can add more than one grouping variable to the code I show below. It uses
the plyr package.

library(plyr)
ds_test <- structure(list(individual = structure(c(1L, 1L, 1L, 2L, 2L, 2L, 2L,
2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 5L, 5L, 5L, 5L,
5L, 5L, 5L, 6L, 6L, 6L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 8L, 8L, 8L,
8L, 8L, 8L, 9L, 9L, 9L, 9L, 9L, 10L, 10L, 10L), .Label = c("1",
"2", "3", "4", "5", "6", "7",
"8", "9", "10", "11", "12", "13",
"14", "15", "16", "17", "18", "19",
"20", "21", "22", "23", "24", "25",
"26", "27", "28", "29", "30"), class = "factor"),
   time = c(0L, 1671L, 1896L, 0L, 105L, 196L, 384L, 582L, 797L,
   998L, 1419L, 0L, 290L, 451L, 752L, 0L, 487L, 619L, 820L,
   0L, 384L, 463L, 832L, 932L, 1322L, 1688L, 0L, 101L, 390L,
   0L, 746L, 761L, 899L, 1118L, 1236L, 1375L, 0L, 544L, 870L,
   927L, 1117L, 1870L, 0L, 326L, 383L, 573L, 1326L, 0L, 1572L,
   1592L), size = c(2, 2.6, 2.6, 1.2, 1.4, 1.5,
   1.6, 1.7, 1.8, 2, 2.2, 1.3, 1.6, 1.5, 1.5, 2.8, 2.8, 2.4,
   2.9, 2.1, 2.4, 2.4, 2.4, 2.3, 2.5, 2.4, 6, 5.8, 5.4, 1.1,
   1.6, 1.5, 1.5, 1.5, 2.3, 2.3, 3.2, 4.1, 4, 3.9, 4.1, 4.3,
   1.2, 2.1, 2.2, 2.2, 3, 2.2, 3, 3.9)), .Names = c("individual",
"time", "size"), row.names = c(NA, 50L), class = "data.frame")

# Run models by individual and put the results into a list. The advantage
# is that one can extract multiple pieces from each component of the list,
# if so desired, by writing simple extraction functions using plyr. dlply()
# is an apply-like function: the first letter indicates that the input object
# (first argument) is a data frame and that the output object after executing
# the function is a list (in this case, a list of lists). The anonymous function
# in the call performs the desired operation on each generic data subset x.

mods <- dlply(ds_test, .(individual), function(x) lm(size ~ time, data = x))

# This function does the actual work within subgroup; since the number
# of residuals will vary from group to group, the output of the calling
# function has to be a list object of residuals, one component per individual.
# The outer function do.call() is intended to collapse the list object into a
# vector, and the resulting vector can be attached to the original data frame
# with $:

res <- function(x) resid(x)
ds_test$u <- do.call(c, llply(mods, res))

In your case, where you have multiple grouping factors, you may have to be a
little more careful, but the strategy is the same. You could possibly reduce
it to a one-liner (untested):

ds_test$u <- do.call(c, dlply(ds_test, .(individual), function(x)
                     resid(lm(size ~ time, data = x))))
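
For the original question the same pattern extends to two grouping factors; a
hypothetical, untested sketch (FYEAR, Industry, y and x stand in for the real
column names in the poster's data frame dat):

dat$u <- do.call(c, dlply(dat, .(FYEAR, Industry),
                          function(d) resid(lm(y ~ x, data = d))))
# As noted above, check that the group-wise ordering matches the row order of
# dat before attaching the residuals with $.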

HTH,
Dennis

On Wed, Nov 24, 2010 at 4:56 PM, Ray Zhang lz...@sfu.ca wrote:


 Hi there,

 I have a huge data set with multiple firms, years and other firm
 characteristics. I want to run a regression of the dependent variable on the
 other explanatory variables and calculate the residual terms by grouping the
 firms in the same year and same industry.

 What I want to do is to divide my observations into subsamples that contain
 the observations with the same fiscal year (FYEAR = 1990) and the same firm
 characteristic (Industry = 1), run the regression, and put the residuals
 back onto the observations by creating a new column. I want to do that for
 multiple years and multiple firms. I wonder, is there any easy command
 without creating multiple loops?

 Ray


[[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Applying function to elements of matrices in a list

2010-11-25 Thread Peter Ehlers

On 2010-11-25 07:06, statmobile wrote:

On 11/25/2010 09:44 AM, Dimitris Rizopoulos wrote:

try this:

Reduce("+", x) / length(x)


Thanks Dimitris, that's very slick, I was unaware of this Reduce function.

The issue is that I actually wanted to do a trimmed mean, and if things
prove possible, even the median.

Is there a way to apply a generic function in the manner I described?



You could use the abind function in the abind package to
convert your list to a 3d array and then use apply on that:

  require(abind)
  xa <- abind(x, along=3)
  apply(xa, 1:2, mean, trim=0.3)

Peter Ehlers


Thanks,
Brian




Best,
Dimitris


On 11/25/2010 3:42 PM, statmobile wrote:

Hello R-help,

Please cc me on all responses, as I only receive summary emails from
this list.

I'm wondering if anybody has any tips on how to accomplish this
efficiently. I have a list of matrices, and I'm trying to get the mean
of the [i,j]'th element of each matrix in a list.

So if I have a list of matrices, say

x <- list(a=matrix(rnorm(4),nrow=2), b=matrix(rnorm(4),nrow=2))

How would I get a 2x2 matrix, where the i,j'th element would be the mean
across the list of each of the i,j'th elements in the list? That is,
where the [1,2] element would be the average of a[1,2] and b[1,2].

Of course my list and matrices are much larger, and I was hoping there
would be some trick with lapply that I may be missing here.

Thanks,
Brian

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.





__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] help: program efficiency

2010-11-25 Thread randomcz

hey guys,

I am working on a function to make duplicated values unique. For example,
the original vector would be like: a = c(2,1,1,3,3,3,4)
I'd like to transform it into:
a.nodup = 2, 1.01, 1.02, 3.01, 3.02, 3.03, 4
basically, find the duplicates and assign a unique value by adding a small
amount, keeping the order.
I came up with the following code, but it runs slowly if t is large. Is there
a better way to do it?
nodup = function(t)
{
  t.index = 0
  t.dup = duplicated(t)
  for (i in 2:length(t))
  {
    if (t.dup[i] == TRUE)
      t.index = t.index + 0.01
    else t.index = 0
    t[i] = t[i] + t.index
  }
  return(t)
}


-- 
View this message in context: 
http://r.789695.n4.nabble.com/help-program-efficiency-tp3059079p3059079.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] moving xlabels in lattice

2010-11-25 Thread statquant2

Dear R users,

I am trying to move the xlab string on my xyplot, without success. I would
like it to be shifted down; would one of you know a way?

Thanks for reading
Colin
-- 
View this message in context: 
http://r.789695.n4.nabble.com/moving-xlabels-in-lattice-tp3059092p3059092.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Spearman's Power

2010-11-25 Thread Lewis G. Dean
I am trying to get some idea of the power of the Spearman's Rank correlation
I have carried out. Is there a way I can calculate power post-hoc? I know
that confidence intervals are preferred by many people, but how do I
generate those for a Spearman's?
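
One commonly suggested route (only a sketch, with hypothetical vectors x and y,
and assuming the boot package) is to bootstrap the rank correlation and take
percentile intervals:

library(boot)
rho.fun <- function(d, i) cor(d$x[i], d$y[i], method = "spearman")
b <- boot(data.frame(x = x, y = y), rho.fun, R = 2000)  # resample pairs
boot.ci(b, type = "perc")                               # percentile CI for rho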

Thanks

Lewis Dean

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] About searching criteria

2010-11-25 Thread Stephen Liu
Hi folks,

I need to search for the dataset whose data have the column names
Run, conc and density in the heading.


I looked at ??help.search and could not resolve it;

help.search("pattern", fields = c("alias", "concept", "title"))

What shall I replace "pattern" with?

I suppose replacing "alias", "concept", "title" with "Run", "conc", "density"?

Please help.  TIA

B.R.
Stephen L



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] moving xlabels in lattice

2010-11-25 Thread David Winsemius


On Nov 25, 2010, at 10:00 AM, statquant2 wrote:



Dear R users,

I am trying to move the xlab string on my xyplot, without success,  
I would

like it to shifted down, would one of you know a way ?


A quick and dirty way would be to stick a \n in front of your x-label.
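
For example (a minimal sketch):

library(lattice)
xyplot(rnorm(10) ~ 1:10, xlab = "\nthe x-label")  # leading newline pushes the label down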

--

David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] t-stat for the coefficients of an ARIMA model

2010-11-25 Thread Samuel Le
Dear all,



I am  fitting a time series using the following command:

Ts.arima <- arima(x, c(2,1,2)), where x is a time series.



What the function returns is perfectly fine, but I was wondering if I could 
access the t-stats of the coefficients I got from the arima function.



Any help would be greatly appreciated.



Samuel






[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] moving xlabels in lattice

2010-11-25 Thread pilchat

Hi Colin,

I wish I could do the same with a persp plot, since the labels overlap 
the ticks and the annotations. Unfortunately, I don't know the answer to 
your question, but I'll stay tuned on this thread.


Good luck

Gaetano

On 11/25/2010 04:00 PM, statquant2 wrote:

Dear R users,

I am trying to move the xlab string on my xyplot, without success, I would
like it to shifted down, would one of you know a way ?

Thanks for reading
Colin


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] help: program efficiency

2010-11-25 Thread Dimitris Rizopoulos

one way is the following:

a <- c(2,1,1,3,3,3,4)

d <- unlist(sapply(rle(a)$length, function (x)
    if (x > 1) seq(0.01, by = 0.01, len = x) else 0))

a + d


I hope it helps.

Best,
Dimitris


On 11/25/2010 3:49 PM, randomcz wrote:


hey guys,

I am working on a function to make a duplicated value unique. For example,
the original vector would be like : a = c(2,1,1,3,3,3,4)
I'll like to transform it into:
a.nodup = 2, 1.01, 1.02, 3.01, 3.02, 3.03, 4
basically, find the duplicates and assign a unique value by adding a small
amount and keep it in order.
I come up with the following codes, but it runs slow if t is large. Is there
a better way to do it?
nodup = function(t)
{
   t.index=0
   t.dup=duplicated(t)
   for (i in 2:length(t))
   {
 if (t.dup[i]==T)
   t.index=t.index+0.01
 else t.index=0
 t[i]=t[i]+t.index
   }
   return(t)
}




--
Dimitris Rizopoulos
Assistant Professor
Department of Biostatistics
Erasmus University Medical Center

Address: PO Box 2040, 3000 CA Rotterdam, the Netherlands
Tel: +31/(0)10/7043478
Fax: +31/(0)10/7043014
Web: http://www.erasmusmc.nl/biostatistiek/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Is there an implementation for URL Encoding (/format) in R?

2010-11-25 Thread Tal Galili
Hello all,

I would like some R function that can translate a string to a URL encoding
(see here: http://www.w3schools.com/tags/ref_urlencode.asp)

Is it implemented? (I wasn't able to find any reference to it)

Thanks,
Tal





Contact
Details:---
Contact me: tal.gal...@gmail.com |  972-52-7275845
Read me: www.talgalili.com (Hebrew) | www.biostatistics.co.il (Hebrew) |
www.r-statistics.com (English)
--

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Generalized linear models with categorical data

2010-11-25 Thread Dennis Murphy
Hi:

On Thu, Nov 25, 2010 at 3:53 AM, Diogo B. Provete dbprov...@gmail.comwrote:

 I have a question about using a GLZ with categorical x categorical data.
 Below there is a data set; I want to know the influence of the treatments (CONT,
 LPS2H and LPS24H) on the categories of pigmentation of the right testis of
 an amphibian. From these data, I used the function glm with the binomial family
 (logit). But from the result (see below) it is not possible to know the
 influence of the three treatments on the categories, only the treatments as a
 whole. Could someone help me?


You have three response categories, which makes the response multinomial,
not binomial.

with(td, table(Tratamento, Categoria))
  Categoria
Tratamento Cat.1 Cat.2 Cat.3
CONT   0 4 6
LPS24h 5 5 0
LPS2h  0 3 7

Why is it that for each treatment, the responses only fall into two of the
three categories? Is this by happenstance or by design? Methinks some
information is lacking at the moment...but I'm pretty sure the binomial
model is not correct in its current manifestation.
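
If the data really are three-category, one possible direction (only an
illustrative sketch, not necessarily the right model for these data) is a
baseline-category logit fit via nnet::multinom:

library(nnet)                                     # provides multinom()
mm <- multinom(Categoria ~ Tratamento, data = td)
summary(mm)                                       # coefficients per non-baseline category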

Dennis

Thanks in advance


  td=read.table(file.choose(), h=T)
  td
   Tratamento Categoria
 1CONT Cat.2
 2CONT Cat.2
 3CONT Cat.2
 4CONT Cat.2
 5CONT Cat.3
 6CONT Cat.3
 7CONT Cat.3
 8CONT Cat.3
 9CONT Cat.3
 10   CONT Cat.3
 11  LPS2h Cat.2
 12  LPS2h Cat.2
 13  LPS2h Cat.2
 14  LPS2h Cat.3
 15  LPS2h Cat.3
 16  LPS2h Cat.3
 17  LPS2h Cat.3
 18  LPS2h Cat.3
 19  LPS2h Cat.3
 20  LPS2h Cat.3
 21 LPS24h Cat.1
 22 LPS24h Cat.1
 23 LPS24h Cat.1
 24 LPS24h Cat.1
 25 LPS24h Cat.1
 26 LPS24h Cat.2
 27 LPS24h Cat.2
 28 LPS24h Cat.2
 29 LPS24h Cat.2
 30 LPS24h Cat.2

  mod.test=glm(Categoria~Tratamento, family=binomial(logit), data=td)

  mod.test

 Call:  glm(formula = Categoria ~ Tratamento, family = binomial(logit),
 data = td)

 Coefficients:
      (Intercept)  TratamentoLPS24h   TratamentoLPS2h
        2.057e+01        -2.057e+01         1.558e-08

 Degrees of Freedom: 29 Total (i.e. Null);  27 Residual
 Null Deviance:     27.03
 Residual Deviance: 13.86   AIC: 19.86

 > anova(mod.test, test="Chisq")
 Analysis of Deviance Table

 Model: binomial, link: logit

 Response: Categoria

 Terms added sequentially (first to last)


             Df Deviance Resid. Df Resid. Dev P(>|Chi|)
 NULL                           29     27.034
 Tratamento   2   13.171        27     13.863  0.001380 **
 ---
 Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 

 --
 Atenciosamente,
 *Diogo Borges Provete*

 ==
 Biólogo
 Mestre em Biologia Animal (UNESP)
 Laboratório de Ecologia Animal
 Departamento de Zoologia e Botânica
 Instituto de Biociências, Letras e Ciências Exatas
 Universidade Estadual Paulista - UNESP
 São José do Rio Preto-SP
 Brazil

 Rua Cristóvão Colombo, 2265
 Jardim Nazareth -  15054-000

 *Skype*: diogoprovete
 *MSN*: diogop...@yahoo.com.br

 *Personal web page https://sites.google.com/site/diogoprovetepage/*

 Traduza conosco:
 American Journal Experts http://www.journalexperts.com/br/
 D-Lang  Soluções linguisticas
 http://www.d-lang.com.br/site/sitept/index.htm
 

 ==





[[alternative HTML version deleted]]


 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Is there an implementation for URL Encoding (/format) in R?

2010-11-25 Thread Gustavo Carvalho
?URLencode
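
For example (a quick sketch using base R's URLencode; the percent-encoding shown
in the comment is approximate):

URLencode("hello world & more", reserved = TRUE)
# something like "hello%20world%20%26%20more"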

On Thu, Nov 25, 2010 at 3:53 PM, Tal Galili tal.gal...@gmail.com wrote:
 Hello all,

 I would like some R function that can translate a string to a URL encoding
 (see here: http://www.w3schools.com/tags/ref_urlencode.asp)

 Is it implemented? (I wasn't able to find any reference to it)

 Thanks,
 Tal





 Contact
 Details:---
 Contact me: tal.gal...@gmail.com |  972-52-7275845
 Read me: www.talgalili.com (Hebrew) | www.biostatistics.co.il (Hebrew) |
 www.r-statistics.com (English)
 --

        [[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Is there an implementation for URL Encoding (/format) in R?

2010-11-25 Thread Romain Francois

On 25/11/10 16:53, Tal Galili wrote:


Hello all,

I would like some R function that can translate a string to a URL encoding
(see here: http://www.w3schools.com/tags/ref_urlencode.asp)

Is it implemented? (I wasn't able to find any reference to it)

Thanks,
Tal


Perhaps ?URLencode

--
Romain Francois
Professional R Enthusiast
+33(0) 6 28 91 30 30
http://romainfrancois.blog.free.fr
|- http://bit.ly/9VOd3l : ZAT! 2010
|- http://bit.ly/c6DzuX : Impressionnism with R
`- http://bit.ly/czHPM7 : Rcpp Google tech talk on youtube

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] t-stat for the coefficients of an ARIMA model

2010-11-25 Thread David Winsemius


On Nov 25, 2010, at 10:50 AM, Samuel Le wrote:


Dear all,



I am  fitting a time series using the following command:

Ts.arima <- arima(x, c(2,1,2)), where x is a time series.

What the function returns is perfectly fine but I was wondering if I  
could access to the t-stat of the coefficients I got from the arima  
function.


The typical approach is to see if there is a coef function and a vcov  
function for your fit and to see if this gives sensible results:


coef(fit)/sqrt(diag(vcov(fit)))

(I looked at the docs and those functions are available for arima  
objects. However, it's still your responsibility to properly interpret  
such output. I have no substantial experience with time series  
analysis and as a general rule worry when the package authors choose  
not to provide a particular statistic. )
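
A minimal sketch of that idea on simulated data (illustrative only, not the
poster's series; the coefficient values are arbitrary):

set.seed(1)
x <- arima.sim(list(order = c(2, 1, 2), ar = c(0.5, -0.3), ma = c(0.4, 0.2)), n = 200)
fit <- arima(x, order = c(2, 1, 2))
coef(fit) / sqrt(diag(vcov(fit)))   # approximate t-statistics (coefficient / s.e.)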


David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Fast Two-Dimensional Optimization

2010-11-25 Thread Wonsang You
Thank you for your kind comments.
I tried different algorithms such as BFGS, CG, and so on. However, the
choice of algorithm was not effective in reducing computation time.

Instead, your second suggestion of coding the gradient of the minimization
function was somewhat successful in reducing computation time. It was
twice as fast as before. I appreciate your help.
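
For readers who want to try the same trick, a minimal sketch with a hypothetical
two-parameter objective (not the original problem):

fr <- function(p) (p[1] - 1)^2 + 10 * (p[2] - p[1]^2)^2            # objective
gr <- function(p) c(2 * (p[1] - 1) - 40 * p[1] * (p[2] - p[1]^2),  # d/dp[1]
                    20 * (p[2] - p[1]^2))                          # d/dp[2]
optim(c(-1, 1), fr, gr, method = "BFGS")                           # gradient passed via 'gr'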


2010/11/22 Rubén Roa r...@azti.es

 Did you try different algorithms? optim has Nelder-Mead (default), BFGS,
 simulated annealing, CG, etc.
 Depending on your problem, they can be much faster than the default.
 See
 ?optim
 and check the info for 'method'

 Check also optimx, a new wrapper for optim that can try out all methods.

 Another approach that may speed up calculations is to code for the
 gradients of the function that you're minimizing.
 Check argument 'gr' of optimx.

 HTH


 

 Dr. Rubén Roa-Ureta
 AZTI - Tecnalia / Marine Research Unit
 Txatxarramendi Ugartea z/g
 48395 Sukarrieta (Bizkaia)
 SPAIN



  -Mensaje original-
  De: r-help-boun...@r-project.org
  [mailto:r-help-boun...@r-project.org] En nombre de Wonsang You
  Enviado el: lunes, 22 de noviembre de 2010 16:17
  Para: r-help@r-project.org
  Asunto: [R] Fast Two-Dimensional Optimization
 
 
  Dear R Helpers,
 
  I have attempted optim function to solve a two-dimensional
  optimization problem. It took around 25 second to complete
  the procedure.
  However, I want to reduce the computation time: less than 7
  second. Is there any optimization function in R which is very rapid?
 
  Best Regards,
  Wonsang
 
 
  -
  Wonsang You
  Leibniz Institute for Neurobiology
  --
  View this message in context:
  http://r.789695.n4.nabble.com/R-Fast-Two-Dimensional-Optimizat
  ion-tp3053782p3053782.html
  Sent from the R help mailing list archive at Nabble.com.
 
[[alternative HTML version deleted]]
 
  __
  R-help@r-project.org mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide
  http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible code.
 


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] About searching criteria

2010-11-25 Thread Henrique Dallazuanna
Try this:

d <- data()
ne <- new.env()
data(list = grep("\\(", d$results[,'Item'], value = TRUE, invert = TRUE),
     envir = ne)
out <- eapply(ne, names)
names(which(lapply(lapply(out, '%in%', c("Run", "conc", "density")), sum) ==
            3))



On Thu, Nov 25, 2010 at 1:38 PM, Stephen Liu sati...@yahoo.com wrote:

 Hi folks,

 I need to search the dataset on data with name on heading;
 Run conc density


 I look at ??help.search and could not resolve;

 help.search(pattern, fields = c(alias, concept, title)

 What shall I replace pattern?

 I suppose replacing alias, concept, title with Run, conc,
 density ?

 Please help.  TIA

 B.R.
 Stephen L



 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40 S 49° 16' 22 O

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] moving xlabels in lattice

2010-11-25 Thread Peter Ehlers

On 2010-11-25 07:00, statquant2 wrote:


Dear R users,

I am trying to move the xlab string on my xyplot, without success, I would
like it to shifted down, would one of you know a way ?

Thanks for reading
Colin


Have a look at the possible height adjustments with

  trellis.par.get()$layout.heights

Likely candidates for adjustment are:
  xlab (space for the label)
  axis.bottom  (space between axis and bottom)
  axis.xlab.bottom (space between axis, label, and bottom)

Then try

  xyplot(rnorm(10) ~ 1:10, xlab = "the x-label",
    par.settings = list(
      layout.heights = list(
        xlab = 3
        # axis.xlab.padding = 3
        # axis.bottom = 3
      )))

Comment/uncomment to suit.

Peter Ehlers

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] aftreg vs survreg loglogistic aft model (different intercept term)

2010-11-25 Thread Immanuel

Hi, I'm estimating a loglogistic AFT (accelerated failure time) model, just a
simple plain-vanilla one (without time-dependent covariates), and I'm comparing
the results that I obtain between aftreg (eha package) and survreg (survival
package). If I don't use any covariates the results are identical; if I add
covariates, all the coefficients agree to a precision of 10^-4 or 10^-5,
except for the intercept term (in survreg) = log(scale) (in aftreg), which
agrees only to a precision of 10^-1 or 10^-2. Any idea why this happens?
Perhaps the different covariate coefficients have such a big
influence on the intercept term?
Thank you

Immanuel
-- 
View this message in context: 
http://r.789695.n4.nabble.com/aftreg-vs-survreg-loglogistic-aft-model-different-intercept-term-tp3059250p3059250.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] help: program efficiency

2010-11-25 Thread William Dunlap
If the input vector t is known to be ordered
(or if you only care about runs of duplicated
values, not all duplicated values) the following
is pretty quick

nodup3 <- function (t) { 
    t + (sequence(rle(t)$lengths) - 1)/100
}

If you don't know if the input will be ordered,
then ave() will do it a bit faster than your
code:

nodup2 <- function (t) { 
    ave(t, t, FUN = function(x) x + (seq_along(x) - 1)/100)
}

E.g., for a sorted sequence of 300,000 numbers drawn with
replacement from 1:100,000 I get:

> a2 <- sort(sample(1:1e5, size=3e5, replace=TRUE))
> system.time(v <- nodup(a2))
   user  system elapsed 
   2.78    0.05    3.97 
> system.time(v2 <- nodup2(a2))
   user  system elapsed 
   1.83    0.02    2.66 
> system.time(v3 <- nodup3(a2))
   user  system elapsed 
   0.18    0.00    0.14 
> identical(v,v2) && identical(v,v3)
[1] TRUE

If speed is truly an issue, the built-in sequence may
be replaced by a faster one that does the same thing:

nodup3a <- function (t) {
    faster.sequence <- function(nvec) {
        seq_len(sum(nvec)) - rep(cumsum(c(0L, nvec[-length(nvec)])), 
                                 nvec)
    }
    t + (faster.sequence(rle(t)$lengths) - 1)/100
}

That took 0.05 seconds on the a2 dataset and produced
identical results.

Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com  

 -Original Message-
 From: r-help-boun...@r-project.org 
 [mailto:r-help-boun...@r-project.org] On Behalf Of randomcz
 Sent: Thursday, November 25, 2010 6:49 AM
 To: r-help@r-project.org
 Subject: [R] help: program efficiency
 
 
 hey guys,
 
 I am working on a function to make a duplicated value unique. 
 For example,
 the original vector would be like : a = c(2,1,1,3,3,3,4)
 I'll like to transform it into:
 a.nodup = 2, 1.01, 1.02, 3.01, 3.02, 3.03, 4
 basically, find the duplicates and assign a unique value by 
 adding a small
 amount and keep it in order.
 I come up with the following codes, but it runs slow if t is 
 large. Is there
 a better way to do it?
 nodup = function(t)
 {
   t.index=0
   t.dup=duplicated(t)
   for (i in 2:length(t))
   {
 if (t.dup[i]==T)
   t.index=t.index+0.01
 else t.index=0
 t[i]=t[i]+t.index
   }
   return(t)
 }
 
 
 -- 
 View this message in context: 
 http://r.789695.n4.nabble.com/help-program-efficiency-tp305907
9p3059079.html
 Sent from the R help mailing list archive at Nabble.com.
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide 
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Is there an equivalent to predict(..., type=linear) of a Proportional hazard model for a Cox model instead?

2010-11-25 Thread David Winsemius


On Nov 25, 2010, at 10:08 AM, Ben Rhelp wrote:


Hi David,

Thank you for your reply. See below for more information.



From: David Winsemius




On Nov 25, 2010, at 7:27 AM, Ben Rhelp wrote:

I manage to  achieve similar results with a Cox model as follows  
but I don't
really  understand why we have to take the inverse of the linear  
prediction

with

the Cox model


Different parameterization. You can find expanded answer(s)  in the  
archives

and in the documentation of  survreg.distributions.



I understand (i think) the difference in model structures between a  
Cox (coxph)

and Proportional hazard model (survreg).


I couldn't tell whether this means you decided that those citations  
answered your question. If not, then refer to Therneau's or Lumley's  
replies in rhelp to a similar question earlier this month.:


https://stat.ethz.ch/pipermail/r-help/2010-November/259796.html
https://stat.ethz.ch/pipermail/r-help/2010-November/259747.html







and why we do not need to divide by the  number of days in the year
anymore?


Here I'm guessing (since you  don't offer enough evidence to  
confirm) that the
difference is in the time  scales used in your Aidsp$survtime  
versus some other

example to which you are  comparing .


Both models are run from the same data, so I am not expecting any  
differences in

time scales.

To get similar results, I need to actually run the following  
equations:
expected_lifetime_in_years = exp(fit)/365.25---  
Linear

prediction of the Proportional hazard model
expected_lifetime_in_years = 1/exp(fit) 
--- Linear

prediction of the Cox model
where fit come from the linear prediction of each models,  
respectively.


Actually, in the code below, I re-run the models and predictions  
based on a

yearly sampling time (instead of daily).
Again, to get similar results, I now need to actually run the  
following

equations:
expected_lifetime_in_years = exp(fit)
--- Linear

prediction of the Proportional hazard model
expected_lifetime_in_years = 1/exp(fit) 
--- Linear

prediction of the Cox model

I think I understand the logic behind the results of the  
proportional hazard

model, but not for the prediction of the Cox model.


 Cox models are PH models.



Thank you for your help. I hope this is not a too stupid hole in my  
logic.


Here is the self contained R code to produce the charts:

library(MASS);
library(survival);

#Same data but parametric fit
make.aidsp - function(){
  cutoff - 10043 # July 1987 in julian days
  btime - pmin(cutoff, Aids2$death) - pmin(cutoff, Aids2$diag)
  atime - pmax(cutoff, Aids2$death) - pmax(cutoff, Aids2$diag)
  survtime - btime + 0.5*atime
  status - as.numeric(Aids2$status)
  data.frame(survtime, status = status - 1, state = Aids2$state,
  T.categ = Aids2$T.categ, age = Aids2$age, sex = Aids2$sex)
}
Aidsp - make.aidsp()

# MASS example with the proportional hazard model
par(mfrow = c(1, 2));
(aids.ps - survreg(Surv(survtime + 0.9, status) ~ state + T.categ +
pspline(age, df=6), data = Aidsp))
zz - predict(aids.ps, data.frame(state = factor(rep(NSW, 83),  
levels =

levels(Aidsp$state)),
   T.categ = factor(rep(hs, 83), levels = levels(Aidsp$T.categ)),  
age =

0:82), se = T, type = linear)
plot(0:82, exp(zz$fit)/365.25, type = l, ylim = c(0, 2), xlab =  
age, ylab =

expected lifetime (years))
lines(0:82, exp(zz$fit+1.96*zz$se.fit)/365.25, lty = 3, col = 2)
lines(0:82, exp(zz$fit-1.96*zz$se.fit)/365.25, lty = 3, col = 2)
rug(Aidsp$age + runif(length(Aidsp$age), -0.5, 0.5), ticksize = 0.015)


# Same example but with a Cox model instead
(aids.pscp - coxph(Surv(survtime + 0.9, status) ~ state + T.categ +
pspline(age, df=6), data = Aidsp))
zzcp - predict(aids.pscp, data.frame(state = factor(rep(NSW, 83),  
levels =

levels(Aidsp$state)),
   T.categ = factor(rep(hs, 83), levels = levels(Aidsp$T.categ)),  
age =

0:82), se = T, type = lp)
plot(0:82, 1/exp(zzcp$fit), type = l, ylim = c(0, 2), xlab =  
age, ylab =

expected lifetime (years))
lines(0:82, 1/exp(zzcp$fit+1.96*zzcp$se.fit), lty = 3, col = 2)
lines(0:82, 1/exp(zzcp$fit-1.96*zzcp$se.fit), lty = 3, col = 2)
rug(Aidsp$age + runif(length(Aidsp$age), -0.5, 0.5), ticksize = 0.015)


# Change the sampling time from daily to yearly
par(mfrow = c(1, 1));
(aids.ps - survreg(Surv((survtime + 0.9)/365.25, status) ~ state +  
T.categ +

pspline(age, df=6), data = Aidsp))
zz - predict(aids.ps, data.frame(state = factor(rep(NSW, 83),  
levels =

levels(Aidsp$state)),
   T.categ = factor(rep(hs, 83), levels = levels(Aidsp$T.categ)),  
age =

0:82), se = T, type = linear)
plot(0:82, exp(zz$fit), type = l, ylim = c(0, 2), xlab = age,  
ylab =

expected lifetime (years))

(aids.pscp - coxph(Surv((survtime + 0.9)/365.25, status) ~ state +  
T.categ +

pspline(age, df=6), data = Aidsp))
zzcp - predict(aids.pscp, data.frame(state = factor(rep(NSW, 83),  
levels =

levels(Aidsp$state)),
   

Re: [R] Is there an implementation for URL Encoding (/format) in R?

2010-11-25 Thread Duncan Temple Lang


On 11/25/10 7:53 AM, Tal Galili wrote:
 Hello all,
 
 I would like some R function that can translate a string to a URL encoding
 (see here: http://www.w3schools.com/tags/ref_urlencode.asp)
 
 Is it implemented? (I wasn't able to find any reference to it)


I expect there are several implementations, spread across
different packages.  The function curlEscape() in RCurl is one.

 D.


 
 Thanks,
 Tal
 
 
 
 
 
 Contact
 Details:---
 Contact me: tal.gal...@gmail.com |  972-52-7275845
 Read me: www.talgalili.com (Hebrew) | www.biostatistics.co.il (Hebrew) |
 www.r-statistics.com (English)
 --
 
   [[alternative HTML version deleted]]
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] difficulty setting the random = argument to lme()

2010-11-25 Thread Dennis Murphy
Hi Robert:

It appears to me that you have a split-plot structure, so let me see if I
have it right.

The 'whole-plot' experiment looks like a replicated randomized block design
- the studies are the blocks, the treatments A and B are the whole-plot
treatments, each of which is assigned randomly to three subjects...basically
the 'between-subjects' part of the design.

The within-subject treatment factor is site (let's call them 1 and 2 to
avoid confusion with the treatment labels), and it makes sense to me that
the two measurements are correlated within animal, but I don't quite see why
it makes sense that they should be correlated between animals not treated
alike in the same study. I'm not in pharma, so I might learn something by
asking.  I tend to look at the means model part of a design with ANOVA just
because I'm old:

Study                 9
Treatment             1
Study * treatment     9
Subjects             40   (whole-plot error)

Site                  1
Site * Study          9
Site * Treatment      1
Site * Study * Trt    9
Residual             40   (split-plot error)

Study and subjects can reasonably be thought of as random effects. If the
same sites are chosen for each subject, they would have to be fixed. The
interactions with study and subject are random. I agree with the fixed
effects specification so far, but some of the random interactions aren't
obvious to me, although I can understand how they arise from the structure
of the data.

A couple of questions:
(1) Are the studies expected to be correlated, and if so, was that the
motivation for the replicate subjects per treatment level? I'm rather
accustomed to blocks representing independent replications of the
experiment, which I would have expected by the use of different subjects in
each study. Or is all of this a necessary precaution in the clinical trial?

(2) Since sites were the same for each subject, I can see the within-subject
correlation and between-subject correlation for subjects in the same study
with the same treatment, but I'm curious as to why the Site * Study
interaction is relevant.

The whole-plot part of this is pretty easy, but I think I (and perhaps
others) would need some data to play with to work on getting the random
effects and correlation structure specified properly. It also appears you
will need to use lme4 rather than nlme for this problem due to the crossed
random effects, which lme() can't handle. That may be the source of your
trouble :)
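
A speculative sketch of the lme4 direction (variable names are hypothetical; this
would give per-site variances and correlations for the study and study-by-treatment
terms, but not the site-specific residual variances, which lmer does not model
directly):

library(lme4)
fit <- lmer(y ~ Treatment * Site +
              (0 + Site | Study) +            # study effects at each site, plus their correlation
              (0 + Site | Study:Treatment),   # study*treatment effects at each site, plus correlation
            data = dat)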

This is certainly an interesting problem; thanks for sharing it. I might
also suggest that this be taken to the r-sig-mixed-models list, where you
are likely to have more people who are interested in this type of problem.

Cheers,
Dennis

On Thu, Nov 25, 2010 at 5:00 AM, Robert Kinley kinley_rob...@lilly.comwrote:

 My small brain is having trouble getting to grips with lme()

 I wonder if anyone can help me correctly set the   random = argument
 to lme() for this kind of setup with  (I think) 9  variance/covariance
 components ...

                           Study.1    Study.2   ...   Study.10
   Treatment.A:  subject:   1  2  3    4  5  6   etc.  28 29 30
   Treatment.B:  subject:  31 32 33   34 35 36         58 59 60

   A variable is measured at 2 fixed sites (A and B) on each subject

 so we have fixed effects :-

  between-Treatments
  between-sites (A and B)
  Treatment*site interaction


 and we have random effects :-

  study effects at site A
  study effects at site B
  correlation between site A and site B study effects

  study*treatment interaction effects at site A
  study*treatment interaction effects at site B
  correlation between site A and B study*treatment interaction effects

  residual (between-subject) effects at site A
  residual (between-subject) effects at site B
  correlation between site A and B residuals (between-subject) effects

 My problem is formulating the  random = argument to give estimates
 of all 9 random components ...

 Hope someone can help ...

  Robert Kinley




 Study:  Pos tissue VC, Neg tissue VC, Pos/Neg tissue
 correlation
 Study*Group:Pos tissue VC, Neg tissue VC, Pos/Neg tissue
 correlation
 Residual (animal):  Pos tissue VC, Neg tissue VC, Pos/Neg tissue
 correlation



[[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Lifting Wavelet Transform

2010-11-25 Thread Dennis Murphy
Hi:

Package sos is a wonderful tool for this sort of question...

library(sos)
findFn('lifting wavelet transform')

It returned nine possible matches in two packages, adlift and nlt.

HTH,
Dennis

On Wed, Nov 24, 2010 at 8:46 PM, assaedi76 assaedi76 assaed...@yahoo.comwrote:

 Hi R users

 Thanks in advance

 Is lifting wavelet transform implemented in R? If so, which package or
 codes  can be used for performing that?

 assaed...@yahoo.com

 Thanks





[[alternative HTML version deleted]]


 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] delete-d jackknife

2010-11-25 Thread ufuk beyaztas

Thank you so much :)
-- 
View this message in context: 
http://r.789695.n4.nabble.com/delete-d-jackknife-tp3058335p3059364.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Is there an equivalent to predict(..., type=linear) of a Proportional hazard model for a Cox model instead?

2010-11-25 Thread David Winsemius
I hit the send button on my second reply before I intended to. Since  
then I have noticed that the question I thought you were asking is not  
at all a good match to the Subject line of your message. There is a  
type = "lp" in predict.coxph and that is the linear predictor, although  
it is not a value that is on any transformation of time as would be  
the linear predictor of an accelerated failure time estimate. If you  
are looking for another method (in addition to predict.coxph) to  
produce expected survival from a coxph fit, you could also look at  
survexp. The ratetable argument accepts a fitted Cox model.


I have still not found a good answer to the question that I thought  
the body of your second posting was posing: namely why days and years  
are not handled the same in predict.survreg and predict.coxph.


--
David.
On Nov 25, 2010, at 12:35 PM, David Winsemius wrote:



On Nov 25, 2010, at 10:08 AM, Ben Rhelp wrote:


Hi David,

Thank you for your reply. See below for more information.



From: David Winsemius




On Nov 25, 2010, at 7:27 AM, Ben Rhelp wrote:

I manage to  achieve similar results with a Cox model as follows  
but I don't
really  understand why we have to take the inverse of the linear  
prediction

with

the Cox model


Different parameterization. You can find expanded answer(s)  in  
the archives

and in the documentation of  survreg.distributions.



I understand (i think) the difference in model structures between a  
Cox (coxph)

and Proportional hazard model (survreg).


I couldn't tell whether this means you decided that those citations  
answered your question. If not, then refer to Therneau's or Lumley's  
replies in rhelp to a similar question earlier this month.:


https://stat.ethz.ch/pipermail/r-help/2010-November/259796.html
https://stat.ethz.ch/pipermail/r-help/2010-November/259747.html







and why we do not need to divide by the  number of days in the year
anymore?


Here I'm guessing (since you  don't offer enough evidence to  
confirm) that the
difference is in the time  scales used in your Aidsp$survtime  
versus some other

example to which you are  comparing .


Both models are run from the same data, so I am not expecting any  
differences in

time scales.

To get similar results, I need to actually run the following  
equations:
expected_lifetime_in_years = exp(fit)/365.25--- 
 Linear

prediction of the Proportional hazard model
expected_lifetime_in_years = 1/exp(fit) 
--- Linear

prediction of the Cox model
where fit come from the linear prediction of each models,  
respectively.


Actually, in the code below, I re-run the models and predictions  
based on a

yearly sampling time (instead of daily).
Again, to get similar results, I now need to actually run the  
following

equations:
expected_lifetime_in_years = exp(fit)
--- Linear

prediction of the Proportional hazard model
expected_lifetime_in_years = 1/exp(fit) 
--- Linear

prediction of the Cox model

I think I understand the logic behind the results of the  
proportional hazard

model, but not for the prediction of the Cox model.


Cox models are PH models.



Thank you for your help. I hope this is not a too stupid hole in my  
logic.


Here is the self contained R code to produce the charts:

library(MASS);
library(survival);

#Same data but parametric fit
make.aidsp - function(){
 cutoff - 10043 # July 1987 in julian days
 btime - pmin(cutoff, Aids2$death) - pmin(cutoff, Aids2$diag)
 atime - pmax(cutoff, Aids2$death) - pmax(cutoff, Aids2$diag)
 survtime - btime + 0.5*atime
 status - as.numeric(Aids2$status)
 data.frame(survtime, status = status - 1, state = Aids2$state,
 T.categ = Aids2$T.categ, age = Aids2$age, sex = Aids2$sex)
}
Aidsp - make.aidsp()

# MASS example with the proportional hazard model
par(mfrow = c(1, 2));
(aids.ps - survreg(Surv(survtime + 0.9, status) ~ state + T.categ +
pspline(age, df=6), data = Aidsp))
zz - predict(aids.ps, data.frame(state = factor(rep(NSW, 83),  
levels =

levels(Aidsp$state)),
  T.categ = factor(rep(hs, 83), levels = levels(Aidsp$T.categ)),  
age =

0:82), se = T, type = linear)
plot(0:82, exp(zz$fit)/365.25, type = l, ylim = c(0, 2), xlab =  
age, ylab =

expected lifetime (years))
lines(0:82, exp(zz$fit+1.96*zz$se.fit)/365.25, lty = 3, col = 2)
lines(0:82, exp(zz$fit-1.96*zz$se.fit)/365.25, lty = 3, col = 2)
rug(Aidsp$age + runif(length(Aidsp$age), -0.5, 0.5), ticksize =  
0.015)



# Same example but with a Cox model instead
(aids.pscp - coxph(Surv(survtime + 0.9, status) ~ state + T.categ +
pspline(age, df=6), data = Aidsp))
zzcp - predict(aids.pscp, data.frame(state = factor(rep(NSW,  
83), levels =

levels(Aidsp$state)),
  T.categ = factor(rep(hs, 83), levels = levels(Aidsp$T.categ)),  
age =

0:82), se = T, type = lp)
plot(0:82, 1/exp(zzcp$fit), type = l, ylim = c(0, 2), xlab =  
age, ylab =

expected lifetime (years))
lines(0:82, 

[R] [libsvm] predict function error

2010-11-25 Thread zhliu.tju
Dear R users,

There is an error message when I run the following code. It is used to load
microarray data and use the top 1000 genes for training an SVM to classify the
test set.

> library(e1071)
Loading required package: class
> f <- read.table("F:\\lab\\microarray analysis\\VEH LPS\\exprs.txt",
sep="\t", header=FALSE,
col.names=c("Gene","VEH","VEH","VEH","VEH","VEH","LPS","LPS","LPS","LPS","LPS"))
> tf <- t(f)
> colnames(tf) <- tf[1,]
> tf <- tf[-1,]
> tf <- as.data.frame(tf)
> array <- apply(tf[,2:24928], c(1,2), as.numeric)
> label <- as.factor(tf$ENTREZGENE)

> n <- nrow(array) # get sample number
> permutation <- sample(1:n)
> array.perm <- array[permutation,] # random permutation of samples
> label.perm <- label[permutation] # same permutation of labels

> k <- 5 # set cross validation steps

> for (i in 1:k){
+ win <- round(n/k) # size of window
+ cv <- ((i-1)*win+1):min(n,i*win) # move window
+
+ CV.label.test <- label.perm[cv]
+ CV.label.train <- label.perm[-cv]
+
+ CV.train.neg.num <- nrow(as.data.frame(CV.label.train[CV.label.train=="VEH"]))
+ CV.train.pos.num <- nrow(as.data.frame(CV.label.train[CV.label.train=="LPS"]))
+ CV.train.num <- CV.train.pos.num + CV.train.neg.num
+
+ CV.test <- array.perm[cv,] # samples within window
+ CV.train <- array.perm[-cv,] # samples outside window
+
+ CV.train.pos <- as.data.frame(CV.train)[as.data.frame(CV.label.train)=="LPS",] # matrix of positive samples
+ CV.train.pos.mean <- apply(CV.train.pos,2,mean) # compute mean of each column (gene)
+ CV.train.pos.var <- apply(CV.train.pos,2,var) # compute variance of each column (gene)
+
+ CV.train.neg <- as.data.frame(CV.train)[as.data.frame(CV.label.train)=="VEH",] # matrix of negative samples
+ CV.train.neg.mean <- apply(CV.train.neg,2,mean) # compute mean of each column (gene)
+ CV.train.neg.var <- apply(CV.train.neg,2,var)
+
+ tscore <- (abs(CV.train.pos.mean-CV.train.neg.mean)/sqrt((CV.train.pos.num-1)*CV.train.pos.var+(CV.train.neg.num-1)*CV.train.neg.var))*sqrt(CV.train.pos.num*CV.train.neg.num*(CV.train.num-2)/CV.train.num)
+ index <- order(tscore, decreasing=TRUE)
+ CV.train <- CV.train[,index[1:100]]
+
+ svm.train <- svm(CV.train, CV.label.train, type="C-classification",
kernel="linear", cross=5, scale=TRUE)
+ }
> pred <- predict(svm.train, CV.test)
Error in scale.default(newdata[, object$scaled, drop = FALSE], center =
object$x.scale$scaled:center,  :
  length of 'center' must equal the number of columns of 'x'

I wonder what the reason for this kind of error is.

 sessionInfo()
R version 2.11.1 (2010-05-31)
i386-pc-mingw32

locale:
[1] LC_COLLATE=Chinese_People's Republic of China.936
[2] LC_CTYPE=Chinese_People's Republic of China.936
[3] LC_MONETARY=Chinese_People's Republic of China.936
[4] LC_NUMERIC=C
[5] LC_TIME=Chinese_People's Republic of China.936

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

other attached packages:
[1] e1071_1.5-24 class_7.3-2

Thanks a lot!

-- 
Best Regards,

Zhe Liu

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] help: program efficiency

2010-11-25 Thread Mike Marchywka





 Date: Thu, 25 Nov 2010 06:49:19 -0800
 From: rando...@gmail.com
 To: r-help@r-project.org
 Subject: [R] help: program efficiency


 hey guys,

 I am working on a function to make a duplicated value unique. For example,
 the original vector would be like : a = c(2,1,1,3,3,3,4)
 I'll like to transform it into:
 a.nodup = 2, 1.01, 1.02, 3.01, 3.02, 3.03, 4
 basically, find the duplicates and assign a unique value by adding a small
 amount and keep it in order.
 I come up with the following codes, but it runs slow if t is large. Is there
 a better way to do it?


I guess I'd just make a vector of uniform or even normal random numbers
and add it to your input vector. This of course is not guaranteed to produce
unique values, and it also perturbs values that were already unique, but you can
test and repeat, and it is probably closer to what you want; I am only
speculating on your objectives, though.
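
A tiny sketch of that idea (hypothetical jitter size; test and repeat until no
duplicates remain):

a <- c(2, 1, 1, 3, 3, 3, 4)
repeat {
  a.jit <- a + runif(length(a), 0, 0.009)   # small random perturbation
  if (!any(duplicated(a.jit))) break        # retry on the (unlikely) collision
}
a.jit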


 nodup = function(t)
 {
 t.index=0
 t.dup=duplicated(t)
 for (i in 2:length(t))
 {
 if (t.dup[i]==T)
 t.index=t.index+0.01
 else t.index=0
 t[i]=t[i]+t.index
 }
 return(t)
 }


 --
 View this message in context: 
 http://r.789695.n4.nabble.com/help-program-efficiency-tp3059079p3059079.html
 Sent from the R help mailing list archive at Nabble.com.

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
   
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] R and R-related meetups at UCLA

2010-11-25 Thread Jan de Leeuw
UCLA Statistics is hosting three meetup groups that are either straight R (361 
members)
or R-related (GRASS, which interacts directly with R, and GPGPU, which will look
at interfacing R with OpenMP, GCD, OpenCL, CUDA, ...)

http://www.meetup.com/LAarea-R-usergroup
http://www.meetup.com/Los-Angeles-Area-GRASS-Users-Group/
http://www.meetup.com/SoCal-GPGPU-and-Commodity-Parallel-Programming-Group/

If you are local SoCal, join, drop in.

===
Jan de Leeuw; Distinguished Professor and Chair, UCLA Department of Statistics;
Director: UCLA Center for Environmental Statistics (CES);
Editor: Journal of Multivariate Analysis, Journal of Statistical Software;
US mail: 8125 Math Sciences Bldg, Box 951554, Los Angeles, CA 90095-1554
phone (310)-825-9550;  fax (310)-206-5658;  email: dele...@stat.ucla.edu
.mac: jdeleeuw ++  aim: deleeuwjan ++ skype: j_deleeuw
homepages: http://gifi.stat.ucla.edu ++ http://www.cuddyvalley.org
 
-
  No matter where you go, there you are. --- Buckaroo Banzai
   http://gifi.stat.ucla.edu/sounds/nomatter.au 
 
 
-


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Cumsum with a max and min value

2010-11-25 Thread henrique
I have a vector of values -1, 0, 1, say

 

a <- c(0, 1, 0, 1, 1, -1, -1, -1, 0, -1, -1)

 

I want to create a vector of the cumulative sum of this, but I need to set a
maximum and minimum value for the cumsum, for example:

 

max_value <- 2

min_value <- -2

 

the expected result would be (0, 1, 1, 2, 2, 1, 0, -1, -1, -2, -2)

 

The only way I managed to do it was:

 

res <- vector(length=length(a))

res[1] <- a[1]

for ( i in 2:length(a)) res[i] <- res[i-1] + a[i] * (( res[i-1] < max_value
 & a[i] > 0 ) | ( res[i-1] > min_value & a[i] < 0 ))

 

 

This is certainly not the best way to do it, so any suggestions?

 

 

Henrique


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Go (back) from Rd to roxygen

2010-11-25 Thread Yihui Xie
Hi all,

Since roxygen is a great help to document R packages, I am wondering
if there exists an approach to go back from the raw Rd files to
roxygen-documentation? E.g. turn \author{Somebody} into @author
Somebody. This sounds ridiculous, but I believe it helps in the long
term for me to maintain R packages.
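
A tiny sketch of the mechanical rewrite for one simple tag (regular-expression
based, fragile, and only for illustration; real Rd files need proper parsing):

rd_line <- "\\author{Somebody}"
sub("\\\\author\\{([^}]*)\\}", "#' @author \\1", rd_line)
# yields "#' @author Somebody"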

Thanks!

Regards,
Yihui
--
Yihui Xie xieyi...@gmail.com
Phone: 515-294-2465 Web: http://yihui.name
Department of Statistics, Iowa State University
2215 Snedecor Hall, Ames, IA

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How to bin/average time points?

2010-11-25 Thread DonDolowy

Dear all,

I am pretty new to R, only having had an introduction course, so please bear with
me. I am doing my PhD at The Max Planck Institute of Immunobiology, where I
am analyzing some calorimetry data from some mice. 
I have a spreadsheet consisting of measurements of the respiratory exchange
rate at different time points, measured every 9 minutes over some days.
My goal is to bin/average the time points of each hour down to a single
measurement. 

E.g.
[Time] - [Measurement]
12.09 - 0.730
12.18 - 0.732
12.27 - 0.743
12.36 - 0.757
12.45 - 0.781
12.54 - 0.731
-- should be averaged to fx one time point and one value, fx:
12.30 - [average of the six measurements]

I know how to average the measurements in a whole column but how to average
every six measurements automatically and also how to average every six time
points and make a new sheet consisting of these data?

I hope you guys are able to help, since we are really stuck here. I can of
course do it manually but with 8000 measurements it will take lots of time.

Thank you very much.

Best regards,
Kevin Dalgaard
-- 
View this message in context: 
http://r.789695.n4.nabble.com/How-to-bin-average-time-points-tp3059509p3059509.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Cumsum with a max and min value

2010-11-25 Thread Henrique Dallazuanna
Try this:

ac <- cumsum(a)
ifelse(ac > 2, max_value, ifelse(ac < -2, min_value, ac))


On Thu, Nov 25, 2010 at 6:44 PM, henrique henri...@allianceasset.com.br wrote:

 I have a vector of values -1, 0, 1, say



 a <- c(0, 1, 0, 1, 1, -1, -1, -1, 0, -1, -1)



 I want to create a vector of the cumulative sum of this, but I need to set
 a
 maximum and minimum value for the cumsum, for example:



 max_value <- 2

 min_value <- -2



 the expected result would be (0, 1, 1, 2, 2, 1, 0, -1, -1, -2, -2)



 The only way I managed to do It, was



 res <- vector(length=length(a))

 res[1] <- a[1]

 for ( i in 2:length(a)) res[i] <- res[i-1] + a[i] * (( res[i-1] < max_value
  & a[i] > 0 ) | ( res[i-1] > min_value & a[i] < 0 ))





 This is certainly not the best way to do it, so any suggestions?





 Henrique


[[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40" S 49° 16' 22" O

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Problem with plotting diagnostics - Error in object$coefficients : $ operator is invalid for atomic vectors

2010-11-25 Thread Peter Ehlers

On 2010-11-25 02:25, Manderscheid Katharina wrote:

this problem seems to only exist in R 2.12.0 but not in R  2.11.1.
any ideas? a bug?



Duncan *did* say that he was using  R 2.12.0. So that's not likely
to be the problem. Most of the time, when users claim that a
problem exists in a new version that did not exist in an older
version, it's due to a change in the user's setup or to not
updating packages or to not checking the NEWS file.

Since you still have not provided a *reproducible* example,
it's not likely that anyone can help. Can't you make up a
small example that shows exactly how you are using lm()
and that will generate the error?

Peter Ehlers



--
dr. katharina manderscheid

soziologisches seminar
universität luzern

kasernenplatz 3
6000 luzern 7

tel. ++41 41 228 4657

web: http://www.unilu.ch/deu/dr.-katharina-manderscheid_346380.aspx

-Ursprüngliche Nachricht-
Von: Duncan Murdoch [mailto:murdoch.dun...@gmail.com]
Gesendet: Mittwoch, 17. November 2010 16:33
An: Manderscheid Katharina
Cc: 'r-help@r-project.org'
Betreff: Re: [R] Problem with plotting diagnostics - Error in 
object$coefficients : $ operator is invalid for atomic vectors

On 17/11/2010 10:28 AM, Manderscheid Katharina wrote:

hi all,
after fitting a multiple linear regression
model- lm(y ~ a + b+ c+d)
i wanted to plot diagnostics
plot(model)
but get the error message
Error in object$coefficients : $ operator is invalid for atomic vectors.
which does not make a lot of sense, since there is no $ - i am working with 
an attached dataset.
can anyone help, please??
thanks a lot,
kat



I just tried those lines (with fake data for a,b,c,d and y) and got no error 
message.  I was using R 2.12.0.

I think you need to show us a reproducible example, and the
sessionInfo() to go with it, to help with this.

Duncan Murdoch

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Apropos the day...

2010-11-25 Thread Dennis Murphy
Since today is American Thanksgiving, I want to thank:

(a) R-core for all of their efforts to produce what is, IMHO,
 the best statistical software around, not simply for the
 convenience of doing more, better, quicker, but also
 because it changes the landscape in the way one
 thinks about data analysis. It's a fantastic playground
 for a statistician and I enjoy using it immensely.
(b) All the folks on R-help who have asked interesting,
  thought-provoking questions and those who have
  provided even more thought-provoking and insightful
  answers. I have learned so much from this forum in
  the year or so since I rejoined it and I want to express
  my appreciation for that to all of you.
(c) All the package developers who expand the capabilities
  of the software and/or simplify the workflow - you
  do a great service to the community with your efforts.
(d) All of you who have written books or tutorials about R
  and those who host R-related web sites for your efforts
  to transfer your cumulative knowledge to the community.
(e) All those kindly people who do yeoman work in the
  background to make this list and project run more smoothly
  and productively.
(f) Ross Ihaka and Robert Gentleman, without whom
  this community would not exist.
(g) Anyone else I forgot who deserves some props for their
  work in making this one of the top open-source
  success stories around.

To the Americans on this list, Happy Thanksgiving, and to
the rest of you, thank you for making this such a vibrant
professional community.

Best regards to all,
Dennis

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Is there an implementation for URL Encoding (/format) in R?

2010-11-25 Thread Tal Galili
Thank you Duncan, Romain and Gustavo for referring me to both:
URLencode
and
curlEscape

I see that both functions work great for English, but fail to provide with
the proper translation for Hebrew characters.

For example, the word
שלום
(Peace, in Hebrew)
Should be this:
%D7%A9%D7%9C%D7%95%D7%9D
But instead, both commands translate it to:
URLencode("שלום")
%f9%ec%e5%ed

What do you suggest?  (write it myself, or is there something pre-made)

Thanks again for the help,
Tal



Contact
Details:---
Contact me: tal.gal...@gmail.com |  972-52-7275845
Read me: www.talgalili.com (Hebrew) | www.biostatistics.co.il (Hebrew) |
www.r-statistics.com (English)
--




On Thu, Nov 25, 2010 at 5:59 PM, Gustavo Carvalho
gustavo.bi...@gmail.com wrote:

 URLencode

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Apropos the day...

2010-11-25 Thread Spencer Graves

ditto.  Spencer Graves


On 11/25/2010 1:20 PM, Dennis Murphy wrote:

Since today is American Thanksgiving, I want to thank:

(a) R-core for all of their efforts to produce what is, IMHO,
  the best statistical software around, not simply for the
  convenience of doing more, better, quicker, but also
  because it changes the landscape in the way one
  thinks about data analysis. It's a fantastic playground
  for a statistician and I enjoy using it immensely.
(b) All the folks on R-help who have asked interesting,
   thought-provoking questions and those who have
   provided even more thought-provoking and insightful
   answers. I have learned so much from this forum in
   the year or so since I rejoined it and I want to express
   my appreciation for that to all of you.
(c) All the package developers who expand the capabilities
   of the software and/or simplify the workflow - you
   do a great service to the community with your efforts.
(d) All of you who have written books or tutorials about R
   and those who host R-related web sites for your efforts
   to transfer your cumulative knowledge to the community.
(e) All those kindly people who do yeoman work in the
   background to make this list and project run more smoothly
   and productively.
(f) Ross Ihaka and Robert Gentleman, without whom
   this community would not exist.
(g) Anyone else I forgot who deserves some props for their
   work in making this one of the top open-source
   success stories around.

To the Americans on this list, Happy Thanksgiving, and to
the rest of you, thank you for making this such a vibrant
professional community.

Best regards to all,
Dennis

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.





--
Spencer Graves, PE, PhD
President and Chief Operating Officer
Structure Inspection and Monitoring, Inc.
751 Emerson Ct.
San José, CA 95126
ph:  408-655-4567

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Apropos the day...

2010-11-25 Thread Paul

I'll second that.

Paul Hurley.

On 25/11/10 21:25, Spencer Graves wrote:

ditto.  Spencer Graves


On 11/25/2010 1:20 PM, Dennis Murphy wrote:

Since today is American Thanksgiving, I want to thank:

(a) R-core for all of their efforts to produce what is, IMHO,
  the best statistical software around, not simply for the
  convenience of doing more, better, quicker, but also
  because it changes the landscape in the way one
  thinks about data analysis. It's a fantastic playground
  for a statistician and I enjoy using it immensely.
(b) All the folks on R-help who have asked interesting,
   thought-provoking questions and those who have
   provided even more thought-provoking and insightful
   answers. I have learned so much from this forum in
   the year or so since I rejoined it and I want to express
   my appreciation for that to all of you.
(c) All the package developers who expand the capabilities
   of the software and/or simplify the workflow - you
   do a great service to the community with your efforts.
(d) All of you who have written books or tutorials about R
   and those who host R-related web sites for your efforts
   to transfer your cumulative knowledge to the community.
(e) All those kindly people who do yeoman work in the
   background to make this list and project run more smoothly
   and productively.
(f) Ross Ihaka and Robert Gentleman, without whom
   this community would not exist.
(g) Anyone else I forgot who deserves some props for their
   work in making this one of the top open-source
   success stories around.

To the Americans on this list, Happy Thanksgiving, and to
the rest of you, thank you for making this such a vibrant
professional community.

Best regards to all,
Dennis

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html

and provide commented, minimal, self-contained, reproducible code.







__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] question about importing phylogenetic tree

2010-11-25 Thread Nora Szabo
Hello,

I am trying to import a phylogenetic tree from Mesquite into R.  When I use
the read.nexus command I get the following message:

Warning message:
In matrix(x, ncol = 2, byrow = TRUE) :
  data length [589] is not a sub-multiple or multiple of the number of rows
[295]

A phylo object is created but I am unable to plot it (when I try R freezes)
and I can tell by looking at the tip labels that R is not interpreting the
tree properly (in addition to my taxa names I get a bunch of numbers as tip
labels).

I also tried downloaded RMesquite, however it failed and I got the following
message:

Warning: unable to access index for repository
http://R-Forge.R-project.org/bin/windows/contrib/2.11
Warning message:
In getDependencies(pkgs, dependencies, available, lib) :
  package ‘RMesquite’ is not available

Any suggestions would be greatly appreciated.

Thank you.

Nora

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] coxph strange result

2010-11-25 Thread Bond, Stephen
The following fit does not make sense to me, please, correct me if I have a 
logical error.

 moddowsn
Call:
coxph(formula = Surv(start, stop, resp) ~ sn + matfac2, data = coxsn1,
method = "efron")


coef exp(coef) se(coef)   z   p
sn2   0.0497 1.051  0.02030   2.450 1.4e-02
sn3  -0.0532 0.948  0.02038  -2.610 9.0e-03
sn4  -0.0410 0.960  0.01979  -2.073 3.8e-02
sn5  -0.0776 0.925  0.01954  -3.973 7.1e-05
sn6  -0.1133 0.893  0.01839  -6.161 7.2e-10
sn7  -0.1252 0.882  0.01846  -6.781 1.2e-11
sn8  -0.1222 0.885  0.01994  -6.130 8.8e-10
sn9  -0.0507 0.951  0.02047  -2.478 1.3e-02
sn10 -0.0444 0.957  0.02056  -2.159 3.1e-02
sn11 -0.0433 0.958  0.02157  -2.008 4.5e-02
sn12 -0.0114 0.989  0.02037  -0.557 5.8e-01
matfac22 -0.2599 0.771  0.01727 -15.048 0.0e+00
matfac25 -0.1804 0.835  0.00924 -19.512 0.0e+00

Likelihood ratio test=651  on 13 df, p=0  n= 253802

This would indicate that in sn6 to sn8 there is less of a chance of an event. 
?? do the relative frequencies implied by the following table make any sense??

 table(coxsn1$matfac2,coxsn1$sn,coxsn1$resp)
, ,  = 0


        1     2     3     4     5     6     7     8     9    10    11    12
  1  3407  3177  3425  3348  3975  3564  3181  3077  2894  2610  3441  3443
  2   920  1005  1142  1327  1645  1530  1330  1184   964   864   888   860
  5  9036  9507 10258 11888 16826 15575 13394 12346  9938  9001  8970  8599

, ,  = 1


        1     2     3     4     5     6     7     8     9    10    11    12
  1  1453  1459  1186  1496  1295  1754  1429  1153  1106  1234   965  1532
  2   312   290   330   390   454   539   479   367   295   276   256   267
  5  2994  3207  3371  3629  4095  5581  5837  3844  3400  3199  2705  3084

Apparently the frequency of an event is higher in the summer months.
I apologize for not being able to disclose the dataset, but think that the 
table provides enough to address the question.
Thanks everybody.


Stephen B

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Help on running regression by grouping firms

2010-11-25 Thread Hadley Wickham
 res <- function(x) resid(x)
 ds_test$u <- do.call(c, llply(mods, res))

I'd be a little careful with this, because there's no guarantee the
results will be ordered in the same way as the input (and I'd also
prefer ds_test$u <- unlist(llply(mods, res)) or ds_test$u <-
laply(mods, res))

 In your case, where you have multiple grouping factors, you may have to be a
 little more careful, but the strategy is the same. You could possibly reduce
 it to a one-liner (untested):

 ds_test$u <- do.call(c, dlply(ds_test, .(individual), function(x)
 resid(lm(size ~ time, data = x))))

Or:

ds_test <- ddply(ds_test, .(individual), transform, u = resid(lm(size ~ time)))

which will guarantee the correct ordering.
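
To see the one-liner above in action, here is a small self-contained run on
made-up data ('individual', 'time' and 'size' are stand-ins for the original
poster's columns, not from the original message):

library(plyr)
set.seed(1)
ds_test <- data.frame(individual = rep(c("a", "b"), each = 10),
                      time = rep(1:10, 2))
ds_test$size <- 2 * ds_test$time + rnorm(20)
ds_test <- ddply(ds_test, .(individual), transform, u = resid(lm(size ~ time)))
head(ds_test)   # 'u' holds the within-individual residuals, in the original row order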

Hadley

-- 
Assistant Professor / Dobelman Family Junior Chair
Department of Statistics / Rice University
http://had.co.nz/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] discriminant function analysis

2010-11-25 Thread Chris Mcowen
Hi,

I am not sure if it is more robust than a discriminant function, but it is 
certainly capable of differentiating between species based on morphology. I 
used 12 measurements on my fish. 

What did your PCA results show?

Unfortunately I haven't got round to publishing my data yet but I can send you 
a paper by my supervisor who did a similar analysis looking at morphological 
differentiation in cichlids in Africa.

Chris

Sent from my iPhone

On 25 Nov 2010, at 20:30, Mike Gibson megalop...@hotmail.com wrote:

 Sorry for the delay in getting back to you.  
  
 I did use a PCA.  
  
 I wanted to run a discriminant function as a comparison.  Is PCA more robust 
 at detecting differences based on the morphometrics?  
  
 Can you give me the details of your masters (title, school, ect) so I can 
 reference it.  
  
 Mike
  
  CC: r-help@r-project.org
  From: chrismco...@me.com
  Subject: Re: [R] discriminant function analysis
  Date: Tue, 16 Nov 2010 22:12:32 +
  To: megalop...@hotmail.com
  
  Hi, 
  
  I did this exact thing for my masters, with intertidal fish, I just used a 
  PCA? have you tried that? 
  
  Sent from my iPhone
  
  On 16 Nov 2010, at 17:01, Mike Gibson megalop...@hotmail.com wrote:
  
   
   My objective is to look at differences in two species of fish from 
   morphometric measurements. My morphometric measurements are head length, 
   eye diameter, snout length, and measurements from tail to each fin. I 
   want to use discrimanant function analyis to determine if there are 
   differences between the two species. 
   
   I am familiar with R but new to discrimannt function analysis. I want to 
   learn how to run this analysis in R. My internet search has not been very 
   successful. I did see refrence of the linear discriminant function (lda) 
   in R but I am not sure if this is the correct function for me. 
   
   Any advice and especially a reference to any examples would be greatly 
   appreciated. 
   
   Thanks. 
   
   Mike 
   [[alternative HTML version deleted]]
   
   __
   R-help@r-project.org mailing list
   https://stat.ethz.ch/mailman/listinfo/r-help
   PLEASE do read the posting guide 
   http://www.R-project.org/posting-guide.html
   and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] overlap cdf plots and add colors and etc

2010-11-25 Thread Roslina Zakaria
Hi Peter and Jorge,

Finally, it works.  Thank you for your guidance.

## CDF plots

par(mar=c(4,4,2,1.2),oma=c(0,0,0,0),xaxs="i", yaxs="i")
plot(ecdf(dt1),verticals=TRUE, horizontals=TRUE,pch=46,do.p=FALSE, col.hor=1, 
col.vert=1,lwd=2,)
lines(ecdf(dt2),verticals=TRUE, horizontals=TRUE,pch=20,do.p=FALSE, col.hor=2, 
col.vert=2,lwd=2)
legend(40, 0.8, legend = c("observed","fitted"),
    col = c(1,2), pch=c(NA,NA), lty = c(1, 1),
    lwd=c(3,3),bty="n")

What is plain text?

Thank you.





From: Peter Ehlers ehl...@ucalgary.ca

Cc: Jorge Ivan Velez jorgeivanve...@gmail.com; r-help@r-project.org 
r-help@r-project.org
Sent: Fri, November 26, 2010 12:52:56 AM
Subject: Re: [R] overlap cdf plots and add colors and etc

On 2010-11-24 22:24, Roslina Zakaria wrote:
 Hi Jorge,

 I tried but still it does not work.  Thank you for your time.

Jorge's code works perfectly well.

If you prefer lines() over plot(, add = TRUE),
then use lines():

  plot( ecdf(rnorm(15, sd=3)), verticals=TRUE, col="black")
  lines(ecdf(rnorm(11, sd=3)), verticals=TRUE, col="red")
  legend("topright", legend = c("observed","fitted"),
        col = c("black","red"), pch=c(NA,NA), lty = c(1, 1),
        lwd=c(3,3),bty="n", pt.cex=2)

(Note that "topright" is probably the worst choice you
can make. And why set pt.cex when you don't have points??)

The key thing is that you can't specify your two
colours in the first plot() call. At that point,
R has no idea that you want to add to the plot.
(I understand that Greg Snow is working on a
mind-reading package, but so far it can probably
read only his mind, not yours.)

Here is a (very brief) reminder of how to post:

1. Use an informative subject line.
[[elided Yahoo spam]]

2. *Never* use the phrase "it does not work".
    That is meaningless.
    Be specific about your problem.

3. Provide *reproducible* code.
    We don't have your 'datobs' or 'gam_sum_gen'.

4. Try to make the code *minimal*.
    It's not likely that anyone cares what labels you
    use for your axes/title (unless that's the problem).
    And nobody wants to see reams of data; use rnorm(),
    etc, or built-in data if possible.

5. Figure out how to set your mail program to send
    plain text.

I know the above (and more) is in the posting guide,
but it seems that nobody wants to read that quite
brief document.

Peter Ehlers


 
 From: Jorge Ivan Velez jorgeivanve...@gmail.com

 Cc: r-help@r-project.org
 Sent: Thu, November 25, 2010 4:46:37 PM
 Subject: Re: [R] overlap cdf plots and add colors and etc

 Hi Roslina,


 Try

 par(mar=c(4,4,2,1.2),oma=c(0,0,0,0),xaxs="i", yaxs="i")
 plot(ecdf(rnorm(100)))
 plot(ecdf(rnorm(100)), add = TRUE, col = 2)

 HTH,
 Jorge


 On Thu, Nov 25, 2010 at 12:18 AM, Roslina Zakaria  wrote:

 Hi r-users,

 I would like to overlap 2 ecdf plots.

 I tried this below and it gives me two plots of ecdf but just both just in
 black.

  par(mar=c(4,4,2,1.2),oma=c(0,0,0,0),xaxs="i", yaxs="i")
 plot(ecdf(datobs))
 lines(ecdf(gam_sum_gen))

 Then I try to add colors etc and also the legend but fail.

  par(mar=c(4,4,2,1.2),oma=c(0,0,0,0),xaxs="i", yaxs="i")
  plot(ecdf(datobs),main ="CDF of the sum for winter
  season-Hume",cex.axis=1.2,xlab="Rainfall (mm)", pch =
  pch,verticals=TRUE,col=c("black","red"), lty=c(1,1),ylab="Cumulative
  probability", xlim=c(0,800),lwd=1)
  lines(ecdf(gam_sum_gen))
  legend("topright", legend = c("observed","fitted"),
         col = c("black","red"), pch=c(NA,NA), lty = c(1, 1),
         lwd=c(3,3),bty="n", pt.cex=2)
 box()

 I'm not sure why it doesn't show up at all.

 Thank you for any help given.



        [[alternative HTML version deleted]]





  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] coxph strange result

2010-11-25 Thread David Winsemius


On Nov 25, 2010, at 5:16 PM, Bond, Stephen wrote:

The following fit does not make sense to me, please, correct me if I  
have a logical error.



moddowsn

Call:
coxph(formula = Surv(start, stop, resp) ~ sn + matfac2, data = coxsn1,
   method = "efron")


   coef exp(coef) se(coef)   z   p
sn2   0.0497 1.051  0.02030   2.450 1.4e-02
sn3  -0.0532 0.948  0.02038  -2.610 9.0e-03
sn4  -0.0410 0.960  0.01979  -2.073 3.8e-02
sn5  -0.0776 0.925  0.01954  -3.973 7.1e-05
sn6  -0.1133 0.893  0.01839  -6.161 7.2e-10
sn7  -0.1252 0.882  0.01846  -6.781 1.2e-11
sn8  -0.1222 0.885  0.01994  -6.130 8.8e-10
sn9  -0.0507 0.951  0.02047  -2.478 1.3e-02
sn10 -0.0444 0.957  0.02056  -2.159 3.1e-02
sn11 -0.0433 0.958  0.02157  -2.008 4.5e-02
sn12 -0.0114 0.989  0.02037  -0.557 5.8e-01
matfac22 -0.2599 0.771  0.01727 -15.048 0.0e+00
matfac25 -0.1804 0.835  0.00924 -19.512 0.0e+00

Likelihood ratio test=651  on 13 df, p=0  n= 253802

This would indicate that in sn6 to sn8 there is less of a chance of  
an event. ?? do the relative frequencies implied by the following  
table make any sense??



table(coxsn1$matfac2,coxsn1$sn,coxsn1$resp)

, ,  = 0


        1     2     3     4     5     6     7     8     9    10    11    12
  1  3407  3177  3425  3348  3975  3564  3181  3077  2894  2610  3441  3443
  2   920  1005  1142  1327  1645  1530  1330  1184   964   864   888   860
  5  9036  9507 10258 11888 16826 15575 13394 12346  9938  9001  8970  8599


, ,  = 1


        1     2     3     4     5     6     7     8     9    10    11    12
  1  1453  1459  1186  1496  1295  1754  1429  1153  1106  1234   965  1532
  2   312   290   330   390   454   539   479   367   295   276   256   267
  5  2994  3207  3371  3629  4095  5581  5837  3844  3400  3199  2705  3084


Apparently the frequency of an event is higher in the summer months.


Without knowing how the data was prepared, agreement would seem highly  
speculative on our parts. We haven't been told very much about this  
dataset apart from the cryptic variable names apparently referring to  
calendar months regarding some aspect of the cases.



I apologize for not being able to disclose the dataset, but think  
that the table provides enough to address the question.


I disagree. Suppose sn2 cases began observation in February and then
had events in July or September. Or even suppose that observations
are confined within a single month. If, on average, the durations are
longer within those months than in others, then you could get lower
rates without having lower event frequencies.



Thanks everybody.


Stephen B


David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Go (back) from Rd to roxygen

2010-11-25 Thread Duncan Murdoch

On 25/11/2010 3:45 PM, Yihui Xie wrote:

Hi all,

Since roxygen is a great help to document R packages, I am wondering
if there exists an approach to go back from the raw Rd files to
roxygen-documentation? E.g. turn \author{Somebody} into @author
Somebody. This sounds ridiculous, but I believe it helps in the long
term for me to maintain R packages.


I have no idea, but it should be reasonably straightforward to write 
such a thing, since there's an Rd parser in the tools package.  Take a 
look at Rd2txt or one of the other converters that works on parsed Rd 
code, and write Rd2roxygen.
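
A rough sketch of that route (an added illustration, not Duncan's code; the file
name man/foo.Rd and the restriction to a single \author{} section are assumptions):

library(tools)
rd   <- parse_Rd("man/foo.Rd")                       # parse the Rd file
tags <- sapply(rd, function(x) attr(x, "Rd_tag"))    # e.g. "\\title", "\\author", ...
aut  <- rd[[which(tags == "\\author")[1]]]           # pick out the \author{} block
cat("#' @author", gsub("^\\s+|\\s+$", "", paste(unlist(aut), collapse = "")), "\n")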


Duncan Murdoch

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to bin/average time points?

2010-11-25 Thread sachinthaka . abeywardana
It would be something like this (might have to change the syntax a bit)

bin_ave <- numeric(0)
i <- 1; k <- 1
while (i <= length(time)) {
  bin_ave[k] <- mean(time[i:min(i + 5, length(time))])  # average each block of 6
  k <- k + 1
  i <- i + 6
}

if your data is in a table format replace time with mytable$time.

hope this helps,

Sachin
p.s. sorry about corporate notice

--- Please consider the environment before printing this email --- 

Allianz - Best General Insurance Company of the Year 2010*
Allianz - General Insurance Company of the Year 2009+ 

* Australian Banking and Finance Insurance Awards
+ Australia and New Zealand Insurance Industry Awards 

This email and any attachments has been sent by Allianz ...{{dropped:3}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Cumsum with a max and min value

2010-11-25 Thread Gabor Grothendieck
On Thu, Nov 25, 2010 at 3:44 PM, henrique henri...@allianceasset.com.br wrote:
 I have a vector of values -1, 0, 1, say

 a <- c(0, 1, 0, 1, 1, -1, -1, -1, 0, -1, -1)

 I want to create a vector of the cumulative sum of this, but I need to set a
 maximum and minimum value for the cumsum, for example:

 max_value <- 2
 min_value <- -2
 the expected result would be (0, 1, 1, 2, 2, 1, 0, -1, -1, -2, -2)


Try this:

f <- function(x, y) max(min(x + y, max_value), min_value)
Reduce(f, a, 0, accumulate = TRUE)[-1]
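
Traced by hand against the example vector quoted above (an added check, not
part of the original message), this produces exactly the sequence the poster
expected:

a <- c(0, 1, 0, 1, 1, -1, -1, -1, 0, -1, -1)
max_value <- 2; min_value <- -2
f <- function(x, y) max(min(x + y, max_value), min_value)
Reduce(f, a, 0, accumulate = TRUE)[-1]
# [1]  0  1  1  2  2  1  0 -1 -1 -2 -2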

-- 
Statistics  Software Consulting
GKX Group, GKX Associates Inc.
tel: 1-877-GKX-GROUP
email: ggrothendieck at gmail.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Installing RQuantLib on Win 7 64 Bit

2010-11-25 Thread Santosh Srinivas
Hello Group,

I am trying out RQuantLib on a 64bit Win 7 machine. But running into
installation errors

install.packages(RQuantLib)

Warning in install.packages(RQuantLib) :
  argument 'lib' is missing: using
'C:\Users\Tester\Documents/R/win64-library/2.11'
Warning: unable to access index for repository
http://www.stats.ox.ac.uk/pub/RWin/bin/windows64/contrib/2.11
Warning message:
In getDependencies(pkgs, dependencies, available, lib) :
  package ‘RQuantLib’ is not available


Any help with this installation?

Thank you.
S

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Cumsum with a max and min value

2010-11-25 Thread jim holtman
Does this do it;

 pmin(2, pmax(-2, cumsum(a)))
 [1]  0  1  1  2  2  2  1  0  0 -1 -2



On Thu, Nov 25, 2010 at 3:44 PM, henrique henri...@allianceasset.com.br wrote:
 I have a vector of values -1, 0, 1, say



 a <- c(0, 1, 0, 1, 1, -1, -1, -1, 0, -1, -1)



 I want to create a vector of the cumulative sum of this, but I need to set a
 maximum and minimum value for the cumsum, for example:



 max_value <- 2

 min_value <- -2



 the expected result would be (0, 1, 1, 2, 2, 1, 0, -1, -1, -2, -2)



 The only way I managed to do It, was



 res <- vector(length=length(a))

 res[1] <- a[1]

 for ( i in 2:length(a)) res[i] <- res[i-1] + a[i] * (( res[i-1] < max_value
  & a[i] > 0 ) | ( res[i-1] > min_value & a[i] < 0 ))





 This is certainly not the best way to do it, so any suggestions?





 Henrique


        [[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Go (back) from Rd to roxygen

2010-11-25 Thread Hadley Wickham
 Since roxygen is a great help to document R packages, I am wondering
 if there exists an approach to go back from the raw Rd files to
 roxygen-documentation? E.g. turn \author{Somebody} into @author
 Somebody. This sounds ridiculous, but I believe it helps in the long
 term for me to maintain R packages.

Have a look at https://gist.github.com/d1bbd44894a99a2e1d1b for a start.

Hadley

-- 
Assistant Professor / Dobelman Family Junior Chair
Department of Statistics / Rice University
http://had.co.nz/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to bin/average time points?

2010-11-25 Thread jim holtman
try this:

 # create times 9 minutes apart
 time <- seq(as.POSIXct('2010-11-25 00:00'), by = '9 min', length = 480)
 mySamp <- data.frame(time = time, value = sample(1:100, length(time), TRUE))
 # add column to split by hour
 mySamp$hour <- format(mySamp$time, '%Y-%m-%d %H:30')
 # compute the mean for each hour
 tapply(mySamp$value, mySamp$hour, mean)
2010-11-25 00:30 2010-11-25 01:30 2010-11-25 02:30 2010-11-25 03:30
2010-11-25 04:30 2010-11-25 05:30
54.42857 59.85714 47.5 45.71429
 40.28571 56.5
2010-11-25 06:30 2010-11-25 07:30 2010-11-25 08:30 2010-11-25 09:30
2010-11-25 10:30 2010-11-25 11:30
46.57143 47.14286 34.0 53.85714
 50.28571 36.0
2010-11-25 12:30 2010-11-25 13:30 2010-11-25 14:30 2010-11-25 15:30
2010-11-25 16:30 2010-11-25 17:30
31.57143 44.57143 42.5 52.42857
 54.14286 44.7
2010-11-25 18:30 2010-11-25 19:30 2010-11-25 20:30 2010-11-25 21:30
2010-11-25 22:30 2010-11-25 23:30
50.28571 60.57143 36.0 42.14286
 65.14286 37.5
2010-11-26 00:30 2010-11-26 01:30 2010-11-26 02:30 2010-11-26 03:30
2010-11-26 04:30 2010-11-26 05:30
51.71429 58.85714 48.5 45.0
 44.0 38.0
2010-11-26 06:30 2010-11-26 07:30 2010-11-26 08:30 2010-11-26 09:30
2010-11-26 10:30 2010-11-26 11:30
56.0 34.14286 64.7 51.42857
 57.57143 44.5
2010-11-26 12:30 2010-11-26 13:30 2010-11-26 14:30 2010-11-26 15:30
2010-11-26 16:30 2010-11-26 17:30
65.0 59.57143 63.5 52.57143
 36.85714 63.3
2010-11-26 18:30 2010-11-26 19:30 2010-11-26 20:30 2010-11-26 21:30
2010-11-26 22:30 2010-11-26 23:30
44.85714 64.85714 63.0 62.57143
 62.0 57.0
2010-11-27 00:30 2010-11-27 01:30 2010-11-27 02:30 2010-11-27 03:30
2010-11-27 04:30 2010-11-27 05:30
26.71429 33.57143 37.5 67.0
 47.85714 63.0
2010-11-27 06:30 2010-11-27 07:30 2010-11-27 08:30 2010-11-27 09:30
2010-11-27 10:30 2010-11-27 11:30
40.28571 46.42857 54.5 41.0
 51.0 58.3
2010-11-27 12:30 2010-11-27 13:30 2010-11-27 14:30 2010-11-27 15:30
2010-11-27 16:30 2010-11-27 17:30
62.14286 52.28571 75.3 43.71429
 53.14286 27.5
2010-11-27 18:30 2010-11-27 19:30 2010-11-27 20:30 2010-11-27 21:30
2010-11-27 22:30 2010-11-27 23:30
33.42857 56.85714 57.8 51.0
 57.71429 38.7


On Thu, Nov 25, 2010 at 3:49 PM, DonDolowy kevin.dalga...@gmail.com wrote:

 Dear all,

 I am pretty new to R only having an introduction course, so please bare with
 me. I am doing my PhD at The Max Planck Institute of Immunobiology where I
 am analyzing some calorimetry data from some mice.
 I have a spreadsheet consisting of measurements of the respiratory exchange
 rate at different time points measured every 9 minutes over some days.
 My goal is bin/average the time points of each hour to only one
 measurements.

 E.g.
 [Time] - [Measurement]
 12.09 - 0.730
 12.18 - 0.732
 12.27 - 0.743
 12.36 - 0.757
 12.45 - 0.781
 12.54 - 0.731
 -- should be averaged to fx one time point and one value, fx:
 12.30 - [average of the six measurements]

 I know how to average the measurements in a whole column but how to average
 every six measurements automatically and also how to average every six time
 points and make a new sheet consisting of these data?

 I hope you guys are able to help, since we are really stuck here. I can of
 course do it manually but with 8000 measurements it will take lots of time.

 Thank you very much.

 Best regards,
 Kevin Dalgaard
 --
 View this message in context: 
 http://r.789695.n4.nabble.com/How-to-bin-average-time-points-tp3059509p3059509.html
 Sent from the R help mailing list archive at Nabble.com.

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to bin/average time points?

2010-11-25 Thread Niels Richard Hansen

Hi Kevin

Here is one way:

yourData <- c(0.730, 0.732, 0.743, 0.757,0.781, 0.731,
0.830, 0.832, 0.843, 0.857, 0.881, 0.831)

nrGroups <- 2
lengthGroups <- 6

tapply(yourData, factor(rep(c(1,nrGroups), each = lengthGroups)), mean)

and you will have to adjust the number of groups and if necessary the
length of the groups to your needs. I am assuming that all groups are
of the same length.

Do you really want to average the time points? In that case you might
want to consider the data structures for representing dates and times,
see e.g.

?POSIXct

or converting the times to minutes, say.
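
For example (an added illustration, not part of the original reply), with the
hh.mm-coded times and readings from the question, grouping by the whole hour
is enough for tapply():

tm   <- c(12.09, 12.18, 12.27, 12.36, 12.45, 12.54)
meas <- c(0.730, 0.732, 0.743, 0.757, 0.781, 0.731)
hour <- floor(tm)                # 12 for every reading in the 12:00-12:59 hour
tapply(meas, hour, mean)         # one averaged value per hour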

- Niels


On 25/11/10 12.49, DonDolowy wrote:


Dear all,

I am pretty new to R only having an introduction course, so please bare with
me. I am doing my PhD at The Max Planck Institute of Immunobiology where I
am analyzing some calorimetry data from some mice.
I have a spreadsheet consisting of measurements of the respiratory exchange
rate at different time points measured every 9 minutes over some days.
My goal is bin/average the time points of each hour to only one
measurements.

E.g.
[Time] - [Measurement]
12.09 - 0.730
12.18 - 0.732
12.27 - 0.743
12.36 - 0.757
12.45 - 0.781
12.54 - 0.731
--  should be averaged to fx one time point and one value, fx:
12.30 - [average of the six measurements]

I know how to average the measurements in a whole column but how to average
every six measurements automatically and also how to average every six time
points and make a new sheet consisting of these data?

I hope you guys are able to help, since we are really stuck here. I can of
course do it manually but with8000 measurements it will take lots of time.

Thank you very much.

Best regards,
Kevin Dalgaard


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] About searching criteria

2010-11-25 Thread Stephen Liu
Hi Henrique,

Lots of thanks for your advice, which is a little complicated for me.  It involves 
multiple R commands:

 d <- data()
 ne <- new.env()
 data(list = grep("\\(", d$results[,'Item'], value = TRUE, invert = TRUE), envir = ne)
 out <- eapply(ne, names)
 names(which(lapply(lapply(out, '%in%', c("Run", "conc", "density")), sum) == 3))
[1] "DNase"
 DNase
   Run       conc density
1 1  0.04882812   0.017
2 1  0.04882812   0.018
3 1  0.19531250   0.121
4 1  0.19531250   0.124
...


It works for me.

grep - is similar to the sh commands grep, egrep, fgrep, rgrep - print lines 
matching a pattern
which - differs from sh command which - locate a command

Wonderful !!!

B.R.
Stephen L






From: Henrique Dallazuanna www...@gmail.com

Cc: r-help@r-project.org
Sent: Fri, November 26, 2010 12:35:23 AM
Subject: Re: [R] About searching criteria

Try this:

d <- data()
ne <- new.env()
data(list = grep("\\(", d$results[,'Item'], value = TRUE, invert = TRUE), envir = ne)
out <- eapply(ne, names)
names(which(lapply(lapply(out, '%in%', c("Run", "conc", "density")), sum) == 3))






Hi folks,

I need to search the dataset on data with name on heading;
Run conc density


I look at ??help.search and could not resolve;

help.search(pattern, fields = c("alias", "concept", "title"))

What shall I replace pattern with?

I suppose replacing "alias", "concept", "title" with "Run", "conc", "density"?

Please help.  TIA

B.R.
Stephen L



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



-- 
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40" S 49° 16' 22" O



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How to unsubscribe to mailing list

2010-11-25 Thread jsntxt

How to unsubscribe to mailing list
-- 
View this message in context: 
http://r.789695.n4.nabble.com/How-to-unsubscribe-to-mailing-list-tp3059708p3059708.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] get list index

2010-11-25 Thread Lorenzo Cattarino
Hi R-users,

 

I have a list 

 

mylist <- list(c(0.79, 0.92, 0.91, 0.86, 0.96, 0.96, 0.95, 0.94, 0.99),
c(0.28, 0.45, 0.59, 0.69, 0.80, 0.87, 0.95, 0.94, 0.98), c(0.29, 0.39,
0.59, 0.69, 0.68, 0.80, 0.93, 0.95, 0.98))

 

Is there a  way to find the index of the list element that contains the
lowest value among all the other elements? As the lowest value in each
element is the first, the question is actually how to find the lowest
'first' values among the list elements, and then get the index of that
element.  

 

In my example the list element would be (because the value is 0.28):

 

[[2]]

[1] 0.28 0.45 0.59 0.69 0.80 0.87 0.95 0.94 0.98

 

and the position of course 2.

 

I am looking for the index because I would like to subset the list
afterwards (e.g. mylist[[2]]) and extract that element (i.e. the whole
vector).

 

Thanks for your help

Lorenzo

 

Lorenzo Cattarino

PhD Candidate (Confirmed)

 

Landscape Ecology and Conservation Group

Centre for Spatial Environmental Research

School of Geography, Planning and Environmental Management

The University of Queensland

Brisbane, Queensland, 4072, Australia

Telephone 61-7-3365 4370, Mobile 0410884610

Email l.cattar...@uq.edu.au

Internet http://www.gpem.uq.edu.au/cser http://www.gpem.uq.edu.au/cser


 


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] get list index

2010-11-25 Thread Michael Bedward
Hello Lorezo,

Try this...

order(sapply(mylist, min))[1]
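
Applied to the example list above this picks element 2; which.min() gives the
same index directly (an added note, not part of Michael's reply):

mins <- sapply(mylist, min)      # 0.79 0.28 0.29
idx  <- which.min(mins)          # 2, the same as order(mins)[1]
mylist[[idx]]                    # the whole vector that starts with 0.28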

Michael


On 26 November 2010 11:23, Lorenzo Cattarino l.cattar...@uq.edu.au wrote:
 Hi R-users,



 I have a list



 mylist <- list(c(0.79, 0.92, 0.91, 0.86, 0.96, 0.96, 0.95, 0.94, 0.99),
 c(0.28, 0.45, 0.59, 0.69, 0.80, 0.87, 0.95, 0.94, 0.98), c(0.29, 0.39,
 0.59, 0.69, 0.68, 0.80, 0.93, 0.95, 0.98))



 Is there a  way to find the index of the list element that contains the
 lowest value among all the other elements? As the lowest value in each
 element is the first, the question is actually how to find the lowest
 'first' values among the list elements, and then get the index of that
 element.



 In my example the list element would be (because the value is 0.28):



 [[2]]

 [1] 0.28 0.45 0.59 0.69 0.80 0.87 0.95 0.94 0.98



 and the position of course 2.



 I am looking for the index because I would like to subset the list
 afterwards (e.g. mylist[[2]]) and extract that element (i.e. the whole
 vector).



 Thanks for your help

 Lorenzo



 Lorenzo Cattarino

 PhD Candidate (Confirmed)



 Landscape Ecology and Conservation Group

 Centre for Spatial Environmental Research

 School of Geography, Planning and Environmental Management

 The University of Queensland

 Brisbane, Queensland, 4072, Australia

 Telephone 61-7-3365 4370, Mobile 0410884610

 Email l.cattar...@uq.edu.au

 Internet http://www.gpem.uq.edu.au/cser http://www.gpem.uq.edu.au/cser





        [[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Installing RQuantLib on Win 7 64 Bit

2010-11-25 Thread Dirk Eddelbuettel

On 26 November 2010 at 07:05, Santosh Srinivas wrote:
| Hello Group,
| 
| I am trying out RQuantLib on a 64bit Win 7 machine. But running into
| installation errors

The error message is about as clear as it can get:
 
| install.packages(RQuantLib)
| 
| Warning in install.packages(RQuantLib) :
|   argument 'lib' is missing: using
| 'C:\Users\Tester\Documents/R/win64-library/2.11'
| Warning: unable to access index for repository
| http://www.stats.ox.ac.uk/pub/RWin/bin/windows64/contrib/2.11
| Warning message:
| In getDependencies(pkgs, dependencies, available, lib) :
|   package ‘RQuantLib’ is not available


There is your answer: there simply is no binary package.

I need a win64 development box (which I currently do not have) to build
QuantLib as a win64 library so that CRAN and R-Forge can turn the RQuantLib
source into a binary for you.

Dirk

-- 
Dirk Eddelbuettel | e...@debian.org | http://dirk.eddelbuettel.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] overlap histogram and density

2010-11-25 Thread Roslina Zakaria
Hi Ted,

Regarding your examples, is it possible to get a smooth line for the density 
which overlaps with the histogram?

Regards,

Roslina



From: ted.hard...@wlandres.net ted.hard...@wlandres.net
To: r-help@r-project.org

Sent: Fri, November 12, 2010 6:42:31 AM
Subject: Re: [R] overlap histogram and density

[OOPS!!I accidentally reproduced my second example below
as my third example. Now corrected. See below.]

On 11-Nov-10 20:02:29, Ted Harding wrote:

On 11-Nov-10 18:39:34, Roslina Zakaria wrote:
 Hi,
 Does anybody encounter the same problem when we overlap histogram
 and density that the density line seem to shift to the right a
 little bit?
 
 If you do have the same problem, what should we do to correct that?
 Thank you.
 
 par(mar=c(4,4,2,1.2),oma=c(0,0,0,0))
 hist(datobs,prob=TRUE,
      main ="Volume of a catchment from four stations",
      col="yellowgreen", cex.axis=1, xlab="rainfall",
      ylab="Relative frequency", ylim= c(0,.003), xlim=c(0,1200))

 lines(density(dd), lwd=3,col="red")

#legend("topright",c("observed","generated"),
#      lty=c(0,1),fill=c("blue",),bty="n")

 legend("topright", legend = c("observed","generated"),
 col = c("yellowgreen", "red"), pch=c(15,NA), lty = c(0, 1), 
 lwd=c(0,3),bty="n", pt.cex=2)
 box()
 
 Thank you.

In theory that is not a problem. The density() function will
estimate a density whose integral over each of the intervals
in the histogram is equal to the probability of that interval,
and the proportion of the data expected in that interval will
also be its probability.

In practice, the extent to which you observe what you describe
(or a displacement to the left) will depend on how your data
are distributed within the intervals, and on the precision
with which density() happens to estimate the true density.

The following 3 cases of the same data sampled from a log-Normal
distribution, illustrate different impressions of the kind that
one might get, depending on the details of the histogram. Note
that there is no overall effect of displacement to the right
in any histogram, while the extent to which one observes it
varies according to the histogram. Without knowledge of your
data it is not possible to comment further on the extent to
[[elided Yahoo spam]]

set.seed(54321)
N  <- 1000
X  <- exp(rnorm(N,sd=0.4))
dd <- density(X)

# A coarse histogram
H  <- hist(X,prob=TRUE,
          xlim=c(-0.5,4),ylim=c(0,max(dd$y)),breaks=0.5*(0:8))
dx <- unique(diff(H$breaks))
lines(dd$x,dd$y)

## A finer histogram
H  <- hist(X,prob=TRUE,
          xlim=c(-0.5,4),ylim=c(0,max(dd$y)),breaks=0.25*(0:16))
dx <- unique(diff(H$breaks))
lines(dd$x,dd$y)

## A still finer histogram
H  <- hist(X,prob=TRUE,
## OOPS!!  xlim=c(-0.5,4),ylim=c(0,max(dd$y)),breaks=0.25*(0:16))
          xlim=c(-0.5,4),ylim=c(0,max(dd$y)),breaks=0.20*(0:20))
dx <- unique(diff(H$breaks))
lines(dd$x,dd$y)


Ted.


E-Mail: (Ted Harding) ted.hard...@wlandres.net
Fax-to-email: +44 (0)870 094 0861
Date: 11-Nov-10                                      Time: 20:12:27
-- XFMail --



  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Go (back) from Rd to roxygen

2010-11-25 Thread Yihui Xie
Awesome! Thanks a lot!

Regards,
Yihui
--
Yihui Xie xieyi...@gmail.com
Phone: 515-294-2465 Web: http://yihui.name
Department of Statistics, Iowa State University
2215 Snedecor Hall, Ames, IA



On Thu, Nov 25, 2010 at 8:09 PM, Hadley Wickham had...@rice.edu wrote:
 Since roxygen is a great help to document R packages, I am wondering
 if there exists an approach to go back from the raw Rd files to
 roxygen-documentation? E.g. turn \author{Somebody} into @author
 Somebody. This sounds ridiculous, but I believe it helps in the long
 term for me to maintain R packages.

 Have a look at https://gist.github.com/d1bbd44894a99a2e1d1b for a start.

 Hadley

 --
 Assistant Professor / Dobelman Family Junior Chair
 Department of Statistics / Rice University
 http://had.co.nz/


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How to save a data set as .txt on fly?

2010-11-25 Thread Stephen Liu
Hi folks,

Win7 64bit

I tried to save DNase, a built-in data set, as a .txt file for future use with 
load.

I can't do it on the fly;
 save(DNase, file="C:/Users/satimis/Documents/aaa.txt")
 load(file="C:/Users/satimis/Documents/aaa.txt")
 aaa
Error: object 'aaa' not found
 aaa.txt
Error: object 'aaa.txt' not found


I must perform following steps;
 aaa <- DNase
 save(aaa, file="C:/Users/satimis/Documents/aaa.txt")
 load(file="C:/Users/satimis/Documents/aaa.txt")

 aaa
   Run       conc density
1 1  0.04882812   0.017
2 1  0.04882812   0.018
3 1  0.19531250   0.121



Is there any way of doing it on the fly?

Besides, aaa.txt can't be read directly with Notepad or WordPad.

TIA

B.R.
Stephen L



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to save a data set as .txt on fly?

2010-11-25 Thread David Winsemius


On Nov 25, 2010, at 10:45 PM, Stephen Liu wrote:


Hi folks,

Win7 64bit

I tried to save DNase, a data set on database, as .txt file for  
future use with

load.

I can't do it on fly;

save(DNase, file="C:/Users/satimis/Documents/aaa.txt")
load(file="C:/Users/satimis/Documents/aaa.txt")
aaa

Error: object 'aaa' not found

aaa.txt

Error: object 'aaa.txt' not found


But you didn't try:

DNase   # which was after all the name of the object you saved.



I must perform following steps;

aaa <- DNase
save(aaa, file="C:/Users/satimis/Documents/aaa.txt")
load(file="C:/Users/satimis/Documents/aaa.txt")



aaa

   Run       conc density
1 1  0.04882812   0.017
2 1  0.04882812   0.018
3 1  0.19531250   0.121



Is there any way doing it on fly?


Does the above method (typing the name of the saved object rather  
than the file name) meet your definition of "on the fly"?


Besides aaa.txt can't be read direct with Notepad nor WordPad


Right. Try reading the Posting Guide which among many other useful  
things has a method for saving objects in text format.
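
(An illustrative aside, not a quote from the guide or from David: two common
plain-text routes are dput()/dget() and write.table()/read.table(); the file
paths below are only examples.)

# Sketch only: plain-text alternatives to save()/load()
dput(DNase, file = "C:/Users/satimis/Documents/dnase_dput.txt")   # R expression, readable in Notepad
dnase1 <- dget("C:/Users/satimis/Documents/dnase_dput.txt")

write.table(DNase, file = "C:/Users/satimis/Documents/dnase_tab.txt", row.names = FALSE)
dnase2 <- read.table("C:/Users/satimis/Documents/dnase_tab.txt", header = TRUE)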




TIA

B.R.
Stephen L



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to save a data set as .txt on fly?

2010-11-25 Thread Stephen Liu
Hi David,


 But you didn't try:

 DNase# which was after all the name of the object you saved.

Sorry I don't follow.


I can't do it with following steps:
 DNase
 save(DNase, file="C:/Users/satimis/Documents/dnase.txt")
 load(file="C:/Users/satimis/Documents/dnase.txt")
 dnase
Error: object 'dnase' not found
 dnase.txt
Error: object 'dnase.txt' not found


The sink command can do it:

 sink("dnase.txt", append=TRUE, split=TRUE)
 DNase
 sink()

to load the file
 read.table(file="C:/Users/satimis/Documents/dnase.txt")

Besides, the file created can be read with Notepad and WordPad.


 Does the above method (typing the name of the save object rather  
 than the file name) meet your definition of on the fly?
Could you pls explain in more detail?  Thanks


 Right. Try reading the Posting Guide which among many other useful  
 things has a method for saving objects in text format.

Yes.

I'm curious to know why the .txt file created in this way can't be read with 
Notepad or WordPad?


B.R.
Stephen L




- Original Message 
From: David Winsemius dwinsem...@comcast.net
To: Stephen Liu sati...@yahoo.com
Cc: r-help@r-project.org
Sent: Fri, November 26, 2010 11:56:23 AM
Subject: Re: [R] How to save a data set as .txt on fly?


On Nov 25, 2010, at 10:45 PM, Stephen Liu wrote:

 Hi folks,

 Win7 64bit

 I tried to save DNase, a data set on database, as .txt file for  
 future use with
 load.

 I can't do it on fly;
 save(DNase, file="C:/Users/satimis/Documents/aaa.txt")
 load(file="C:/Users/satimis/Documents/aaa.txt")
 aaa
 Error: object 'aaa' not found
 aaa.txt
 Error: object 'aaa.txt' not found

But you didn't try:

DNase# which was after all the name of the object you saved.


 I must perform following steps;
 aaa <- DNase
 save(aaa, file="C:/Users/satimis/Documents/aaa.txt")
 load(file="C:/Users/satimis/Documents/aaa.txt")

 aaa
    Run       conc density
 1 1  0.04882812   0.017
 2 1  0.04882812   0.018
 3 1  0.19531250   0.121
 


 Is there any way doing it on fly?

Does the above method (typing the name of the save object rather  
than the file name) meet your definition of on the fly?

 Besides aaa.txt can't be read direct with Notepad nor WordPad

Right. Try reading the Posting Guide which among many other useful  
things has a method for saving objects in text format.


 TIA

 B.R.
 Stephen L



 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

David Winsemius, MD
West Hartford, CT



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] \Sweaveopts error

2010-11-25 Thread John Maindonald
Yes, that has fixed the problem. (2010-11-24 r53659)

Thanks.

John Maindonald email: john.maindon...@anu.edu.au
phone : +61 2 (6125)3473fax  : +61 2(6125)5549
Centre for Mathematics  Its Applications, Room 1194,
John Dedman Mathematical Sciences Building (Building 27)
Australian National University, Canberra ACT 0200.
http://www.maths.anu.edu.au/~johnm

On 25/11/2010, at 10:56 PM, Duncan Murdoch wrote:

 On 25/11/2010 6:34 AM, John Maindonald wrote:
 I have a file 4lmetc.Rnw, intended for inclusion in a LaTeX document,
 that starts:
 
 I think this may have been fixed in the patched version.  Could you give it a 
 try to confirm?  If not, please send me a simplified version of the file, and 
 I'll see what's going wrong.
 
 Duncan Murdoch
 
 
 
 \SweaveOpts{engine=R, keep.source=TRUE}
 \SweaveOpts{eps=FALSE, prefix.string=snArt/4lmetc}
 
 The attempt to process the file through Sweave generates the error:
 
  Sweave("4lmetc")
  Writing to file 4lmetc.tex
  Processing code chunks ...
  1 : keep.source term verbatim
  Error in file(srcfile$filename, open = "rt", encoding = encoding) :
   cannot open the connection
  In addition: Warning message:
  In file(srcfile$filename, open = "rt", encoding = encoding) :
   cannot open file '4lmetc': No such file or directory
 
 The same file processes through Stangle() without problems.
 If I comment out the \Sweaveopts lines, there is no problem,
 except that I do not get the options that I want.
 
 This processed fine in R-2.11.1
 
 sessionInfo()
 R version 2.12.0 (2010-10-15)
 Platform: x86_64-apple-darwin9.8.0/x86_64 (64-bit)
 
 locale:
 [1] C
 
 attached base packages:
 [1] stats graphics  grDevices utils datasets  grid  methods
 [8] base
 
 other attached packages:
 [1] lattice_0.19-13 DAAG_1.02   randomForest_4.5-36
 [4] rpart_3.1-46MASS_7.3-8  reshape_0.8.3
 [7] plyr_1.2.1  proto_0.3-8
 
 loaded via a namespace (and not attached):
 [1] ggplot2_0.8.8   latticeExtra_0.6-14
 
 Is there a workaround?
 
 John Maindonald email: john.maindon...@anu.edu.au
 phone : +61 2 (6125)3473fax  : +61 2(6125)5549
 Centre for Mathematics  Its Applications, Room 1194,
 John Dedman Mathematical Sciences Building (Building 27)
 Australian National University, Canberra ACT 0200.
 http://www.maths.anu.edu.au/~johnm
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.