Re: [R] Lattice device page options-margins

2005-03-10 Thread Paul Murrell
Hi
Bock, Michael wrote:
 


-Original Message-
From: Deepayan Sarkar [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, March 09, 2005 5:09 PM
To: r-help@stat.math.ethz.ch
Cc: Paul Murrell; Bock, Michael; Sundar Dorai-Raj
Subject: Re: [R] Lattice device page options-margins

On Wednesday 09 March 2005 13:54, Paul Murrell wrote:
Hi
Sundar Dorai-Raj wrote:
Bock, Michael wrote on 3/9/2005 1:19 PM:
I am using lattice to make figures as pdfs:
trellis.device(device = "pdf", file = "Figure6.pdf", color = FALSE)
I need to specify some blank space on the left-hand margins (the
pages will be bound so we need about 0.5 inch). I have tried a
number of solutions but none seems to work (e.g. par.set). Can this
be done when initiating the plotting device? Or is there some other
way that does not require me to manually move everything over?
Michael,
I believe you can do this using print.trellis and setting the
position argument. E.g.

trellis.device(device = "pdf", file = "Figure6.pdf", color = FALSE)
xy <- xyplot(...)
print(xy, pos = c(0.10, 0, 1, 1))
dev.off()
Or if you want exactly 0.5 inches, something like ...
# may need
# library(grid)
trellis.device(device = "pdf", file = "Figure6.pdf", color = FALSE)
xy <- xyplot(1:10 ~ 1:10)
pushViewport(viewport(x = 1,
                      width = unit(1, "npc") - unit(0.5, "inches"),
                      just = "right"))
# to show region you're plotting in
# grid.rect(gp=gpar(col="grey"))
print(xy, newpage=FALSE)
popViewport()
dev.off()
Yet another option, especially if you want this for multiple 
plots without having to handle each one separately, is to say 
(after loading
lattice):

lattice.options(layout.widths = list(left.padding =
                  list(x = 0.5, units = "inches")))

Unlike par settings though, this would stay in effect across devices.
Deepayan

Thanks for the suggestions. None of these do exactly what I want. I have
some multi-panel plots.
Excerpted commands:
BC.PP <- bwplot(PP.PAH ~ Area, Crab, horizontal = FALSE,
                ylab = "Priority Pollutant PAHs (mg/kg)",
                scales = list(x = "free", rot = 90), aspect = 1.5,
                main = list(label = "Blue Crabs", cex = 0.8, font = 1),
                fontfamily = "HersheySans")
RM.PP <- bwplot(PP.PAH ~ Area, Mussel, horizontal = FALSE,
                ylab = "",
                scales = list(x = "free", rot = 90), aspect = 1.5,
                main = list(label = "   Ribbed Mussels", cex = 0.8, font = 1),
                fontfamily = "HersheySans")
print(BC.PP, split = c(1,1,2,1), more = TRUE)
print(RM.PP, split = c(2,1,2,1), more = TRUE)
grid.text(label = "Figure 6. Priority Pollutant PAH Tissue Concentrations",
          x = unit(0.01, "npc"), y = unit(0.1, "npc"),
          just = "left", rot = 0,
          check.overlap = FALSE, default.units = "npc",
          name = NULL, gp = gpar(fontsize = 10, font = 2), draw = TRUE)
grid.text(label = "Privileged and Confidential \nAttorney Work Product",
          x = unit(0.01, "npc"), y = unit(0.95, "npc"),
          just = "left", rot = 0,
          check.overlap = FALSE, default.units = "npc",
          name = NULL, gp = gpar(fontsize = 6, font = 3), draw = TRUE)
All of the suggestions shrink the individual boxplots so they both now
have empty space on the left side. I guess what I really want is to be
able to define the plotting area as a portion of the page, allowing for a
blank region on the left. Basically, defining the margin of the print
area.
Something like:
trellis.device(device = "pdf", file = "Figure6.pdf", color = FALSE,
               left = 0.1, right = 0.9)
Assuming the hypothetical left and right commands set the print area for
the pdf. Hopefully this better defines my ideal solution.
Thanks for the suggestions; if an ideal solution cannot be found I at
least have enough to develop a workaround. It's just too bad I hadn't
thought of this issue before writing so many scripts to make my plots
that will now have to be adjusted manually.

Do you mean like this ...?
# Dummy data
# (it would be easier to give an answer if you gave us the real thing)
Crab <- data.frame(PP.PAH = rnorm(20),
                   Area = factor(sample(1:2, 20, replace = TRUE)))
Mussel <- data.frame(PP.PAH = rnorm(20),
                     Area = factor(sample(1:2, 20, replace = TRUE)))
BC.PP <- bwplot(PP.PAH ~ Area, Crab, horizontal = FALSE,
                ylab = "Priority Pollutant PAHs (mg/kg)",
                scales = list(x = "free", rot = 90), aspect = 1.5,
                main = list(label = "Blue Crabs", cex = 0.8, font = 1),
                fontfamily = "HersheySans")
RM.PP <- bwplot(PP.PAH ~ Area, Mussel, horizontal = FALSE,
                ylab = "",
                scales = list(x = "free", rot = 90), aspect = 1.5,
                main = list(label = "   Ribbed Mussels", cex = 0.8, font = 1),
                fontfamily = "HersheySans")
trellis.device(device = "pdf", file = "Figure6.pdf", color = FALSE)
# push viewport for ALL drawing
pushViewport(viewport(x = .1, width = .8, just = "left"))
# just to show region you're plotting in
grid.rect(gp = gpar(col = "grey"))
print(BC.PP, split = c(1,1,2,1), more = TRUE, newpage = FALSE)
print(RM.PP, split = c(2,1,2,1), more = TRUE)
grid.text(label = "Figure 6. Priority Pollutant PAH Tissue Concentrations",
 

RE: [R] Lattice device page options-margins

2005-03-10 Thread Bock, Michael
 

 -Original Message-
 From: Deepayan Sarkar [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, March 09, 2005 5:09 PM
 To: r-help@stat.math.ethz.ch
 Cc: Paul Murrell; Bock, Michael; Sundar Dorai-Raj
 Subject: Re: [R] Lattice device page options-margins
 
 On Wednesday 09 March 2005 13:54, Paul Murrell wrote:
  Hi
 
  Sundar Dorai-Raj wrote:
   Bock, Michael wrote on 3/9/2005 1:19 PM:
   I am using lattice to make figures as pdfs:
   trellis.device(device = "pdf", file = "Figure6.pdf", color = FALSE)
  
   I need to specify some blank space on the left-hand margins (the
   pages will be bound so we need about 0.5 inch). I have tried a
   number of solutions but none seems to work (e.g. par.set). Can this
   be done when initiating the plotting device? Or is there some other
   way that does not require me to manually move everything over?
  
   Michael,
  
   I believe you can do this using print.trellis and setting the 
   position argument. E.g.
  
   trellis.device(device = "pdf", file = "Figure6.pdf", color = FALSE)
   xy <- xyplot(...)
   print(xy, pos = c(0.10, 0, 1, 1))
   dev.off()
 
  Or if you want exactly 0.5 inches, something like ...
 
  # may need
  # library(grid)
  trellis.device(device = "pdf", file = "Figure6.pdf", color = FALSE)
  xy <- xyplot(1:10 ~ 1:10)
  pushViewport(viewport(x = 1,
                        width = unit(1, "npc") - unit(0.5, "inches"),
                        just = "right"))
  # to show region you're plotting in
  # grid.rect(gp=gpar(col="grey"))
  print(xy, newpage=FALSE)
  popViewport()
  dev.off()
 
 Yet another option, especially if you want this for multiple 
 plots without having to handle each one separately, is to say 
 (after loading
 lattice):
 
 lattice.options(layout.widths = list(left.padding =
                   list(x = 0.5, units = "inches")))
 
 Unlike par settings though, this would stay in effect across devices.
 
 Deepayan
 
 
Thanks for the suggestions. None of these do exactly what I want. I have
some multi-panel plots.
Excerpted commands:
BC.PP <- bwplot(PP.PAH ~ Area, Crab, horizontal = FALSE,
                ylab = "Priority Pollutant PAHs (mg/kg)",
                scales = list(x = "free", rot = 90), aspect = 1.5,
                main = list(label = "Blue Crabs", cex = 0.8, font = 1),
                fontfamily = "HersheySans")
RM.PP <- bwplot(PP.PAH ~ Area, Mussel, horizontal = FALSE,
                ylab = "",
                scales = list(x = "free", rot = 90), aspect = 1.5,
                main = list(label = "   Ribbed Mussels", cex = 0.8, font = 1),
                fontfamily = "HersheySans")
print(BC.PP, split = c(1,1,2,1), more = TRUE)
print(RM.PP, split = c(2,1,2,1), more = TRUE)
grid.text(label = "Figure 6. Priority Pollutant PAH Tissue Concentrations",
          x = unit(0.01, "npc"), y = unit(0.1, "npc"),
          just = "left", rot = 0,
          check.overlap = FALSE, default.units = "npc",
          name = NULL, gp = gpar(fontsize = 10, font = 2), draw = TRUE)
grid.text(label = "Privileged and Confidential \nAttorney Work Product",
          x = unit(0.01, "npc"), y = unit(0.95, "npc"),
          just = "left", rot = 0,
          check.overlap = FALSE, default.units = "npc",
          name = NULL, gp = gpar(fontsize = 6, font = 3), draw = TRUE)


All of the suggestions shrink the individual boxplots so they both now
have empty space on the left side. I guess what I really want is to be
able to define the plotting area as a portion of the page, allowing for a
blank region on the left. Basically, defining the margin of the print
area.

Something like:
trellis.device(device = "pdf", file = "Figure6.pdf", color = FALSE,
               left = 0.1, right = 0.9)
Assuming the hypothetical left and right commands set the print area for
the pdf. Hopefully this better defines my ideal solution.

Thanks for the suggestions; if an ideal solution cannot be found I at
least have enough to develop a workaround. It's just too bad I hadn't
thought of this issue before writing so many scripts to make my plots
that will now have to be adjusted manually.

Mike

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] plotting

2005-03-10 Thread Marc Schwartz
On Wed, 2005-03-09 at 14:28 -0400, Owen Buchner wrote:
 I have two questions for you.  Firstly I'm having trouble trying to plot more
 than 1 graph.  I'm attempting to make a plot with 9 panels, but I have no clue
 what type of code to use.

If you are using R's base graphics and you want 9 plots arranged
vertically, you can use par(mfrow = c(9, 1)) before your first plot.

The first plot will then be in the top panel (row).

Each successive call to a high-level plot function [i.e. plot(), boxplot(),
barplot()] will draw the plot in the next panel (row) down.

See ?par for more information and some examples.

Another option, for base graphics, is to use the layout() function, and
yet another is to use the grid/lattice packages for creating Trellis-type
plots.
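
For instance, a minimal sketch of the mfrow approach (the data here are
made up purely for illustration):

par(mfrow = c(9, 1))                    # 9 rows, 1 column of panels
for (i in 1:9)
    plot(rnorm(10), type = "l", main = paste("Panel", i))
par(mfrow = c(1, 1))                    # restore the default single-panel layout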

 Secondly I was wondering if there was some code to generate random numbers
 between two defined limits and then have R choose one randomly in a
 program.
 If you could answer either of these questions for me I would appreciate it.

If you want to generate random numbers from a pre-specified sequence of
numbers between some min and max values (each number having an equal
probability of being selected) and then return one of them, you can use
the sample function:

  sample(min:max, 1)

See ?sample for more information. Note that the sequence need not be all
integers:

 sample(seq(1, 10, 0.5), 1)
[1] 2.5


If you want the random numbers to be within some min:max set of limits,
such that:

  min <= x <= max

each 'x' having an equal probability of being selected and then return
one of the numbers, use runif():

  runif(1, min, max)

See ?runif for more information.
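
For instance, a quick sketch of both approaches (the limits 5 and 10 are
arbitrary placeholders):

sample(5:10, 1)               # one random integer from 5, 6, ..., 10
runif(1, min = 5, max = 10)   # one uniform draw anywhere in [5, 10]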

HTH,

Marc Schwartz

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] RMySQL installed but not availalable

2005-03-10 Thread Don MacQueen

require(RMySQL)
help('MySQL')
then see the examples shown by the help page
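For example, a minimal sketch of opening and closing a connection once the
package is loaded (the user, password, database and host below are made-up
placeholders, not values from the original post):

library(RMySQL)                      # also loads the DBI package
drv <- dbDriver("MySQL")
con <- dbConnect(drv, user = "me", password = "secret",
                 dbname = "test", host = "localhost")
dbListTables(con)                    # list the tables in the database
dbDisconnect(con)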
At 9:12 PM -1200 3/9/05, Adriano von Sydow wrote:
Hi
I run Linux SuSE 9.1, with MySQL 4.0.18.
I installed R 2.0.1 and it is working fine.
I installed the RMySQL 0.5-5 package and verified all installed.packages
(see below), but when I tried to use any RMySQL-specific command it
gives me the same error message:

> dbConnect
Error: Object "dbConnect" not found
It looks like the package is not installed.
Any ideas or help would be welcome.
Thanks,
Popolito
*** INSTALL *
ginossauro:/home/avonsydow/Download/Stats # R CMD INSTALL RMySQL_0.5-5.tar.gz
* Installing *source* package 'RMySQL' ...
creating cache ./config.cache
checking how to run the C preprocessor... cc -E
checking for compress in -lz... yes
checking for getopt_long in -lc... yes
checking for mysql_init in -lmysqlclient... yes
checking for mysql.h... no
checking for /usr/local/include/mysql/mysql.h... no
checking for /usr/include/mysql/mysql.h... yes
updating cache ./config.cache
creating ./config.status
creating src/Makevars
** libs
gcc -I/usr/lib/R/include -I/usr/include/mysql -I/usr/local/include 
-I/opt/gnome/
include   -fPIC   -c RS-DBI.c -o RS-DBI.o
gcc -I/usr/lib/R/include -I/usr/include/mysql -I/usr/local/include 
-I/opt/gnome/
include   -fPIC   -c RS-MySQL.c -o RS-MySQL.o
gcc -shared -L/usr/local/lib -L/opt/gnome/lib -o RMySQL.so RS-DBI.o 
RS-MySQL.o -
lmysqlclient -lz  -L/usr/lib/R/lib -lR
** R
** inst
** save image
Loading required package: DBI
[1] dbObjectId
[1] format
[1] show
[1] print
[1] MySQLObject
[1] MySQLDriver
[1] dbUnloadDriver
[1] dbGetInfo
[1] dbListConnections
[1] summary
[1] MySQLConnection
[1] dbConnect
[1] dbConnect
[1] dbConnect
[1] dbDisconnect
[1] dbSendQuery
[1] dbGetQuery
[1] dbGetException
[1] dbGetInfo
[1] dbListResults
[1] summary
[1] dbListTables
[1] dbReadTable
[1] dbWriteTable
[1] dbExistsTable
[1] dbRemoveTable
[1] dbListFields
[1] dbCommit
[1] dbRollback
[1] dbCallProc
[1] MySQLResult
[1] dbClearResult
[1] fetch
[1] fetch
[1] dbGetInfo
[1] dbGetStatement
[1] dbListFields
[1] dbColumnInfo
[1] dbGetRowsAffected
[1] dbGetRowCount
[1] dbHasCompleted
[1] dbGetException
[1] summary
[1] dbDataType
[1] make.db.names
[1] SQLKeywords
[1] isSQLKeyword
[1] dbApply
[1] dbApply

** help
  Building/Updating help pages for package 'RMySQL'
Formats: text html latex example
 MySQL texthtmllatex   example
 MySQLConnection-class texthtmllatex   example
 MySQLDriver-class texthtmllatex   example
 MySQLObject-class texthtmllatex   example
 MySQLResult-class texthtmllatex   example
 S4R   texthtmllatex
 dbApply-methods   texthtmllatex   example
 dbApply   texthtmllatex   example
 dbCallProc-methodstexthtmllatex
 dbCommit-methods  texthtmllatex   example
 dbConnect-methods texthtmllatex   example
 dbDataType-methodstexthtmllatex   example
 dbDriver-methods  texthtmllatex   example
 dbGetInfo-methods texthtmllatex   example
 dbListTables-methods  texthtmllatex   example
 dbObjectId-class  texthtmllatex   example
 dbReadTable-methods   texthtmllatex   example
 dbSendQuery-methods   texthtmllatex   example
 dbSetDataMappings-methods texthtmllatex   example
 fetch-methods texthtmllatex   example
 isIdCurrent   texthtmllatex   example
 make.db.names-methods texthtmllatex   example
 mysqlDBApply  texthtmllatex   example
 mysqlSupport  texthtmllatex
 safe.writetexthtmllatex   example
 summary-methods   texthtmllatex
* DONE (RMySQL)

*INSTALLED.PACKAGES ***
 installed.packages()
  Package  LibPath  Version   Priority  Bundle
base   base   /usr/lib/R/library 2.0.1   baseNA
boot   boot   /usr/lib/R/library 1.2-20  recommended NA
class  class  /usr/lib/R/library 7.2-10  recommended VR
clustercluster/usr/lib/R/library 1.9.6   recommended NA
datasets   datasets   /usr/lib/R/library 2.0.1   baseNA
DBIDBI/usr/lib/R/library 0.1-8   NANA
foreignforeign/usr/lib/R/library 0.8-0   recommended NA
graphics   graphics   /usr/lib/R/library 2.0.1   baseNA
grDevices  grDevices  /usr/lib/R/library 2.0.1   baseNA
grid   grid   /usr/lib/R/library 2.0.1   baseNA
KernSmooth KernSmooth /usr/lib/R/library 2.22-14 recommended NA
latticelattice/usr/lib/R/library 0.10-14 recommended NA
MASS   MASS   

Re: [R] two-dimensional integration?

2005-03-10 Thread Prof Brian Ripley
Nils,
For 2D, see package 'adapt' on CRAN. e.g.
adapt(2, c(0,0), c(1,1), functn=function(x) sin(prod(x))*exp(x[1]-x[2]))
Package `adapt' will do larger numbers of dimensions, but numerical 
quadrature is often no more effective than Monte-Carlo methods in more 
than a few dimensions.  For very smooth functions, quasi-random numbers 
can help.

A good reference aimed at statisticians is
@Book{Evans.Swartz.00,
  author =   {Michael Evans and Tim Swartz},
  title ={Approximating Integrals via Monte Carlo and
  Deterministic Methods},
  publisher ={Oxford University Press},
  year = 2000,
  address =  {Oxford},
  ISBN = {0-19-850278-8},
}
BTW, we are not good at predicting to 2014, but fairly good at the 
present.  In this case I could not guess a good search term on
http://search.r-project.org, but it often gets you there.  It has a 
`complete' list of packages, as does CRAN, and searching those pages for 
`integrate' works.

Brian
On Thu, 10 Mar 2005, Nils-at-Duke Lid Hjort wrote:
I find the one-dimensional integrate very helpful,
but often enough I stumble into problems that require
two (or more)-dimensional integrals. I suppose there
are no R functions that can do this for me, directly?
The ideal thing would be to be able to define say
f <- function(x)
{
x1 <- x[1]
x2 <- x[2]
sin(x1*x2)*exp(x1-x2)
}
and then write say
integrate(f, xlim=c(0,1), ylim=c(0,1))  .
(a) No such thing exists, as of today, right?
(b) There *are* general numerical routines out there
for doing such things, right? (Importance sampling
or adaptive important sampling would often do the
job, but it would be difficult to find something that
always works -- at least in higher dimension?
Also, iterated one-dimensional integrations could
be attempted, but I find that messy, also because
things lose the g(many) = many(g) property, and
then R refuses to integrate g.)
(c) Will a thing like the above exist in R before
the Tromsoe Olympics in 2014? For which dimensions? 
Nils Lid Hjort
[Professor of statistics at Oslo, but currently at Duke]

--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] two-dimensional integration?

2005-03-10 Thread Dimitris Rizopoulos
This existed even before the Athens Olympics 2004 :)
look at package adapt which integrates a function from 2 to 20 
dimensions, e.g.,

library(adapt)
f <- function(x){
   x1 <- x[1]
   x2 <- x[2]
   sin(x1*x2)*exp(x1-x2)
}
adapt(2, c(0,0), c(1,1), functn=f)
I hope it helps.
Best,
Dimitris

Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven
Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/16/336899
Fax: +32/16/337015
Web: http://www.med.kuleuven.ac.be/biostat/
http://www.student.kuleuven.ac.be/~m0390867/dimitris.htm
- Original Message - 
From: Nils-at-Duke Lid Hjort [EMAIL PROTECTED]
To: R-help@stat.math.ethz.ch; [EMAIL PROTECTED]
Sent: Thursday, March 10, 2005 6:22 AM
Subject: [R] two-dimensional integration?


I find the one-dimensional integrate very helpful,
but often enough I stumble into problems that require
two (or more)-dimensional integrals. I suppose there
are no R functions that can do this for me, directly?
The ideal thing would be to be able to define say
f <- function(x)
{
x1 <- x[1]
x2 <- x[2]
sin(x1*x2)*exp(x1-x2)
}
and then write say
integrate(f, xlim=c(0,1), ylim=c(0,1))  .
(a) No such thing exists, as of today, right?
(b) There *are* general numerical routines out there
for doing such things, right? (Importance sampling
or adaptive important sampling would often do the
job, but it would be difficult to find something that
always works -- at least in higher dimension?
Also, iterated one-dimensional integrations could
be attempted, but I find that messy, also because
things lose the g(many) = many(g) property, and
then R refuses to integrate g.)
(c) Will a thing like the above exist in R before
the Tromsoe Olympics in 2014? For which dimensions?
Nils Lid Hjort
[Professor of statistics at Oslo, but currently at Duke]
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! 
http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] two-dimensional integration?

2005-03-10 Thread Peter Dalgaard
Nils-at-Duke Lid Hjort [EMAIL PROTECTED] writes:

 I find the one-dimensional integrate very helpful,
 but often enough I stumble into problems that require
 two (or more)-dimensional integrals. I suppose there
...
 (c) Will a thing like the above exist in R before
 the Tromsoe Olympics in 2014? For which dimensions? Nils Lid Hjort
 [Professor of statistics at Oslo, but currently at Duke]

Did you check the CRAN package adapt? It's been around at least
since Nagano 1998, I believe.

-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Dropping columns while reading data from text file.

2005-03-10 Thread Upasna Sharma
Hi

I have a huge text file and .dat file from which I want to read data. I do
not need all the columns in the files. I want to extract only some columns
from the .txt file or the .dat file, because reading the entire file is
becoming very difficult due to memory issues. Is it possible to extract a
few columns from .txt or .dat file while reading the data in R?

Thanks
Upasna


-- 
The past is a history, the future is a mystery, and this moment is a
gift. That is why this moment is called 'the present'. Anonymous

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] R: LIST function and LOOPS

2005-03-10 Thread Clark Allan
hi all

another simple question.

i've written a dummy program so that you get the concept. (the code
could be simplified such that there are no loops, but let's leave the
loops in for now.)

z1 <- function(w)
{
for (i in 1:w)
{
set.seed(i+6)
ss <- 0
for (j in 1:5)
{
set.seed(j+1+(i-1)*6)
r <- rnorm(1)
ss <- ss+r
}
list(ss=ss)
}
}
check.1 <- z1(3)
check.1

the results is:
$ss
[1] -0.01516304


what i want is something that looks like this:

j=1
$ss
[1] -2.213343

j=2
$ss
[1] -2.904235

j=3
$ss
[1] -0.01516304


i know that i could use the print command. (see z2)

z2 <- function(w)
{
for (i in 1:w)
{
set.seed(i+6)
ss <- 0
for (j in 1:5)
{
set.seed(j+1+(i-1)*6)
r <- rnorm(1)
ss <- ss+r
}
print(ss)
}
}
check.2 <- z2(3)
check.2

> check.2 <- z2(3)
[1] -2.213343
[1] -2.904235
[1] -0.01516304
> check.2
[1] -0.01516304

the problem with z2 is that only the last value is saved.


what i could do is use matrices like the following: (but i don't want to
do this AND WOULD PREFER TO USE list.)

z3 <- function(w)
{
results. <- matrix(nrow=w, ncol=1)
colnames(results.) <- c("ss")
for (i in 1:w)
{
set.seed(i+6)
ss <- 0
for (j in 1:5)
{
set.seed(j+1+(i-1)*6)
r <- rnorm(1)
ss <- ss+r
}
results.[i,1] <- ss
}
results.
}
check.3 <- z3(3)
check.3

> check.3
  ss
[1,] -2.21334260
[2,] -2.90423463
[3,] -0.01516304

what if i have a new program (something different) and i want the
following:

j=1
$a
1
2
3

$b
1
2
3
4
5

$c
1


###
j=2
$a
11
21
31

$b
11
21
31
41
51

$c
11

###
j=3
$a
21
22
32

$b
21
22
32
42
52

$c
21

MATRICES SEEM TO BE A GOOD WAY OF DOING THIS (but then you would have
to set up three matrices, one each for a, b and c). BUT WHAT IF I WANT TO USE
THE LIST FUNCTION? i.e. there is a list in the first loop that i want to
display!
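
A minimal sketch of one way to keep each iteration's list, reusing the dummy
computation above (z4 and check.4 are made-up names, not from the original
post): pre-allocate a list and fill one element per iteration.

z4 <- function(w)
{
out <- vector("list", w)        # one slot per value of i
for (i in 1:w)
{
set.seed(i+6)
ss <- 0
for (j in 1:5)
{
set.seed(j+1+(i-1)*6)
r <- rnorm(1)
ss <- ss+r
}
out[[i]] <- list(ss=ss)         # keep this iteration's result
}
out
}
check.4 <- z4(3)
check.4[[2]]$ss                 # e.g. the value for i = 2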

sorry for the long mail.

***
ALLAN__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Re: [R] contrast matrix for aov

2005-03-10 Thread Prof Brian Ripley
On Wed, 9 Mar 2005, Darren Weber wrote:
How do we specify a contrast interaction matrix for an ANOVA model?
We have a two-factor, repeated measures design, with
Where does `repeated measures' come into this?  You appear to have 
repeated a 2x2 experiment in each of 8 blocks (subjects).  Such a design 
is usually analysed with fixed effects.  (Perhaps you averaged over 
repeats in the first few lines of your code?)

Cue Direction (2) x  Brain Hemisphere(2)
Each of these has 2 levels, 'left' and 'right', so it's a simple 2x2 design 
matrix.  We have 8 subjects in each cell (a balanced design) and we want to 
specify the interaction contrast so that:

CueLeft > CueRght for the Right Hemisphere
CueRght > CueLeft for the Left Hemisphere.
Here is a copy of the relevant commands for R:

lh_cueL <- rowMeans( LHroi.cueL[,t1:t2] )
lh_cueR <- rowMeans( LHroi.cueR[,t1:t2] )
rh_cueL <- rowMeans( RHroi.cueL[,t1:t2] )
rh_cueR <- rowMeans( RHroi.cueR[,t1:t2] )
roiValues <- c( lh_cueL, lh_cueR, rh_cueL, rh_cueR )
cuelabels <- c("CueLeft", "CueRight")
hemlabels <- c("LH", "RH")
roiDataframe <- data.frame( roi=roiValues, Subject=gl(8,1,32,subjectlabels), 
Hemisphere=gl(2,16,32,hemlabels), Cue=gl(2,8,32,cuelabels) )

roi.aov <- aov(roi ~ (Cue*Hemisphere) + Error(Subject/(Cue*Hemisphere)), 
data=roiDataframe)
I think the error model should be Error(Subject).  In what sense are `Cue' 
and `Cue:Hemisphere' random effects nested inside `Subject'?

Let me fake some `data':
set.seed(1); roiValues <- rnorm(32)
subjectlabels <- paste("V", 1:8, sep = "")
options(contrasts = c("contr.helmert", "contr.poly"))
roi.aov <- aov(roi ~ Cue*Hemisphere + Error(Subject), data=roiDataframe)
roi.aov
Call:
aov(formula = roi ~ Cue * Hemisphere + Error(Subject), data = roiDataframe)
Grand Mean: 0.1165512
Stratum 1: Subject
Terms:
Residuals
Sum of Squares   4.200946
Deg. of Freedom 7
Residual standard error: 0.7746839
Stratum 2: Within
Terms:
  Cue Hemisphere Cue:Hemisphere Residuals
Sum of Squares   0.216453   0.019712   0.057860 21.896872
Deg. of Freedom         1          1              1        21
Residual standard error: 1.021131
Estimated effects are balanced
Note that all the action is in one stratum, and the SSQs are the same 
as

aov(roi ~ Subject + Cue * Hemisphere, data = roiDataframe)
(and also the same as for your fit).
print(summary(roi.aov))
It auto-prints, so you don't need print().

I've tried to create a contrast matrix like this:
cm <- contrasts(roiDataframe$Cue)
which gives the main effect contrasts for the Cue factor.  I really want to 
specify the interaction contrasts, so I tried this:


# c( lh_cueL, lh_cueR, rh_cueL, rh_cueR )
# CueRight > CueLeft for the Left Hemisphere.
# CueLeft > CueRight for the Right Hemisphere
cm <- c(-1, 1, 1, -1)
dim(cm) <- c(2,2)
(That is up to sign what Helmert contrasts give you.)
roi.aov <- aov( roi ~ (Cue*Hemisphere) + Error(Subject/(Cue*Hemisphere)),
contrasts=cm, data=roiDataframe)
print(summary(roi.aov))

but the results of these two aov commands are identical.  Is it the case that 
the 2x2 design matrix is always going to give the same F values for the 
interaction regardless of the contrast direction?
Yes, as however you code the design (via `contrasts') you are fitting the 
same subspaces.  Not sure what you mean by `contrast direction', though.

However, you have not specified `contrasts' correctly:
contrasts: A list of contrasts to be used for some of the factors in
  the formula.
and cm is not a list, and an interaction is not a factor.
OR, is there some way to get a summary output for the contrasts that is 
not available from the print method?
For more than two levels, yes: see `split' under ?summary.aov.
Also, see se.contrast(), which allows you to find the standard error for any
contrast.

For the fixed-effects model you can use summary.lm:
fit <- aov(roi ~ Subject + Cue * Hemisphere, data = roiDataframe)
summary(fit)
               Df  Sum Sq Mean Sq F value Pr(>F)
Subject 7  4.2009  0.6001  0.5756 0.7677
Cue 1  0.2165  0.2165  0.2076 0.6533
Hemisphere  1  0.0197  0.0197  0.0189 0.8920
Cue:Hemisphere  1  0.0579  0.0579  0.0555 0.8161
Residuals  21 21.8969  1.0427
summary.lm(fit)
Call:
aov(formula = roi ~ Subject + Cue * Hemisphere, data = roiDataframe)
Residuals:
Min  1Q  Median  3Q Max
-1.7893 -0.4197  0.1723  0.5868  1.3033
Coefficients:
                 Estimate Std. Error t value Pr(>|t|)
[...]
Cue1             -0.08224    0.18051  -0.456    0.653
Hemisphere1       0.02482    0.18051   0.137    0.892
Cue1:Hemisphere1 -0.04252    0.18051  -0.236    0.816
where the F values are the squares of the t values.
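
As a minimal sketch of the se.contrast() suggestion above, applied to the
fixed-effects fit just shown (it assumes the roiDataframe built earlier in
this thread, with Cue levels "CueLeft" and "CueRight"):

## standard error of the CueLeft vs CueRight comparison
se.contrast(fit, list(Cue == "CueLeft", Cue == "CueRight"),
            data = roiDataframe)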
--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 

Re: [R] Dropping columns while reading data from text file.

2005-03-10 Thread Peter Dalgaard
Upasna Sharma [EMAIL PROTECTED] writes:

 Hi
 
 I have a huge text file and .dat file from which I want to read data. I do
 not need all the columns in the files. I want to extract only some columns
 from the .txt file or the .dat file, because reading the entire file is
 becoming very difficult due to memory issues. Is it possible to extract a
 few columns from .txt or .dat file while reading the data in R?

You're not saying what tools you are using to read in the first place,
but notice that the colClasses argument to read.table can have NULL
elements. 


-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Dropping columns while reading data from text file.

2005-03-10 Thread Prof Brian Ripley
On Thu, 10 Mar 2005, Upasna Sharma wrote:
Hi
I have a huge text file and .dat file from which I want to read data. I do
not need all the columns in the files. I want to extract only some columns
from the .txt file or the .dat file, because reading the entire file is
becoming very difficult due to memory issues. Is it possible to extract a
few columns from .txt or .dat file while reading the data in R?
Yes.
Look at the use of NULL for 'colClasses' (read.table) or 'what' (scan), 
and also 'flush'.
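
For example, a minimal sketch of the read.table() route (the file name and
column types are made up; the character string "NULL" marks a column to skip):

## keep the 1st and 3rd of four columns, skip the other two
dat <- read.table("big.dat", header = TRUE,
                  colClasses = c("numeric", "NULL", "numeric", "NULL"))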

--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] function in order to plot the same graph to postscript and pdf

2005-03-10 Thread Ronny Klein
 I guess you would want eval(MYPLOT) there?

 But the function above works if you say
   plot.both(myplot = expression(plot(x)), filename = foo)
 and that is what I said above: set myplot = expression(plot(x)).
 Z

My syntax in the example was a little bit messed up, sorry for that.

However Gabor Grothendieck's reply has brought me to the solution. The 
following function does the job right:

print.both <- function(myplot, filename){
  pdf(file = paste(filename, ".pdf", sep = ""))
  postscript(file = paste(filename, ".eps", sep = ""))
  dev.control(displaylist = "enable")
  myplot
  dev.copy(which = dev.next())
  graphics.off()
}

The dev.control(displaylist = "enable") call is necessary here because only then
does R record the plot, so that dev.copy() can be used.
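
A minimal usage sketch (the file stem "foo" is made up); because of lazy
evaluation the plot call only runs inside the function, after both devices
have been opened:

print.both(plot(rnorm(100)), "foo")   # writes foo.pdf and foo.eps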

So, thank you all for your help.

Ronny

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Flattening a list of data frames

2005-03-10 Thread Dimitris Rizopoulos
try this:
do.call(rbind, z)
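
For instance, with the example data from the question (a quick sketch of the
simple version only; it does not cover the by() part):

a <- data.frame(x = c(1, 2, 3),    y = c(5, 7, 9))
b <- data.frame(x = c(2, 4, 7, 9), y = c(2, 3, 5, 4))
z <- list(a, b)
do.call(rbind, z)   # one 7-row data frame, same as rbind(z[[1]], z[[2]])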
Best,
Dimitris

Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven
Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/16/336899
Fax: +32/16/337015
Web: http://www.med.kuleuven.ac.be/biostat/
http://www.student.kuleuven.ac.be/~m0390867/dimitris.htm
- Original Message - 
From: Lorin Hochstein [EMAIL PROTECTED]
To: R-help@stat.math.ethz.ch
Sent: Wednesday, March 09, 2005 9:07 PM
Subject: [R] Flattening a list of data frames


Hello all,
Simple version of my problem:
I've got a list of data frames, where each data frame has the same 
number of columns and the same column names. I'd like to flatten the 
list into one large data frame. Is there an easy way to do this?

Quick example code:
a <- data.frame(x=c(1,2,3), y=c(5,7,9))
b <- data.frame(x=c(2,4,7,9), y=c(2,3,5,4))
z <- list(a,b)
# Do something to get the equivalent of  rbind(z[[1]],z[[2]])
???
More complex version:
My data is in this format because it's the output of a by() statement 
that looks like this:

y <- by(d, list(d$StudentID, d$Assignment), gapfun)
(where gapfun is a function I've defined that takes a data frame and 
returns another data frame).

What I would like is to do is transform y into a data frame that has 
columns StudentID, Assignment, and the columns in the data frame 
returned by gapfun.

Any ideas?
Lorin
--
Lorin Hochstein
Graduate Research Assistant
Experimental Software Engineering Group
Computer Science Department
University of Maryland, College Park
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! 
http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Flattening a list of data frames

2005-03-10 Thread Lorin Hochstein
Hello all,
Simple version of my problem:
I've got a list of data frames, where each data frame has the same 
number of columns and the same column names. I'd like to flatten the 
list into one large data frame. Is there an easy way to do this?

Quick example code:
a <- data.frame(x=c(1,2,3), y=c(5,7,9))
b <- data.frame(x=c(2,4,7,9), y=c(2,3,5,4))
z <- list(a,b)
# Do something to get the equivalent of  rbind(z[[1]],z[[2]])
???
More complex version:
My data is in this format because it's the output of a by() statement 
that looks like this:

y <- by(d, list(d$StudentID, d$Assignment), gapfun)
(where gapfun is a function I've defined that takes a data frame and 
returns another data frame).

What I would like is to do is transform y into a data frame that has 
columns StudentID, Assignment, and the columns in the data frame 
returned by gapfun.

Any ideas?
Lorin
--
Lorin Hochstein
Graduate Research Assistant
Experimental Software Engineering Group
Computer Science Department
University of Maryland, College Park
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] How to get standard deviation of rows in a matrix

2005-03-10 Thread Sundar Dorai-Raj

Jagarlamudi, Choudary wrote on 3/9/2005 1:49 PM:
Hi all,
 
   I am trying to find the sd of my rows in a matrix and I get column sds in spite of extracting rows.
I tried to do sqrt(var(x)) but that didn't work as well,
 
Here is my data
 
genes
15 24 63 40
25 42 46 35
23 53 37 45
30 37 50 55
40 51 30 48
 
x <- sd(genes[1:5,])
 
y <- sqrt(var(genes[1:5,]))
 
I get 4 sds for the 4 columns instead of 5 sds for my 5 rows.
Thank you in advance.
 
This has come up several times in the past and a quick search of the 
archives would have led you to:

http://finzi.psych.upenn.edu/R/Rhelp02a/archive/34169.html
HTH,
--sundar
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] contrast matrix for aov

2005-03-10 Thread Peter Dalgaard
Prof Brian Ripley [EMAIL PROTECTED] writes:

 On Wed, 9 Mar 2005, Darren Weber wrote:
 
  How do we specify a contrast interaction matrix for an ANOVA model?
 
  We have a two-factor, repeated measures design, with
 
 Where does `repeated measures' come into this?  You appear to have
 repeated a 2x2 experiment in each of 8 blocks (subjects).  Such a
 design is usually analysed with fixed effects.  (Perhaps you averaged
 over repeats in the first few lines of your code?)

Actually, that's not usual in SAS (and not SPSS either, I believe)
in things like

proc glm;
model y1-y4= ;
repeated row 2 col 2;

[Not that SAS/SPSS is the Gospel, but they do tend to set the
terminology in these matters.]

There you'd get the analysis split up as analyses of three contrasts
corresponding to the main effects and interaction, c(-1,-1,1,1),
c(-1,1,-1,1), and c(-1,1,1,-1) in the 2x2 case (up to scale and sign).
In the 2x2 case, this corresponds exactly to the 4-stratum model
row*col + Error(block/(row*col)).

(It is interesting to note that it is still not the optimal analysis
for arbitrary covariance patterns because dependence between contrasts
is not utilized - it is basically assumed to be absent.)

With more than two levels, you get the additional complications of
sphericity testing and GG/HF epsilons and all that.

-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Interval censoring in Survival analysis

2005-03-10 Thread AMOROS NAVARRO, ALEX
Hi,

I would like to know whether R allows different kinds of censoring
in its latest version. What kind of package should I use, and is this censoring
applicable to parametric and semi-parametric (Cox) models?

Thanks,
Àlex.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] contrast matrix for aov

2005-03-10 Thread Christophe Pallier
Prof Brian Ripley wrote:
On Wed, 9 Mar 2005, Darren Weber wrote:
We have a two-factor, repeated measures design, with

Where does `repeated measures' come into this?  You appear to have 
repeated a 2x2 experiment in each of 8 blocks (subjects).  Such a 
design is usually analysed with fixed effects.  (Perhaps you averaged 
over repeats in the first few lines of your code?)

roi.aov <- aov(roi ~ (Cue*Hemisphere) + Error(Subject/(Cue*Hemisphere)), 
Error(Subject/(Cue*Hemisphere)), data=roiDataframe)

I think the error model should be Error(Subject).  In what sense are 
`Cue' and `Cue:Hemisphere' random effects nested inside `Subject'?

I do not understand this, and I think I am probably not the only one. 
That is why I would be grateful if you could give a bit more information.

My understanding is that the fixed factors Cue and Hemisphere are 
crossed with the random factor Subject (in other words, Cue and 
Hemisphere are within-subjects factors, and this is probably why Darren 
called it a repeated measure design).

In this case, it seems to me from the various textbooks I read on Anova, 
that the appropriate MS to  test the interaction Cue:Hemisphere is 
Subject:Cue:Hemisphere (with 7 degrees of freedom, as there are 8 
independent subjects). 

If you input Error(Subject/(Cue*Hemisphere)) in the aov formula, then 
the test for the interaction indeed uses the Subject:Cue:Hemisphere 
source of variation in the denominator. This fits with the output of other 
software.

If you include only 'Subject', then the test for the interaction has 21 
degrees of Freedom, and I do not understand what this tests.

I apologize if my terminology is not accurate.  But I hope you can 
clarify what is wrong with the Error(Subject/(Cue*Hemisphere)) term,
or maybe just point us to the relevant textbooks.

Thanks in advance,
Christophe Pallier



Let me fake some `data':
set.seed(1); roiValues <- rnorm(32)
subjectlabels <- paste("V", 1:8, sep = "")
options(contrasts = c("contr.helmert", "contr.poly"))
roi.aov <- aov(roi ~ Cue*Hemisphere + Error(Subject), data=roiDataframe)
roi.aov

Call:
aov(formula = roi ~ Cue * Hemisphere + Error(Subject), data = 
roiDataframe)

Grand Mean: 0.1165512
Stratum 1: Subject
Terms:
Residuals
Sum of Squares   4.200946
Deg. of Freedom 7
Residual standard error: 0.7746839
Stratum 2: Within
Terms:
  Cue Hemisphere Cue:Hemisphere Residuals
Sum of Squares   0.216453   0.019712   0.057860 21.896872
Deg. of Freedom         1          1              1        21
Residual standard error: 1.021131
Estimated effects are balanced
Note that all the action is in one stratum, and the SSQs are the same as
aov(roi ~ Subject + Cue * Hemisphere, data = roiDataframe)
(and also the same as for your fit).
print(summary(roi.aov))

It auto-prints, so you don't need print().

I've tried to create a contrast matrix like this:
cm <- contrasts(roiDataframe$Cue)
which gives the main effect contrasts for the Cue factor.  I really 
want to specify the interaction contrasts, so I tried this:


# c( lh_cueL, lh_cueR, rh_cueL, rh_cueR )
# CueRight > CueLeft for the Left Hemisphere.
# CueLeft > CueRight for the Right Hemisphere
cm <- c(-1, 1, 1, -1)
dim(cm) <- c(2,2)

(That is up to sign what Helmert contrasts give you.)
roi.aov <- aov( roi ~ (Cue*Hemisphere) + 
Error(Subject/(Cue*Hemisphere)),
contrasts=cm, data=roiDataframe)
print(summary(roi.aov))


but the results of these two aov commands are identical.  Is it the 
case that the 2x2 design matrix is always going to give the same F 
values for the interaction regardless of the contrast direction?

Yes, as however you code the design (via `contrasts') you are fitting 
the same subspaces.  Not sure what you mean by `contrast direction', 
though.

However, you have not specified `contrasts' correctly:
contrasts: A list of contrasts to be used for some of the factors in
  the formula.
and cm is not a list, and an interaction is not a factor.
OR, is there some way to get a summary output for the contrasts that 
is not available from the print method?

For more than two levels, yes: see `split' under ?summary.aov.
Also, see se.contrast(), which allows you to find the standard error for 
any contrast.

For the fixed-effects model you can use summary.lm:
fit <- aov(roi ~ Subject + Cue * Hemisphere, data = roiDataframe)
summary(fit)
               Df  Sum Sq Mean Sq F value Pr(>F)
Subject 7  4.2009  0.6001  0.5756 0.7677
Cue 1  0.2165  0.2165  0.2076 0.6533
Hemisphere  1  0.0197  0.0197  0.0189 0.8920
Cue:Hemisphere  1  0.0579  0.0579  0.0555 0.8161
Residuals  21 21.8969  1.0427
summary.lm(fit)

Call:
aov(formula = roi ~ Subject + Cue * Hemisphere, data = roiDataframe)
Residuals:
Min  1Q  Median  3Q Max
-1.7893 -0.4197  0.1723  0.5868  1.3033
Coefficients:
                 Estimate Std. Error t value Pr(>|t|)
[...]
Cue1 

Re: [R] from long/lat to UTM

2005-03-10 Thread Ted Harding
On 10-Mar-05 Sander Oom wrote:
 Hi Yyan,
 
 The proj4R package by Roger Bivand will allow you to project data in 
 many ways and directions.
 
 http://spatial.nhh.no/R/Devel/proj4R-pkg.pdf
 
 It uses the proj libraries from:
 
 http://www.remotesensing.org/proj/
 
 Not sure where you would derive the time zone!
 
 Good luck,
 
 Sander.

While there is a longitude-based nominal time-zone
structure (0deg E is the centre of Zone 0 which extends
for 7.5deg either side; successive time-zones move round
by 15deg), this does not apply cleanly to the time-shifts
adopted in different places for local time.

A World map of regions with different local-time offsets
is a crazy patchwork, with all sorts of contradictory
looking regions. For instance, the -0700 region of
the USA extends from approx -0830 to approx -0620,
covering over 2 hours, and parts of -0800 touch the
-0700 line and are more than 0100 East of parts of -0700.
Even worse can be found over the Indian/Central Asian
and Malaysian parts of the world, where time-shifts
of 30 minutes are also frequent (and, according to my
Atlas, one country, Nepal, has 5 2/3, i.e. +0540!).

As Sander says, "Not sure where you would derive the time zone!".

Unless you can refer a (long,lat) position to a look-up
table, you can't predict what the zone will be to less
the 1 hour (except of course for the nominal time-zones
by 15deg sectors). I've never encountered a digital
version of such a table (my Atlas must be based on one,
though).

Best wishes,
Ted.

 yyan liu wrote:
 Hi:
   Is there any function in R which can convert the
 long/lat to UTM(Universal Transverse Mercator)?
   There are quite a few converters on Internet.
 However, the interface is designed as input-output
 which I can not convert lots of locations at the same
 time.
   Another question is whether there is a function in R
 which can tell the time zone from the location's
 lat/long?
   Thank you!
 
 liu
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide!
 http://www.R-project.org/posting-guide.html
 
 
 
 -- 
 
 Dr. Sander P. Oom
 Animal, Plant and Environmental Sciences,
 University of the Witwatersrand
 Private Bag 3, Wits 2050, South Africa
 Tel (work)  +27 (0)11 717 64 04
 Tel (home)  +27 (0)18 297 44 51
 Fax +27 (0)18 299 24 64
 Email   [EMAIL PROTECTED]
 Web www.oomvanlieshout.net/sander
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide!
 http://www.R-project.org/posting-guide.html


E-Mail: (Ted Harding) [EMAIL PROTECTED]
Fax-to-email: +44 (0)870 094 0861
Date: 10-Mar-05   Time: 09:47:25
-- XFMail --

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] How to get standard deviation of rows in a matrix

2005-03-10 Thread Tobias Verbeke
On Wed, 9 Mar 2005 13:49:44 -0600
Jagarlamudi, Choudary [EMAIL PROTECTED] wrote:

 Hi all,
  
  I am trying to find the sd of my rows in a matrix and I get column sds in spite 
  of extracting rows.
  I tried to do sqrt(var(x)) but that didn't work as well,
  
 Here is my data
  
 genes
 15 24 63 40
 25 42 46 35
 23 53 37 45
 30 37 50 55
 40 51 30 48
  
 x <- sd(genes[1:5,])
  
 y <- sqrt(var(genes[1:5,]))
  
 I get 4 sds for the 4 columns instead of 5 sds for my 5 rows.
 Thank you in advance.

> mymat <- matrix(rnorm(20), 5)
> mymat
   [,1]   [,2]   [,3]   [,4]
[1,] -0.1418666 -0.6754704 -0.2525154 -1.4832003
[2,] -0.2254920  1.8705093 -0.9678318 -0.1108883
[3,]  2.2501392  0.1687349 -0.1279790  0.7055311
[4,]  0.9893453 -0.5924199  0.2410576  0.9001638
[5,] -0.4179559  0.9334556 -0.6501605  0.6148958
> apply(mymat, 1, sd)
[1] 0.6084171 1.2135998 1.0584750 0.7318231 0.7722641

See ?apply

HTH,
Tobias

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] contrast matrix for aov

2005-03-10 Thread Prof Brian Ripley
On Thu, 10 Mar 2005, Christophe Pallier wrote:
Prof Brian Ripley wrote:
On Wed, 9 Mar 2005, Darren Weber wrote:
We have a two-factor, repeated measures design, with

Where does `repeated measures' come into this?  You appear to have repeated 
a 2x2 experiment in each of 8 blocks (subjects).  Such a design is usually 
analysed with fixed effects.  (Perhaps you averaged over repeats in the 
first few lines of your code?)

roi.aov <- aov(roi ~ (Cue*Hemisphere) + Error(Subject/(Cue*Hemisphere)), 
data=roiDataframe)

I think the error model should be Error(Subject).  In what sense are `Cue' 
and `Cue:Hemisphere' random effects nested inside `Subject'?

I do not understand this, and I think I am probably not the only one. That is 
why I would be grateful if you could give a bit more information.

My understanding is that the fixed factors Cue and Hemisphere are crossed 
with the random factor Subject (in other words, Cue and Hemisphere are 
within-subjects factors, and this is probably why Darren called it a 
repeated measure design).
The issue is whether the variance of the error really depends on the 
treatment combination, which is what the Error(Subject/(Cue*Hemisphere)) 
assumes.  With that model

Error: Subject:Cue
              Df Sum Sq Mean Sq F value Pr(>F)
Cue1 0.2165  0.2165  0.1967 0.6708
Residuals  7 7.7041  1.1006
Error: Subject:Hemisphere
               Df Sum Sq Mean Sq F value Pr(>F)
Hemisphere  1 0.0197  0.0197  0.0154 0.9047
Residuals   7 8.9561  1.2794
Error: Subject:Cue:Hemisphere
               Df Sum Sq Mean Sq F value Pr(>F)
Cue:Hemisphere  1 0.0579  0.0579  0.0773  0.789
Residuals   7 5.2366  0.7481
you are assuming different variances for three contrasts.
In this case, it seems to me from the various textbooks I read on Anova, that 
the appropriate MS to  test the interaction Cue:Hemisphere is 
Subject:Cue:Hemisphere (with 7 degrees of freedom, as there are 8 independent 
subjects). 
If you input Error(Subject/(Cue*Hemisphere)) in the aov formula, then the 
test for the interaction indeed uses the Subject:Cue:Hemisphere source of 
variation in the denominator. This fits with the output of other software.

If you include only 'Subject', then the test for the interaction has 21 
degrees of Freedom, and I do not understand what this tests.
It uses a common variance for all treatment combinations.
I apologize if my terminology is not accurate.  But I hope you can clarify 
what is wrong with the Error(Subject/(Cue*Hemisphere)) term,
or maybe just point us to the relevant textbooks.
Nothing is `wrong' with it, it just seems discordant with the description
of the experiment.
--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] contrast matrix for aov

2005-03-10 Thread Prof Brian Ripley
On Thu, 10 Mar 2005, Peter Dalgaard wrote:
Prof Brian Ripley [EMAIL PROTECTED] writes:
On Wed, 9 Mar 2005, Darren Weber wrote:
How do we specify a contrast interaction matrix for an ANOVA model?
We have a two-factor, repeated measures design, with
Where does `repeated measures' come into this?  You appear to have
repeated a 2x2 experiment in each of 8 blocks (subjects).  Such a
design is usually analysed with fixed effects.  (Perhaps you averaged
over repeats in the first few lines of your code?)
Actually, that's not usual in SAS (and not SPSS either, I believe)
in things like
proc glm;
   model y1-y4= ;
   repeated row 2 col 2;
[Not that SAS/SPSS is the Gospel, but they do tend to set the
terminology in these matters.]
That seems to be appropriate only if the four treatments are done in a 
particular order (`repeated') and one expects correlations in the 
responses.  However, here the measurements seem to have been averages of 
replications.

It may be usual to (mis?)specify experiments in SAS that way: I don't 
know what end users do, but it is not the only way possible in SAS.

There you'd get the analysis split up as analyses of three contrasts
corresponding to the main effects and interaction, c(-1,-1,1,1),
c(-1,1,-1,1), and c(-1,1,1,-1) in the 2x2 case (up to scale and sign).
In the 2x2 case, this corresponds exactly to the 4-stratum model
row*col + Error(block/(row*col)).
(It is interesting to note that it is still not the optimal analysis
for arbitrary covariance patterns because dependence between contrasts
is not utilized - it is basically assumed to be absent.)
It also assumes that there is a difference between variances of the 
contrasts, that is there is either correlation between results or a 
difference in variances under different treatments.  Nothing in the 
description led me to expect either of those, but I was asking why it was 
specified that way.  (If the variance does differ with the mean then there 
are probably more appropriate analyses.)

--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] How to get standard deviation of rows in a matrix

2005-03-10 Thread Liaw, Andy
You should read ?var and ?sd more carefully.  For a data frame or a matrix,
var() returns the covariance matrix of the columns, whereas sd() returns the
standard deviations of the columns.  If you want standard deviations of the
rows, you need to transpose the data.
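
A minimal sketch with the matrix from the post (the sd(t()) line relies on
sd() being applied column-wise to a matrix, as in R 2.0.x):

genes <- matrix(c(15, 25, 23, 30, 40,
                  24, 42, 53, 37, 51,
                  63, 46, 37, 50, 30,
                  40, 35, 45, 55, 48), nrow = 5)
apply(genes, 1, sd)   # one sd per row
sd(t(genes))          # same result: transpose, then take column sds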

Andy

 From: Jagarlamudi, Choudary
 
 Hi all,
  
   I am trying to find the sd of my rows in a matrix and I get 
  column sds in spite of extracting rows.
  I tried to do sqrt(var(x)) but that didn't work as well,
  
 Here is my data
  
 genes
 15 24 63 40
 25 42 46 35
 23 53 37 45
 30 37 50 55
 40 51 30 48
  
 x <- sd(genes[1:5,])
  
 y <- sqrt(var(genes[1:5,]))
  
 I get 4 sds for the 4 columns instead of 5 sds for my 5 rows.
 Thank you in advance.
  
 Choudary Jagarlamudi
 Instructor
 Southwestern Oklahoma State University
 STF 254
 100 campus Drive
 Weatherford OK 73096
 Tel 580-774-7136
 
 
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] Interval censoring in Survival analysis

2005-03-10 Thread Rau, Roland
Hi,
 

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of AMOROS 
 NAVARRO, ALEX
 
  I would like to know whether R allows different 
  kinds of censoring in its latest version. What kind of package 
  should I use 

Yes, try the following:
library(survival)
?Surv
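
For interval censoring specifically, a minimal sketch (the endpoint vectors
and the names left, right, x and yourdata below are made-up placeholders; an
NA right endpoint marks a right-censored observation):

library(survival)
## an interval-censored survival object
Surv(time  = c(1, 4, 6, 2),
     time2 = c(3, 7, NA, 5),
     type  = "interval2")
## such an object can go on the left-hand side of a parametric fit, e.g.
## survreg(Surv(left, right, type = "interval2") ~ x,
##         data = yourdata, dist = "weibull")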


Best,
Roland


+
This mail has been sent through the MPI for Demographic Rese...{{dropped}}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Lattice device page options-margins

2005-03-10 Thread Chuck Cleland
You can add some space to the left by setting the left.padding component 
of the layout.widths trellis setting. For example:

library(lattice)
trellis.device(new=TRUE)
trellis.par.set(layout.widths = list(left.padding = 20))
require(stats)
## Tonga Trench Earthquakes
Depth - equal.count(quakes$depth, number=8, overlap=.1)
xyplot(lat ~ long | Depth, data = quakes)
hope this helps,
Chuck Cleland
Bock, Michael wrote:
I am using lattice to make figures as pdfs:
trellis.device(device = "pdf", file = "Figure6.pdf", color = FALSE)
I need to specify some blank space on the left-hand margins (the pages
will be bound so we need about 0.5 inch). I have tried a number of
solutions but none seems to work (e.g. par.set). Can this be done when
initiating the plotting device? Or is there some other way that does not
require me to manually move everything over?

Michael J. Bock, PhD.
ARCADIS
24 Preble St. Suite 100
Portland, ME 04101
207.828.0046
fax 207.828.0062

[[alternative HTML version deleted]]
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
--
Chuck Cleland, Ph.D.
NDRI, Inc.
71 West 23rd Street, 8th floor
New York, NY 10010
tel: (212) 845-4495 (Tu, Th)
tel: (732) 452-1424 (M, W, F)
fax: (917) 438-0894
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] Structural equation models with R

2005-03-10 Thread André Tavares Corrêa Dias
Dear John,
Thanks very much! I don't know how I didn't see this before. I checked a
thousand times...
There is one more thing that I cannot understand. What are the possible
reasons for problems in the calculation of the confidence interval for the
RMSEA? For the same model I sent before (after correction of the variable
name) the lower CI output was NA:

Model Chisquare =  10.824   Df =  13 Pr(>Chisq) = 0.62558
 Goodness-of-fit index =  0.91656
 Adjusted goodness-of-fit index =  0.76895
 RMSEA index =  0   90 % CI: (NA, 0.15652)
 BIC =  -60.425

I have other models with the same behavior, even when the estimate of the RMSEA
is different from zero.

Model Chisquare =  15.165   Df =  14 Pr(>Chisq) = 0.367
 Goodness-of-fit index =  0.89038
 Adjusted goodness-of-fit index =  0.71811
 RMSEA index =  0.053558   90 % CI: (NA, 0.19141)
 BIC =  -61.564

Thanks and all the best.
André

---
André Tavares Corrêa Dias
Laboratório de Ecologia Vegetal
Departamento de Ecologia - IB
Universidade Federal do Rio de Janeiro
Ilha do Fundão, CCS
Rio de Janeiro, RJ. Brasil
CEP 21941-590   CP 68020
Tel. (21)2562-6377

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] multiple comparisons for lme using multcomp

2005-03-10 Thread Michaël Coeurdassier
Dear R-help list,

I would like to perform multiple comparisons for lme.  Can you tell me
whether my way of doing it is correct or not?  Please note that I am neither a
statistician nor a mathematician, so some of this is sometimes
quite hard for me.  According to the previous help on this topic on the R-help
list in May 2003 (please see Torsten Hothorn's advice) and books such as
Venables & Ripley or Pinheiro & Bates, I need the multcomp library. I followed
the different examples and got these results:

In this example, response is the mass of earthworms after one month of 
exposure to different concentrations of pollutants (treatment). The 
experimental design was three containers per treatment with five animals in 
each container (or fewer if mortality occurred). Containers are referred to as 
'box' and treated as the random variable.

  tab <- read.delim("example1.txt")
  tab
treatment box response
1    control   a  1.8
2    control   a  2.3
3    control   a  1.3
4    control   a  0.8
5    control   a  2.0
6    control   b  1.1
7    control   b  2.2
8    control   b  1.3
9    control   b  2.0
10   control   b  1.3
11   control   c  1.5
12   control   c  1.4
13   control   c  2.1
14   control   c  1.7
15   control   c  1.3
16 Al100   d  1.6
17 Al100   d  2.1
18 Al100   d  0.7
19 Al100   d  1.8
20 Al100   d  1.2
21 Al100   e  1.5
22 Al100   e  1.5
23 Al100   e  2.0
24 Al100   e  1.0
25 Al100   e  1.6
26 Al100   f  0.9
27 Al100   f  2.0
28 Al100   f  1.9
29 Al100   f  1.7
30 Al100   f  1.7
…
68 Al800   q  1.0
69 Al800   r  0.8
70 Al800   r  0.9
71 Al800   r  0.9
72 Al800   r  0.6
73 Al800   s  0.9
74 Al800   s  1.0
75 Al800   s  0.8
76 Al800   s  0.8
77 Al800   s  0.7

  attach(tab)
  library(nlme)
  lm1 <- lme(response~treatment, random=~1|box)
  library(multcomp)
Loading required package: mvtnorm

  # first way of doing it (seems incorrect)
  summary(csimtest(coef(lm1), vcov(lm1), cmatrix = contrMat(table(treatment),
type = "Tukey"), df = 59))
Error in csimtest(coef(lm1), vcov(lm1), cmatrix = 
contrMat(table(treatment), : estpar not a vector

  #indeed
  coef(lm1)
  (Intercept) treatmentAl200 treatmentAl400 treatmentAl600 treatmentAl800
a    1.546679     -0.1648540     -0.4895219     -0.6383375     -0.7066632
b    1.546657     -0.1648540     -0.4895219     -0.6383375     -0.7066632
c    1.546664     -0.1648540     -0.4895219     -0.6383375     -0.7066632
d    1.546643     -0.1648540     -0.4895219     -0.6383375     -0.7066632
…
s    1.546667     -0.1648540     -0.4895219     -0.6383375     -0.7066632
  treatmentcontrol
a             0.06
b             0.06
c             0.06
d             0.06
…
s             0.06

  # a second way could be to get a vector of the lm1 coefficients, removing
  # the intercept (is it correct?)
  vect <- as.numeric(coef(lm1)[1,])
  vect
[1]  1.5466787 -0.1648540 -0.4895219 -0.6383375 -0.7066632  0.060
  
summary(csimtest(vect, vcov(lm1), cmatrix = contrMat(table(treatment), type = "Tukey"), df = 59))

  Simultaneous tests: user-defined contrasts

  user-defined contrasts for factor

Contrast matrix:
              Al100 Al200 Al400 Al600 Al800 control
Al200-Al100      -1     1     0     0     0       0
Al400-Al100      -1     0     1     0     0       0
Al600-Al100      -1     0     0     1     0       0
Al800-Al100      -1     0     0     0     1       0
control-Al100    -1     0     0     0     0       1
Al400-Al200       0    -1     1     0     0       0
Al600-Al200       0    -1     0     1     0       0
Al800-Al200       0    -1     0     0     1       0
control-Al200     0    -1     0     0     0       1
Al600-Al400       0     0    -1     1     0       0
Al800-Al400       0     0    -1     0     1       0
control-Al400     0     0    -1     0     0       1
Al800-Al600       0     0     0    -1     1       0
control-Al600     0     0     0    -1     0       1
control-Al800     0     0     0     0    -1       1


Absolute Error Tolerance:  0.001

Coefficients:
              Estimate t value Std.Err. p raw p Bonf p adj
Al800-Al100     -2.253 -10.467    0.213 0.000  0.000 0.000
Al600-Al100     -2.185 -10.389    0.207 0.000  0.000 0.000
Al400-Al100     -2.036  -9.850    0.210 0.000  0.000 0.000
Al200-Al100     -1.712  -8.051    0.215 0.000  0.000 0.000
control-Al100   -1.487  -7.243    0.205 0.000  0.000 0.000
control-Al800    0.767  -5.282    0.143 0.000  0.000 0.000
control-Al600    0.698  -5.072    0.148 0.000  0.000 0.000
control-Al400    0.550  -4.160    0.155 0.000  0.001 0.001
Al800-Al200     -0.542  -3.488    0.141 0.001  0.006 0.006
Al600-Al200     -0.473  -3.191    0.140 0.002  0.014 0.012
Al400-Al200     -0.325  -2.267    0.147 0.027  0.135 0.110
control-Al200    0.225  -1.593    0.132 0.116  0.466 0.341
Al800-Al400     -0.217  -1.475    0.152

[R] Re: plotting several series on the sames axes

2005-03-10 Thread midnightsun
Many thanks to all those who replied to my 'simple' query, all of which
have been posted to the list.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] about kpss.test()

2005-03-10 Thread Achim Zeileis
On Wed, 9 Mar 2005 14:32:09 -0500 (EST) Weiguang Shi wrote:

  --- Achim Zeileis [EMAIL PROTECTED]
 wrote: 
   First of all, could you tell me what the KPSS
  Level 
   in the output of the test means?
 Thanks. But I meant to also ask about the number after
 that, e.g., 
 KPSS Level = 0.0027

If you hadn't deleted my reply to that question, you could still
see the answer! It was:

  <quote>
  It means that you tested for level stationarity and gives you the test
  statistic.
  </quote>

Note the second half of the sentence.

  Please check the references of kpss.test or any book
  on econometric
  time series analysis for which alternatives the KPSS
  test was designed.
 Will soon do.
 
  Hint: it's not designed for detecting cyclical
  variations.
 OK. I guess there must be some pre-requisite on the 
 data before kpss-ing it.

In particular it means: There are infinitely many conceivable ways of
deviation from stationarity and the KPSS test is only good at detecting
certain kinds of deviation (which do not include cyclical variations).
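For a concrete illustration of that point (a minimal sketch, not from the
original exchange; the simulated series is arbitrary):

library(tseries)
set.seed(1)
x <- cumsum(rnorm(200))        # a random walk, hence not level-stationary
kpss.test(x, null = "Level")   # "KPSS Level" in the output is this test statistic
kpss.test(x, null = "Trend")   # the trend-stationarity variant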
Z

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] problem using uniroot with integrate

2005-03-10 Thread Ken Knoblauch
Thank you very much.  Yes, that was the problem, partial matching.  
I saw a warning about that in integrate and some discussion from 1999
in the archives and so added the m0 for integrate but somehow
I wasn't bright enough to see that I had the same problem
in uniroot.  

Sorry about no working example, but I'm not sure what I could
have added, if I understand what you mean by working example,
because my function wasn't working.

best,

ken


Quoting Sundar Dorai-Raj [EMAIL PROTECTED]:

 
 
 Ken Knoblauch wrote on 3/9/2005 10:27 AM:
  Hi, 
  
  I'm trying to calculate the value of the variable, dp, below, in the
  argument to the integral of dnorm(x-dp) * pnorm(x)^(m-1).  This
  corresponds to the estimate of the sensitivity of an observer in an
  m-alternative forced choice experiment, given the probability of
  a correct response, Pc, a Gaussian assumption for the noise and
  no bias.  The function that I wrote below gives me an error:
  
  Error in f(x, ...) : recursive default argument reference
  
  The problem seems to be at the statement using uniroot,
because the function est.dp works fine outside of the main function.
  I've been using R for awhile but there are still many nuances
  about the scoping and the use of environments that I'm weak on
  and would like to understand better.  I would appreciate any
  suggestions or solutions that anyone might offer for fixing
  my error.  Thank you.
  
  dprime.mAFC <- function(Pc, m) {
    est.dp <- function(dp, Pc = Pc, m = m) {
      pr <- function(x, dpt = dp, m0 = m) {
        dnorm(x - dpt) * pnorm(x)^(m0 - 1)
      }
      Pc - integrate(pr, lower = -Inf, upper = Inf,
                     dpt = dp, m0 = m)$value
    }
    dp.res <- uniroot(est.dp, interval = c(0,5), Pc = Pc, m = m)
    dp.res$root
  }
  
 
 Ken,
 
 Look at the argument list for ?uniroot and think partial matching. 
 Your m is being interpreted as maxiter. Simply change to
 
 dp.res <- uniroot(est.dp, interval = c(0,5), Pc = Pc, m0 = m)
 
 and in other places for consistency and the error goes away. Of course, 
 since you gave no working example, I'm not sure if other errors are 
 present that I'm missing.
 
 --sundar
 




Ken Knoblauch
Inserm U 371
Cerveau et Vision
18 avenue du Doyen Lepine
69675 Bron cedex
France
tel: +33 (0)4 72 91 34 77
fax: +33 (0)4 72 91 34 61
portable: 06 84 10 64 10

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] multiple comparisons for lme using multcomp

2005-03-10 Thread Peter Dalgaard
Michaël Coeurdassier [EMAIL PROTECTED] writes:

 summary(csimtest(vect,vcov(lm1),cmatrix=contrMat(table(treatment),type=Tukey),df=59))

 Coefficients:
                Estimate t value Std.Err. p raw p Bonf p adj
 Al800-Al100      -2.253 -10.467    0.213 0.000  0.000 0.000
 Al600-Al100      -2.185 -10.389    0.207 0.000  0.000 0.000
 Al400-Al100      -2.036  -9.850    0.210 0.000  0.000 0.000
 Al200-Al100      -1.712  -8.051    0.215 0.000  0.000 0.000
 control-Al100    -1.487  -7.243    0.205 0.000  0.000 0.000
 control-Al800     0.767  -5.282    0.143 0.000  0.000 0.000
 control-Al600     0.698  -5.072    0.148 0.000  0.000 0.000
 control-Al400     0.550  -4.160    0.155 0.000  0.001 0.001
 Al800-Al200      -0.542  -3.488    0.141 0.001  0.006 0.006
 Al600-Al200      -0.473  -3.191    0.140 0.002  0.014 0.012
 Al400-Al200      -0.325  -2.267    0.147 0.027  0.135 0.110
 control-Al200     0.225  -1.593    0.132 0.116  0.466 0.341
 Al800-Al400      -0.217  -1.475    0.152 0.145  0.466 0.341
 Al600-Al400      -0.149  -1.064    0.138 0.292  0.583 0.466
 Al800-Al600      -0.068  -0.449    0.145 0.655  0.655 0.655
 
   #a friend told me that it is possible to do multiple comparisons for lme 
 in a simplest way, i.e. :
   anova(lm1,L=c(treatmentcontrol=1,treatmentAl200=-1))
 F-test for linear combination(s)
treatmentAl200 treatmentcontrol
             -1                1
numDF denDF  F-value p-value
 1 112 2.538813  0.1371
 
   anova(lm1,L=c(treatmentcontrol=1,treatmentAl400=-1))
 F-test for linear combination(s)
treatmentAl400 treatmentcontrol
             -1                1
numDF denDF  F-value p-value
 1 112 17.30181  0.0013
 
   anova(lm1,L=c(treatmentcontrol=1,treatmentAl600=-1))
 F-test for linear combination(s)
treatmentAl600 treatmentcontrol
             -1                1
numDF denDF  F-value p-value
 1 112 25.72466   3e-04
 
   anova(lm1,L=c(treatmentcontrol=1,treatmentAl800=-1))
 F-test for linear combination(s)
treatmentAl800 treatmentcontrol
             -1                1
numDF denDF F-value p-value
 1 112 27.9043   2e-04
 
   # however, p values are different that those obtained above. Is this way 
 OK or not?


Notice that in all cases, the F-value is exactly the square of the
t-value from Csimtest. The main difference is that you have claimed
that the vcov matrix has 59 DF, whereas the lme analysis says 12. I'd
suspect the latter to be (more) correct. Apart from that, notice that
the L approach at best gives you the p raw value, i.e., it is
uncorrected for multiple testing.
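As a quick check of that first point (the numbers are copied from the output
quoted above, so the agreement is only up to rounding of the printed t values):

t.vals <- c(-1.593, -4.160, -5.072, -5.282)   # control-Al200 ... control-Al800
F.vals <- c(2.538813, 17.30181, 25.72466, 27.9043)
cbind(t.squared = t.vals^2, F = F.vals)       # equal up to rounding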


-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] R-help:loading data from text file and viewing it in persp

2005-03-10 Thread Lakshmi Dhevi Baskar
Hi All
 
I would like to get some help on how to use persp() for 3D surface plotting 
with data from a txt file.
 
 
it has format like
 
x1 y1 z1
x2 y2 z2
x3 y3 z3

 
and so on... Sometimes the txt file data may be very large as well.
How could I load this txt file and view it as a 3D surface?
 
thanks in advance
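Since no reply appears in this digest, here is one minimal way to do it (the
file name and column layout are assumptions, and the points are taken to lie
on a regular grid):

dat <- read.table("surface.txt", col.names = c("x", "y", "z"))
xs  <- sort(unique(dat$x))
ys  <- sort(unique(dat$y))
zm  <- matrix(NA, nrow = length(xs), ncol = length(ys))
zm[cbind(match(dat$x, xs), match(dat$y, ys))] <- dat$z
persp(xs, ys, zm, theta = 30, phi = 30, xlab = "x", ylab = "y", zlab = "z")

For scattered (non-gridded) points one would first interpolate onto a grid,
e.g. with interp() in the akima package, before calling persp().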
 
 




[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] how modify object in parent.env

2005-03-10 Thread Gabor Grothendieck
[I sent this last night but it does not appear to have shown up
so this is second attempt.  Apologies if this gets posted twice.]



The search for the left hand side of the <<- starts in 
the parent while search for the right side is in the
current environment so: 

x <- 1:3
local( for (i in seq(along = x)) { x <- 99; x[i] <<- x } )

It's important not to define the local x prior to seq(along = x),
so that seq(along = x) refers to the x in the parent.  If
you do define it first, and therefore need to force the reference
to the parent x, use get or eval in the seq:

  seq(along = get("x", parent.frame()))

Actually, it may be less confusing to just use a different,
but related, variable name.  If you don't like x0, perhaps
x. would be ok:

x <- 1:3
local( { x. <- 99; for (i in seq(along = x)) x[i] <<- x. } )
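Running either snippet above at top level then shows the outer x modified in
place, as intended:

x
# [1] 99 99 99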



Vadim Ogranovich vograno at evafunds.com writes:

: 
: Thank you to Gabor and Mark Schwartz for the answers. Both of them
: solved the problem I posted, but my actual problem, as I now see, is a
: little bit more involved. Let me try again.
: 
: I have a vector 'x'. I want to compute its entries in a loop (yes, I
: know...). Say
: 
: x = seq(3)
: 
: for (i in seq(length(x))) {
:   x0 = someValue
:   x[i] = x0
: } 
: 
: There are two problems with the above code:
: 1. x0 pollutes the global environment (not to mention possible
: over-write of an existing x0). Therefore I thought I'd wrap it with
: local().
: 2. x0 is not a good name from a readability perspective. I'd rather call
: it x to emphasize it's an entry in an outer vector 'x'. (In this small
: example it doesn't really matter, but I have much more involved scripts
: where consistent naming is important)
: 
: Gabor's solution solves 1 but not 2. Maybe there is a simple way around
: this restriction?
: 
: Thanks,
: Vadim
: 
:  -Original Message-
:  From: r-help-bounces at stat.math.ethz.ch 
:  [mailto:r-help-bounces at stat.math.ethz.ch] On Behalf Of Gabor 
:  Grothendieck
:  Sent: Tuesday, March 08, 2005 4:06 PM
:  To: r-help at stat.math.ethz.ch
:  Subject: Re: [R] how modify object in parent.env
:  
:  
:  You can use <<- like this:
:  
:  x <- 1:3
:  local(x[1] <<- x[1]+1)
:  
:  Vadim Ogranovich vograno at evafunds.com writes:
:  
:  : 
:  : Assign() re-binds the value, not modifies it (the latter is what I
:  : needed)
:  : 
:  :  -Original Message-
:  :  From: McGehee, Robert [mailto:Robert.McGehee at 
:  geodecapital.com]
:  :  Sent: Tuesday, March 08, 2005 3:48 PM
:  :  To: Vadim Ogranovich; r-help at stat.math.ethz.ch
:  :  Subject: RE: [R] how modify object in parent.env
:  : 
:  :  This isn't an environment problem. Assigning something to a
:  :  get call doesn't make any sense. Use assign.
:  : 
:  :   a <- 5
:  :   get("a") <- 10
:  :  Error: couldn't find function "get<-"
:  : 
:  :  And from the ?assign help page, you can pick what environment
:  :  you want to make the assignment. Just pick the parent environment.
:  : 
:  : 
:  :  -Original Message-
:  :  From: Vadim Ogranovich [mailto:vograno at evafunds.com]
:  :  Sent: Tuesday, March 08, 2005 6:36 PM
:  :  To: r-help at stat.math.ethz.ch
:  :  Subject: [R] how modify object in parent.env
:  : 
:  : 
:  :  Hi,
:  : 
:  :  Is it possible to modify an object in the parent.env (as 
:  opposed to
:  :  re-bind)? Here is what I tried:
:  : 
:  :   x = 1:3
:  :  # try to modify the first element of x from within a new 
:  environment
:  :   local(get("x", parent.env(environment()))[1] <- NA)
:  :  Error in eval(expr, envir, enclos) : Target of assignment 
:  expands to
:  :  non-language object
:  : 
:  :  # On the other hand retrieval works just fine
:  :   local(get("x", parent.env(environment()))[1])
:  :  [1] 1
:  : 
:  :  Thanks,
:  :  Vadim
:  : 
:  :  __
:  :  R-help at stat.math.ethz.ch mailing list
:  :  https://stat.ethz.ch/mailman/listinfo/r-help
:  :  PLEASE do read the posting guide!
:  :  http://www.R-project.org/posting-guide.html
:  : 
:  : 
:  : __
:  : R-help at stat.math.ethz.ch mailing list
:  : https://stat.ethz.ch/mailman/listinfo/r-help
:  : PLEASE do read the posting guide! 
:  http://www.R-project.org/posting-guide.html
:  : 
:  :
:  
:  __
:  R-help at stat.math.ethz.ch mailing list
:  https://stat.ethz.ch/mailman/listinfo/r-help
:  PLEASE do read the posting guide! 
:  http://www.R-project.org/posting-guide.html
: 
: 
: __
: R-help at stat.math.ethz.ch mailing list
: https://stat.ethz.ch/mailman/listinfo/r-help
: PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
: 
:

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Gap statistic

2005-03-10 Thread Nestor Fernandez
Dear All,
I need to calculate the optimal number of clusters for a classification based 
on a large number of observations (tens of thousands).
Tibshirani et al. proposed the gap statistic for this purpose. I tried the 
R code developed by R. Jörnsten but R hangs with such an amount of data.
Is any other (optimised) code available?
Any help would be appreciated, including suggestions about other alternatives 
for the selection of an optimal number of clusters from large datasets.
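One workable alternative for data of that size (a rough sketch rather than a
reply from the list; the data matrix is simulated, and average silhouette
width from cluster::clara() is used instead of the gap statistic):

library(cluster)
x <- matrix(rnorm(20000 * 5), ncol = 5)        # stand-in for the real data
avg.sil <- sapply(2:10, function(k)
    clara(x, k, samples = 50)$silinfo$avg.width)
(2:10)[which.max(avg.sil)]                     # k with the largest average silhouette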
Thanks, 

Néstor Fernández, PhD.
Department of Ecological Modelling
UFZ - Centre for Environmental Research
PF 500136, DE-04301, Leipzig, Germany.
Tel: +49 341-2352034
E-mail: [EMAIL PROTECTED]
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Gregmisc

2005-03-10 Thread Christian Kamenik
Dear all,
I use R 2.0.1 on Windows XP professional. When I want to load the 
'Gregmisc' library I get the following error message:

Error in library(pkg, character.only = TRUE) :  'gregmisc' is not a 
valid package -- installed < 2.0.0?

Can anybody tell me what's wrong with this package?
Cheers, Christian
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Gregmisc

2005-03-10 Thread Chuck Cleland
It's a bundle of 4 packages (gtools, gdata, gmodels, and gplots), so you 
need to load one of those packages.  See the following:

http://cran.us.r-project.org/src/contrib/Descriptions/gregmisc.html
Chuck Cleland
Christian Kamenik wrote:
Dear all,
I use R 2.0.1 on Windows XP professional. When I want to load the 
'Gregmisc' library I get the following error message:

Error in library(pkg, character.only = TRUE) :  'gregmisc' is not a 
valid package -- installed < 2.0.0?

Can anybody tell me what's wrong with this package?
Cheers, Christian
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! 
http://www.R-project.org/posting-guide.html

--
Chuck Cleland, Ph.D.
NDRI, Inc.
71 West 23rd Street, 8th floor
New York, NY 10010
tel: (212) 845-4495 (Tu, Th)
tel: (732) 452-1424 (M, W, F)
fax: (917) 438-0894
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Gregmisc

2005-03-10 Thread Marc Schwartz
On Thu, 2005-03-10 at 14:44 +0100, Christian Kamenik wrote:
 Dear all,
 
 I use R 2.0.1 on Windows XP professional. When I want to load the 
 'Gregmisc' library I get the following error message:
 
 Error in library(pkg, character.only = TRUE) :  'gregmisc' is not a 
 valid package -- installed < 2.0.0?
 
 Can anybody tell me what's wrong with this package?


gregmisc is a _bundle_, not a package. It now comprises four
separate packages:

1. gtools

2. gdata

3. gmodels

4. gplots

You will need to load the particular package that you wish to use or
load all four if you wish.
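For example, to attach them (any subset will do):

library(gtools)
library(gdata)
library(gmodels)
library(gplots)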

HTH,

Marc Schwartz

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] from long/lat to UTM

2005-03-10 Thread Gabor Grothendieck
 Ted.Harding at nessie.mcc.ac.uk writes:
: 
:  yyan liu wrote:
:  Hi:
:Is there any function in R which can convert the
:  long/lat to UTM(Universal Transverse Mercator)?
:There are quite a few converters on Internet.
:  However, the interface is designed as input-output
:  which I can not convert lots of locations at the same
:  time.
:Another question is whether there is a function in R
:  which can tell the time zone from the location's
:  lat/long?
:Thank you!
:  
:  liu
: 
: On 10-Mar-05 Sander Oom wrote:
:  Hi Yyan,
:  
:  The proj4R package by Roger Bivand will allow you to project data in 
:  many ways and directions.
:  
:  http://spatial.nhh.no/R/Devel/proj4R-pkg.pdf
:  
:  It uses the proj libraries from:
:  
:  http://www.remotesensing.org/proj/
:  
:  Not sure where you would derive the time zone!
:  
:  Good luck,
:  
:  Sander.
: 
: While there is a longitude-based nominal time-zone
: structure (0deg E is the centre of Zone 0 which extends
: for 7.5deg either side; successive time-zones move round
: by 15deg), this does not apply cleanly to the time-shifts
: adopted in different places for local time.
: 
: A World map of regions with different local-time offsets
: is a crazy patchwork, with all sorts of contradictory
: looking regions. For instance, the -0700 region of
: the USA extends from approx -0830 to approx -0620,
: covering over 2 hours, and parts of -0800 touch the
: -0700 line and are more than 0100 East of parts of -0700.
: Even worse can be found over the Indian/Central Asian
: and Malaysian parts of the world, where time-shifts
: of 30 minutes are also frequent (and, according to my
: Atlas, one country, Nepal, has 5 2/3, i.e. +0540!).
: 
: As Sander says, Not sure where you would derive the time zone!.
: 
: Unless you can refer a (long,lat) position to a look-up
: table, you can't predict what the zone will be to less
: the 1 hour (except of course for the nominal time-zones
: by 15deg sectors). I've never encountered a digital
: version of such a table (my Atlas must be based on one,
: though).
: 
: Best wishes,
: Ted.

The fBasics package of the rmetrics project uses the Olsen time
zone data base which does take some of these things into account.
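As for the original long/lat-to-UTM part of the question, which none of the
quoted replies answer directly, one route is the rgdal interface to the PROJ
library linked in Sander's reply above (a rough sketch; the coordinates, UTM
zone and datum here are made-up examples):

library(rgdal)
lonlat <- cbind(c(174.76, 175.28), c(-36.85, -37.79))   # longitude, latitude in degrees
project(lonlat, "+proj=utm +zone=60 +south +datum=WGS84")
# returns a two-column matrix of eastings and northings in metres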

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Gregmisc

2005-03-10 Thread Uwe Ligges


On Thu, 10 Mar 2005, Christian Kamenik wrote:

 Dear all,
 
 I use R 2.0.1 on Windows XP professional. When I want to load the 
 'Gregmisc' library I get the following error message:
 
 Error in library(pkg, character.only = TRUE) :  'gregmisc' is not a 
 valid package -- installed < 2.0.0?
 
 Can anybody tell me what's wrong with this package?


gregmisc is a package bundle these days.
Probably you have installed the new package bundle (including packages 
such as gtools) but did not remove your old gregmisc package.
 
Try, e.g., library(gtools)

Uwe Ligges


 Cheers, Christian
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] Structural equation models with R

2005-03-10 Thread John Fox
Dear Andre,

 -Original Message-
 From: André TavaresCorrêa Dias [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, March 10, 2005 6:14 AM
 To: John Fox
 Cc: r-help@stat.math.ethz.ch
 Subject: RE: [R] Structural equation models with R
 
 Dear John,
 Thanks very much! I don’t know how I didn’t see this before. 
 I checked a thousand times… 

(Recall that there was a typo in Andre's model specification.)

 There is one more thing that I 
 cannot understand. What are the possible reasons for 
 problems in calculating the confidence interval for RMSEA? For 
 the same model I sent before (after correction of the 
 variable name) the lower CI output was NA:
 
 Model Chisquare =  10.824   Df =  13 Pr(>Chisq) = 0.62558
  Goodness-of-fit index =  0.91656
  Adjusted goodness-of-fit index =  0.76895
  RMSEA index =  0   90 % CI: (NA, 0.15652)
  BIC =  -60.425
 
 I have other models with the same behavior, even when the 
 estimate of RMSEA is different from zero.
 
 Model Chisquare =  15.165   Df =  14 Pr(>Chisq) = 0.367
  Goodness-of-fit index =  0.89038
  Adjusted goodness-of-fit index =  0.71811
  RMSEA index =  0.053558   90 % CI: (NA, 0.19141)
  BIC =  -61.564
 

The method used to get the confidence interval (from Browne and Du Toit,
Multivariate Behavioral Research, 1992, 27:269-300) can produce a lower
bound above the RMSEA estimate or an upper bound below the estimate; when
this happens, the bound is set to NA. One of the nice things about R is that
you can look at the code for a function -- in this case, summary.sem() -- to
see exactly what it does. Take a look at how RMSEA.L and RMSEA.U are
computed.
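For instance (assuming the sem package is installed; getAnywhere() will find
the method even if it is not exported):

library(sem)
getAnywhere("summary.sem")   # prints the method body, including how RMSEA.L and RMSEA.U are computed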

Regards,
 John

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] install R redhat rpm as a normal user

2005-03-10 Thread Marc Schwartz
On Wed, 2005-03-09 at 16:17 -0800, Eric Hu wrote:
 Hi, I wonder if anyone has done this before. I have rpm-build
 installed in my workstation. Thanks.
 
 Eric


I have not seen any other replies to this, but as far as I know Martyn's
RPMS are not relocatable and a test this morning confirms that.

If you need to install R as a non-root user, you are likely better off
installing from source code. Modify the --prefix argument
to ./configure as per the R Administration Manual. For example:

  ./configure --prefix=TargetDir

where TargetDir is a directory that you have write privileges to.

Then use make and make install which will then build and install R.

R will be installed in TargetDir (ie. /home/UserName/R) with a set of
subdirs bin, lib and man. To run R, you would then use:

  TargetDir/bin/R

Or you can create a symlink in /home/UserName/bin to the above file and
then just use R to start it.

See the R Administration Manual for more information.

HTH,

Marc Schwartz

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R: LIST function and LOOPS

2005-03-10 Thread Adaikalavan Ramasamy
You will need to capture the value of ss at the end of each 'i', like this:

z4 <- function(w){

  output <- numeric(w)

  for (i in 1:w){

    set.seed(i+6)  # this is a redundant line
    ss <- 0

    for (j in 1:5){
      set.seed(j+1+(i-1)*6)
      r <- rnorm(1)
      ss <- ss + r
    }

    output[i] <- ss
  }
  return(output)
}

BTW, I do not think it is a good idea to set.seed() so many times.


To answer your more general question, see if the following is useful.
I am trying to simulate 'n' values from a standard normal distribution,
but 'n' is a random variable itself.

f <- function(w, lambda=3){

  tmp <- list(NULL)

  for (i in 1:w){
    n <- 1 + rpois(1, lambda=lambda)  # number of simulations required
    tmp[[ i ]] <- rnorm(n)
  }

  # flatten the list into a ragged matrix
  out.lengths   <- sapply(tmp, length)
  out           <- matrix( nr=w, nc=max( out.lengths ) )
  rownames(out) <- paste("w =", 1:w)
  for(i in 1:w) out[i, 1:out.lengths[i] ] <- tmp[[i]]

  return(out)
}

f(6, lambda=3)

It is not very elegant but I hope that helps you out somehow.
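A list-based variant of the same idea, returning one list per iteration (a
minimal sketch along the lines of the z1 function quoted below):

z5 <- function(w){
  out <- vector("list", w)
  for (i in 1:w){
    ss <- 0
    for (j in 1:5){
      set.seed(j+1+(i-1)*6)
      ss <- ss + rnorm(1)
    }
    out[[i]] <- list(ss = ss)
  }
  out
}
z5(3)   # out[[1]]$ss, out[[2]]$ss, out[[3]]$ss hold the three values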

Regards, Adai



On Thu, 2005-03-10 at 10:16 +0200, Clark Allan wrote:
 hi all
 
 another simple question.
 
 i've written a dummy program so that you get the concept. (the code
 could be simplified such that there are no loops, but let's leave the
 loops in for now.)
 
 z1 <- function(w)
 {
 for (i in 1:w)
 {
 set.seed(i+6)
 ss <- 0
   for (j in 1:5)
   {
   set.seed(j+1+(i-1)*6)
   r <- rnorm(1)
   ss <- ss + r
   }
 list(ss=ss)
 }
 }
 check.1 <- z1(3)
 check.1
 
 the results is:
 $ss
 [1] -0.01516304
 
 
 what i want is something that looks like this:
 
 j=1
 $ss
 [1] -2.213343
 
 j=2
 $ss
 [1] -2.904235
 
 j=3
 $ss
 [1] -0.01516304
 
 
 i know that i could use the print command. (see z2)
 
 z2 <- function(w)
 {
 for (i in 1:w)
 {
 set.seed(i+6)
 ss <- 0
   for (j in 1:5)
   {
   set.seed(j+1+(i-1)*6)
   r <- rnorm(1)
   ss <- ss + r
   }
 print(ss)
 }
 }
 check.2 <- z2(3)
 check.2
 
  check.2 <- z2(3)
 [1] -2.213343
 [1] -2.904235
 [1] -0.01516304
  check.2
 [1] -0.01516304
 
 the problem with z2 is that only the last value is saved.
 
 
 what i could do is use matrices like the following: (but i dont want to
 do this AND WOULD PREFER TO USE list.)
 
 z3 <- function(w)
 {
 results. <- matrix(nrow=w, ncol=1)
 colnames(results.) <- c("ss")
 for (i in 1:w)
 {
 set.seed(i+6)
 ss <- 0
   for (j in 1:5)
   {
   set.seed(j+1+(i-1)*6)
   r <- rnorm(1)
   ss <- ss + r
   }
 results.[i,1] <- ss
 }
 results.
 }
 check.3 <- z3(3)
 check.3
 
  check.3
   ss
 [1,] -2.21334260
 [2,] -2.90423463
 [3,] -0.01516304
 
 what if i have a new program (something different) and i want the
 following:
 
 j=1
 $a
 1
 2
 3
 
 $b
 1
 2
 3
 4
 5
 
 $c
 1
 
 
 ###
 j=2
 $a
 11
 21
 31
 
 $b
 11
 21
 31
 41
 51
 
 $c
 11
 
 ###
 j=3
 $a
 21
 22
 32
 
 $b
 21
 22
 32
 42
 52
 
 $c
 21
 
 MATRICES SEEMS TO BE A GOOD WAY OF DOING THIS (but then you would have
 to set up three matrices, one for a,b and c). BUT WHAT IF I WANT TO USE
 THE LIST FUNCTION? i.e. there is a list in the first loop that i want to
 display!
 
 sorry for the long mail.
 
 ***
 ALLAN
 __ R-help@stat.math.ethz.ch 
 mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the 
 posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Vogelsang test? RATS? Time series analysis.

2005-03-10 Thread abunn
Has anybody implemented the Vogelsang test from Vogelsang's 1998
Econometrica paper? Or the tests by Fomby and Vogelsang from their 2002
paper?

I have been looking for trends in many time series (1000's) with unknown
autocorrelation. In brief, I have fit AR models  and classified them as
stochastic or deterministic with augmented Dickey-Fuller tests. A colleague
has suggested that we use a new and more elegant test that is robust to
the nature of serial correlation in the residuals and pointed me to the RATS
program in general and Fomby and Vogelsang test in particular.

Any leads appreciated, Andy

~~
Fomby TB, Vogelsang TJ, The application of size-robust trend statistics to
global-warming temperature series. JOURNAL OF CLIMATE 15 (1): 117-123 2002
Vogelsang TJ, Trend function hypothesis testing in the presence of serial
correlation. ECONOMETRICA 66 (1): 123-148 1998

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] problem using uniroot with integrate

2005-03-10 Thread Thomas Lumley
On Wed, 9 Mar 2005, Ken Knoblauch wrote:
Thank you very much.  Yes, that was the problem, partial matching.
I saw a warning about that in integrate and some discussion from 1999
in the archives and so added the m0 for integrate but somehow
I wasn't bright enough to see that I had the same problem
in uniroot.
It is perhaps worth pointing out that in R you usually don't need to pass 
these extra arguments down

 dprime.mAFC <- function(Pc, m) {
   est.dp <- function(dp) {
     pr <- function(x) {
       dnorm(x - dp) * pnorm(x)^(m - 1)
     }
     Pc - integrate(pr, lower = -Inf, upper = Inf)$value
   }
   dp.res <- uniroot(est.dp, interval = c(0,5))
   dp.res$root
 }
Because pr() is defined inside est.dp, which is defined inside 
dprime.mAFC, the arguments are already accessible.

In most cases, passing extra arguments through a higher-order function is 
not needed in R, as lexical scope allows the function to have access to 
the information directly.  You also don't have to keep thinking up 
different names for 'm'.
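As a quick sanity check of the scoped version above: for m = 2 the usual
signal-detection relation Pc = pnorm(dp/sqrt(2)) holds, so a proportion
correct of about 0.76 should give a d-prime close to 1:

dprime.mAFC(0.76, 2)   # roughly 1
pnorm(1/sqrt(2))       # roughly 0.76, going the other way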

Sorry about no working example, but I'm not sure what I could
have added, if I understand what you mean by working example,
because my function wasn't working.
A set of arguments at which you wanted to evaluate the function, so that 
a) we could get the same error that you did
b) we could tell when it worked.

-thomas

best,
ken
Quoting Sundar Dorai-Raj [EMAIL PROTECTED]:

Ken Knoblauch wrote on 3/9/2005 10:27 AM:
Hi,
I'm trying to calculate the value of the variable, dp, below, in the
argument to the integral of dnorm(x-dp) * pnorm(x)^(m-1).  This
corresponds to the estimate of the sensitivity of an observer in an
m-alternative forced choice experiment, given the probability of
a correct response, Pc, a Gaussian assumption for the noise and
no bias.  The function that I wrote below gives me an error:
Error in f(x, ...) : recursive default argument reference
The problem seems to be at the statement using uniroot,
because the function est.dp works fine outside of the main function.
I've been using R for awhile but there are still many nuances
about the scoping and the use of environments that I'm weak on
and would like to understand better.  I would appreciate any
suggestions or solutions that anyone might offer for fixing
my error.  Thank you.
dprime.mAFC <- function(Pc, m) {
  est.dp <- function(dp, Pc = Pc, m = m) {
    pr <- function(x, dpt = dp, m0 = m) {
      dnorm(x - dpt) * pnorm(x)^(m0 - 1)
    }
    Pc - integrate(pr, lower = -Inf, upper = Inf,
                   dpt = dp, m0 = m)$value
  }
  dp.res <- uniroot(est.dp, interval = c(0,5), Pc = Pc, m = m)
  dp.res$root
}
Ken,
Look at the argument list for ?uniroot and think partial matching.
Your m is being interpreted as maxiter. Simply change to
dp.res <- uniroot(est.dp, interval = c(0,5), Pc = Pc, m0 = m)
and in other places for consistency and the error goes away. Of course,
since you gave no working example, I'm not sure if other errors are
present that I'm missing.
--sundar


Ken Knoblauch
Inserm U 371
Cerveau et Vision
18 avenue du Doyen Lepine
69675 Bron cedex
France
tel: +33 (0)4 72 91 34 77
fax: +33 (0)4 72 91 34 61
portable: 06 84 10 64 10
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
Thomas Lumley   Assoc. Professor, Biostatistics
[EMAIL PROTECTED]   University of Washington, Seattle
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Interval censoring in Survival analysis

2005-03-10 Thread Thomas Lumley
On Thu, 10 Mar 2005, AMOROS NAVARRO, ALEX wrote:
Hi,
I would like to know whether the latest version of R allows different kinds of 
censoring. What kind of package should I use, and is 
this censoring applicable to parametric and semi-parametric (Cox) models?

To some extent.
The survival package allows left censoring and interval censoring, but 
only for parametric models.  The Icens package estimates survival curves 
for interval censored data (including multivariate times), but does not do 
regression.

I am not aware of any package that allows semiparametric regression 
modelling of interval censored or doubly censored data.
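For the parametric case, a minimal sketch of what that looks like with the
survival package (the variable names and distribution are hypothetical;
lower/upper are the interval endpoints, with NA where a side is unobserved):

library(survival)
fit <- survreg(Surv(lower, upper, type = "interval2") ~ age + group,
               data = mydata, dist = "weibull")
summary(fit)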

-thomas
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Re: [Rd] dealing with package bundles

2005-03-10 Thread Seth Falcon
Liaw, Andy [EMAIL PROTECTED] writes:

 Given the confusions that some users have regarding package bundles, I'd
 like to suggest some changes to how package bundles are handled.

I wonder if a guideline of naming package bundles in a manner that
identifies them as bundles would help?

An example of what I'm thinking: gregmisc-bundle_x.y.z.tar.gz

Then users could (once familiar with the convention) know when they
are dealing with a bundle.  

Such a name scheme fits with my understanding of bundles: they provide
a mechanism to download and install multiple packages at once.  After
installation there is no notion of bundle.  One doesn't call library
on a bundle (and I think that is a good thing).


Best,

+ seth

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] from long/lat to UTM

2005-03-10 Thread Ted Harding
On 10-Mar-05 Gabor Grothendieck wrote:
  Ted.Harding at nessie.mcc.ac.uk writes:
: [...]
: As Sander says, Not sure where you would derive the time zone!.
: 
: Unless you can refer a (long,lat) position to a look-up
: table, you can't predict what the zone will be to less
: the 1 hour (except of course for the nominal time-zones
: by 15deg sectors). I've never encountered a digital
: version of such a table (my Atlas must be based on one,
: though).
: 
: Best wishes,
: Ted.
 
 The fBasics package of the rmetrics project uses the Olsen time
 zone data base which does take some of these things into account.

Thanks, Gabor! However, this seems to be just the usual sort
of thing -- TimeZone name with corresponding TZ data. You first
have to know the TZ name. (Unless I have overlooked something
in the Olsen database).

What I had in mind was a look-up facility whereby a (long,lat)
pair could be submitted as a query, returning the TZ region
(name) within which that geographical point lay.

The only completely general mechanism I can think of would
consist of

a) a named list of boundary contours
b) a function which, for each name in the list, returns
   the TZ name

Then, for instance, given a point P = (long,lat), the list
could be searched by verifying, for each contour in the list,
whether the point lay in the region circumscribed by the
coutour (e.g. by computing the winding number for the contour
with respect to the point). This would also work for regions
with holes in them, provided the  boundary of the hole was
presented in the correct order of points (e.g. such that the
region of which it was the boundary lies to the left as you
go through the sequence).

If, for instance, I had the contour data for the TZ map
in my Atlas, I could construct such a thing! But, as I say,
I have not come across such a dataset. I'd be interested
to learn of one, though!

Best wishes,
Ted.



E-Mail: (Ted Harding) [EMAIL PROTECTED]
Fax-to-email: +44 (0)870 094 0861
Date: 10-Mar-05   Time: 16:13:01
-- XFMail --

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] How could I catch the R data-output stream and presented by other software func

2005-03-10 Thread Warnes, Gregory R
There are quite a few packages which allow R to be accessed from other
software systems.  Examples include RDCOM (included with Windows R), RSOAP
(http://rsoap.sf.net), Rpy (Python, http://rpy.sf.net), RSPerl
(http://www.omegahat.org/RSPerl/).  Duncan Temple Lang, the developer of
RSPerl, has a whole pile of these tools, see http://www.omegahat.org for a
full list.

-Greg

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] Behalf Of Michael shen
 Sent: Thursday, March 10, 2005 1:37 AM
 To: r-help@stat.math.ethz.ch
 Subject: [R] How could I catch the R data-output stream and 
 presented by
 other software func 
 
 
 Dear All R-helper,
 
 I would like to know whether I could do the computation in the R 
 environment and 
 get the R data output stream, then present it with other 
 software functions in 
 their GUI (for example: get the R data output stream and 
 present the data 
 using SPSS functions in the SPSS output GUI). Are there R 
 packages on CRAN that 
 already do this kind of thing? And what kind of documentation 
 should I read?
 
 Thanks in advance
 
 Michael
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html
 
 


LEGAL NOTICE\ Unless expressly stated otherwise, this messag...{{dropped}}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Online short course: Advanced Resampling Methods

2005-03-10 Thread Peter C. Bruce
Online short course:  Advanced Resampling Methods
Phillip Good will offer his online short course “Advanced Resampling 
Methods” March 18 - April 8.  In the bootstrap area, it will cover BCA 
(bias corrected and accelerated) confidence intervals, bootstrap-t 
intervals, the parametric bootstrap, and the tilted bootstrap.  The use of 
bootstrapping for estimating power and sample sizes will be covered, as 
will model validation.  Permutation tests will also be discussed, with 
coverage of 2-sided alternatives, matched pairs, strata, and ordered RxC 
tables.  Dr. Good will also discuss decision trees.  (Note:  topic coverage 
may be adjusted slightly to accommodate the needs of 
participants.)  Examples will be provided in R.

Participants will interact with Dr. Good via a private discussion board 
over approximately 4 weeks; the course will require about 10 hours per week 
and there are no set hours when you must be online.

Details and registration:
http://www.statistics.com/content/courses/advancedresamp/index.html
Peter Bruce
[EMAIL PROTECTED]
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] from long/lat to UTM

2005-03-10 Thread Greg Snow

 Ted Harding [EMAIL PROTECTED] 03/10/05 09:13AM 
 On 10-Mar-05 Gabor Grothendieck wrote:
   Ted.Harding at nessie.mcc.ac.uk writes:
 : [...]
 : As Sander says, Not sure where you would derive the time
zone!.
 : 

[snip]

 The only completely general mechanism I can think of would
 consist of
 
 a) a named list of boundary contours

I found a shapefile of time zones at: 
http://openmap.bbn.com/data/shape/timezone/ 

A Google search on the keywords timezone and shapefile finds a 
few other possibilities. 

 b) a function which, for each name in the list, returns
the TZ name

The maptools package can read in the above shapefile and 
convert it to a list of polygons and the sgeostat has an
in.polygon function to see if a point is in a given polygon.

The following code worked for me:

library(maptools)
tz <- read.shape('c:/maps/WrldTZA')
plot(tz)
plot(tz, xlim=c(-150,-50), ylim=c(20,50))

mappoly <- Map2poly(tz)

library(sgeostat)

tmp <- sapply(mappoly, function(x){ x <- na.exclude(x)
in.polygon( -110, 42, x[,1], x[,2] ) } )

tz$att.data[tmp,]


the -110 and 42 come from me plotting the map and using 
locator to approximate where I am.  I was actually 
surprised at how quick the sapply was (under a second on
my fairly fast pc (windows 2000)).

It shouldn't be too hard to convert this into a more general
function.
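A rough sketch of such a wrapper, reusing the objects built above (mappoly,
tz$att.data and sgeostat's in.polygon):

lookup.tz <- function(lon, lat, polys = mappoly, att = tz$att.data) {
  hit <- sapply(polys, function(p) {
    p <- na.exclude(p)
    in.polygon(lon, lat, p[, 1], p[, 2])
  })
  att[hit, ]
}
lookup.tz(-110, 42)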




Greg Snow, Ph.D.
Statistical Data Center
[EMAIL PROTECTED]
(801) 408-8111

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] from long/lat to UTM

2005-03-10 Thread Ted Harding
On 10-Mar-05 Greg Snow wrote:
 [...]
 I found a shapefile of time zones at: 
 http://openmap.bbn.com/data/shape/timezone/ 
 
 A google search on the keywords timezone  shapefile finds a 
 few other possibilities. 
 
 b) a function which, for each name in the list, returns
the TZ name
 
 The maptools package can read in the above shapefile and 
 convert it to a list of polygons and the sgeostat has an
 in.polygon function to see if a point is in a given polygon.
 
 The following code worked for me:
 []

Many thanks, Greg. That looks just the job, and I'll look
into it.

All best wishes,
Ted.



E-Mail: (Ted Harding) [EMAIL PROTECTED]
Fax-to-email: +44 (0)870 094 0861
Date: 10-Mar-05   Time: 18:10:23
-- XFMail --

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] contrast matrix for aov

2005-03-10 Thread Darren Weber
Prof Brian Ripley wrote:
On Wed, 9 Mar 2005, Darren Weber wrote:
How do we specify a contrast interaction matrix for an ANOVA model?
We have a two-factor, repeated measures design, with

Where does `repeated measures' come into this?  You appear to have 
repeated a 2x2 experiment in each of 8 blocks (subjects).  Such a 
design is usually analysed with fixed effects.  (Perhaps you averaged 
over repeats in the first few lines of your code?)

Each subject experiences multiple instances of a visual stimulus that 
cues them to orient their attention to the left or right visual field.  
For each instance, we acquire brain activity measures from the left and 
right hemisphere (to put it simply).  For each condition in this 2x2 
design, we actually have about 400 instances for each cell for each 
subject.  These are averaged in several ways, so we obtain a single 
scalar value for each subject in the final analysis.  So, at this point, 
it does not appear to be a repeated measures analysis.  Perhaps another 
way to put it - each subject experiences every cell of the design, so we 
have within-subjects factors rather than between-subjects factors.  
That is, each subject experiences all experimental conditions; we do not 
separate and randomly allocate subjects to only one cell or another of 
the experiment.

I hope this clarifies the first question.  I will try to digest further 
points in this thread, if they are not too confounded by this first 
point of contention already.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] about R CMD check

2005-03-10 Thread xt_wang
hello, everybody,

I have created a package named var on a Linux system. I added C code
which uses R_ext/Lapack.h in the src directory. But when I run R CMD
check var, it shows

  wxt0124()
Error in .C("wxt1221", PACKAGE = "var") : C function name not in DLL for package
"var"
Execution halted

Could you tell me what is the problem?

Maggie


[EMAIL PROTECTED] myrpackages]$ R CMD check var
* checking for working latex ... OK
* using log directory '/home/credsim/src/myrpackages/var.Rcheck'
* checking for file 'var/DESCRIPTION' ... OK
* checking if this is a source package ... OK

* Installing *source* package 'var' ...
** libs
gcc -I/usr/lib/R/include  -I/usr/local/include -D__NO_MATH_INLINES -mieee-fp 
-fPIC  -O2 -g -march=i386 -mcpu=i686 -c wxt1221.c -o wxt1221.o
wxt1221.c: In function `Matrix_A_12':
wxt1221.c:774: warning: passing arg 3 of `dpotrf_' from incompatible pointer
type
wxt1221.c:775: warning: passing arg 4 of `dpotrs_' from incompatible pointer
type
wxt1221.c: In function `Matrix_A_1':
wxt1221.c:1038: warning: passing arg 3 of `dpotrf_' from incompatible pointer
type
wxt1221.c:1039: warning: passing arg 4 of `dpotrs_' from incompatible pointer
type
wxt1221.c: In function `Matrix_A_2':
wxt1221.c:1290: warning: passing arg 3 of `dpotrf_' from incompatible pointer
type
wxt1221.c:1291: warning: passing arg 4 of `dpotrs_' from incompatible pointer
type
gcc -shared -L/usr/local/lib -o var.so wxt1221.o -L/usr/lib/R/bin -lRlapack 
-L/usr/local/lib -L/usr/lib/gcc-lib/i386-redhat-linux/3.2.2
-L/usr/lib/gcc-lib/i386-redhat-linux/3.2.2/../../.. -lfrtbegin -lg2c -lm
-lgcc_s
** R
** help
Warning: \alias{wxt0124} already in wxt0124.Rd -- skipping the one in wxt1221.Rd
  Building/Updating help pages for package 'var'
 Formats: text html latex example
  wxt0124   texthtmllatex   example
  wxt1221   texthtmllatex   example
* DONE (var)

* checking package directory ... OK
* checking for portable file names ... OK
* checking for sufficient/correct file permissions ... OK
* checking DESCRIPTION meta-information ... OK
* checking package dependencies ... OK
* checking index information ... OK
* checking package subdirectories ... OK
* checking R files for syntax errors ... OK
* checking R files for library.dynam ... OK
* checking S3 generic/method consistency ... OK
* checking for replacement functions with final arg not named 'value' ... OK
* checking foreign function calls ... OK
* checking Rd files ... OK
* checking for missing documentation entries ... WARNING
Undocumented code objects:
  b
All user-level objects in a package should have documentation entries.
See chapter 'Writing R documentation files' in manual 'Writing R
Extensions'.
* checking for code/documentation mismatches ... OK
* checking Rd \usage sections ... OK
* checking for CRLF line endings in C sources/headers ... OK
* creating var-Ex.R ... OK
* checking examples ... ERROR
Running examples in var-Ex.R failed.
The error most likely occurred in:

 ### * wxt0124

 flush(stderr()); flush(stdout())

 ### Name: wxt0124
 ### Title: Estimation of Optimal Variance Components in CCC Model
 ### Aliases: wxt0124
 ### Keywords: array
   wxt0124()
Error in .C("wxt1221", PACKAGE = "var") : C function name not in DLL for package
"var"
Execution halted

 ### ** Examples
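No reply appears in this digest; one quick diagnostic (a sketch, assuming the
built var.so has been loaded via library(var) or dyn.load()) is to check
whether a C symbol of that name is actually present:

library(var)
is.loaded("wxt1221")   # FALSE here would explain the "C function name not in DLL" error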


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] contrast matrix for aov

2005-03-10 Thread Darren Weber
As an R newbie (formerly SPSS), I was pleased to find some helpful notes 
on ANOVA here:

http://personality-project.org/r/r.anova.html
In my case, I believe the relevant section is:
Example 4. Two-Way Within-Subjects ANOVA
This is where I noted and copied the error notation.
Sorry for any confusion about terms - I did mean within-subjects 
factors, rather than repeated measures (although, as noted earlier, we 
do have both in this experiment).

Prof Brian Ripley wrote:
On Thu, 10 Mar 2005, Christophe Pallier wrote:
Prof Brian Ripley wrote:
On Wed, 9 Mar 2005, Darren Weber wrote:
We have a two-factor, repeated measures design, with

Where does `repeated measures' come into this?  You appear to have 
repeated a 2x2 experiment in each of 8 blocks (subjects).  Such a 
design is usually analysed with fixed effects.  (Perhaps you 
averaged over repeats in the first few lines of your code?)

roi.aov <- aov(roi ~ (Cue*Hemisphere) + 
Error(Subject/(Cue*Hemisphere)), data=roiDataframe)

I think the error model should be Error(Subject).  In what sense are 
`Cue' and `Cue:Hemisphere' random effects nested inside `Subject'?

I do not understand this, and I think I am probably not the only one. 
That is why I would be grateful if you could give a bit more 
information.

My understanding is that the fixed factors Cue and Hemisphere are 
crossed with the random factor Subject (in other words, Cue and 
Hemisphere are within-subjects factors, and this is probably why 
Darren called it a repeated measure design).

The issue is whether the variance of the error really depends on the 
treatment combination, which is what the 
Error(Subject/(Cue*Hemisphere)) assumes.  With that model

Error: Subject:Cue
          Df Sum Sq Mean Sq F value Pr(>F)
Cue        1 0.2165  0.2165  0.1967 0.6708
Residuals  7 7.7041  1.1006
Error: Subject:Hemisphere
           Df Sum Sq Mean Sq F value Pr(>F)
Hemisphere  1 0.0197  0.0197  0.0154 0.9047
Residuals   7 8.9561  1.2794
Error: Subject:Cue:Hemisphere
               Df Sum Sq Mean Sq F value Pr(>F)
Cue:Hemisphere  1 0.0579  0.0579  0.0773  0.789
Residuals       7 5.2366  0.7481
you are assuming different variances for three contrasts.
In this case, it seems to me from the various textbooks I read on 
Anova, that the appropriate MS to  test the interaction 
Cue:Hemisphere is Subject:Cue:Hemisphere (with 7 degrees of freedom, 
as there are 8 independent subjects). If you input 
Error(Subject/(Cue*Hemisphere)) in the aov formula, then the test for 
the interaction indeed uses the Subject:Cue:Hemisphere source of 
variation in the denominator. This fits with the output of other software.

If you include only 'Subject', then the test for the interaction has 
21 degrees of Freedom, and I do not understand what this tests.

It uses a common variance for all treatment combinations.
I apologize if my terminology is not accurate.  But I hope you can 
clarify what is wrong with the Error(Subject/(Cue*Hemisphere)) term,
or maybe just point us to the relevant textbooks.

Nothing is `wrong' with it, it just seems discordant with the description
of the experiment.
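For reference, the two error specifications being contrasted in this exchange,
side by side (roi and roiDataframe as in the aov call quoted earlier):

aov(roi ~ Cue * Hemisphere + Error(Subject), data = roiDataframe)
aov(roi ~ Cue * Hemisphere + Error(Subject/(Cue * Hemisphere)), data = roiDataframe)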
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] how to view the syntax of a method which is not a generic method

2005-03-10 Thread Sarah Holte
Hello - I'm trying to modify an option for the lme() or nlme() macros.  
I want to write my own specification for the variance function and am 
following homework problem 4, Chapter 5, page 268 of Pinheiro and Bates 
book on mixed effect.

I'm up to point where I've created a new class using an existing 
variance function class, varExp as a template.  Next I need to write an 
initialize method.  The book suggests that I use the inialize method for 
one of the exisiting variance functions for lme(), eg varExp.
What I want is the syntax of the initialize.varExp method so that I can 
edit it to create an initialize method for my newly constructed class.

So - I tried to get the text of this method, called initialize.varExp by 
using the showMethods command.

Here's what happened.
 showMethods("initialize.varExp")
Function "initialize.varExp":
not a generic function
So I assume that initialize.varExp is not a generic method.   I also have 
the following information.

 findFunction("initialize.varExp")
list()
and
 findFunction("varExp")
[[1]]
<environment: package:nlme>
attr(,"name")
[1] "package:nlme"
attr(,"path")
[1] "C:/PROGRA~1/R/rw1091/library/nlme"
So I'm guessing I somehow need to tell the showMethods command that 
initialize.varExp is part of the nlme package.  And now I'm stuck - any 
suggestions?

Thanks so much - Sarah Holte
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] RODBC autocommit

2005-03-10 Thread Omar Lakkis
How can I turn autocommit off using my RODBC connection to an Informix database?
I want to turn autocommit off, insert a thousand or so rows then commit.
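No reply appears in this digest. If the installed RODBC version provides the
transaction helpers documented in its manual, a sketch would look like this
(DSN and table names are made up; worth checking ?odbcSetAutoCommit first):

library(RODBC)
ch <- odbcConnect("informixDSN")
odbcSetAutoCommit(ch, autoCommit = FALSE)
sqlSave(ch, mydata, tablename = "mytable", append = TRUE, rownames = FALSE)
odbcEndTran(ch, commit = TRUE)   # commit the inserted rows as one transaction
odbcClose(ch)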

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Transparent colors OR two series on one histogram

2005-03-10 Thread Nathaniel Street
Hi,
I want to be able to plot a single histogram of a measured trait with 
trait values from two conditions on the same histogram to allow easy 
comparison.

I have previously done this in excel by plotting the two series on a 
single bargraph having calculated frequencies in bins. You then get one 
condition plotted immediately to the right of the other. I hope that 
makes sense?

I don't know if this is possible in R?
If not, I could plot the second one on top of the first i.e.
hist(x)
hist(y,add=TRUE)
but i would need to set one as having a semi-transparent color so that 
where the two sets are plotted on top of each other, you can see the 
frequency of the first series 'behind' the second.

Can anyone help me do this? It would be another step towards never using 
excel again :)

Thanks
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] how to view the syntax of a method which is not a generic method

2005-03-10 Thread Seth Falcon
Hi Sarah,

Sarah Holte [EMAIL PROTECTED] writes:

 The book suggests that I use the inialize method for one of the
 exisiting variance functions for lme(), eg varExp.  What I want is
 the syntax of the initialize.varExp method so that I can edit it to
 create an initialize method for my newly constructed class.

You might want to download the source package for nlme.  You can get
it here:

http://cran.fhcrc.org/src/contrib/nlme_3.1-56.tar.gz

I think the function you are looking for is defined in varFunc.R and
it looks like it is Initialize.varExp (initial cap) so you might retry
your search techniques with that name.
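Once nlme is attached, getS3method() should also print the method body
directly (a quick alternative to opening the source file):

library(nlme)
getS3method("Initialize", "varExp")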

Hope that helps,

+ seth

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Transparent colors OR two series on one histogram

2005-03-10 Thread Thomas Lumley
On Thu, 10 Mar 2005, Nathaniel Street wrote:
Hi,
I want to be able to plot a single histogram of a measured trait with trait 
values from two conditions on the same histogram to allow easy comparison.

I have previously done this in excel by plotting the two series on a single 
bargraph having calculated frequencies in bins. You then get one condition 
plotted immediately to the right of the other. I hope that makes sense?

I don't know if this is possible in R?
You can do it for bar plots, but not for histograms.  Look at 
barplot(beside=TRUE)

If not, I could plot the second one on top of the first i.e.
hist(x)
hist(y,add=TRUE)
but i would need to set one as having a semi-transparent color so that where 
the two sets of are plotted on top of each other, you can see the frequency 
of the first series 'behind' the second.
Semitransparent colors are available for the pdf() and quartz() devices. 
They were described by Paul Murrell in R News Vol 4 No 2.
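
A minimal sketch, assuming numeric vectors x and y holding the two conditions and
an R version whose rgb() accepts an alpha value:

brks <- hist(c(x, y), plot = FALSE)$breaks   # common breaks for a fair comparison
pdf("two_conditions.pdf")
hist(x, breaks = brks, col = rgb(0, 0, 1, 0.5),
     main = "Two conditions", xlab = "trait value")          # translucent blue
hist(y, breaks = brks, col = rgb(1, 0, 0, 0.5), add = TRUE)  # translucent red on top
dev.off()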

-thomas
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Question regarding mosaicplot

2005-03-10 Thread Jean Vidal
I tried this : 
mosaicplot(stoc ~ q9r + segca,data=tmp2,color=T) : works fine.
And now, this :
mosaicplot(stoc ~ q9r + segca, data=tmp2, color=T, main="Big title")
Error in model.frame(formula, rownames, variables, varnames, extras, extranames,  : 
   invalid variable type

I'm probably stupid and missed something simple in the manual (and wouldn't 
like to be flamed for insinuating that it may be... a bug? Oh no!).
It can be done with:
mosaicplot(table(tmp2$stoc,tmp2$q9r,tmp2$segca), color=T, main="Big title") : works fine.
So, no real trouble for me... 

version
_  
platform i386-pc-mingw32
arch i386   
os   mingw32
system   i386, mingw32  
status  
major1  
minor8.0
year 2003   
month10 
day  08 
language R

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Logistic regression goodness of fit tests

2005-03-10 Thread Trevor Wiens
I was unsure of what suitable goodness-of-fit tests existed in R for logistic 
regression. After searching the R-help archive I found that the Design package's 
lrm models and resid() could be used to calculate this as follows:

d <- datadist(mydataframe)
options(datadist = 'd')
fit <- lrm(response ~ predictor1 + predictor2..., data=mydataframe, x=T, y=T)
resid(fit, 'gof')

I set up a script to first use glm to create models and then use stepAIC to determine 
the optimal model. I used this instead of fastbw because I found the AIC values 
to be completely different and the final models didn't always match. Then my 
script takes the reduced model formula and recreates it using lrm as above. Now 
the problem is that for some models I run into an error to which I can find no 
reference whatsoever on the mailing list or on the web. It is as follows:

test.lrm <- lrm(cclo ~ elev + aspect + cti_var + planar + feat_div + loamy + 
sands + sandy + wet + slr_mean, data=datamatrix, x = T, y = T)
singular information matrix in lrm.fit (rank= 10 ).  Offending variable(s):
slr_mean 
Error in j:(j + params[i] - 1) : NA/NaN argument


Now if I set the singularity criterion (tol) to a value smaller than the 
default of 1E-7, say 1E-9 or 1E-12 (the latter being the default in calibrate), 
it works. Why is that?

Not being a statistician but a biogeographer using regression as a tool, I 
don't really understand what is happening here. 

Does changing the tol variable change how I should interpret goodness-of-fit 
results or other evaluations of the models created?

I've included a summary of the data below (in case it might be helpful) with 
all variables in the data frame as it was easier than selecting out the ones 
used in the model.

Thanks in advance.

T
-- 
Trevor Wiens 
[EMAIL PROTECTED]

The significant problems that we face cannot be solved at the same 
level of thinking we were at when we created them. 
(Albert Einstein)


 summary(datamatrix)
 siteid block recordyearcclo   
 564-125:   5   Min.   :1.000   Min.   :2000   Min.   :0.  
 564-130:   5   1st Qu.:2.000   1st Qu.:2001   1st Qu.:1.  
 564-135:   5   Median :3.000   Median :2002   Median :1.  
 564-140:   5   Mean   :3.042   Mean   :2002   Mean   :0.7509  
 564-145:   5   3rd Qu.:4.000   3rd Qu.:2003   3rd Qu.:1.  
 564-150:   5   Max.   :5.000   Max.   :2004   Max.   :1.  
 (Other):1098  

  elevslopeaspect  slr_mean   
 Min.   :0.   Min.   :0.1499   Min.   :0.   Min.   :7681  
 1st Qu.:0.   1st Qu.:0.5876   1st Qu.:0.   1st Qu.:7852  
 Median :1.   Median :0.9195   Median :0.   Median :7877  
 Mean   :0.6259   Mean   :1.2523   Mean   :0.2482   Mean   :7871  
 3rd Qu.:1.   3rd Qu.:1.6694   3rd Qu.:0.   3rd Qu.:7892  
 Max.   :1.   Max.   :5.3366   Max.   :1.   Max.   :7981  

   cti   cti_var   planar  feat_div
 Min.   :7.157   Min.   :0.4497   Min.   :0.   Min.   :1.000  
 1st Qu.:7.651   1st Qu.:0.6187   1st Qu.:1.   1st Qu.:2.000  
 Median :7.720   Median :0.8495   Median :1.   Median :3.000  
 Mean   :7.763   Mean   :0.9542   Mean   :0.8254   Mean   :3.379  
 3rd Qu.:7.822   3rd Qu.:1.1918   3rd Qu.:1.   3rd Qu.:4.000  
 Max.   :8.769   Max.   :2.5615   Max.   :1.   Max.   :6.000  

   chop_san   loamysandssandy   
 Min.   :0.0   Min.   :0.   Min.   :0.   Min.   :0.  
 1st Qu.:0.0   1st Qu.:0.   1st Qu.:0.   1st Qu.:0.  
 Median :0.0   Median :0.   Median :0.   Median :0.  
 Mean   :0.05762   Mean   :0.3094   Mean   :0.3236   Mean   :0.1099  
 3rd Qu.:0.0   3rd Qu.:1.   3rd Qu.:1.   3rd Qu.:0.  
 Max.   :1.0   Max.   :1.   Max.   :1.   Max.   :1.  
 
  wet  timesinceburn ndvi evi
 Min.   :0.0   Min.   :  1.00   Min.   :0.1140   Min.   :0.1041  
 1st Qu.:0.0   1st Qu.:100.00   1st Qu.:0.2973   1st Qu.:0.1667  
 Median :0.0   Median :100.00   Median :0.3342   Median :0.2027  
 Mean   :0.01950   Mean   : 87.84   Mean   :0.3629   Mean   :0.2184  
 3rd Qu.:0.0   3rd Qu.:100.00   3rd Qu.:0.4463   3rd Qu.:0.2711  
 Max.   :1.0   Max.   :100.00   Max.   :0.5932   Max.   :0.4788  
 
 msavi2  fc  gddprecip  
 Min.   :0.09156   Min.   :0.1552   Min.   :380.6   Min.   : 50.04  
 1st Qu.:0.14936   1st Qu.:0.3246   1st Qu.:492.8   1st Qu.: 76.17  
 Median :0.18257   Median :0.4082   Median :500.8   Median : 85.50  
 Mean   :0.19653   Mean   :0.4398   Mean   :476.4   Mean   : 94.35  
 3rd Qu.:0.24626   3rd Qu.:0.5630   3rd Qu.:501.6   3rd Qu.: 95.16  
 Max.   :0.33258   Max.   :0.6996   Max.   

Re: [R] Question regarding mosaicplot

2005-03-10 Thread Achim Zeileis
Jean:

On Tue, 18 Nov 2003 23:32:05 Jean Vidal wrote:

 I tried this : 
  mosaicplot(stoc ~ q9r + segca,data=tmp2,color=T) : works fine.
 
 And now, this :
  mosaicplot(stoc ~ q9r + segca, data=tmp2, color=T, main=Big title)
 Error in model.frame(formula, rownames, variables, varnames, extras,
 extranames,  : 
 invalid variable type
 
 I'm probably stupid and missed something simple in the manual (and
 wouldn't like to be flamed if insinuating that, may be... a bug ? Oh
 no !).

OK, so you're getting flamed for something else:

  1. Your example is not reproducible because you didn't provide
 the data (or use artificial data or ...)!
 If I use
   tmp2 <- data.frame(stoc = gl(2, 1, 8), q9r = gl(2, 2, 8),
  segca = gl(2, 4, 8))
 the above works correctly. But then again...
  2. ...your R version is ancient, please upgrade before posting
 such requests!
  3. Please read the posting guide.

Best,
Z


 It can be done with :
  mosaicplot(table(tmp2$stoc,tmp2$q9r,tmp2$segca),color=T,main=Big
  title) : works fine.
 
 So, no real trouble for me... 
 
  version
  _  
 platform i386-pc-mingw32
 arch i386   
 os   mingw32
 system   i386, mingw32  
 status  
 major1  
 minor8.0
 year 2003   
 month10 
 day  08 
 language R
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide!
 http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Logistic regression goodness of fit tests

2005-03-10 Thread Roger Bivand
On Thu, 10 Mar 2005, Trevor Wiens wrote:

 I was unsure of what suitable goodness-of-fit tests existed in R for
 logistic regression. After searching the R-help archive I found that
 using the Design models and resid, could be used to calculate this as
 follows:
 
 d - datadist(mydataframe)
 options(datadist = 'd')
 fit - lrm(response ~ predictor1 + predictor2..., data=mydataframe, x =T, y=T)
 resid(fit, 'gof').
 
 I set up a script to first use glm to create models use stepAIC to
 determine the optimal model. I used this instead of fastbw because I
 found the AIC values to be completely different and the final models
 didn't always match. Then my script takes the reduced model formula and
 recreates it using lrm as above. Now the problem is that for some models
 I run into an error to which I can find no reference whatsoever on the
 mailing list or on the web. It is as follows:
 
 test.lrm - lrm(cclo ~ elev + aspect + cti_var + planar + feat_div + loamy + 
 sands + sandy + wet + slr_mean, data=datamatrix, x = T, y = T)
 singular information matrix in lrm.fit (rank= 10 ).  Offending variable(s):
 slr_mean 
 Error in j:(j + params[i] - 1) : NA/NaN argument
 
 
 Now if I add the singularity criterion and make the value smaller than
 the default of 1E-7 to 1E-9 or 1E-12 which is the default in calibrate,
 it works. Why is that?
 
 Not being a statistician but a biogeographer using regression as a tool,
 I don't really understand what is happening here.

From one geographer to another, and being prepared to bow to
better-founded explanations, you seem to have included a variable - the
offending variable slr_mean - that is very highly correlated with another.  
Making the tolerance tighter says that you are prepared to take the risk
of confounding your results. You've already been fishing for right hand
side variables anyway, so your results are somewhat prejudiced, aren't
they?

I think you may also like to review which of the right hand side variables
should be treated as factors rather than numeric (looking at the summary
suggests that many are factors), and perhaps the dependent variable too,
although lrm() seems to take care of this if you haven't.
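
A quick way to screen for that (a sketch; 'datamatrix' and the variable names are
taken from the original post) is to look at the pairwise correlations of the
right-hand-side variables:

vars <- c("elev", "aspect", "cti_var", "planar", "feat_div",
          "loamy", "sands", "sandy", "wet", "slr_mean")
round(cor(datamatrix[, vars]), 2)   # pairs near +1 or -1 are the likely culprits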

 
 Does changing the tol variable, change how I should interpret
 goodness-of-fit results or other evaluations of the models created?
 
 I've included a summary of the data below (in case it might be helpful)
 with all variables in the data frame as it was easier than selecting out
 the ones used in the model.
 
 Thanks in advance.
 
 T
 

-- 
Roger Bivand
Economic Geography Section, Department of Economics, Norwegian School of
Economics and Business Administration, Breiviksveien 40, N-5045 Bergen,
Norway. voice: +47 55 95 93 55; fax +47 55 95 93 93
e-mail: [EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Logistic regression goodness of fit tests

2005-03-10 Thread Frank E Harrell Jr
Trevor Wiens wrote:
I was unsure of what suitable goodness-of-fit tests existed in R for logistic 
regression. After searching the R-help archive I found that using the Design 
models and resid, could be used to calculate this as follows:
d - datadist(mydataframe)
options(datadist = 'd')
fit - lrm(response ~ predictor1 + predictor2..., data=mydataframe, x =T, y=T)
resid(fit, 'gof').
I set up a script to first use glm to create models use stepAIC to determine 
the optimal model. I used this instead of fastbw because I found the AIC values 
to be completely different and the final models didn't always match. Then my 
script takes the reduced model formula and recreates it using lrm as above. Now 
the problem is that for some models I run into an error to which I can find no 
reference whatsoever on the mailing list or on the web. It is as follows:
test.lrm - lrm(cclo ~ elev + aspect + cti_var + planar + feat_div + loamy + sands + sandy + wet + slr_mean, data=datamatrix, x = T, y = T)
singular information matrix in lrm.fit (rank= 10 ).  Offending variable(s):
slr_mean 
Error in j:(j + params[i] - 1) : NA/NaN argument

Now if I add the singularity criterion and make the value smaller than the 
default of 1E-7 to 1E-9 or 1E-12 which is the default in calibrate, it works. 
Why is that?
Not being a statistician but a biogeographer using regression as a tool, I don't really understand what is happening here. 

Does changing the tol variable, change how I should interpret goodness-of-fit 
results or other evaluations of the models created?
I've included a summary of the data below (in case it might be helpful) with 
all variables in the data frame as it was easier than selecting out the ones 
used in the model.
Thanks in advance.
T
The goodness of fit test only works on prespecified models.  It is not 
valid when stepwise variable selection is used (unless perhaps you use 
alpha=0.5).

--
Frank E Harrell Jr   Professor and Chair   School of Medicine
 Department of Biostatistics   Vanderbilt University
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] install R redhat rpm as a normal user

2005-03-10 Thread Eric Hu
Thank you all.
Best,
Eric
On Mar 10, 2005, at 7:21 AM, Marc Schwartz wrote:
On Wed, 2005-03-09 at 16:17 -0800, Eric Hu wrote:
Hi, I wonder if anyone has done this before. I have rpm-build
installed in my workstation. Thanks.
Eric

I have not seen any other replies to this, but as far as I know 
Martyn's
RPMS are not relocatable and a test this morning confirms that.

If you need to install R as a non-root user, you are likely better off
installing from source code. Modify the --prefix argument
to ./configure as per the R Administration Manual. For example:
  ./configure --prefix=TargetDir
where TargetDir is a directory that you have write privileges to.
Then use make and make install which will then build and install R.
R will be installed in TargetDir (ie. /home/UserName/R) with a set of
subdirs bin, lib and man. To run R, you would then use:
  TargetDir/bin/R
Or you can create a symlink in /home/UserName/bin to the above file and
then just use R to start it.
See the R Administration Manual for more information.
HTH,
Marc Schwartz

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Logistic regression goodness of fit tests

2005-03-10 Thread Trevor Wiens
On Thu, 10 Mar 2005 22:36:09 +0100 (CET)
Roger Bivand [EMAIL PROTECTED] wrote:

 From one geographer to another, and being prepared to bow to
 better-founded explanations, you seem to have included a variable - the
 offending variable slr_mean - that is very highly correlated with another.  
 Making the tolerance tighter says that you are prepared to take the risk
 of confounding your results. You've already been fishing for right hand
 side variables anyway, so your results are somewhat prejudiced, aren't
 they?
 
 I think you may also like to review which of the right hand side variables
 should be treated as factors rather than numeric (looking at the summary
 suggests that many are factors), and perhaps the dependent variable too,
 although lrm() seems to take care of this if you haven't.
 

Thanks for your informative reply.

The nature of the research is habitat selection for 15 species of grassland 
birds (my master's project).  The response here is presence of Chestnut-collared 
Longspur (cclo). I very carefully reviewed the variables for collinearity and 
none of them showed any difficulty, except in a few cases, which I have used to 
split what would otherwise have seemed a reasonable single model into two. 
Just out of interest, however, I did run the global model, and this 
problem didn't occur, which seems to indicate to me, based on your comments, that 
I'm seeing an interaction effect, not a result of two closely correlated 
variables. 

I don't think I've been fishing. I selected variables for inclusion in 
competing models based on ecologically reasonable criteria. I did examine 
relationships between species occurrence and static variables such as 
DEM-derived variables, to see if the data supported including all variables that, 
based on ecological criteria, should explain the birds' distribution. I have 
included some variables in spite of weak statistical relationships, based on a 
paper by Anderson, Burnham and Thompson in the Journal of Wildlife Management, which 
talks about how factors that are ecologically significant can have interaction 
effects in a model and help explain the response variable, even 
though individually they are not statistically significant by themselves. So 
I've tried to avoid fishing, and have instead simply tried to select the most 
parsimonious models from the set of selected models for each species.

Based on your advice, once I've selected my top candidate models, I'll re-run them at 
lower tolerances and only keep models that pass at that level. 
Alternatively, I could simply reject models that don't pass at lower tolerances. 
I do find it curious however that they run fine using glm (family = binomial) 
without complaint.

Thanks again.

T
-- 
Trevor Wiens 
[EMAIL PROTECTED]

The significant problems that we face cannot be solved at the same 
level of thinking we were at when we created them. 
(Albert Einstein)

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Logistic regression goodness of fit tests

2005-03-10 Thread Trevor Wiens
On Thu, 10 Mar 2005 16:19:41 -0600
Frank E Harrell Jr [EMAIL PROTECTED] wrote:

 The goodness of fit test only works on prespecified models.  It is not 
 valid when stepwise variable selection is used (unless perhaps you use 
 alpha=0.5).
 

Perhaps I'm blind, but I can't find any reference to an alpha parameter on the 
help page for stepAIC. It runs fine, however, when I set the parameter, and 
produces a model with the same right-hand variables as without. Can you tell 
me what it is?

T
-- 
Trevor Wiens 
[EMAIL PROTECTED]

The significant problems that we face cannot be solved at the same 
level of thinking we were at when we created them. 
(Albert Einstein)

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Is anyone using the MiniR distribution?

2005-03-10 Thread Duncan Murdoch
On Fri, 04 Feb 2005 14:45:35 +, Duncan Murdoch
[EMAIL PROTECTED] wrote :

On Fri, 4 Feb 2005 14:46:03 +0100, Martin Maechler
[EMAIL PROTECTED] wrote :


 Duncan == Duncan Murdoch [EMAIL PROTECTED]
 on Fri, 04 Feb 2005 09:50:22 + writes:

Duncan The miniR files only include a minimal installation
Duncan of R, and are rarely tested.  Rather than building
Duncan something that may not even work, I'd like to stop
Duncan building them.

Duncan Would this be a problem for anyone?


People for which this is a problem will probably not be
subscribed to R-help because they'll be living in places /
situations with bad / expensive internet connection.

(Sorry to be not really constructive here).

That's a good point; I'll put a copy of my question in the miniR
directory on CRAN.  It's not a big problem to build it, but I don't
want to test it if it's not being used.  

Not having heard any complaints on this issue, I have now stopped
building the mini distribution.  I have taken the 2.0.1 files offline
(but still have them available in case this causes hardship for
anyone).

Duncan Murdoch

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Any upcoming R Advanced Programming in Bay area?

2005-03-10 Thread paul king
Hi all,
I am taking a new job in San Francisco and wonder if there's an upcoming
R advanced course in bay area.
Best- Paul
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Setting variable main in plot-function

2005-03-10 Thread T Petersen
Hi,
I am plotting the residuals of z like this
plot(resid(z))
and I want the title of the graph to be
main = "Residuals from regression [name of regression]"
where in this case z is the name of the regression. Is there a way to 
automatically put the name of the regression in the title, e.g. if the 
regression's name changes to y, the title changes accordingly?

Regards T Petersen
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Ploting a function of two arguments

2005-03-10 Thread Martin C. Martin
Hi,
I've written a function:
myfun <- function(x, y) {
  # blah blah
}
and I want to graph it.  My plan is to use persp(), and my question is: 
how do I create the array of values?  One possibility is:

x <- seq(0, 10, by=.1)
y <- seq(0, 10, by=.1)
inputs <- ...   # somehow create an array of x,y pairs
outputs <- apply(inputs, c(1,2), myfun)
The inputs array would need to be a 101x101x2 array, i.e. basically a 2D 
array of pairs.  I might need to write a little wrapper for myfun.

Am I on the right track, or is there an easier way to do this?  It seems 
there should be an easier way to graph a 2 argument function like this.
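
The usual shortcut is outer(), which evaluates the function over the whole grid at
once (a sketch, assuming myfun is vectorised in both arguments; the body below is
only a placeholder):

myfun <- function(x, y) sin(x) * cos(y)   # placeholder body
x <- seq(0, 10, by = 0.1)
y <- seq(0, 10, by = 0.1)
z <- outer(x, y, myfun)                   # 101 x 101 matrix of function values
persp(x, y, z, theta = 30, phi = 30)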

- Martin
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] trouble loading R function code

2005-03-10 Thread David Reinke
I created an R function offline with a text editor and saved it as fhv.R
When I type in 
load(file="fhv.R")
I get the message
Error: bad restore file magic number (file may be corrupted)-- no data
loaded
I've checked the file, and it opens up fine in Notepad and other text
editors. I also tried this from another folder and got the same message.
What's going on?
Thanks in advance.

David Reinke

Senior Transportation Engineer/Economist
Dowling Associates, Inc.
180 Grand Avenue, Suite 250
Oakland, California 94612-3774
510.839.1742 x104 (voice)
510.839.0871 (fax)
www.dowlinginc.com

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] trouble loading R function code

2005-03-10 Thread Martin C. Martin
try:
source("fhv.R")
David Reinke wrote:
I created an R function offline with a text editor and saved it as fhv.R
When I type in 
load(file=fhv.R)
I get the message
Error: bad restore file magic number (file may be corrupted)-- no data
loaded
I've checked the file, and it opens up fine in Notepad and other text
editors. I also tried this from another folder and got the same message.
What's going on?
Thanks in advance.

David Reinke
Senior Transportation Engineer/Economist
Dowling Associates, Inc.
180 Grand Avenue, Suite 250
Oakland, California 94612-3774
510.839.1742 x104 (voice)
510.839.0871 (fax)
www.dowlinginc.com
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Errors reading data file?

2005-03-10 Thread Chris Bergstresser
Hi all --
   I tried loading a data file with the following command:
 data = read.table("filename.txt", header = TRUE, sep = ",")
   This appeared to work fine, except it silently skipped 400 records 
(out of 1200).  It turns out, some of the text fields included quotes, 
and I needed to use 'quote = ""'.
   Why wasn't there an error message?  Is there some way to enable one?
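
One way to spot this kind of silent mismatch is count.fields(), which reports how
many fields each input line parses into (a sketch; "filename.txt" is the poster's
file):

n <- count.fields("filename.txt", sep = ",")
table(n)            # rows with an unexpected field count point to the problem
which(n != n[1])    # which input lines disagree with the header line
data <- read.table("filename.txt", header = TRUE, sep = ",", quote = "")
nrow(data)          # and always check the row count against what you expect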

-- Chris
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Bonferroni simultaneous confidence intervals for multiple regression

2005-03-10 Thread Cara Gormally
Hi,

I'm having no luck figuring out how to find Bonferroni simultaneous confidence 
intervals to obtain a family of estimates in R.  Does anyone know how to do 
this?
Thank you!
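
A minimal sketch for the usual multiple-regression case, assuming a fitted lm
object (the model and data below are hypothetical): with k intervals, using level
1 - alpha/k gives joint coverage of at least 1 - alpha by the Bonferroni
inequality.

fit <- lm(y ~ x1 + x2 + x3, data = mydata)   # hypothetical model and data
k <- length(coef(fit))                       # number of simultaneous intervals
confint(fit, level = 1 - 0.05 / k)           # Bonferroni-adjusted 95% family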

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Setting variable main in plot-function

2005-03-10 Thread Marc Schwartz
On Fri, 2005-03-11 at 00:50 +0100, T Petersen wrote:
 Hi,
 
 I am plotting the residuals of z like this
 
 plot(resid(z))
 
 and I want the title of the graph to be
 
 main=Residuals from regression [name of regression]
 
 where in this case z is the name of the regression. Is there a way to 
 automaticall put the name of the regression in the title, e.g if the 
 regressions name changes to y, the title changes accordingly?


I am presuming that by the name of the regression, you mean the model
formula.

Using one of the example models from ?lm, where a model is created
called lm.D9:

plot(resid(lm.D9), 
 main = paste("Residuals from regression", deparse(lm.D9$call)))

In this case, the model function call is:

 deparse(lm.D9$call)
[1] "lm(formula = weight ~ group)"

which gets pasted to your initial text. The initial model function call
is stored in the linear model object as ModelName$call.

So in your case use:

plot(resid(z), 
 main = paste("Residuals from regression", deparse(z$call)))

Just replace 'z' with the name of each new model object and the title
will change or use the same model object name each time you create a new
model, whichever makes sense for your application.

One way to figure these things out, if you know that it has been done by
some other function, is to review the code for that function. In this
case review the code for print.lm or even plot.lm to gain some insight
into how R expressions are used in function text or plot title and axis
label outputs.
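
An alternative, if you would rather not repeat the model name, is a small wrapper
that picks the name up itself (a sketch; the function name residPlot is made up):

residPlot <- function(fit) {
    plot(resid(fit),
         main = paste("Residuals from regression", deparse(substitute(fit))))
}
residPlot(z)   # title becomes "Residuals from regression z"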

HTH,

Marc Schwartz

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] XML to data frame or list

2005-03-10 Thread bogdan romocea
Dear useRs,

I have a simple/RTFM question about XML parsing. Given an XML file,
such as (fragment)
<A>100</A>
<B>23</B>
<C>true</C>
how do I import it in a data frame or list, so that the values (100,
23, true) can be accessed through the names A, B and C?

I installed the XML package and looked over the documentation...
however after 20 minutes and a couple of tests I still don't know what
I should start with. 

Can someone provide an example or point me to the appropriate
function(s)?
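
For a fragment like the one above, something along these lines should work (a
sketch using the XML package, assuming the elements are wrapped in a single root
node and saved as "example.xml"):

library(XML)
doc  <- xmlTreeParse("example.xml")
root <- xmlRoot(doc)
vals <- sapply(xmlChildren(root), xmlValue)   # named character vector
vals["A"]                                     # "100"
as.list(vals)                                 # or a list, if preferred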

Thank you,
b.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] How to reply to a posting if I subscribe to the digest version of R-help?

2005-03-10 Thread Keith Wong
Dear listmembers,
Is there a recommended method of responding to a thread if I am getting the 
mailing list in digest form?

Will the mailing list program automatically match threads if I copy the 
subject from the original posting?

Thank you.
Keith
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] sample function

2005-03-10 Thread mirage sell
Hi everyone, I need help.
I want to have a uniform kind of distribution. When I used the sample function I 
got almost twice as many zeros compared to the other numbers. What's wrong with my 
command?

temp <- sample(0:12, 2000, replace=T, prob=rep(1/13,13))
hist(temp)
Thanks in advance,
Taka,
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] sample function

2005-03-10 Thread Martin C. Martin
hist is lumping things together.
Try:
sum(temp == 0)
and compare it to the height of the leftmost bar.
Is this a bug in hist?
- Martin
mirage sell wrote:
Hi everyone, I need help.
I want to have a uniform kind distribution. When I used sample 
function I got almost twice many zeros compared to other numbers. What's 
wrong with my command ?

temp -sample(0:12, 2000, replace=T,prob=(rep(1/13,13)))
hist(temp)
Thanks in advance,
Taka,
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! 
http://www.R-project.org/posting-guide.html
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] sample function

2005-03-10 Thread Liaw, Andy
It's not the simulated data, but how hist() handled it.  If you use
truehist() in the MASS package, you don't see the problem.  Nor would you
see it like this:

 table(temp)/length(temp)
temp
     0      1      2      3      4      5      6      7      8      9     10     11     12 
0.0745 0.0745 0.0830 0.0755 0.0760 0.0750 0.0700 0.0765 0.0775 0.0805 0.0830 0.0765 0.0775 

Andy

 From: mirage sell
 
 Hi everyone, I need help.
 I want to have a uniform kind distribution. When I used 
 sample function I 
 got almost twice many zeros compared to other numbers. What's 
 wrong with my 
 command ?
 
 temp -sample(0:12, 2000, replace=T,prob=(rep(1/13,13)))
 hist(temp)
 
 Thanks in advance,
 
 Taka,
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html
 
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] sample function

2005-03-10 Thread David Scott
On Thu, 10 Mar 2005, mirage sell wrote:
Hi everyone, I need help.
I want to have a uniform kind distribution. When I used sample function I 
got almost twice many zeros compared to other numbers. What's wrong with my 
command ?

Nothing is wrong with your sampling, it is the display in the histogram.
Try
temp <- sample(0:12, 2000, replace=T, prob=rep(1/13,13))
table(temp)
David Scott
_
David Scott Department of Statistics, Tamaki Campus
The University of Auckland, PB 92019
AucklandNEW ZEALAND
Phone: +64 9 373 7599 ext 86830 Fax: +64 9 373 7000
Email:  [EMAIL PROTECTED]
Graduate Officer, Department of Statistics
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] sample function

2005-03-10 Thread David Scott
On Thu, 10 Mar 2005, Martin C. Martin wrote:
hist is lumping things together.
Try:
sum(temp == 0)
compare to the height of the left most bar.
Is this a bug in hist?
No, hist is the wrong thing to use to display this data.
Try
temp <- sample(0:12, 2000, replace=T, prob=rep(1/13,13))
barplot(table(temp))
David Scott
_
David Scott Department of Statistics, Tamaki Campus
The University of Auckland, PB 92019
AucklandNEW ZEALAND
Phone: +64 9 373 7599 ext 86830 Fax: +64 9 373 7000
Email:  [EMAIL PROTECTED]
Graduate Officer, Department of Statistics
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] sample function

2005-03-10 Thread Marc Schwartz
On Thu, 2005-03-10 at 20:54 -0600, mirage sell wrote:
 Hi everyone, I need help.
 I want to have a uniform kind distribution. When I used sample function I 
 got almost twice many zeros compared to other numbers. What's wrong with my 
 command ?
 
 temp -sample(0:12, 2000, replace=T,prob=(rep(1/13,13)))
 hist(temp)
 
 Thanks in advance,

Hint: take note that there are only 12 cells in the plot, not 13...

However, note that the frequency of the 13 elements are appropriate:

 table(sample(0:12, 2000, replace=T))

  0   1   2   3   4   5   6   7   8   9  10  11  12
158 156 151 163 156 158 146 154 134 158 146 147 173


Review the details of how the breaks are selected in ?hist.

BTW, you do not need to specify the 'prob' argument if you want equal
probabilities as per my example above.
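
One way to give each of the 13 values its own cell is to supply breaks centred on
the integers (a sketch using the poster's temp):

hist(temp, breaks = seq(-0.5, 12.5, by = 1))   # one bar per value 0..12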

HTH,

Marc Schwartz

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Interval censoring in Survival analysis

2005-03-10 Thread A.J. Rossini
On Thu, 10 Mar 2005 08:01:57 -0800 (PST), Thomas Lumley
[EMAIL PROTECTED] wrote:
 On Thu, 10 Mar 2005, AMOROS NAVARRO, ALEX wrote:
 
 I am not aware of any package that allows semiparametric regression
 modelling of interval censored or doubly censored data.

I'm aware of 2 such packages, but the one that is more accurate won't
be published due to licensing restrictions on one of the subcomponents
(nonlinear programming with nonlinear constraints optimizer), and the
other is less accurate (a profiling approach) and shouldn't be.

That being said, I'm sure others have tried.

-- 
best,
-tony

Commit early,commit often, and commit in a repository from which we can easily
roll-back your mistakes (AJR, 4Jan05).

A.J. Rossini
[EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Apologies --- Re: [R] Question regarding mosaicplot

2005-03-10 Thread Jean Vidal
May I present my apologies to all, on this list, and to you Achim ?
It's an old message that was sent by accident yesterday when testing the 
Thunderbird mail program.

So, please, ignore this question. It was answered and solved months ago.


At 22:16 10/03/2005, Achim Zeileis wrote:
Jean:
On Tue, 18 Nov 2003 23:32:05 Jean Vidal wrote:
 I tried this :
  mosaicplot(stoc ~ q9r + segca,data=tmp2,color=T) : works fine.

 And now, this :
  mosaicplot(stoc ~ q9r + segca, data=tmp2, color=T, main=Big title)
 Error in model.frame(formula, rownames, variables, varnames, extras,
 extranames,  :
 invalid variable type

 I'm probably stupid and missed something simple in the manual (and
 wouldn't like to be flamed if insinuating that, may be... a bug ? Oh
 no !).
OK, so you're getting flamed for something else:
  1. Your example is not reproducible because you didn't provide
 the data (or use artificial data or ...)!
 If I use
   tmp2 - data.frame(stoc = gl(2, 1, 8), q9r = gl(2, 2, 8),
  segca = gl(2, 4, 8))
 the above works correct. But then again...
  2. ...your R version is ancient, please upgrade before posting
 such requests!
  3. Please read the posting guide.
Best,
Z
 It can be done with :
  mosaicplot(table(tmp2$stoc,tmp2$q9r,tmp2$segca),color=T,main=Big
  title) : works fine.

 So, no real trouble for me...

  version
  _
 platform i386-pc-mingw32
 arch i386
 os   mingw32
 system   i386, mingw32
 status
 major1
 minor8.0
 year 2003
 month10
 day  08
 language R

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide!
 http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] help on warning message when using anova

2005-03-10 Thread tpeng
Hi, can someone help me with the following warning message I got when I used two-way
anova:
Warning messages:
1: Models with response NULL removed because response differs from model 1 in: anova.lmlist(object, ...)
2: Models with response NULL removed because response differs from model 1 in: anova.lmlist(object, ...)
 (I have two rows of separate data sets)

Thanks in advance,

tao

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] howto: plot location, web search

2005-03-10 Thread Mike R
Hi,

new to the list, R and rpy. my first post.

using rpy, I find this

  r.X11(height=3, width=6)

allows me to preset the size of a subsequent

   r.plot(x, y)

but I've been unable to find documentation on related aspects, such as
programmatic control of the plot window location on the monitor, or
programmatic resizing of a plot once it is generated. Are these
possible?

In general I've found it difficult to use Google to search for answers
because R is not a very effective keyword. Are there tricks to
googling for R info?  And if I start a web-accessible HTML page for R,
what keywords can I include that would make it easier for search
engines to index so that other R-folk could find it?

Thanks in advance,
mike

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] xlab cex bug in Lattice/grid and multipage-output (Windows only?)

2005-03-10 Thread Dieter Menne

The following problem seems to be Windows-specific. Deepayan informed me
that it cannot be reproduced on platform i386-pc-linux-gnu.

-
library(lattice)
trellis.device(png, file="sepal%02d.png", color=T, theme=col.whitebg)
p=xyplot(Sepal.Length + Sepal.Width ~ Petal.Length + Petal.Width | Species,
   data = iris, scales = "free", layout = c(1, 1)
   )

print(p)
dev.off()


Compare the x-label on the first and on all following pages: font size is
different.
Same for bmp. Does not occur on multipage-devices such as ps.
By comparing with the png files from Deepayan/Linux, I believe that only the
first xlab has the correct small size, and all others are not correct.

My version: (all libraries updated today)
platform i386-pc-mingw32
arch i386
os   mingw32
system   i386, mingw32
status
major2
minor0.1
year 2004
month11
day  15
language R

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] howto: plot location, web search

2005-03-10 Thread Mulholland, Tom
If it's specifically R that I am looking for I use the following link 
http://finzi.psych.upenn.edu/search.html

If I were going to use google I would go to the advanced page and insert 
finzi.psych.upenn.edu into the Domain field so as to restrict the search. As 
you have found out the letter R is ubiquitous. There are other search engines 
listed on the CRAN website that you can also use 
http://cran.r-project.org/search.html

However rpy is much less frequent. In fact google comes up with it straight 
away. From that I discovered that RPy has a mailing list 
http://rpy.sourceforge.net/maillist.html. Maybe you should try searching their 
archives http://rpy.sourceforge.net/maillist.html. 

I hope that helps

Tom

 -Original Message-
 From: Mike R [mailto:[EMAIL PROTECTED]
 Sent: Friday, 11 March 2005 2:44 PM
 To: r-help@stat.math.ethz.ch
 Subject: [R] howto: plot location, web search
 
 
 Hi,
 
 new to the list, R and rpy. my first post.
 
 using rpy, i find this
 
   r.X11(height=3, width=6)
 
 allows me to preset the size of a subsequent
 
r.plot(x, y)
 
 but i've been unable to find documention on related aspects, such as
 programatic control of the plot window location on the monitor, or
 programatic resizing of a plot once it is generated. Are these
 possible?
 
 In general I've found it diffcult to use google to search for answers
 because R is not a very effective keyword. Are there tricks to
 googling for R info?  And If a start a web accessible html page for R,
 what keywords can I include that would make it easier for search
 engines to index so that other R-folk could find it?
 
 Thanks in advance,
 mike
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] howto: plot location, web search

2005-03-10 Thread Mulholland, Tom
Sorry about the wrong link. The archives are at 
http://sourceforge.net/mailarchive/forum.php?forum=rpy-list

 -Original Message-
 From: Mike R [mailto:[EMAIL PROTECTED]
 Sent: Friday, 11 March 2005 2:44 PM
 To: r-help@stat.math.ethz.ch
 Subject: [R] howto: plot location, web search
 
 
 Hi,
 
 new to the list, R and rpy. my first post.
 
 using rpy, i find this
 
   r.X11(height=3, width=6)
 
 allows me to preset the size of a subsequent
 
r.plot(x, y)
 
 but i've been unable to find documention on related aspects, such as
 programatic control of the plot window location on the monitor, or
 programatic resizing of a plot once it is generated. Are these
 possible?
 
 In general I've found it diffcult to use google to search for answers
 because R is not a very effective keyword. Are there tricks to
 googling for R info?  And If a start a web accessible html page for R,
 what keywords can I include that would make it easier for search
 engines to index so that other R-folk could find it?
 
 Thanks in advance,
 mike
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] about R CMD check

2005-03-10 Thread Liaw, Andy
Please show us at least the prototype of the C function(s) being called, and
the .C call in the R code.

Just to make sure:  The first argument to .C() should be the name of the C
function, not the name of the C source file.
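
For reference, the usual pattern looks something like this (a sketch only; the C
signature below is hypothetical, since the real one is not shown here):

## C side (in src/wxt1221.c):  void wxt1221(double *x, int *n) { ... }
## R side:
wxt0124 <- function(x) {
    .C("wxt1221",
       x = as.double(x),
       n = as.integer(length(x)),
       PACKAGE = "var")
}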

Andy

 From: [EMAIL PROTECTED]
 
 hello, everybody,
 
 I create a package by myself named var on Linux system. I 
 attach a C code
 which uses R_ext/Lapack.h in directory src. But when I 
 carry out  R CMD
 check var, it shows
 
   wxt0124()
Error in .C("wxt1221", PACKAGE = "var") : C function name not 
in DLL for package
"var"
 Execution halted
 
 Could you tell me what is the problem?
 
 Maggie
 
 
 [EMAIL PROTECTED] myrpackages]$ R CMD check var
 * checking for working latex ... OK
 * using log directory '/home/credsim/src/myrpackages/var.Rcheck'
 * checking for file 'var/DESCRIPTION' ... OK
 * checking if this is a source package ... OK
 
 * Installing *source* package 'var' ...
 ** libs
 gcc -I/usr/lib/R/include  -I/usr/local/include 
 -D__NO_MATH_INLINES -mieee-fp 
 -fPIC  -O2 -g -march=i386 -mcpu=i686 -c wxt1221.c -o wxt1221.o
 wxt1221.c: In function `Matrix_A_12':
 wxt1221.c:774: warning: passing arg 3 of `dpotrf_' from 
 incompatible pointer
 type
 wxt1221.c:775: warning: passing arg 4 of `dpotrs_' from 
 incompatible pointer
 type
 wxt1221.c: In function `Matrix_A_1':
 wxt1221.c:1038: warning: passing arg 3 of `dpotrf_' from 
 incompatible pointer
 type
 wxt1221.c:1039: warning: passing arg 4 of `dpotrs_' from 
 incompatible pointer
 type
 wxt1221.c: In function `Matrix_A_2':
 wxt1221.c:1290: warning: passing arg 3 of `dpotrf_' from 
 incompatible pointer
 type
 wxt1221.c:1291: warning: passing arg 4 of `dpotrs_' from 
 incompatible pointer
 type
 gcc -shared -L/usr/local/lib -o var.so wxt1221.o 
 -L/usr/lib/R/bin -lRlapack 
 -L/usr/local/lib -L/usr/lib/gcc-lib/i386-redhat-linux/3.2.2
 -L/usr/lib/gcc-lib/i386-redhat-linux/3.2.2/../../.. 
 -lfrtbegin -lg2c -lm
 -lgcc_s
 ** R
 ** help
 Warning: \alias{wxt0124} already in wxt0124.Rd -- skipping 
 the one in wxt1221.Rd
   Building/Updating help pages for package 'var'
  Formats: text html latex example
   wxt0124   texthtmllatex   example
   wxt1221   texthtmllatex   example
 * DONE (var)
 
 * checking package directory ... OK
 * checking for portable file names ... OK
 * checking for sufficient/correct file permissions ... OK
 * checking DESCRIPTION meta-information ... OK
 * checking package dependencies ... OK
 * checking index information ... OK
 * checking package subdirectories ... OK
 * checking R files for syntax errors ... OK
 * checking R files for library.dynam ... OK
 * checking S3 generic/method consistency ... OK
 * checking for replacement functions with final arg not named 
 'value' ... OK
 * checking foreign function calls ... OK
 * checking Rd files ... OK
 * checking for missing documentation entries ... WARNING
 Undocumented code objects:
   b
 All user-level objects in a package should have documentation entries.
 See chapter 'Writing R documentation files' in manual 'Writing R
 Extensions'.
 * checking for code/documentation mismatches ... OK
 * checking Rd \usage sections ... OK
 * checking for CRLF line endings in C sources/headers ... OK
 * creating var-Ex.R ... OK
 * checking examples ... ERROR
 Running examples in var-Ex.R failed.
 The error most likely occurred in:
 
  ### * wxt0124
 
  flush(stderr()); flush(stdout())
 
  ### Name: wxt0124
  ### Title: Estimation of Optimal Variance Components in CCC Model
  ### Aliases: wxt0124
  ### Keywords: array
wxt0124()
 Error in .C("wxt1221", PACKAGE = "var") : C function name not 
 in DLL for package
 "var"
 Execution halted
 
  ### ** Examples
 
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html
 
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html