[R] Creating a directory for my data

2009-03-10 Thread miya

Hi everyone,
I am currently working with a very large data set. It is data collected a
few times a day, so there are repeated titles in the data set. I want to
assign an id number to each different title and enter this information in a
directory that I can access whenever I am working with the data.

An example of the data I am working with follows:

1 title A
2 title C
3 title B
1 title A
2 title B
3 title C
1 title D
2 title A
3 title C
1 title B
2 title A
3 title C

What I have tried thus far is for loops.

r <- matrix(x[, 1])
t <- matrix(x[, 2])
for (i in 1:length(t)) {
  for (k in 1:length(r)) {
    z = matrix(c(i, t))
    A = "Article A"
    for (j in 1:length(z)) {
      if (B == "Article A") {
        # do nothing
      } else {
        # enter article in z
      }
    }
  }
}

This is of course giving me a lot of errors. I just have no idea where to go
with this. Does anyone have any ideas on where I can go from here to create
my directory?
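(One possible approach, sketched here purely for illustration with made-up data and column names, not taken from the post: build the lookup once with unique() and attach the id numbers with match().)

dat <- data.frame(row   = c(1, 2, 3, 1, 2, 3),
                  title = c("A", "C", "B", "A", "B", "C"),
                  stringsAsFactors = FALSE)
# one row per distinct title, with an id number
directory <- data.frame(id = seq_along(unique(dat$title)),
                        title = unique(dat$title))
# look up the id for every record in the large data set
dat$title.id <- match(dat$title, directory$title)
# save the lookup so it can be reused in later sessions
write.csv(directory, "title_directory.csv", row.names = FALSE)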
I appreciate all your help.
Thank you in advance.

-- 
View this message in context: 
http://www.nabble.com/Creating-a-directory-for-my-data-tp22448335p22448335.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Converting a dataframe to a matrix

2009-03-10 Thread Jennifer Brea

If I have a dataframe which is organized like this:

   name color likes?
1 sally   red      0
2 sally  blue      1
3 sally green      1
4  jake   red      0
5  jake  blue      1
6  jake green      1
7   tom   red      1
8   tom  blue      0
9   tom green      0


And I want to create a matrix in the form:

      red blue green
sally   0    1     1
jake    0    1     1
tom     1    0     0


Are there any built-in commands that might help me do this?  Also, I 
can't assume that there is an observation for every person-color.  In 
other words, in the original dataset, there might be some colors for 
which sally offered no opinion.  In some cases, this may be represented 
by NA, in others, it may mean that no row exists for sally for that color.
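(A minimal sketch of one way to do this in base R, added for illustration; the data frame 'dat' and the column name 'likes' are placeholders for your real columns. tapply() over the two grouping variables returns a person-by-colour matrix, and any person-colour pair with no row comes out as NA.)

dat <- data.frame(name  = rep(c("sally", "jake", "tom"), each = 3),
                  color = rep(c("red", "blue", "green"), 3),
                  likes = c(0, 1, 1, 0, 1, 1, 1, 0, 0))
m <- with(dat, tapply(likes, list(name, color), mean))
m   # rows = name, columns = color; missing person-color pairs show up as NA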


Thank you!

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] North Arrow (.png file) on a Map

2009-03-10 Thread Yihui Xie
Is this arrow satisfactory for you?

north.arrow = function(x, y, h) {
    polygon(c(x, x, x + h/2), c(y - h, y, y - (1 + sqrt(3)/2) * h),
        col = "black", border = NA)
    polygon(c(x, x + h/2, x, x - h/2), c(y - h, y - (1 + sqrt(3)/2) *
        h, y, y - (1 + sqrt(3)/2) * h))
    text(x, y, "N", adj = c(0.5, 0), cex = 4)
}
plot(1, type = "n", ylim = c(0, 1))
north.arrow(1, 0.8, 0.3)

Regards,
Yihui
--
Yihui Xie xieyi...@gmail.com
Phone: +86-(0)10-82509086 Fax: +86-(0)10-82509086
Mobile: +86-15810805877
Homepage: http://www.yihui.name
School of Statistics, Room 1037, Mingde Main Building,
Renmin University of China, Beijing, 100872, China



On Tue, Mar 10, 2009 at 7:21 PM, Rodrigo Aluizio r.alui...@gmail.com wrote:
 Hi list.

 I would like to know how to insert a north arrow, stored as a PNG file on
 my computer, into a map. I found lots of posts asking similar things; one of
 them mentioned the pixmap package.  The map was done using map() and
 shapefiles (the code is below). I'm using the pixmap() and addlogo()
 functions. I can import the PNG with the pixmap() function (I guess, since
 there's no error message), but I can't put it on the map; I get an error
 message telling me that:



 "Error at t(x@index[nrow(x@index):1, , drop = FALSE]) :

  index out of limits"



 Well I tried changing coordinates but I always got the same result. How do I
 do this correctly? Is there a better way?



 Thanks for the help and attention.



 Here is the complete map script:



 library(RODBC)

 library(maps)

 library(mapdata)

 library(maptools)

 library(pixmap)

 # Load coordinates and data for the sampling points
 Dados <- odbcConnectExcel('Campos.xls', readOnly=T)
 Coord <- sqlFetch(Dados, 'CoordMed', colnames=F, rownames='Ponto')
 odbcClose(Dados)
 N <- pixmap('Norte.png', nrow=166, ncol=113)

 # Load points and shapefiles
 Batimetria <- readShapeSpatial('C:/Users/Rodrigo/Documents/UFPR/Micropaleontologia/Campos/ShapeFiles/Batimetria_BC.shp')
 Estados <- readShapeSpatial('C:/Users/Rodrigo/Documents/UFPR/Micropaleontologia/Campos/ShapeFiles/Estados_Sudeste.shp')
 Faciologia <- readShapeSpatial('C:/Users/Rodrigo/Documents/UFPR/Micropaleontologia/Campos/ShapeFiles/Faciologia_BC.shp')

 # Map with the sampling points of the basin
 postscript('MapaCampos.eps', paper='special', onefile=F, horizontal=F,
            width=3.5, height=4.5, bg='white', pointsize=3)
 par(mar=c(3,2,2,0))
 map('worldHires', 'brazil', ylim=c(23.9,20.3), xlim=c(42.1,39.2), type='n')
 plot(Faciologia, ylab='', xlab='',
      col=c('lightgreen','lightgreen','lightgreen','lightgreen','lightgreen',
            'lightgray','lightgray','lightgray','lightgray','lightgray',
            'lightgray','lightgray','lightgray','lightgray','lightgray',
            'lightgray','lightgray','lightgray','lightgray','lightgray',
            'lightgray','lightyellow','lightyellow','lightyellow'),
      add=T, lwd=0.5, border=0)
 plot(Batimetria, ylab='', xlab='', col='darkgray', lty='solid', lwd=0.2, add=T)
 plot(Estados, ylab='', xlab='', lty='solid', add=T, lwd=0.8)
 text(Coord$Longitude[Coord$Réplicas=='1'], Coord$Latitude[Coord$Réplicas=='1'],
      rownames(Coord)[Coord$Réplicas=='1'], col='red', cex=0.5, font=2)
 text(Coord$Longitude[Coord$Réplicas=='2'], Coord$Latitude[Coord$Réplicas=='2'],
      rownames(Coord)[Coord$Réplicas=='2'], col='yellow', cex=0.5, font=2)
 text(Coord$Longitude[Coord$Réplicas=='3'], Coord$Latitude[Coord$Réplicas=='3'],
      rownames(Coord)[Coord$Réplicas=='3'], col='blue', cex=0.5, font=2)
 points(Coord$Longitude, Coord$Latitude-0.045, pch=20, cex=0.7)
 text(c(41.5,41.3), c(21.7,20.6), c('RJ','ES'))
 axis(1, xaxp=c(42.1,39.2,2), cex.axis=1)
 axis(2, yaxp=c(23.9,20.3,4), cex.axis=1)
 title(main='Bacia')
 legend(40.2, 23.5, c('Uma','Duas','Três'), pch=21, cex=1,
        pt.bg=c('red','yellow','blue'), bty='n', pt.cex=2, pt.lwd=0.6,
        title='Réplicas')
 legend(39.8, 23.5, c('Areia','Calcário','Lama'), pch=21, cex=1,
        pt.bg=c('lightyellow','lightgray','lightgreen'), bty='n', pt.cex=2,
        pt.lwd=0.6, title='Faciologia')
 addlogo(N, px=c(40,39.8), py=c(21,20.8))
 dev.off()

 q('no')



 -

 MSc. Rodrigo Aluizio mailto:r.alui...@gmail.com

 Centro de Estudos do Mar/UFPR
 Laboratório de Micropaleontologia


        [[alternative HTML version deleted]]


 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] perform subgroup meta-analysis and create forest plot displaying subgroups

2009-03-10 Thread Weiss, Bernd

Steven Lubitz schrieb:

Hello, I'm using the rmeta package to perform a meta analysis using
summary statistics rather than raw data, and would like to analyze
the effects in three different subgroups of my data. Furthermore, I'd
like to plot this on one forest plot, with corresponding summary
weighted averages of the effects displayed beneath each subgroup.

I am able to generate the subgroup analyses by simply performing 3
separate meta-analyses with the desired subset of data. However, I
can't manage to plot everything on the same forest plot.


Maybe I'm wrong but the 'forest'-function (package 'meta', 
http://cran.at.r-project.org/web/packages/meta/meta.pdf) should be 
able to do what you want. I guess you could be interested in the 'byvar' 
argument.
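An untested sketch of what that might look like (the column names TE, seTE and subgrp in 'mydata' are placeholders; 'byvar' is the argument name referred to above, which newer versions of meta have renamed 'subgroup'):

library(meta)
m <- metagen(TE, seTE, data = mydata, byvar = subgrp)
forest(m)   # one forest plot, studies grouped by subgrp with per-subgroup summaries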


Bernd

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] example of panel data in R

2009-03-10 Thread 정승환

I'm studying panel data. Can I find an example of panel data in R? Also, what
kind of data file structure should be used for reading panel data? Help me ^^ 



Name: Jeong Seung-hwan   E-mail: jung6...@hanmail.net   Dept.: Yonsei University,
College of Business and Economics   Office phone:   Fax:   Mobile: 010-2301-0824   Homepage: 

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Changing factor to numeric

2009-03-10 Thread ojal john owino
Dear Users,
I have a variable in my dataset which is of type factor, but it actually
contains numeric entries like 5.735 and 4.759. This is because the
data was read from a CSV file into R and this variable contained other
characters which were not numeric. I have now dropped the records with the
non-numeric characters for this variable and want to change it to
numeric storage type.

I have tried using the as.numeric() function, but it changes the values in the
variable to what I think are the ranks of the individual values of the
variable in the dataset. For example, if 5.735 is the current content of the
field, then the new object created by as.numeric() will contain a value like
680 if 5.735 was the highest value of the variable and the dataset had
680 records.


How can I change the storage type without changing the contents of this
variable in this case?

Thanks for your consideration.



-- 
Ojal John Owino
P.O Box 230-80108
Kilifi, Kenya.
Mobile:+254 728 095 710

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] foreign package install on Solaris 10 + R-2.7.1

2009-03-10 Thread Sarosh Jamal
Hello,

I've been having trouble installing package spdep for R-2.7.1 on our Solaris 
10 (sparc) server.  Namely the two dependencies for this package do not install 
properly: foreign and maptools

I understand that Solaris 10 may not be an officially supported platform but 
any help/feedback you can offer would be most appreciated.

I've updated all packages currently installed on this version of R but the 
install of package foreign complains about an invalid priority field in the 
DESCRIPTION file. I've not had any issues with the other packages.

I'm including our sessionInfo() output here:
==
R version 2.7.1 (2008-06-23)
sparc-sun-solaris2.10

locale:
/en_CA.ISO8859-1/C/C/en_CA.ISO8859-1/C/C

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

And, I'm including the transcript from the package install attempt:
==
1 /home/sjamal  R

R version 2.7.1 (2008-06-23)
Copyright (C) 2008 The R Foundation for Statistical Computing ISBN 3-900051-07-0

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

R is a collaborative project with many contributors.
Type 'contributors()' for more information and 'citation()' on how to cite R or 
R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or 'help.start()' for 
an HTML browser interface to help.
Type 'q()' to quit R.

> install.packages("foreign")
Warning in install.packages("foreign") :
  argument 'lib' is missing: using '/home/sjamal/R/sparc-sun-solaris2.10-library/2.7'
--- Please select a CRAN mirror for use in this session --- Loading Tcl/Tk 
interface ... done trying URL 
'http://probability.ca/cran/src/contrib/foreign_0.8-33.tar.gz'
Content type 'application/x-gzip' length 315463 bytes (308 Kb) opened URL 
==
downloaded 308 Kb

* Installing *source* package 'foreign' ...
checking for gcc... gcc -std=gnu99
checking for C compiler default output file name... a.out checking whether the 
C compiler works... yes checking whether we are cross compiling... no checking 
for suffix of executables...
checking for suffix of object files... o checking whether we are using the GNU 
C compiler... yes checking whether gcc -std=gnu99 accepts -g... yes checking 
for gcc -std=gnu99 option to accept ANSI C... none needed checking whether gcc 
-std=gnu99 accepts -Wno-long-long... yes checking how to run the C 
preprocessor... gcc -std=gnu99 -E checking for egrep... grep -E checking for 
ANSI C header files... yes checking for sys/types.h... yes checking for 
sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes 
checking for memory.h... yes checking for strings.h... yes checking for 
inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes 
checking byteswap.h usability... no checking byteswap.h presence... no checking 
for byteswap.h... no checking for double... yes checking size of double... 8 
checking for int... yes checking size of int... 4 checking for long... yes 
checking size of long... 4
configure: creating ./config.status
config.status: creating src/Makevars
config.status: creating src/swap_bytes.h
config.status: creating src/var.h
Error: Invalid DESCRIPTION file

Invalid Priority field.
Packages with priorities 'base' or 'recommended' or 'defunct-base' must already 
be known to R.

See the information on DESCRIPTION files in section 'Creating R packages' of 
the 'Writing R Extensions' manual.
Execution halted
ERROR: installing package DESCRIPTION failed
** Removing '/home/sjamal/R/sparc-sun-solaris2.10-library/2.7/foreign'
** Restoring previous '/home/sjamal/R/sparc-sun-solaris2.10-library/2.7/foreign'

The downloaded packages are in
/tmp/RtmpkcXy1L/downloaded_packages
Warning message:
In install.packages("foreign") :
  installation of package 'foreign' had non-zero exit status


I look forward to your insights.

Thank you,

Sarosh

---
Sarosh Jamal

Geographic Computing Specialist
Department of Geography
http://geog.utm.utoronto.ca

Staff co-Chair, United Way Campaign
http://www.utm.utoronto.ca/unitedway

University of Toronto Mississauga
sarosh.ja...@utoronto.ca
905.569.4497

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Lattice: Customizing point-sizes with groups

2009-03-10 Thread Paul Boutros

Hello,

I am creating a scatter-plot in lattice, and I would like to customize  
the size of each point so that some points are larger and others  
smaller.  Here's a toy example:


library(lattice);

temp <- data.frame(
    x = 1:10,
    y = 1:10,
    cex = rep( c(1,3), 5),
    groups = c( rep("A", 5), rep("B", 5) )
);

xyplot(y ~ x, temp, cex = temp$cex, pch = 19);

This works just fine if I create a straight xy-plot, without groups.   
However when I introduce groupings the cex argument specifies the  
point-size for the entire group.  For example:


xyplot(y ~ x, temp, cex = temp$cex, pch = 19, group = groups);

Is it possible to combine per-spot sizing with groups in some way?   
One work-around is to manually specify all graphical parameters, but I  
thought there might be a better way than this:


temp$col <- rep("blue", 10);
temp$col[temp$groups == "B"] <- "red";
xyplot(y ~ x, temp, cex = temp$cex, pch = 19, col = temp$col);

Any suggestions/advice is much appreciated!
Paul

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Problem with capabilities() in R2-8.1

2009-03-10 Thread Prof Brian Ripley
capabilities("iconv") does work in R 2.8.1 on Windows (for me and many, 
many others, as well as on the machine that ran 'make check'), so you 
have done something to your installation.  Most likely you have 
somehow mixed it up with a much earlier version of R for which the 
error message would have been true, so if you have any such version 
installed, please remove it.  Then try starting R with --vanilla, 
since you may have been picking up libraries containing packages from 
earlier versions.


On Mon, 9 Mar 2009, Marcus, Jeffrey wrote:


I just installed R 2.8.1 on Windows XP. When I ran the source command,
I got the error:

Error in capabilities("iconv") :
 1 argument passed to .Internal(capabilities) which requires 0

I looked at the code for source() and it indeed has a call to
capabilities("iconv"):

if (capabilities("iconv")) {
    if (identical(encoding, "unknown")) {
        enc <- utils::localeToCharset()
        encoding <- enc[length(enc)]
    }


So then I ran capabilities itself:



capabilities("iconv")

Error in capabilities("iconv") :
 1 argument passed to .Internal(capabilities) which requires 0

I made sure that I hadn't by accident aliased either source or
capabilities by doing
find("source")

find("capabilities")

and both came back with "package:base".

Any help would be appreciated. Thanks.


That's only a partial test.  searchpaths() will show where you loaded 
capabilities() from.


--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Problem with capabilities() in R2-8.1

2009-03-10 Thread Simon Blomberg
On Tue, 2009-03-10 at 06:50 +, Prof Brian Ripley wrote:
 capabilities(iconv) does work in R 2.8.1 on Window (for me and many, 
 many otherss, as well as on the machine that ran 'make check'), so you 
 have done something to your installation.  Most likely you have 
 somehow mixed it up with a much earliier version of R for which the 
 error message would have been true. so if you have any uch version 
 installed, please remove it.  Then try starting R with --vanulla, 
 since you may hav ebeen picking up libraries containing packages from 
 earlier versions.

M. Vanulla. Arg

S.
 
 On Mon, 9 Mar 2009, Marcus, Jeffrey wrote:
 
  I just installed R 2.8.1 on Windows XP. When I ran the source command,
  I got the error:
 
  Error in capabilities(iconv) :
   1 argument passed to .Internal(capabilities) which requires 0
 
  I looked at the code for source and it indeed has a call to
  capabilities(iconv)
 
  if (capabilities(iconv)) {
 if (identical(encoding, unknown)) {
 enc - utils::localeToCharset()
 encoding - enc[length(enc)]
 }
 
 
  So then I ran capabilities itself:
 
 
  capabilities(iconv)
  Error in capabilities(iconv) :
   1 argument passed to .Internal(capabilities) which requires 0
 
  I made sure that I hadn't by accident aliased either source or
  capabilities by doing
  find(source)
 
  find (capabilites)
 
  and both came back with package::base.
 
  Any help would be appreciated. Thanks.
 
 That's only a partial test.  searchpaths() will show where you loaded 
 capabiliites() from.
 
-- 
Simon Blomberg, BSc (Hons), PhD, MAppStat. 
Lecturer and Consultant Statistician 
School of Biological Sciences
The University of Queensland 
St. Lucia Queensland 4072 
Australia
Room 320 Goddard Building (8)
T: +61 7 3365 2506
http://www.uq.edu.au/~uqsblomb
email: S.Blomberg1_at_uq.edu.au

Policies:
1.  I will NOT analyse your data for you.
2.  Your deadline is your problem.

The combination of some data and an aching desire for 
an answer does not ensure that a reasonable answer can 
be extracted from a given body of data. - John Tukey.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] foreign package install on Solaris 10 + R-2.7.1

2009-03-10 Thread Prof Brian Ripley

On Mon, 9 Mar 2009, Sarosh Jamal wrote:


Hello,

I've been having trouble installing package spdep for R-2.7.1 on 

our Solaris 10 (sparc) server.  Namely the two dependencies for this package do not install 
properly: foreign and maptools


I understand that Solaris 10 may not be an officially supported 
platform but any help/feedback you can offer would be most 
appreciated.


It is a platform we test on.  What is not supported is 2.7.1, so 
please update to at least 2.8.1 (as requested in the posting guide).


Something is wrong with your R installation: 'foreign' should be known 
to R 2.7.1, *and* installed as part of the basic installation.  So 
re-installing seems the best option, especially as an update is in 
order.


I've updated all packages currently installed on this version of R 
but the install of package foreign complains about an invalid 
priority field in the DESCRIPTION file. I've not had any issues 
with the other packages.


I'm including our systemInfo() output here:
==
R version 2.7.1 (2008-06-23)
sparc-sun-solaris2.10

locale:
/en_CA.ISO8859-1/C/C/en_CA.ISO8859-1/C/C

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

And, I'm including the transcript from the package install attempt:
==
1 /home/sjamal  R

R version 2.7.1 (2008-06-23)
Copyright (C) 2008 The R Foundation for Statistical Computing ISBN 3-900051-07-0

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

R is a collaborative project with many contributors.
Type 'contributors()' for more information and 'citation()' on how to cite R or 
R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or 'help.start()' for 
an HTML browser interface to help.
Type 'q()' to quit R.


install.packages(foreign)

Warning in install.packages(foreign) :
 argument 'lib' is missing: using '/home/sjamal/R/sparc-sun-solaris2.10-library
/2.7'
--- Please select a CRAN mirror for use in this session --- Loading Tcl/Tk 
interface ... done trying URL 
'http://probability.ca/cran/src/contrib/foreign_0.8-33.tar.gz'
Content type 'application/x-gzip' length 315463 bytes (308 Kb) opened URL 
==
downloaded 308 Kb

* Installing *source* package 'foreign' ...
checking for gcc... gcc -std=gnu99
checking for C compiler default output file name... a.out checking whether the 
C compiler works... yes checking whether we are cross compiling... no checking 
for suffix of executables...
checking for suffix of object files... o checking whether we are using the GNU 
C compiler... yes checking whether gcc -std=gnu99 accepts -g... yes checking 
for gcc -std=gnu99 option to accept ANSI C... none needed checking whether gcc 
-std=gnu99 accepts -Wno-long-long... yes checking how to run the C 
preprocessor... gcc -std=gnu99 -E checking for egrep... grep -E checking for 
ANSI C header files... yes checking for sys/types.h... yes checking for 
sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes 
checking for memory.h... yes checking for strings.h... yes checking for 
inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes 
checking byteswap.h usability... no checking byteswap.h presence... no checking 
for byteswap.h... no checking for double... yes checking size of double... 8 
checking for int... yes checking size of int... 4 checking for long... yes 
checking size of long... 4
configure: creating ./config.status
config.status: creating src/Makevars
config.status: creating src/swap_bytes.h
config.status: creating src/var.h
Error: Invalid DESCRIPTION file

Invalid Priority field.
Packages with priorities 'base' or 'recommended' or 'defunct-base' must already 
be known to R.

See the information on DESCRIPTION files in section 'Creating R packages' of 
the 'Writing R Extensions' manual.
Execution halted
ERROR: installing package DESCRIPTION failed
** Removing '/home/sjamal/R/sparc-sun-solaris2.10-library/2.7/foreign'
** Restoring previous '/home/sjamal/R/sparc-sun-solaris2.10-library/2.7/foreign'

The downloaded packages are in
   /tmp/RtmpkcXy1L/downloaded_packages
Warning message:
In install.packages(foreign) :
 installation of package 'foreign' had non-zero exit status




I look forward to your insights.

Thank you,

Sarosh

---
Sarosh Jamal

Geographic Computing Specialist
Department of Geography
http://geog.utm.utoronto.ca

Staff co-Chair, United Way Campaign
http://www.utm.utoronto.ca/unitedway

University of Toronto Mississauga
sarosh.ja...@utoronto.ca
905.569.4497

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, 

Re: [R] Changing factor to numeric

2009-03-10 Thread Uwe Ligges

From ?factor:

The interpretation of a factor depends on both the codes and the 
levels attribute. Be careful only to compare factors with the same set 
of levels (in the same order). In particular, as.numeric applied to a 
factor is meaningless, and may happen by implicit coercion. To transform 
a factor f to its original numeric values, as.numeric(levels(f))[f] is 
recommended and slightly more efficient than as.numeric(as.character(f)).
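A small demonstration of the difference (toy values, added for illustration):

f <- factor(c("5.735", "4.759", "5.735"))
as.numeric(f)                # 2 1 2 -- the internal level codes, not the data
as.numeric(levels(f))[f]     # 5.735 4.759 5.735 -- the recommended conversion
as.numeric(as.character(f))  # same result, slightly less efficient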


Uwe Ligges



ojal john owino wrote:

Dear Users,
I have a variable in my dataset which is of type factor. But it actually
contains numeric entries which like 5.735  4.759 . This is because the
data was read from a CSV file into R and this variable contained other
charaters which were not numeric. I have now dropped the records with the
characters which are not numeric for this variable and want to change it to
numeric srotage type.

I have tried using as.numeric() function but it changes the values in the
variable to what I think are the ranks of the individual values of the
varible in the dataset. For example if 5.735 is the current content in the
field, then the new object created by as.numeric will contain a value like
680 if the 5.735 was the highest value for the varible and the dataset had
680 records.


How can I change the storage type without changing the contents of this
variable in this case?

Thanks for your consideration.





__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] problem with concatinating string while taking as a path of a file

2009-03-10 Thread Uwe Ligges

See ?file.path
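For example (values are illustrative, taken from the question quoted below):

FPATH <- "D:/Kiran"
Fname <- "FINDINGS.CSV"
file.path(FPATH, Fname)   # "D:/Kiran/FINDINGS.CSV"; forward slashes work on Windows
# read.csv(file.path(FPATH, Fname))   # and use it directly when reading the file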

Uwe Ligges


Simon Blomberg wrote:

 Does this do what you want?

paste(FPATH, Fname, sep="\\")

Simon.

On Tue, 2009-03-10 at 10:48 +0530, venkata kirankumar wrote:

Hi all,

I have a problem with concatenating strings to build a file path. The problem
is this: I have to take the path as

FPATH <- "D:\\Kiran"

and the file name as

Fname <- "FINDINGS.CSV"

While reading this table I have to build the path from these two
strings, because FPATH contains many files like findings.csv,
and the path should be D:\\Kiran\\FINDINGS.CSV.

Here I tried FPATH+\\+Fname,  FPATH~\\~Fname,  FPATH\\Fname
and FPATH::\\::Fname,
but I am not able to get a path like D:\\Kiran\\FINDINGS.CSV.


Can anyone help me out of this problem?


thanks in advance.

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] nlme: problem with fitting logistic function

2009-03-10 Thread Dieter Menne
Douglas Bates bates at stat.wisc.edu writes:

  3) Use lmer in lme4. Your mileage may vary, I could not find a speedup
    for my problems, but a larger problem might give one.
 
 Did you mean nlmer in the lme4 package?  If so, it may be worthwhile
 trying the development branch but that is not something for the
 faint-hearted.

Thanks, Doug, for your comments. To be fair, I wrote these unordered 
thoughts to get you out of the snowhole :-)
 
  4) Use C for the core function. This is very effective, and there is at 
  least
    one example coming with nlme (was it SSlogist?).
 
 Do you think that evaluation of the model function takes a substantial
 portion of the computing time?  I am asking for my interest, not
 because I think I know the answer.  So, for example, have you profiled
 difficult nlme fits and found that the model function evaluation was
 expensive?

No, I did not profile that function, but 8 years ago tried it once because
at that time I thought it would help. Nowadays, I am more inclined to think
that failure of nlme with a more complex model is a failure of the model,
not of nlme; I don't have speed problems with my data.

However, I remember that in a similar case with ode/lsoda, using a C function
gave a speedup by a factor of 20++, so I would have a look into it again if speed 
was a concern for me.

Dieter

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Adding text to both grid and base graphs

2009-03-10 Thread Rik Schoemaker
Dear Dieter,

The perfect solution; works a charm!

Thanks!

Rik

-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Dieter Menne
Sent: 09 March 2009 18:06
To: r-h...@stat.math.ethz.ch
Subject: Re: [R] Adding text to both grid and base graphs

Rik Schoemaker RikSchoemaker at zonnet.nl writes:

 Unfortunately that doesn't help because it requires you to know 
 beforehand what sort of graph you're generating. I want to be able to 
 generate a graph (irrespective of the type) and then use a common 
 process to label them so I don't have to think about which way I generated
the graph to start with...

Set a hook.

http://finzi.psych.upenn.edu/R/Rhelp08/2009-February/188168.html

Dieter

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R on netbooks et al?

2009-03-10 Thread Tsjerk Wassenaar
Hi,

For what it's worth, it's a trivial operation to replace the on-board
1Gb with a 2Gb module, which doesn't cost too much. Okay, being a bit
demanding I also replaced the hard-disk with a 320 Gb one to harbour a
dual boot ubuntu-eee / windows XP. But that does give a machine which
is a worthy replacement of the once state-of-the art Acer Travelmate
800 I used to have. I happily run R and even virtual machines using
VMWare. Truth be told, it being a netbook, you may want to rely on and
connect to external computational resources for the real heavy stuff.

Cheers,

Tsjerk

On Sun, Mar 8, 2009 at 7:20 PM, Ted Harding
ted.hard...@manchester.ac.uk wrote:
 On 08-Mar-09 17:44:18, Douglas Bates wrote:
 On Sun, Mar 8, 2009 at 7:08 AM, Michael Dewey i...@aghmed.fsnet.co.uk
 wrote:
 At 08:47 05/03/2009, herrdittm...@yahoo.co.uk wrote:
 Dear useRs,
  With the rise of netbooks and 'lifestyle laptops' I am tempted
 to get one of these to mainly run R on it. Processor power and
 hard disk space seem to be ok. What I wonder is the handling and
 feel with respect to R.

 Has anyone here installed or is running R on one of these, and
 if so, what is your experience? Would it be more of a nice looking
 gadget than a feasable platform to do some stats on?

 One issue is whether you wish to use Linux or Windows. If you do
 use Linux I would advise picking a netbook with one of the standard
 distributions. The early EEE PC had Xandros and dire warnings about
 using the Debian repositories. In fact I had no problem despite a
 total lack of experience, although I am not sure what will happen with
 the recent move to lenny.

 Because I have used Debian Linux and Debian-based distributions
 like Ubuntu for many years, I installed a eee-specific version of
 Ubuntu within a day or two of getting an ASUS eee pc1000. There are
 currently at least two versions of Ubuntu, easy peasy and eeebuntu,
 that are specific to the eee pc models.  I started with easy peasy
 at the time it was called something else (Ubuntu eee?) and later
 switched to eeebuntu. In both cases packages for the latest versions
 of R from the Ubuntu package repository on CRAN worked flawlessly.

 I find the netbook to be very convenient.  Having a 5 hour battery
 life and a weight of less than 3 pounds is wonderful. I teach all of
 my classes with it and even use it at home (attached to a monitor,
 USB keyboard and mouse and an external hard drive) in lieu of a
 desktop computer. (I have been eyeing the eee box covetously
 but have not yet convinced myself that I really need yet another
 computer). I develop R packages on it and don't really notice that
 it is under-powered by today's standards. Of course, when I
 started computing and even when I started working with the S
 language the memory capacity of computers was measured in kilobytes
 so the thought of only 1Gb of memory doesn't cause me to shriek
 in horror.

 Thanks for sharing your experiences, Doug. Given that devices like
 the EeePC are marketed in terms of less demanding users, it's good
 to know what it is like for a hard user. Further related comments
 would be welcome!

 I have to agree about the RAM issue too. My once-trusty old Sharp
 MZ-80B CP/M machine (early 1980s), with its 64KB and occupying
 a good 0.25 m^3 of physical space, would have to be replicated
 2^14 = 16384 times over to give the same RAM (and occupy some
 400 m^3 of space, say 7.4m x 7.4m x 7.4m, or about the size of
 my house). Now I have things on my desk, about the size of my
 thumb, with 8MB in each.

 Ted.

 
 E-Mail: (Ted Harding) ted.hard...@manchester.ac.uk
 Fax-to-email: +44 (0)870 094 0861
 Date: 08-Mar-09                                       Time: 18:20:45
 -- XFMail --

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.



-- 
Tsjerk A. Wassenaar, Ph.D.
Junior UD (post-doc)
Biomolecular NMR, Bijvoet Center
Utrecht University
Padualaan 8
3584 CH Utrecht
The Netherlands
P: +31-30-2539931
F: +31-30-2537623

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] example of panel data in R

2009-03-10 Thread Daniel Malter
google: the plm package 
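A hedged illustration using plm's bundled Grunfeld data, a firm-year panel stored in "long" format (one row per firm and year):

library(plm)
data("Grunfeld", package = "plm")
head(Grunfeld)                        # columns: firm, year, inv, value, capital
fe <- plm(inv ~ value + capital, data = Grunfeld,
          index = c("firm", "year"), model = "within")   # fixed-effects ("within") model
summary(fe)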


-
cuncta stricte discussurus
-

-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On 
Behalf Of ???
Sent: Tuesday, March 10, 2009 1:32 AM
To: r-help@r-project.org
Subject: [R] example of panel data in R


I'm studying panel data. Can I find an example of panel data in R? Also, what kind 
of data file structure should be used for reading panel data? Help me ^^ 



Name: Jeong Seung-hwan   E-mail: jung6...@hanmail.net   Dept.: Yonsei University,
College of Business and Economics   Office phone:   Fax:   Mobile: 010-2301-0824   Homepage: 

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] mean reverting model: THANKS !

2009-03-10 Thread Josuah Rechtsteiner

dear useRs (especially andrew and gabor),

you have helped me a lot, the ar(1)/ornstein-uhlenbeck type is exactly  
it (0 < a < 1 is necessary).


thank you,

josuah

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] FW: Adding text to both grid and base graphs

2009-03-10 Thread Rik Schoemaker
Dear Dieter,

The perfect solution; works a charm!

Thanks!

Rik

-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Dieter Menne
Sent: 09 March 2009 18:06
To: r-h...@stat.math.ethz.ch
Subject: Re: [R] Adding text to both grid and base graphs

Rik Schoemaker RikSchoemaker at zonnet.nl writes:

 Unfortunately that doesn't help because it requires you to know 
 beforehand what sort of graph you're generating. I want to be able to 
 generate a graph (irrespective of the type) and then use a common 
 process to label them so I don't have to think about which way I generated
the graph to start with...

Set a hook.

http://finzi.psych.upenn.edu/R/Rhelp08/2009-February/188168.html

Dieter

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] HAC corrected standard errors

2009-03-10 Thread Shruthi Jayaram

Hi,

I have a simple linear regression for which I want to obtain HAC corrected
standard errors, since I have significant serial/auto correlation in my
residuals, and also potential heteroskedasticity.

Would anyone be able to direct me to the function that implements this in R?
It's a basic question and I'm sure I'm missing something obvious here. I
looked up this post:

http://www.nabble.com/Re%3A-Moving-Window-regressions-with-corrections-for-Heteroscedasticity-and-Autocorrelations(HAC)-td6075371.html#a6075371

which recommended that I use the coeftest() function in package lmtest, but
when I tried to assign an object:

result <- coeftest(regre, NeweyWest), where regre is an object of class lm,
but this returned an error. 

I'd be grateful for any advice, since I'm sure I'm making one of those
simple bloopers.

Thanks!

Shruthi
-- 
View this message in context: 
http://www.nabble.com/HAC-corrected-standard-errors-tp22430163p22430163.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Changing factor to numeric

2009-03-10 Thread Krzysztof Sakrejda-Leavitt
If the real problem is that R reads your data file and converts 
everything it can into factors, try including stringsAsFactors=FALSE
in your read.table (or similar) statement.  I run into this often enough 
that I set it as an option (I think it's options(stringsAsFactors=FALSE)).  
Then you can do the conversion as needed.  Using as.numeric() directly on 
a factor often changes values (due to the whole factor/levels business).


... I also seem to remember read.table will let you pre-specify the data 
type of each column.
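For instance (the file name, column names and types here are illustrative):

dat <- read.csv("mydata.csv", stringsAsFactors = FALSE,
                colClasses = c("character", "numeric", "numeric"))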


Krzysztof

Uwe Ligges wrote:

 From ?factor:

The interpretation of a factor depends on both the codes and the 
levels attribute. Be careful only to compare factors with the same set 
of levels (in the same order). In particular, as.numeric applied to a 
factor is meaningless, and may happen by implicit coercion. To transform 
a factor f to its original numeric values, as.numeric(levels(f))[f] is 
recommended and slightly more efficient than as.numeric(as.character(f)).


Uwe Ligges



ojal john owino wrote:

Dear Users,
I have a variable in my dataset which is of type factor. But it actually
contains numeric entries which like 5.735  4.759 . This is because 
the

data was read from a CSV file into R and this variable contained other
charaters which were not numeric. I have now dropped the records with the
characters which are not numeric for this variable and want to change 
it to

numeric srotage type.

I have tried using as.numeric() function but it changes the values in the
variable to what I think are the ranks of the individual values of the
varible in the dataset. For example if 5.735 is the current content in 
the
field, then the new object created by as.numeric will contain a value 
like
680 if the 5.735 was the highest value for the varible and the dataset 
had

680 records.


How can I change the storage type without changing the contents of this
variable in this case?

Thanks for your consideration.





__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html

and provide commented, minimal, self-contained, reproducible code.



--

---
Krzysztof Sakrejda-Leavitt

Organismic and Evolutionary Biology
University of Massachusetts, Amherst
319 Morrill Science Center South
611 N. Pleasant Street
Amherst, MA 01003

work #: 413-325-6555
email: sakre...@nsm.umass.edu

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Lattice: Customizing point-sizes with groups

2009-03-10 Thread Sundar Dorai-Raj
Try this:

xyplot(y ~ x, temp, groups = groups,
       par.settings = list(
         superpose.symbol = list(
           cex = c(1, 3),
           pch = 19,
           col = c("blue", "red"))))
See:

str(trellis.par.get())

for other settings you might want to change.

Also, you should drop the ; from all your scripts.

HTH,

--sundar

On Mon, Mar 9, 2009 at 6:49 PM, Paul Boutros paul.bout...@utoronto.ca wrote:
 Hello,

 I am creating a scatter-plot in lattice, and I would like to customize the
 size of each point so that some points are larger and others smaller.
  Here's a toy example:

 library(lattice);

 temp - data.frame(
        x = 1:10,
        y = 1:10,
        cex = rep( c(1,3), 5),
        groups = c( rep(A, 5), rep(B, 5) )
        );

 xyplot(y ~ x, temp, cex = temp$cex, pch = 19);

 This works just fine if I create a straight xy-plot, without groups.
  However when I introduce groupings the cex argument specifies the
 point-size for the entire group.  For example:

 xyplot(y ~ x, temp, cex = temp$cex, pch = 19, group = groups);

 Is it possible to combine per-spot sizing with groups in some way?  One
 work-around is to manually specify all graphical parameters, but I thought
 there might be a better way than this:

 temp$col - rep(blue, 10);
 temp$col[temp$groups == B] - red;
 xyplot(y ~ x, temp, cex = temp$cex, pch = 19, col = temp$col);

 Any suggestions/advice is much appreciated!
 Paul

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] popular R packages

2009-03-10 Thread Jim Lemon

Gabor Grothendieck wrote:

R-Forge already has this but I don't think it's used much.  R-Forge
does allow authors to opt out which seems sensible lest it deter
potential authors from submitting packages.

I think objective quality metrics are better than ratings, e.g. does package
have a vignette, has package had a release within the last year,
does package have free software license, etc.  That would have
the advantage that authors might react to increase their package's
quality assessment resulting in an overall improvement in quality on CRAN
that would result in more of a pro-active cycle whereas ratings are reactive
and don't really encourage improvement.
  
I beg to offer an alternative assessment of quality. Do users download 
the package and find it useful? If so, they are likely to download it 
again when it is updated. Much as I appreciate the convenience of 
vignettes, regular updates and the absolute latest GPL license, a 
perfectly dud package can have all of these things. If a package is 
downloaded upon first release and not much thereafter, the maintainer 
might be motivated to attend to its shortcomings of utility rather than 
incrementing the version number every month or so. Downloads, as many 
have pointed out, are not a direct assessment of quality, but if I saw a 
package that just kept getting downloaded, version after version, I 
would be much more likely to check it out myself and perhaps even write 
a review for Hadley's neat site. Which I will try to do tonight.


Jim

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] rcorr.cens Goodman-Kruskal gamma

2009-03-10 Thread Kim Vanselow
Thanks to David and Frank for the suggestions. With a 2-dimensional input, 
rcorr.cens and John Baron's implementation work well. But I am not able to 
calculate gamma for a multivariate matrix.

Example: columns = species; rows = relevés; the numbers are BB values (ordinal 
scale; 1 < 3, but 3 - 1 is not necessarily 2):

   K. ceratoides  S. caucasica  A. tibeticum
A1             3             1             1
A2             0             3             2
A3             1             1             0
A4             2             2             0
A5             0             3             2
B1             1             1             1
B2             4             3             1

I want to calculate a distance matrix based on Goodman-Kruskal's gamma 
(instead of classical Euclidean, Bray-Curtis, Manhattan, etc.) which I can use 
for hierarchical cluster analysis (e.g. amap, vegan, cluster) in order to 
compare the different relevés.
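An untested sketch of how such a matrix might be built by hand, reusing the goodman() function quoted further down in this message ('veg' is a placeholder for the relevé x species matrix; 1 - gamma is used as the dissimilarity, which is one possible choice among several):

n <- nrow(veg)
d <- matrix(0, n, n, dimnames = list(rownames(veg), rownames(veg)))
for (i in seq_len(n - 1)) {
  for (j in (i + 1):n) {
    g <- goodman(as.numeric(veg[i, ]), as.numeric(veg[j, ]))
    d[i, j] <- d[j, i] <- 1 - g
  }
}
hc <- hclust(as.dist(d), method = "average")
plot(hc)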
  
Further suggestions would be greatly appreciated,
Thank you very much,
Kim



 
 Original-Nachricht 
 Datum: Mon, 09 Mar 2009 13:27:29 -0500
 Von: Frank E Harrell Jr f.harr...@vanderbilt.edu
 An: David Winsemius dwinsem...@comcast.net
 CC: Kim Vanselow vanse...@gmx.de, r-help@r-project.org
 Betreff: Re: [R] rcorr.cens Goodman-Kruskal gamma

 David Winsemius wrote:
  I looked at the help page for rcorr.cens and was surprised that 
  function, designed for censored data and taking input as a Surv object, 
  was being considered for that purpose.  This posting to r-help may be of
  interest. John Baron offers a simple implementation that takes its input
  as (x,y):
  
  http://finzi.psych.upenn.edu/R/Rhelp02/archive/19749.html
  
  goodman <- function(x, y) {
    Rx <- outer(x, x, function(u, v) sign(u - v))
    Ry <- outer(y, y, function(u, v) sign(u - v))
    S1 <- Rx * Ry
    return(sum(S1) / sum(abs(S1)))
  }
  
  I then read Frank's response to John and it's clear that my impression 
  regarding potential uses of rcorr.cens was too limited. Appears that you
  could supply a y vector to the S argument and get more efficient 
  execution.
 
 Yes rcorr.cens was designed to handle censored data but works fine with 
 uncensored Y.  You may need to specify Surv(Y) but first try just Y.  It 
 would be worth testing the execution speed of the two approaches.
 
 Frank
 
 -- 
 Frank E Harrell Jr   Professor and Chair   School of Medicine
   Department of Biostatistics   Vanderbilt University

Dear r-helpers!
I want to classify my vegetation data with hierarchical cluster analysis.
My dataset consists of Abundance-Values (Braun-Blanquet ordinal scale; ranked) 
for each plant species and relevé.
I found a lot of r-packages dealing with cluster analysis, but none of them is 
able to calculate a distance measure for ranked data.
Podani recommends the use of Goodman and Kruskals' Gamma for the distance. I 
found the function rcorr.cens (outx=TRUE) of the Hmisc package which should do 
it.
What I don't understand is how to define the input vectors x, y with my 
vegetation dataset. The other thing how I can use the output of rcorr.cens for 
a distance measure in the cluster analysis (e.g. in vegan or amap).
Any help would be greatly appreciated,
Thank you very much,
Kim

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] HAC corrected standard errors

2009-03-10 Thread ronggui
The sandwich package is what you want.
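A minimal sketch of the usual combination with coeftest() from lmtest ('regre' is the lm object from the original post):

library(sandwich)
library(lmtest)
coeftest(regre, vcov = NeweyWest(regre))   # Newey-West HAC standard errors
coeftest(regre, vcov = vcovHAC(regre))     # a more general HAC estimator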

Best

2009/3/10 Shruthi Jayaram shruthi.jayaram...@gmail.com:

 Hi,

 I have a simple linear regression for which I want to obtain HAC corrected
 standard errors, since I have significant serial/auto correlation in my
 residuals, and also potential heteroskedasticity.

 Would anyone be able to direct me to the function that implements this in R?
 It's a basic question and I'm sure I'm missing something obvious here. I
 looked up this post:

 http://www.nabble.com/Re%3A-Moving-Window-regressions-with-corrections-for-Heteroscedasticity-and-Autocorrelations(HAC)-td6075371.html#a6075371

 which recommended that I use the coeftest() function in package lmtest, but
 when I tried to assign an object:

 result - coeftest(regre, NeweyWest), where regre is an object of class lm,
 this returned an error.

 I'd be grateful for any advice, since I'm sure I'm making one of those
 simple bloopers.

 Thanks!

 Shruthi
 --
 View this message in context: 
 http://www.nabble.com/HAC-corrected-standard-errors-tp22430163p22430163.html
 Sent from the R help mailing list archive at Nabble.com.

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
HUANG Ronggui, Wincent
Tel: (00852) 3442 3832
PhD Candidate
Dept of Public and Social Administration
City University of Hong Kong
Home page: http://asrr.r-forge.r-project.org/rghuang.html

A sociologist is someone who, when a beautiful woman enters the room
and everybody looks at her, looks at everybody.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Creating an Excel file with multiple spreadsheets

2009-03-10 Thread Gavin Kelly
On Mon, 09-Mar-2009 at 02:34PM -0400, Jorge Ivan Velez wrote:
| DeaR all,
|
| I'd like to know how to create an Excel file with multiple
| spreadsheets from R. I searched the help files and found [1] but it
| is not what I want to do.

If you're happy to limit yourself to distributing your Excel file to people
who have Excel 2007 (or have the converters for the older versions of
Office), or with a manual step of opening it yourself in 2007 and saving it as
an older version, I have a script that utilises the XML format that Excel
now accepts.  It's a bit ugly as it's got lots of stuff specific to my
organisation, but if you want it, it can be obtained from
http://bioinformatics.cancerresearchuk.org/cms/index.php?page=gpk-resources

I really should contact the authors of the r/excel packages to see if any of
them would want a cleaned up version of this.

Regards - Gavin

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] problem with creating webserver

2009-03-10 Thread venkata kirankumar
Hi all,
I am not able to build a web service using R.
Is there any package I have to import to build the web service?
My problem is this: R is installed on one server, but I am working on
another server, so I have to pass a request to the server
where R is installed and execute the function from the server
I am presently working on.
For that I have to write a web service function which calls the server
that has R installed.

Can anyone help me solve this problem?

thanks in advance

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Install JRI?

2009-03-10 Thread Maxl18

Hi all,

Who can help me install JRI?
I downloaded the file JRI_0.3-6.tar and, although I read the documentation, I
don't know what to do now.
I am using Windows XP, Java 1.5.0_17 and R 2.8.1.
Thanks for your response,

Max
-- 
View this message in context: 
http://www.nabble.com/Install-JRI--tp22430947p22430947.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Interpreting GLM coefficients

2009-03-10 Thread Pablo Pita Orduna
Thank you very much!

It helps a lot. You are right about the NA's in the coefficients; the model 
needs some simplification.

Thank you again.
---
Pablo Pita Orduna
Grupo de Recursos Marinos y Pesquerías.
Universidade de A Coruña. Campus da Zapateira s/n. E-15071. A Coruña, Spain.
Tel. +34(981) 167000 ext. 2204 Fax. +34(981) 167065.
www.fismare.net/verdeprofundo
www.recursosmarinos.net
  - Original Message - 
  From: joris meys 
  To: Pablo Pita Orduna 
  Cc: r-help@r-project.org ; jfre...@udc.es ; ppitaord...@gmail.com 
  Sent: Saturday, March 07, 2009 12:28 AM
  Subject: Re: [R] Interpreting GLM coefficients


  One thing I notice immediately is a number of NA values for your 
coefficients. If I were you, I would try a model with fewer parameters, and use 
the anova() function to compare models, to see if the extra terms really 
improve the model.
  e.g. 
  fit1 <- glm(Y ~ X1 + X2 + X3, ...)
  fit2 <- glm(Y ~ X1 + X2 + X3 + X1:X2, ...)
  anova(fit1, fit2, test = "F")

  If you checked all these, understanding the interaction terms will be 
easiest if you normalize your numeric data before the analysis. For the 
interpretation, you just fill in some values to get an idea. For example:

  given the model: Y = a + b1*X1 + b2*X2 + b3*X1*X2

  Say X1 and X2 are numeric:
  interpretation of the main term: Y increases by b2 for an increase of 1 
unit in X2, given that X1 is average (at its mean).
  interpretation of the interaction term: for an X1 value n units from the 
mean, Y increases by b2 + n*b3 per unit of X2 (n is negative when the value is 
lower than the mean).
  In a Y ~ X2 plot, you can make this visible by plotting 3 different curves: 
one for a low X1 value, one for an average X1 value and one for a high X1 
value. This gives you an indication of the effect of X1 on the relationship 
between Y and X2.
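  A minimal sketch of that idea (the names fit, dat, X1, X2 and Y are
placeholders; it assumes a fitted Poisson GLM with an X1:X2 interaction):

x2.grid <- seq(min(dat$X2), max(dat$X2), length.out = 100)
x1.levels <- quantile(dat$X1, c(0.1, 0.5, 0.9))        # low, average, high X1
plot(dat$X2, dat$Y, xlab = "X2", ylab = "Y")
for (i in seq_along(x1.levels)) {
  newdat <- data.frame(X1 = x1.levels[i], X2 = x2.grid)
  lines(x2.grid, predict(fit, newdata = newdat, type = "response"), lty = i)
}
legend("topleft", c("low X1", "average X1", "high X1"), lty = 1:3)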

  For an interaction between two categorical terms, or a categorical and a 
numerical one, you follow exactly the same reasoning, but keep in mind that the 
reference level represents a 0 and the mentioned level represents a 1. Fill in 
the values in the equation, and you can understand the meaning of the terms. 
Then again, you can plot a separate function Y ~ X1 for every level of a certain 
factor.

  This isn't a straight answer to your question, but I'm afraid there is none. 
I hope this helps you with building your model.

  Kind regards.
  Joris


  On Fri, Mar 6, 2009 at 11:04 PM, Pablo Pita Orduna pp...@udc.es wrote:

Hi all,

I'm fitting GLMs and I can't interpret the coefficients when I run a 
model with interaction terms.

When I run the simplest model there is no problem:

Model1 <- glm(Fishes ~ Year + I(Year^2) + Kind.Geographic + Kind.Fishers +
Zone.2 + Hours + Fishers + Month, family = poisson("log")) # Fishes, Year, Hours,
# and Fishers are numeric; Kind.Geographic, Kind.Fishers, Zone.2 and Month are
# factors with 4, 3, 5 and 12 levels respectively.

Model1$coefficients (with Helmert contrasts):

  (Intercept)        -4.416915e+02
  Year                4.758455e-01
  I(Year^2)          -1.270986e-04
  Kind.Geographic1   -5.436199e-01
  Kind.Geographic2   -1.068809e-01
  Kind.Geographic3   -1.498580e-01
  Kind.Fishers1       2.958462e-01
  Kind.Fishers2       1.316589e-01
  Zone.21            -1.328204e-01
  Zone.22            -1.605802e-01
  Zone.23             5.281869e-03
  Zone.24             7.422885e-02
  Hours               9.772076e-02
  Fishers            -2.709955e-03
  Month1             -1.586887e-01
  Month2             -1.887837e-02
  Month3             -5.183241e-03
  Month4              5.870942e-02
  Month5              7.075386e-02
  Month6              2.061223e-02
  Month7              7.372268e-03
  Month8             -1.204835e-02
  Month9             -5.047994e-03
  Month10             2.441498e-02
  Month11            -5.665261e-03

So I can write, for example:

y = -4.416915e+02 + -1.270986e-04*x^2 + 4.758455e-01*x # And add this 
function to a plot(Year,Fishes).

My problem is to understand the coefficients for the model with interaction:

Model2 <- glm(Fishes ~ Year + I(Year^2) + Kind.Geographic + Kind.Fishers +
Zone.2 + Hours + Fishers + Month + Year:Kind.Geographic + Year:Kind.Fishers +
Year:Zone.2 + Year:Hours + Year:Fishers + Year:Month + Kind.Geographic:Hours +
Kind.Fishers:Hours + Zone.2:Hours + Hours:Fishers + Hours:Month +
Kind.Geographic:Fishers + Zone.2:Fishers + Fishers:Month, poisson("log"))

Model2$coefficients (with Helmert contrast):

  (Intercept)         1.641473e+03
  Year               -1.748703e+00
  I(Year^2)           4.664752e-04
  Kind.Geographic1   -6.721427e+00
  Kind.Geographic2    1.856033e+01
  Kind.Geographic3   -3.762727e-02
  Kind.Fishers1       2.903564e+01
  Kind.Fishers2       9.022858e+01
  Zone.21

[R] North Arrow (.png file) on a Map

2009-03-10 Thread Rodrigo Aluizio
Hi list.

I would like to know how to insert a North arrow, stored as a png file on
my computer, in a map. I found lots of posts asking similar things, one of
them mentioning the pixmap package.  The map was done using map() and
shapefiles (the code is below). I’m using the pixmap() and addlogo()
functions. Well, I can import the png with the pixmap() function (I guess, since
there’s no error message), but I can’t put it on the map; I got an error
message telling me that:

 

“Error in t(x@index[nrow(x@index):1, , drop = FALSE]) :

  index out of limits”

 

Well I tried changing coordinates but I always got the same result. How do I
do this correctly? Is there a better way?

 

Thanks for the help and attention.

 

Here is the complete map script:

 

library(RODBC)
library(maps)
library(mapdata)
library(maptools)
library(pixmap)

# Load coordinates and data for the sampling points
Dados <- odbcConnectExcel('Campos.xls', readOnly=T)
Coord <- sqlFetch(Dados, 'CoordMed', colnames=F, rownames='Ponto')
odbcClose(Dados)
N <- pixmap('Norte.png', nrow=166, ncol=113)

# Load points and shapefiles
Batimetria <- readShapeSpatial('C:/Users/Rodrigo/Documents/UFPR/Micropaleontologia/Campos/ShapeFiles/Batimetria_BC.shp')
Estados <- readShapeSpatial('C:/Users/Rodrigo/Documents/UFPR/Micropaleontologia/Campos/ShapeFiles/Estados_Sudeste.shp')
Faciologia <- readShapeSpatial('C:/Users/Rodrigo/Documents/UFPR/Micropaleontologia/Campos/ShapeFiles/Faciologia_BC.shp')

# Map with the sampling points of the basin
postscript('MapaCampos.eps', paper='special', onefile=F, horizontal=F,
           width=3.5, height=4.5, bg='white', pointsize=3)
par(mar=c(3,2,2,0))
map('worldHires', 'brazil', ylim=c(23.9,20.3), xlim=c(42.1,39.2), type='n')
plot(Faciologia, ylab='', xlab='',
     col=c(rep('lightgreen',5), rep('lightgray',16), rep('lightyellow',3)),
     add=T, lwd=0.5, border=0)
plot(Batimetria, ylab='', xlab='', col='darkgray', lty='solid', lwd=0.2, add=T)
plot(Estados, ylab='', xlab='', lty='solid', add=T, lwd=0.8)
text(Coord$Longitude[Coord$Réplicas=='1'], Coord$Latitude[Coord$Réplicas=='1'],
     rownames(Coord)[Coord$Réplicas=='1'], col='red', cex=0.5, font=2)
text(Coord$Longitude[Coord$Réplicas=='2'], Coord$Latitude[Coord$Réplicas=='2'],
     rownames(Coord)[Coord$Réplicas=='2'], col='yellow', cex=0.5, font=2)
text(Coord$Longitude[Coord$Réplicas=='3'], Coord$Latitude[Coord$Réplicas=='3'],
     rownames(Coord)[Coord$Réplicas=='3'], col='blue', cex=0.5, font=2)
points(Coord$Longitude, Coord$Latitude-0.045, pch=20, cex=0.7)
text(c(41.5,41.3), c(21.7,20.6), c('RJ','ES'))
axis(1, xaxp=c(42.1,39.2,2), cex.axis=1)
axis(2, yaxp=c(23.9,20.3,4), cex.axis=1)
title(main='Bacia')
legend(40.2, 23.5, c('Uma','Duas','Três'), pch=21, cex=1,
       pt.bg=c('red','yellow','blue'), bty='n', pt.cex=2, pt.lwd=0.6,
       title='Réplicas')
legend(39.8, 23.5, c('Areia','Calcário','Lama'), pch=21, cex=1,
       pt.bg=c('lightyellow','lightgray','lightgreen'), bty='n', pt.cex=2,
       pt.lwd=0.6, title='Faciologia')
addlogo(N, px=c(40,39.8), py=c(21,20.8))
dev.off()
q('no')

 

-

MSc. Rodrigo Aluizio mailto:r.alui...@gmail.com 

Centro de Estudos do Mar/UFPR
Laboratório de Micropaleontologia


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Is there any method to identify the distribution of a given dataset?

2009-03-10 Thread GreenBrower

Thanks, everybody. I feel ashamed after posting the above message, because
I found there is a tutorial, Fitting Distributions with R, on
http://cran.ii.uib.no/. It seems not everybody checks the dataset's
distribution.

Bert Gunter wrote:
 
 Below. Brief summary is: You **need** to consult a statistician. You know
 far too little statistics to do statistical analysis on your own.
 
 -- Bert 
 
 
 Bert Gunter
 Genentech Nonclinical Biostatistics
 650-467-7374
 
 -Original Message-
 From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
 On
 Behalf Of GreenBrower
 Sent: Monday, March 09, 2009 9:12 AM
 To: r-help@r-project.org
 Subject: [R] Is there any method to identify the distribution of a given
 dataset?
 
 
 It's important to identify the distribution of a dataset before do
 analysis
 and inference.
 
 -- Not necessarily. Indeed, often not.
 
  Is there any method to identify the distribution of a given
 dataset?
  -- Yes. It's discrete. The question you mean to ask is : How do I choose
 a
 suitable model for my data?
 
  For example, I want to identify a dataset belong to normal of
 poisson distribution, how can I do that?
 
 -- Whew! That you even ask this question is why you need to work with
 someone who knows more about statistics. No insult intended. It's kind of
 like me asking a biologist what's the difference between a mitochondrion
 and
 a nucleus. If I know so little about cell biology that I must ask, I
 probably need to work with someone more knowledgeable.
 
 
 
 -- 
 View this message in context:
 http://www.nabble.com/Is-there-any-method-to-identify-the-distribution-of-a-
 given-dataset--tp22413674p22413674.html
 Sent from the R help mailing list archive at Nabble.com.
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
 
 

-- 
View this message in context: 
http://www.nabble.com/Is-there-any-method-to-identify-the-distribution-of-a-given-dataset--tp22413674p22430437.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] R-question

2009-03-10 Thread thabet chelligue

Hello,

 I’m a French student at the Master’s level in “Statistics and Econometrics”. I am 
currently doing an internship at a marketing firm.

 I can’t find information about the capacity of R.

 Can you tell me how many rows and columns R can handle?

 Can I work with a dataset of 2 million rows?

When I asked M. Maechler, he advised me to ask you about this.

Thank you very much.






 Date: Sun, 8 Mar 2009 20:03:02 +0100
 Subject: Re: Hello
 From: mmaech...@gmail.com
 To: chelliguetha...@hotmail.com
 
 Dear M.  Thabet Chelligue,
 
 please do ask such questions on one of the public R mailing lists;
 typically 'R-help' is perfect.
 -- http://stat.ethz.ch/mailman/listinfo/r-help
 
 
 On Fri, Mar 6, 2009 at 17:05, thabet chelligue
 chelliguetha...@hotmail.com wrote:
 
 
  Hello,
 
  I’m a french student in Master level “Statistics and Econometrics” . Now I’m
  in training for a marketing firm.
 
  I don’t find the information about the capacity of R .
 
  Can you respond me how much lignes and colones can R manipulates?
 
  Can I work with a data of 2 milions lignes?
 
  Thank you .
 
  
  Découvrez toutes les possibilités de communication avec vos proches

_
[[elided Hotmail spam]]

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] understanding the output from survival analysis

2009-03-10 Thread Chris Andrews

Ullrika Sahlin ullrika.sahlin at ekol.lu.se writes:

 
 Why do I get different sign of the coefficients of covariates when I run the
 semi-parametric proportional hazard model (coxph) compared to the parametric
 proportional hazard model (survreg)?
 
 Anyone with experience in extracting information from survreg to make
 predictions are free to contact me.
 
 Cheers,
 
 Ullrika 
 

coxph fits a proportional hazards model.
survreg fits an accelerated failure time model.

These are parametrized differently.  In the first, a higher linear predictor
means greater hazard (i.e., shorter lifetime).  In the second, a higher linear
predictor means greater expected (log) lifetime.

Consult an introductory survival text such as Klein and Moeschberger for more on
these types of model.  The Design library may help you with predictions.
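
A small illustration of the sign difference (a sketch using the lung data
that ships with the survival package; any censored data set would do):

library(survival)
cfit <- coxph(Surv(time, status) ~ age, data = lung)
sfit <- survreg(Surv(time, status) ~ age, data = lung, dist = "weibull")
coef(cfit)["age"]  # coxph scale: a positive value means a higher hazard
coef(sfit)["age"]  # survreg scale: a positive value means a longer (log) lifetime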

Chris

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R-question

2009-03-10 Thread jim holtman
Let's assume that you are running on a system with 2GB of memory.  All
of R's data is held in memory and I would suggest that no single
object take more than 25% of memory.  That would suggest that 500MB
for a single object would be a reasonable limit.  If you are working
with something like 2M rows of numeric data (each element takes 8
bytes), then you should be able to handle about 32 columns in the
matrix without much trouble.  If you have a 64-bit version of R, then
the limit is how much you want to spend on memory.
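
A back-of-the-envelope check of those figures (assuming numeric data at 8
bytes per element and a 500MB budget for a single object):

rows <- 2e6
budget <- 500 * 2^20                      # 500MB in bytes
floor(budget / (rows * 8))                # roughly 32 columns
object.size(numeric(1e6))                 # about 8MB per million doubles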

On Tue, Mar 10, 2009 at 7:12 AM, thabet chelligue
chelliguetha...@hotmail.com wrote:

 Hello,

  I’m a french student in Master level “Statistics and Econometrics” . Now I’m 
 in training for a marketing firm.

  I don’t find the information about the capacity of R .

  Can you tell me how much lignes and colones can R manipulates?

  Can I work with a data of 2 milions lignes?

 When I asked M.Maechler , he advices me to ask you about this.

 Thank you very much.






 Date: Sun, 8 Mar 2009 20:03:02 +0100
 Subject: Re: Hello
 From: mmaech...@gmail.com
 To: chelliguetha...@hotmail.com

 Dear M.  Thabet Chelligue,

 please do ask such questions on one of the public R mailing lists;
 typically 'R-help' is perfect.
 -- http://stat.ethz.ch/mailman/listinfo/r-help


 On Fri, Mar 6, 2009 at 17:05, thabet chelligue
 chelliguetha...@hotmail.com wrote:
 
 
  Hello,
 
  I’m a french student in Master level “Statistics and Econometrics” . Now 
  I’m
  in training for a marketing firm.
 
  I don’t find the information about the capacity of R .
 
  Can you respond me how much lignes and colones can R manipulates?
 
  Can I work with a data of 2 milions lignes?
 
  Thank you .
 
  
  Découvrez toutes les possibilités de communication avec vos proches

 _
 [[elided Hotmail spam]]

        [[alternative HTML version deleted]]


 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.





-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] popular R packages

2009-03-10 Thread Gabor Grothendieck
On Tue, Mar 10, 2009 at 6:14 AM, Jim Lemon j...@bitwrit.com.au wrote:
 Gabor Grothendieck wrote:

 R-Forge already has this but I don't think its used much.  R-Forge
 does allow authors to opt out which seems sensible lest it deter
 potential authors from submitting packages.

 I think objective quality metrics are better than ratings, e.g. does
 package
 have a vignette, has package had a release within the last year,
 does package have free software license, etc.  That would have
 the advantage that authors might react to increase their package's
 quality assessment resulting in an overall improvement in quality on CRAN
 that would result in more of a pro-active cycle whereas ratings are
 reactive
 and don't really encourage improvement.


 I beg to offer an alternative assessment of quality. Do users download the
 package and find it useful? If so, they are likely to download it again when
 it is updated.

I was referring to motivating authors, not users, so that CRAN improves.

 Much as I appreciate the convenience of vignettes, regular
 updates and the absolute latest GPL license, a perfectly dud package can
 have all of these things. If a package is downloaded upon first release and

These are nothing but the usual FUD against quality improvement, i.e. that the
quality metrics are not measuring what you want; but the fact is that
quality metrics can work and have had huge successes.  Also, I think
objective measures would be more accepted by authors than ratings.
No one is going to be put off that their package has no vignette when
obviously it doesn't, and the authors are free to add one and instantly
improve their package's rating.

 not much thereafter, the maintainer might be motivated to attend to its
 shortcomings of utility rather than incrementing the version number every
 month or so. Downloads, as many have pointed out, are not a direct
 assessment of quality, but if I saw a package that just kept getting
 downloaded, version after version, I would be much more likely to check it
 out myself and perhaps even write a review for Hadley's neat site. Which I
 will try to do tonight.

I was arguing for objective metrics rather than ratings. Downloading is not
a rating but is objective although there are measurement problems as has
been pointed out.  Also, the worst feature is that it does not react to changes
in quality very quickly making it anti-motivating.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] rcorr.cens Goodman-Kruskal gamma

2009-03-10 Thread Frank E Harrell Jr

Kim Vanselow wrote:

Thanks to David and Frank for the suggestions. With a 2-dimensional input, 
rcorr.cens and John Baron's implementation work well. But I am not able to 
calculate gamma for a multivariate matrix.

example: columns = species; rows = relevés; the numbers are BB values (ordinal scale; 
1 < 3, but 3 - 1 is not necessarily 2)

     K. ceratoides  S. caucasica  A. tibeticum
A1   3              1             1
A2   0              3             2
A3   1              1             0
A4   2              2             0
A5   0              3             2
B1   1              1             1
B2   4              3             1

I want to calculate a distance matrix with Goodman-Kruskal's gamma as the scale unit 
(instead of classical Euclidean, Bray-Curtis, Manhattan etc.), which I can use for 
hierarchical cluster analysis (e.g. amap, vegan, cluster) in order to compare the 
different relevés.
  
Further suggestions would be greatly appreciated,

Thank you very much,
Kim



 
 Original-Nachricht 

Datum: Mon, 09 Mar 2009 13:27:29 -0500
Von: Frank E Harrell Jr f.harr...@vanderbilt.edu
An: David Winsemius dwinsem...@comcast.net
CC: Kim Vanselow vanse...@gmx.de, r-help@r-project.org
Betreff: Re: [R] rcorr.cens Goodman-Kruskal gamma



David Winsemius wrote:
I looked at the help page for rcorr.cens and was surprised that 
function, designed for censored data and taking input as a Surv object, 
was being considered for that purpose.  This posting to r-help may be of

interest. John Baron offers a simple implementation that takes its input
as (x,y):

http://finzi.psych.upenn.edu/R/Rhelp02/archive/19749.html

goodman <- function(x, y){
  Rx <- outer(x, x, function(u, v) sign(u - v))
  Ry <- outer(y, y, function(u, v) sign(u - v))
  S1 <- Rx * Ry
  return(sum(S1) / sum(abs(S1)))}

I then read Frank's response to John and it's clear that my impression 
regarding potential uses of rcorr.cens was too limited. Appears that you
could supply a y vector to the S argument and get more efficient 
execution.
Yes rcorr.cens was designed to handle censored data but works fine with 
uncensored Y.  You may need so specify Surv(Y) but first try just Y.  It 
would be worth testing the execution speed of the two approaches.


Frank

--
Frank E Harrell Jr   Professor and Chair   School of Medicine
  Department of Biostatistics   Vanderbilt University


Dear r-helpers!
I want to classify my vegetation data with hierachical cluster analysis.
My Dataset consist of Abundance-Values (Braun-Blanquet ordinal scale; ranked) 
for each plant species and relevé.
I found a lot of r-packages dealing with cluster analysis, but none of them is 
able to calculate a distance measure for ranked data.
Podani recommends the use of Goodman and Kruskals' Gamma for the distance. I 
found the function rcorr.cens (outx=true) of the Hmisc package which should do 
it.
What I don't understand is how to define the input vectors x, y with my 
vegetation dataset. The other thing how I can use the output of rcorr.cens for 
a distance measure in the cluster analysis (e.g. in vegan or amap).
Any help would be greatly appreciated,
Thank you very much,
Kim


A function related to that is Hmisc's varclus function which will use 
Spearman, Pearson, or Hoeffding indexes for similarity measures.

Frank

--
Frank E Harrell Jr   Professor and Chair   School of Medicine
 Department of Biostatistics   Vanderbilt University

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] popular R packages

2009-03-10 Thread Frank E Harrell Jr

Gabor Grothendieck wrote:

On Tue, Mar 10, 2009 at 6:14 AM, Jim Lemon j...@bitwrit.com.au wrote:

Gabor Grothendieck wrote:

R-Forge already has this but I don't think its used much.  R-Forge
does allow authors to opt out which seems sensible lest it deter
potential authors from submitting packages.

I think objective quality metrics are better than ratings, e.g. does
package
have a vignette, has package had a release within the last year,
does package have free software license, etc.  That would have
the advantage that authors might react to increase their package's
quality assessment resulting in an overall improvement in quality on CRAN
that would result in more of a pro-active cycle whereas ratings are
reactive
and don't really encourage improvement.


I beg to offer an alternative assessment of quality. Do users download the
package and find it useful? If so, they are likely to download it again when
it is updated.


I was referring to motivating authors, not users, so that CRAN improves.


Much as I appreciate the convenience of vignettes, regular
updates and the absolute latest GPL license, a perfectly dud package can
have all of these things. If a package is downloaded upon first release and


These are nothing but the usual  FUD against quality improvement, i.e. the
quality metrics are not measuring what you want but the fact is that
quality metrics can work and have had huge successes.  Also I think
objective measures would be more accepted by authors than ratings.
No one is going to be put off that their package has no vignette when
obviously it doesn't and the authors are free to add one and instantly
improve their package's rating.


not much thereafter, the maintainer might be motivated to attend to its
shortcomings of utility rather than incrementing the version number every
month or so. Downloads, as many have pointed out, are not a direct
assessment of quality, but if I saw a package that just kept getting
downloaded, version after version, I would be much more likely to check it
out myself and perhaps even write a review for Hadley's neat site. Which I
will try to do tonight.


I was arguing for objective metrics rather than ratings. Downloading is not
a rating but is objective although there are measurement problems as has
been pointed out.  Also, the worst feature is that it does not react to changes
in quality very quickly making it anti-motivating.


Gabor, I think your approach will have more payoff in the long run.  I 
would suggest one other metric: the number of lines of code in the 
'examples' sections of all the package's help files.


Frank
--
Frank E Harrell Jr   Professor and Chair   School of Medicine
 Department of Biostatistics   Vanderbilt University

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Course - March/April ** R / Splus ** course in New York City and San Francisco*** by XLSolutions Corp

2009-03-10 Thread s...@xlsolutions-corp.com
XLSolutions Corporation (www.xlsolutions-corp.com) is proud to announce
our ***R/Splus Fundamentals and Programming Techniques*** and ***R Advanced
Programming*** courses at USA locations for March - April 2009.


* New York City  ** March 19-20, 2009
* San Francisco  ** April 23-24, 2009

R/Splus Fundamentals and Programming Techniques
http://www.xlsolutions-corp.com/rplus.asp

Looking for   R/Splus Advanced Programming  ?

http://www.xlsolutions-corp.com/rplus.asp 

* San Francisco  ** April 27-28, 2009
* New York City  ** April 20-21, 2009


Ask for group discount and reserve your seat Now - Earlybird Rates.
Payment due after the class! Email Sue Turner:  s...@xlsolutions-corp.com

Phone: 206-686-1578


Please let us know if you and your colleagues are interested in this
class to take advantage of group discount. Register now to secure your
seat!

Cheers,
Elvis Miller, PhD
Manager Training.
XLSolutions Corporation
206 686 1578
www.xlsolutions-corp.com
el...@xlsolutions-corp.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] find the important inputs to the neural network model in nnet package

2009-03-10 Thread abbas tavassoli

Hi, I have a binary variable and many explanatory variables, and I want to 
use the package nnet to model these data (instead of logistic regression).
I want to find the most influential variables (inputs to the network) in 
the neural network model. How can I do this?
Thanks.



  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] simple question beginner

2009-03-10 Thread Λεωνίδας Μπαντής
 
Hi there,
 
I am a beginner in R and I have a basic question. Suppose I run a common 
procedure such as a t-test or a Cox model like the one below:
 
out <- coxph(Surv(tstart, tstop, death1) ~ x1 + x1:log(tstop+1),
             data = test1, method = "breslow")
 
Which yields the following result:
 
Call:
coxph(formula = Surv(tstart, tstop, death1) ~ x1 + x1:log(tstop + 
    1), data = test1, method = c(breslow))

   coef exp(coef) se(coef) z    p
x1    -9.58  6.89e-05 6.83 -1.40 0.16
x1:log(tstop + 1)  6.90  9.93e+02 5.63  1.23 0.22
Likelihood ratio test=2.97  on 2 df, p=0.226  n= 120 
 
 
Now I simply want to create an array (say, a) with the coefficients. I.e. I 
want

a <- c(-9.58, 6.90)

Generally, how can I take the elements I want from the output matrix above for 
further manipulation?
 
Thanks in advance for any answer!!
 
 


  

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] find the important inputs to the neural network model in nnet package

2009-03-10 Thread Mike Lawrence
One thought is to train the net and obtain a performance measure on a
testing corpus. Next, for each input, run the testing corpus again,
but zero all values for that input and obtain a measure of
performance. Zeroing an important node will hurt performance more than
zeroing an unimportant node.
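
A rough sketch of that procedure (the data frames train and test and the
outcome column y are placeholders; it assumes a binary 0/1 outcome):

library(nnet)
fit <- nnet(y ~ ., data = train, size = 5, decay = 0.01, maxit = 500)
base.acc <- mean((predict(fit, test) > 0.5) == test$y)
inputs <- setdiff(names(test), "y")
importance <- sapply(inputs, function(v) {
  zeroed <- test
  zeroed[[v]] <- 0                        # knock out one input
  base.acc - mean((predict(fit, zeroed) > 0.5) == test$y)
})
sort(importance, decreasing = TRUE)       # biggest drop = most important input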

On Tue, Mar 10, 2009 at 9:41 AM, abbas tavassoli tavassoli...@yahoo.com wrote:

 Hi, I have a binary variable and many explanatory variables and I want to
 use the package nnet  to model these data, (instead of logistic regression).
 I want to find the more effective  variables (inputs to the network) in
 the neural network model. how can I do this?
 thanks.




        [[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Mike Lawrence
Graduate Student
Department of Psychology
Dalhousie University

Looking to arrange a meeting? Check my public calendar:
http://tinyurl.com/mikes-public-calendar

~ Certainty is folly... I think. ~

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] simple question beginner

2009-03-10 Thread Mike Lawrence
names() is a great function for finding out how to get info from
objects of unknown structure, so try:
names(out)



2009/3/10 Λεωνίδας Μπαντής bleonida...@yahoo.gr:

 Hi there,

 I am beginner in R and I have some basic question. Suppose I run a common 
 procedure such as a t test or cox model like below:

 out-coxph( Surv(tstart,tstop, death1) ~ x1+x1:log(tstop+1) , 
 test1,method=c(breslow))

 Which yields the following result:

 Call:
 coxph(formula = Surv(tstart, tstop, death1) ~ x1 + x1:log(tstop +
     1), data = test1, method = c(breslow))

    coef exp(coef) se(coef) z    p
 x1    -9.58  6.89e-05 6.83 -1.40 0.16
 x1:log(tstop + 1)  6.90  9.93e+02 5.63  1.23 0.22
 Likelihood ratio test=2.97  on 2 df, p=0.226  n= 120


 Now I simply want to create an array (let a) with the coefficients. I.e. I 
 want

 a-c(-9.58, 6.90)

 Generally how can take the elements I want from the output matrix above for 
 further manipulation?

 Thanks in advance for any answer!!






        [[alternative HTML version deleted]]


 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.





-- 
Mike Lawrence
Graduate Student
Department of Psychology
Dalhousie University

Looking to arrange a meeting? Check my public calendar:
http://tinyurl.com/mikes-public-calendar

~ Certainty is folly... I think. ~

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Changing factor to numeric

2009-03-10 Thread Mike Lawrence
Try:

as.numeric(as.character(x))

I usually define the following for this purpose:
factor.to.number=function(x){
as.numeric(as.character(x))
}
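
A small illustration of why the detour through as.character() is needed:

f <- factor(c("5.735", "4.759", "5.735"))
as.numeric(f)                  # 2 1 2 : the internal level codes
as.numeric(as.character(f))    # 5.735 4.759 5.735 : the actual values
as.numeric(levels(f))[f]       # same result, a bit faster on long vectors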


On Tue, Mar 10, 2009 at 2:25 AM, ojal john owino
ojal.johnow...@googlemail.com wrote:
 Dear Users,
 I have a variable in my dataset which is of type factor. But it actually
 contains numeric entries which like 5.735  4.759 . This is because the
 data was read from a CSV file into R and this variable contained other
 charaters which were not numeric. I have now dropped the records with the
 characters which are not numeric for this variable and want to change it to
 numeric srotage type.

 I have tried using as.numeric() function but it changes the values in the
 variable to what I think are the ranks of the individual values of the
 varible in the dataset. For example if 5.735 is the current content in the
 field, then the new object created by as.numeric will contain a value like
 680 if the 5.735 was the highest value for the varible and the dataset had
 680 records.


 How can I change the storage type without changing the contents of this
 variable in this case?

 Thanks for your consideration.



 --
 Ojal John Owino
 P.O Box 230-80108
 Kilifi, Kenya.
 Mobile:+254 728 095 710

        [[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Mike Lawrence
Graduate Student
Department of Psychology
Dalhousie University

Looking to arrange a meeting? Check my public calendar:
http://tinyurl.com/mikes-public-calendar

~ Certainty is folly... I think. ~

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Changing factor to numeric

2009-03-10 Thread milton ruser
Hi Ojal,

I don't know why it happens, but try

as.numeric(as.character(XXX))

Good luck

miltinho astronauta
brazil

On Tue, Mar 10, 2009 at 1:25 AM, ojal john owino 
ojal.johnow...@googlemail.com wrote:

 Dear Users,
 I have a variable in my dataset which is of type factor. But it actually
 contains numeric entries which like 5.735  4.759 . This is because the
 data was read from a CSV file into R and this variable contained other
 charaters which were not numeric. I have now dropped the records with the
 characters which are not numeric for this variable and want to change it to
 numeric srotage type.

 I have tried using as.numeric() function but it changes the values in the
 variable to what I think are the ranks of the individual values of the
 varible in the dataset. For example if 5.735 is the current content in the
 field, then the new object created by as.numeric will contain a value like
 680 if the 5.735 was the highest value for the varible and the dataset had
 680 records.


 How can I change the storage type without changing the contents of this
 variable in this case?

 Thanks for your consideration.



 --
 Ojal John Owino
 P.O Box 230-80108
 Kilifi, Kenya.
 Mobile:+254 728 095 710

[[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Changing factor to numeric

2009-03-10 Thread Gabor Grothendieck
If you don't need any factors use read.csv(...whatever..., as.is = TRUE)
and then as.numeric will work.

On Tue, Mar 10, 2009 at 1:25 AM, ojal john owino
ojal.johnow...@googlemail.com wrote:
 Dear Users,
 I have a variable in my dataset which is of type factor. But it actually
 contains numeric entries which like 5.735  4.759 . This is because the
 data was read from a CSV file into R and this variable contained other
 charaters which were not numeric. I have now dropped the records with the
 characters which are not numeric for this variable and want to change it to
 numeric srotage type.

 I have tried using as.numeric() function but it changes the values in the
 variable to what I think are the ranks of the individual values of the
 varible in the dataset. For example if 5.735 is the current content in the
 field, then the new object created by as.numeric will contain a value like
 680 if the 5.735 was the highest value for the varible and the dataset had
 680 records.


 How can I change the storage type without changing the contents of this
 variable in this case?

 Thanks for your consideration.



 --
 Ojal John Owino
 P.O Box 230-80108
 Kilifi, Kenya.
 Mobile:+254 728 095 710

        [[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Odp: simple question beginner

2009-03-10 Thread Petr PIKAL
Hi

r-help-boun...@r-project.org wrote on 10.03.2009 12:21:38:

  
 Hi there,
  
 I am beginner in R and I have some basic question. Suppose I run a 
common 
 procedure such as a t test or cox model like below:
  
 out-coxph( Surv(tstart,tstop, death1) ~ x1+x1:log(tstop+1) , 
test1,method=c
 (breslow)) 
  
 Which yields the following result:
  
 Call:
 coxph(formula = Surv(tstart, tstop, death1) ~ x1 + x1:log(tstop + 
 1), data = test1, method = c(breslow))
 
coef exp(coef) se(coef) zp
 x1-9.58  6.89e-05 6.83 -1.40 0.16
 x1:log(tstop + 1)  6.90  9.93e+02 5.63  1.23 0.22
 Likelihood ratio test=2.97  on 2 df, p=0.226  n= 120 
  
  
 Now I simply want to create an array (let a) with the coefficients. 
I.e. I want

Most probably coef will work here

If you had tried help.search
??coefficients
you would find several appropriate items.

Besides 

str(object) is also a good starting point for inspection of any object.
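
For the model above that would be, e.g.:

a <- coef(out)                      # named vector of coefficients
a[c("x1", "x1:log(tstop + 1)")]
summary(out)$coefficients           # full matrix with se(coef), z and p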

Regards
Petr


  
 a-c(-9.58, 6.90)
  
 Generally how can take the elements I want from the output matrix above 
for 
 further manipulation?
  
 Thanks in advance for any answer!!
  
  
 
 
 
 
[[alternative HTML version deleted]]
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] using chm help files under linux

2009-03-10 Thread Jose Quesada
Hi,

Chm (compiled help) is a microsoft invention. It's the default help
system under windows, but not so under linux.
I found that (at times) I like better how chm help looks.
Since there are chm viewers under linux, using chm help files should
be possible.
Has anybody tried to set R so it opens chm by default? I'm sure
there's some flag or Rprofile var that could get this done.

Thanks,

-- 
-Jose
--
Jose Quesada, PhD
http://josequesada.name

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] understanding the output from survival analysis

2009-03-10 Thread Terry Therneau
 Why do I get different sign of the coefficients of covariates when I run the
 semi-parametric proportional hazard model (coxph) compared to the parametric
 proportional hazard model (survreg)?

  Survreg models the time till death: a positive coefficient means a longer time.
  Coxph models the death rate: a positive coefficient means a higher death rate.
  
  So in the first a positive coefficient is good; in the second it is bad.
  
 Anyone with experience in extracting information form survreg to make
 predictions are free to contact me.

  Commonly one would use predict(fit) to get predictions.  There are several 
options.
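
  For example (with a fitted survreg model called fit):

  predict(fit, type = "lp")                 # linear predictor
  predict(fit, type = "quantile", p = 0.5)  # predicted median survival time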
  
Terry Therneau

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] reliability, scale scores in the psych package

2009-03-10 Thread Ista Zahn
Dear Professor Revelle and R-helpers,
This is a two-part question: the first part is general, and the second
is specific to the psych package.

First question: In the past I've constructed composite variables from
questionnaire data by using rowMeans(), and then correlating items
with the scale using cor() as an informal check for bad items. Over
the weekend I decided to take a look at some of the packages in the
psychometric task view, to see if there was a way to simplify this
process. I looked at several packages, including psychometric, CTT,
and psych. I'm interested in hearing from others who need to do this
kind of thing frequently. What approach do you use? Do you use one of
the packages mentioned above? Are there other packages I might want to
take a look at?

Second question: I spent some time with the psych package trying to
figure out how to use the score.items() function, and it's become
clear to me that I don't understand what it's doing. I assumed that
setting a key equal to -1 would result in the item being reverse
scored, but I get weird results, as shown below. When I try to reverse
score (by setting a value of -1 in the key), I get scale scores that
don't add up (e.g., the mean score is reported as being larger than
the maximum item score). How is the score.items() function intended to
be used? Do I need to reverse score items before using score.items()?

Thanks,
Ista

## score.items() example begins here ##
 library(psych)
 Data.score <- as.data.frame(matrix(c(40,29,40,32,1,1,3,1,5,3,3,44,24,47,31,4,4,1,1,4,2,1,13,5,14,5,5,4,3,3,4,4,3,7,2,2,0,5,4,2,2,4,4,4,7,6,5,4,1,1,3,4,3,2,1,18,15,21,8,6,6,1,1,6,6,6,9,10,15,7,5,4,2,1,5,5,5,10,7,12,6,2,2,4,4,3,3,3,8,7,13,8,1,1,4,2,2,2,1,10,5,13,7,4,3,3,3,3,3,3),
  nrow=10, byrow=TRUE))
 names(Data.score) <- c("s1","s2","s3","s4","imi1","imi2","imi3","imi4","imi5","imi6","imi7")
 Data.score
   s1 s2 s3 s4 imi1 imi2 imi3 imi4 imi5 imi6 imi7
1  40 29 40 32    1    1    3    1    5    3    3
2  44 24 47 31    4    4    1    1    4    2    1
3  13  5 14  5    5    4    3    3    4    4    3
4   7  2  2  0    5    4    2    2    4    4    4
5   7  6  5  4    1    1    3    4    3    2    1
6  18 15 21  8    6    6    1    1    6    6    6
7   9 10 15  7    5    4    2    1    5    5    5
8  10  7 12  6    2    2    4    4    3    3    3
9   8  7 13  8    1    1    4    2    2    2    1
10 10  5 13  7    4    3    3    3    3    3    3

 #This works fine
 key.list <- list(silence=1:4, interest=5:11)
 keys <- make.keys(length(names(Data.score)), key.list, 
  item.labels=names(Data.score))
 scored <- score.items(keys, Data.score, missing=FALSE, totals=FALSE)
 scored$scores
  silence interest
 [1,]   35.25 2.428571
 [2,]   36.50 2.428571
 [3,]9.25 3.714286
 [4,]2.75 3.571429
 [5,]5.50 2.142857
 [6,]   15.50 4.571429
 [7,]   10.25 3.857143
 [8,]8.75 3.00
 [9,]9.00 1.857143
[10,]8.75 3.142857

 #This does not do what I expected. Mean interest scores are higher than score 
 of the highest item.
 key.list2 <- list(silence=1:4, interest=c(5,6,-7,-8,9,10,11))
 keys2 <- make.keys(length(names(Data.score)), key.list2, 
  item.labels=names(Data.score))
 scored2 <- score.items(keys2, Data.score, missing=FALSE, totals=FALSE)
 scored2$scores
  silence interest
 [1,]   35.25 14.71429
 [2,]   36.50 15.28571
 [3,]9.25 15.42857
 [4,]2.75 15.85714
 [5,]5.50 13.57143
 [6,]   15.50 17.42857
 [7,]   10.25 16.42857
 [8,]8.75 14.14286
 [9,]9.00 13.57143
[10,]8.75 14.85714

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R console misc questions

2009-03-10 Thread Oliver
I don't see where I can find this 'Rconsole' file on Mac.
The closest I can get to is /Library/Frameworks/R.Framework/etc ...
but then there is no such file. A bit more clarification would be
appreciated.

Oliver

On Mar 8, 8:25 pm, Jun Shen jun.shen...@gmail.com wrote:
 Oliver,

 Go and find the file named 'Rconsole' under ~/etc folder, then you can
 change whatever you want, the font size, color etc. The settings will be
 your default.

 For your second question, you need to set it up in Rprofile.site. Refer to
 the Rprofile help.

 Jun



 On Sun, Mar 8, 2009 at 11:20 AM, Oliver fwa...@gmail.com wrote:
  hi, all

  I have two questions on using R console effectively (this is on Mac,
  not sure if it applies to win platform):

  First, I'd like to make the console font bigger, the default is too
  small for my screen. There is a Show Fonts from Format menu where
  you can adjust it, but it seems only for current session. Next time I
  start R, I have to redo everything. My question is, is there any way
  to save the preference?

  Second, Package Manager show available packages, and you can click
  loaded to load it. Again, it is only for current session, how can I
  make my selection permanent?

  Thanks for help.

  Oliver

  __
  r-h...@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible code.

 --
 Jun Shen PhD
 PK/PD Scientist
 BioPharma Services
 Millipore Corporation
 15 Research Park Dr.
 St Charles, MO 63304
 Direct: 636-720-1589

         [[alternative HTML version deleted]]

 __
 r-h...@r-project.org mailing listhttps://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guidehttp://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] reliability, scale scores in the psych package

2009-03-10 Thread Doran, Harold
Ista

There are several functions in the MiscPsycho package that can be used
for classical item analysis. 

 -Original Message-
 From: r-help-boun...@r-project.org 
 [mailto:r-help-boun...@r-project.org] On Behalf Of Ista Zahn
 Sent: Tuesday, March 10, 2009 10:28 AM
 To: reve...@northwestern.edu
 Cc: r-help@r-project.org
 Subject: [R] reliability, scale scores in the psych package
 
 Dear Professor Revelle and R-helpers,
 This is a two-part question: the first part is general, and 
 the second is specific to the psych package.
 
 First question: In the past I've constructed composite 
 variables from questionnaire data by using rowMeans(), and 
 then correlating items with the scale using cor() as an 
 informal check for bad items. Over the weekend I decided to 
 take a look at some of the packages in the psychometric task 
 view, to see if there was a way to simplify this process. I 
 looked at several packages, including psychometric, CTT, and 
 psych. I'm interested in hearing from others who need to do 
 this kind of thing frequently. What approach do you use? Do 
 you use one of the packages mentioned above? Are there other 
 packages I might want to take a look at?
 
 Second question: I spent some time with the psych package 
 trying to figure out how to use the score.items() function, 
 and it's become clear to me that I don't understand what it's 
 doing. I assumed that setting a key equal to -1 would result 
 in the item being reverse scored, but I get weird results, as 
 shown below. When I try to reverse score (by setting a value 
 of -1 in the key), I get scale scores that don't add up 
 (e.g., the mean score is reported as being larger than the 
 maximum item score). How is the score.items() function 
 intended to be used? Do I need to reverse score items before 
 using score.items()?
 
 Thanks,
 Ista
 
 ## score.items() example begins here ##
  library(psych)
  Data.score - 
  
 as.data.frame(matrix(c(40,29,40,32,1,1,3,1,5,3,3,44,24,47,31,4,4,1,1,4
  
 ,2,1,13,5,14,5,5,4,3,3,4,4,3,7,2,2,0,5,4,2,2,4,4,4,7,6,5,4,1,1,3,4,3,2
  
 ,1,18,15,21,8,6,6,1,1,6,6,6,9,10,15,7,5,4,2,1,5,5,5,10,7,12,6,2,2,4,4,
  3,3,3,8,7,13,8,1,1,4,2,2,2,1,10,5,13,7,4,3,3,3,3,3,3), nrow=10, 
  byrow=TRUE))
  names(Data.score) - 
  
 c(s1,s2,s3,s4,imi1,imi2,imi3,imi4,imi5,imi6,imi7
  )
  Data.score
s1 s2 s3 s4 imi1 imi2 imi3 imi4 imi5 imi6 imi7
 1  40 29 40 321131533
 2  44 24 47 314411421
 3  13  5 14  55433443
 4   7  2  2  05422444
 5   7  6  5  41134321
 6  18 15 21  86611666
 7   9 10 15  75421555
 8  10  7 12  62244333
 9   8  7 13  81142221
 10 10  5 13  74333333
 
  #This works fine
  key.list - list(silence=1:4, interest=5:11) keys - 
  make.keys(length(names(Data.score)), key.list, 
  item.labels=names(Data.score)) scored - score.items(keys, 
 Data.score, 
  missing=FALSE, totals=FALSE) scored$scores
   silence interest
  [1,]   35.25 2.428571
  [2,]   36.50 2.428571
  [3,]9.25 3.714286
  [4,]2.75 3.571429
  [5,]5.50 2.142857
  [6,]   15.50 4.571429
  [7,]   10.25 3.857143
  [8,]8.75 3.00
  [9,]9.00 1.857143
 [10,]8.75 3.142857
 
  #This does not do what I expected. Mean interest scores are 
 higher than score of the highest item.
  key.list2 - list(silence=1:4, interest=c(5,6,-7,-8,9,10,11))
  keys2 - make.keys(length(names(Data.score)), key.list2, 
  item.labels=names(Data.score))
  scored2 - score.items(keys2, Data.score, missing=FALSE, 
 totals=FALSE) 
  scored2$scores
   silence interest
  [1,]   35.25 14.71429
  [2,]   36.50 15.28571
  [3,]9.25 15.42857
  [4,]2.75 15.85714
  [5,]5.50 13.57143
  [6,]   15.50 17.42857
  [7,]   10.25 16.42857
  [8,]8.75 14.14286
  [9,]9.00 13.57143
 [10,]8.75 14.85714
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide 
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] reliability, scale scores in the psych package

2009-03-10 Thread Ista Zahn
snip
 Second question: I spent some time with the psych package trying to
 figure out how to use the score.items() function, and it's become
 clear to me that I don't understand what it's doing. I assumed that
 setting a key equal to -1 would result in the item being reverse
 scored, but I get weird results, as shown below. When I try to reverse
 score (by setting a value of -1 in the key), I get scale scores that
 don't add up (e.g., the mean score is reported as being larger than
 the maximum item score). How is the score.items() function intended to
 be used? Do I need to reverse score items before using score.items()?

I did it again--it seems like I always figure out the answer just
after I ask for help. The score.items() function needs to know the
maximum of the scale in order to reverse score. For some reason, the
maximum appears to be calculated from all the scores, not just scores
that have a 1 or a -1 in the key. On a hunch I set the max argument to
a vector of scale maxima, and it worked. I'm still interested in
responses to question 1 though.

Thanks again,
Ista

snip

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] foreign package install on Solaris 10 + R-2.7.1

2009-03-10 Thread Sarosh Jamal
Hello Prof. Ripley,

I believe I downloaded R-2.7.1 from sunfreeware.com (and, going back to my 
notes now, I realize that the foreign package is not included with it).

They currently have R-2.7.2 available, again with the foreign package not 
included.

I'll give R-2.8.1 a try.

Thank you,

Sarosh

---
Sarosh Jamal

Geographic Computing Specialist
Department of Geography
http://geog.utm.utoronto.ca

Staff co-Chair, United Way Campaign
http://www.utm.utoronto.ca/unitedway

University of Toronto Mississauga
sarosh.ja...@utoronto.ca
905.569.4497

-Original Message-
From: Prof Brian Ripley [mailto:rip...@stats.ox.ac.uk]
Sent: Tuesday, March 10, 2009 3:02 AM
To: Sarosh Jamal
Cc: r-h...@lists.r-project.org
Subject: Re: [R] foreign package install on Solaris 10 + R-2.7.1

On Mon, 9 Mar 2009, Sarosh Jamal wrote:

 Hello,

 I've been having trouble installing package spdep for R-2.7.1 on
our Solaris 10 (sparc) server.  Namely the two dependencies for this package do 
not install properly: foreign and maptools

 I understand that Solaris 10 may not be an officially supported
 platform but any help/feedback you can offer would be most
 appreciated.

It is a platform we test on.  WHat is not supported is 2.7.1, so
please update to at least 2.8.1 (as requested in the posting guide).

Something is wrong with your R installation: 'foreign' should be known
to R 2.7.1, *and* installed as part of the basic installation.  So
re-installing seems the best option, especially as an update is in
order.

 I've updated all packages currently installed on this version of R
 but the install of package foreign complains about an invalid
 priority field in the DESCRIPTION file. I've not had any issues
 with the other packages.

 I'm including our systemInfo() output here:
 ==
 R version 2.7.1 (2008-06-23)
 sparc-sun-solaris2.10

 locale:
 /en_CA.ISO8859-1/C/C/en_CA.ISO8859-1/C/C

 attached base packages:
 [1] stats graphics  grDevices utils datasets  methods   base

 And, I'm including the transcript from the package install attempt:
 ==
 1 /home/sjamal  R

 R version 2.7.1 (2008-06-23)
 Copyright (C) 2008 The R Foundation for Statistical Computing ISBN 
 3-900051-07-0

 R is free software and comes with ABSOLUTELY NO WARRANTY.
 You are welcome to redistribute it under certain conditions.
 Type 'license()' or 'licence()' for distribution details.

 R is a collaborative project with many contributors.
 Type 'contributors()' for more information and 'citation()' on how to cite R 
 or R packages in publications.

 Type 'demo()' for some demos, 'help()' for on-line help, or 'help.start()' 
 for an HTML browser interface to help.
 Type 'q()' to quit R.

 install.packages("foreign")
 Warning in install.packages("foreign") :
  argument 'lib' is missing: using 
 '/home/sjamal/R/sparc-sun-solaris2.10-library
 /2.7'
 --- Please select a CRAN mirror for use in this session --- Loading Tcl/Tk 
 interface ... done trying URL 
 'http://probability.ca/cran/src/contrib/foreign_0.8-33.tar.gz'
 Content type 'application/x-gzip' length 315463 bytes (308 Kb) opened URL 
 ==
 downloaded 308 Kb

 * Installing *source* package 'foreign' ...
 checking for gcc... gcc -std=gnu99
 checking for C compiler default output file name... a.out checking whether 
 the C compiler works... yes checking whether we are cross compiling... no 
 checking for suffix of executables...
 checking for suffix of object files... o checking whether we are using the 
 GNU C compiler... yes checking whether gcc -std=gnu99 accepts -g... yes 
 checking for gcc -std=gnu99 option to accept ANSI C... none needed checking 
 whether gcc -std=gnu99 accepts -Wno-long-long... yes checking how to run the 
 C preprocessor... gcc -std=gnu99 -E checking for egrep... grep -E checking 
 for ANSI C header files... yes checking for sys/types.h... yes checking for 
 sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes 
 checking for memory.h... yes checking for strings.h... yes checking for 
 inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes 
 checking byteswap.h usability... no checking byteswap.h presence... no 
 checking for byteswap.h... no checking for double... yes checking size of 
 double... 8 checking for int... yes checking size of int... 4 checking for 
 long... yes checking size of long... 4
 configure: creating ./config.status
 config.status: creating src/Makevars
 config.status: creating src/swap_bytes.h
 config.status: creating src/var.h
 Error: Invalid DESCRIPTION file

 Invalid Priority field.
 Packages with priorities 'base' or 'recommended' or 'defunct-base' must 
 already be known to R.

 See the information on DESCRIPTION files in section 'Creating R packages' of 
 the 'Writing R Extensions' manual.
 Execution halted
 ERROR: installing package DESCRIPTION failed
 ** Removing 

Re: [R] How to write a function that accepts unlimited number of input arguments?

2009-03-10 Thread Adrian Dusa

I might very well be wrong, but something tells me Sean really wants:
sum(1:5)

or (closer to the idea of an unlimited number of arguments):
sum(c(1,2,3,4,5,17))

But then again, I might be mistaken.
Best wishes,
Adrian

On Monday 09 March 2009, Gabor Grothendieck wrote:
 Try this:

 sum.test <- function(...) sum(c(...))

 More commonly one uses the list(...) construct.

 On Mon, Mar 9, 2009 at 11:32 AM, Sean Zhang seane...@gmail.com wrote:
  Dear R-helpers:
  I am an R newbie and have a question related to writing functions that
  accept unlimited number of input arguments.
  (I tried to peek into functions such as paste and cbind, but failed, I
  cannot see their codes..)
 
  Can someone kindly show me through a summation example?
  Say, we have input scalar,  1 2 3 4 5
  then the ideal function, say sum.test, can do
  (1+2+3+4+5)==sum.test(1,2,3,4,5)
 
  Also sum.test can work as the number of input scalar changes.
 
  Many thanks in advance!
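
A minimal sketch pulling these suggestions together (purely illustrative; list(...) captures however many arguments are supplied):

sum.test <- function(...) {
  args <- list(...)                          # all arguments, however many were given
  stopifnot(all(sapply(args, is.numeric)))   # fail early on non-numeric input
  sum(unlist(args))
}
sum.test(1, 2, 3, 4, 5)        # 15
sum.test(1, 2, 3, 4, 5, 17)    # 32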



-- 
Adrian Dusa
Romanian Social Data Archive
1, Schitu Magureanu Bd.
050025 Bucharest sector 5
Romania
Tel.:+40 21 3126618 \
 +40 21 3120210 / int.101
Fax: +40 21 3158391


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] S4 coerce as.data.frame for lm

2009-03-10 Thread Thomas Roth (geb. Kaliwe)

#Hi,
#
#For a given class test, an object of class test cannot be used as data 
in the lm method although as.data.frame was implemented... where's my 
mistake?

#
#Suppose I have defined an S4 class "test"

#S4 class "test" containing a slot "data"
#which is of type data.frame

setClass(Class = "test",
         representation = representation(name = "character", data = "data.frame"))

temp = new("test")     #temp is of class "test"

temp@data = faithful   #assign some data to it

#now define as.data.frame for class "test"
setMethod("as.data.frame", "test",
          function(x, row.names = NULL, optional = FALSE) {
            return(x@data)
          })

as.data.frame(temp)   #works

lm(eruptions ~ waiting, data = temp)   #doesn't work


#Thank you for any hints
#Thomas Roth



#from the lm help page
# data: an optional data frame, list or environment (or object
# coercible by as.data.frame to a data frame) containing the variables
# in the model. If not found in data, the variables are taken from
# environment(formula), typically the environment from which lm is
# called.
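
A workaround that has worked in similar situations (a sketch, not from this thread; it assumes the failure is because lm() reaches as.data.frame() through S3 dispatch, which does not see the S4 method) is to provide an S3 method as well, or to pass the slot directly:

#hedged sketch, using the class and object defined above
as.data.frame.test <- function(x, row.names = NULL, optional = FALSE, ...) x@data

lm(eruptions ~ waiting, data = temp)      #should now find a data frame

#or side-step the coercion question entirely:
lm(eruptions ~ waiting, data = temp@data)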


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] help structuring mixed model using lmer()

2009-03-10 Thread Simon Pickett
Hi all,

This is partly a statistical question as well as a question about R, but I am 
stumped!

I have count data from various sites across years. (Not all of the sites in the 
study appear in all years). Each site has its own habitat score habitat that 
remains constant across all years.

I want to know if counts declined faster on sites with high habitat scores.

I can construct a model that tests for the effect of habitat as a main effect, 
controlling for year

model1 <- lmer(count ~ habitat + yr + (1 | site), family = quasibinomial, data = m)
model2 <- lmer(count ~ yr + (1 | site), family = quasibinomial, data = m)
anova(model1, model2)

But how do I test the interaction?
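
One sketch of what this usually looks like (added for illustration, assuming the data frame m and the variables above; whether a likelihood-ratio comparison is sound with a quasi family is a separate question):

m.add <- lmer(count ~ habitat + yr + (1 | site), family = quasibinomial, data = m)
m.int <- lmer(count ~ habitat * yr + (1 | site), family = quasibinomial, data = m)
anova(m.add, m.int)   # the habitat:yr term is what carries the interaction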

Thanks in advance,

Simon.





Dr. Simon Pickett
Research Ecologist
Land Use Department
Terrestrial Unit
British Trust for Ornithology
The Nunnery
Thetford
Norfolk
IP242PU
01842750050

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] puzzled by math on date-time objects

2009-03-10 Thread Denis Chabot

Hi,

I don't understand the following. When I create a small artificial set  
of date information in class POSIXct, I can calculate the mean and the  
median:


a = as.POSIXct(Sys.time())
a = a + 60*0:10; a

 [1] 2009-03-10 11:30:16 EDT 2009-03-10 11:31:16 EDT 2009-03-10 11:32:16 EDT
 [4] 2009-03-10 11:33:16 EDT 2009-03-10 11:34:16 EDT 2009-03-10 11:35:16 EDT
 [7] 2009-03-10 11:36:16 EDT 2009-03-10 11:37:16 EDT 2009-03-10 11:38:16 EDT
[10] 2009-03-10 11:39:16 EDT 2009-03-10 11:40:16 EDT

median(a)
[1] 2009-03-10 11:35:16 EDT
mean(a)
[1] 2009-03-10 11:35:16 EDT


But for real data (for this post, a short subset is in object c)  that  
I have converted into a POSIXct object, I cannot calculate the median  
with median(), though I do get it with summary():


c
 [1] 2009-02-24 14:51:18 EST 2009-02-24 14:51:19 EST 2009-02-24 14:51:19 EST
 [4] 2009-02-24 14:51:20 EST 2009-02-24 14:51:20 EST 2009-02-24 14:51:21 EST
 [7] 2009-02-24 14:51:21 EST 2009-02-24 14:51:22 EST 2009-02-24 14:51:22 EST
[10] 2009-02-24 14:51:22 EST

class(c)
[1] POSIXt  POSIXct

median(c)
Erreur dans Summary.POSIXct(c(1235505080.6, 1235505081.1), na.rm =  
FALSE) :

  'sum' not defined for POSIXt objects

One difference is that in my own date-time series, some events are  
repeated (the original data contained fractions of seconds). But then,  
why can I get a median through summary()?


summary(c)
                   Min.                 1st Qu.                  Median
2009-02-24 14:51:18 EST 2009-02-24 14:51:19 EST 2009-02-24 14:51:20 EST
                   Mean                 3rd Qu.                    Max.
2009-02-24 14:51:20 EST 2009-02-24 14:51:21 EST 2009-02-24 14:51:22 EST


Thanks in advance,


Denis Chabot

sessionInfo()
R version 2.8.1 Patched (2009-01-19 r47650)
i386-apple-darwin9.6.0

locale:
fr_CA.UTF-8/fr_CA.UTF-8/C/C/fr_CA.UTF-8/fr_CA.UTF-8

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

other attached packages:
[1] doBy_3.7 chron_2.3-30

loaded via a namespace (and not attached):
[1] Hmisc_3.5-2 cluster_1.11.12 grid_2.8.1  lattice_0.17-20  
tools_2.8.1


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] perform subgroup meta-analysis and create forest plot displaying subgroups

2009-03-10 Thread Steven Lubitz
Bernd,
Thanks. I believe this will do what I need, but do you know how I can set up my 
meta object so that the meta is performed on each subgroup individually? I can 
get the overall meta of all 6 observations, or I can get a separate meta of 
each subgroup using the subset command, but I can't get subgroup A, subgroup B, 
and subgroup C all into the same object. In order to use the plot feature it 
appears I need a byvar, so I think I'll need them all in the same object.

subgroup     study    beta     se
subgroupA    site1    -0.35    0.12
subgroupA    site2    -0.34    0.10
subgroupB    site1    -0.28    0.06
subgroupB    site2    -0.29    0.07
subgroupC    site1     0.34    0.03
subgroupC    site2     0.36    0.04

Generic inverse variance meta-analysis: metagen(beta, se, studlab=study, sm="OR")
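
A hedged sketch of one way to set this up (the byvar argument mentioned in this thread is taken at face value; its exact name and behaviour depend on the meta package version):

library(meta)
d <- data.frame(subgroup = rep(c("A", "B", "C"), each = 2),
                study    = rep(c("site1", "site2"), 3),
                beta     = c(-0.35, -0.34, -0.28, -0.29, 0.34, 0.36),
                se       = c( 0.12,  0.10,  0.06,  0.07, 0.03, 0.04))
m1 <- metagen(beta, se, studlab = study, data = d, sm = "OR", byvar = subgroup)
forest(m1)   # one forest plot, with pooled estimates per subgroup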

Thanks,
Steve

--- On Tue, 3/10/09, Weiss, Bernd bernd.we...@uni-koeln.de wrote:
From: Weiss, Bernd bernd.we...@uni-koeln.de
Subject: Re: [R] perform subgroup meta-analysis and create forest plot 
displaying subgroups
To: slubi...@yahoo.com, r-help@r-project.org
Date: Tuesday, March 10, 2009, 2:31 AM

Steven Lubitz schrieb:
 Hello, I'm using the rmeta package to perform a meta analysis
 using
 summary statistics rather than raw data, and would like to analyze
 the effects in three different subgroups of my data. Furthermore, I'd
 like to plot this on one forest plot, with corresponding summary
 weighted averages of the effects displayed beneath each subgroup.
 
 I am able to generate the subgroup analyses by simply performing
 3
 separate meta-analyses with the desired subset of data. However, I
 can't manage to plot everything on the same forest plot.

Maybe I'm wrong but the 'forest'-function (package 'meta',
http://cran.at.r-project.org/web/packages/meta/meta.pdf) should be able
to do what you want. I guess you could be interested in the 'byvar'
argument.

Bernd



  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R console misc questions

2009-03-10 Thread David Winsemius
If you are like many users of the Mac, the R GUI called R.app in your  
Applications Folder will be the way you start R. If that is the case  
for you, then you should change the console font size settings in the  
GUI menu with Format::Font::Bigger.


As far as I can see on my Mac there is no Rconsole file (or ~/etc
folder for that matter). The only such file that shows up on a search
is an old file left over from the days when I was using R on Windows.


If you want packages loaded at the time of startup you will need to  
create an .Rprofile file. That is going to require that you get your  
editor to create a system file (i.e., one that starts with a period),  
which means in Terminal that you would need to uncheck a couple of  
defaults. See the R for Mac OS X FAQ which should be available within  
the GUI.
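
A minimal sketch of such a file (the package names are only examples; save it as .Rprofile in your home directory):

## ~/.Rprofile -- load a few packages at startup
.First <- function() {
  pkgs <- c("lattice", "foreign")   # whatever you use routinely
  for (p in pkgs)
    suppressMessages(require(p, character.only = TRUE, quietly = TRUE))
}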


Further questions of this sort should go to the R-SIG-Mac mailing list:

r-sig-...@stat.math.ethz.ch


--
David Winsemius

On Mar 10, 2009, at 10:29 AM, Oliver wrote:


I don't see where I can find this 'Rconsole' file on Mac.
The closest I can get to is /Library/Frameworks/R.Framework/etc ...
but then there is no such file. A bit more clarification would be
appreciated.

Oliver

On Mar 8, 8:25 pm, Jun Shen jun.shen...@gmail.com wrote:

Oliver,

Go and find the file named 'Rconsole' under ~/etc folder, then you  
can
change whatever you want, the font size, color etc. The settings  
will be

your default.

For your second question, you need to set it up in Rprofile.site.  
Refer to

the Rprofile help.

Jun



On Sun, Mar 8, 2009 at 11:20 AM, Oliver fwa...@gmail.com wrote:

hi, all



I have two questions on using R console effectively (this is on Mac,
not sure if it applies to win platform):



First, I'd like to make the console font bigger, the default is too
small for my screen. There is a Show Fonts from Format menu where
you can adjust it, but it seems only for current session. Next  
time I

start R, I have to redo everything. My question is, is there any way
to save the preference?



Second, Package Manager show available packages, and you can click
loaded to load it. Again, it is only for current session, how  
can I

make my selection permanent?



Thanks for help.



Oliver



__
r-h...@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


--
Jun Shen PhD
PK/PD Scientist
BioPharma Services
Millipore Corporation
15 Research Park Dr.
St Charles, MO 63304
Direct: 636-720-1589

[[alternative HTML version deleted]]

__
r-h...@r-project.org mailing listhttps://stat.ethz.ch/mailman/ 
listinfo/r-help

PLEASE do read the posting guidehttp://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R console misc questions

2009-03-10 Thread Jun Shen
Sorry for the confusion. What I described is only for R in Windows.

On Tue, Mar 10, 2009 at 10:51 AM, David Winsemius dwinsem...@comcast.netwrote:

 If you are like many users of the Mac, the R GUI called R.app in your
 Applications Folder will be the way you start R. If that is the case for
 you, then you should change the console font size settings in the GUI menu
 with Format::Font::Bigger.

 As far as I can see on my Mac there is no Rconsole file (or ~/etc folder
 for that matter). The only such file that shows up on a search is on an old
 file left over from the days when I was using R on Windows.

 If you want packages loaded at the time of startup you will need to create
 an .Rprofile file. That is going to require that you get your editor to
 create a system file (i.e., one that starts with a period), which means in
 Terminal that you would need to uncheck a couple of defaults. See the R for
 Mac OS X FAQ which should be available within the GUI.

 Further questions of this sort should go to the R-SIG-Mac mailing list:

 r-sig-...@stat.math.ethz.ch


 --
 David Winsemius


 On Mar 10, 2009, at 10:29 AM, Oliver wrote:

  I don't see where I can find this 'Rconsole' file on Mac.
 The closest I can get to is /Library/Frameworks/R.Framework/etc ...
 but then there is no such file. A bit more clarification would be
 appreciated.

 Oliver

 On Mar 8, 8:25 pm, Jun Shen jun.shen...@gmail.com wrote:

 Oliver,

 Go and find the file named 'Rconsole' under ~/etc folder, then you can
 change whatever you want, the font size, color etc. The settings will be
 your default.

 For your second question, you need to set it up in Rprofile.site. Refer
 to
 the Rprofile help.

 Jun



 On Sun, Mar 8, 2009 at 11:20 AM, Oliver fwa...@gmail.com wrote:

 hi, all


  I have two questions on using R console effectively (this is on Mac,
 not sure if it applies to win platform):


  First, I'd like to make the console font bigger, the default is too
 small for my screen. There is a Show Fonts from Format menu where
 you can adjust it, but it seems only for current session. Next time I
 start R, I have to redo everything. My question is, is there any way
 to save the preference?


  Second, Package Manager show available packages, and you can click
 loaded to load it. Again, it is only for current session, how can I
 make my selection permanent?


  Thanks for help.


  Oliver


  __
 r-h...@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


 --
 Jun Shen PhD
 PK/PD Scientist
 BioPharma Services
 Millipore Corporation
 15 Research Park Dr.
 St Charles, MO 63304
 Direct: 636-720-1589

[[alternative HTML version deleted]]

 __
 r-h...@r-project.org mailing listhttps://stat.ethz.ch/mailman/
 listinfo/r-help
 PLEASE do read the posting guidehttp://
 www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


 David Winsemius, MD
 Heritage Laboratories
 West Hartford, CT




-- 
Jun Shen PhD
PK/PD Scientist
BioPharma Services
Millipore Corporation
15 Research Park Dr.
St Charles, MO 63304
Direct: 636-720-1589

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Lattice: Customizing point-sizes with groups

2009-03-10 Thread Sundar Dorai-Raj
Sorry, I missed your point the first time. Why not create a group for
each subset then?

xyplot(y ~ x, temp, groups = interaction(cex, groups),
       par.settings = list(
         superpose.symbol = list(
           cex = c(1, 2, 3, 4),
           pch = 19,
           col = c("blue", "red", "green", "purple"))))


On Tue, Mar 10, 2009 at 8:11 AM, Paul C. Boutros
paul.bout...@utoronto.ca wrote:
 Hi Sundar,

 Thanks for your help!  Unfortunately your code seems to give the same
 result.  Compare this:

 temp - data.frame(
       x = 1:10,
       y = 1:10,
       cex = rep( c(1,3), 5),
       col = c( rep(blue, 5), rep(red, 5) ),
       groups = c( rep(A, 5), rep(B, 5) )
       );

 xyplot(y ~ x, temp, groups = groups,
       par.settings = list(
         superpose.symbol = list(
           cex = c(1, 3),
           pch = 19,
           col = c(blue, red

 And this:
 xyplot(y ~ x, temp, cex = temp$cex, col = temp$col, pch = 19);

 Once I introduce groups, I lose the ability to customize individual
 data-points and seem only to be able to customize entire groups.

 Paul

 -Original Message-
 From: Sundar Dorai-Raj [mailto:sdorai...@gmail.com]
 Sent: Tuesday, March 10, 2009 5:49 AM
 To: paul.bout...@utoronto.ca
 Cc: r-help@r-project.org
 Subject: Re: [R] Lattice: Customizing point-sizes with groups

 Try this:

 xyplot(y ~ x, temp, groups = groups,
       par.settings = list(
         superpose.symbol = list(
           cex = c(1, 3),
           pch = 19,
           col = c(blue, red

 See:

 str(trellis.par.get())

 for other settings you might want to change.

 Also, you should drop the ; from all your scripts.

 HTH,

 --sundar

 On Mon, Mar 9, 2009 at 6:49 PM, Paul Boutros paul.bout...@utoronto.ca
 wrote:
 Hello,

 I am creating a scatter-plot in lattice, and I would like to customize the
 size of each point so that some points are larger and others smaller.
  Here's a toy example:

 library(lattice);

 temp <- data.frame(
        x = 1:10,
        y = 1:10,
        cex = rep( c(1,3), 5),
        groups = c( rep("A", 5), rep("B", 5) )
        );

 xyplot(y ~ x, temp, cex = temp$cex, pch = 19);

 This works just fine if I create a straight xy-plot, without groups.
  However when I introduce groupings the cex argument specifies the
 point-size for the entire group.  For example:

 xyplot(y ~ x, temp, cex = temp$cex, pch = 19, group = groups);

 Is it possible to combine per-spot sizing with groups in some way?  One
 work-around is to manually specify all graphical parameters, but I thought
 there might be a better way than this:

 temp$col <- rep("blue", 10);
 temp$col[temp$groups == "B"] <- "red";
 xyplot(y ~ x, temp, cex = temp$cex, pch = 19, col = temp$col);

 Any suggestions/advice is much appreciated!
 Paul

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] File permissions

2009-03-10 Thread ig2ar-saf1

Hello fellow R-ists,

How do I change file permissions?

I know that file.access can display permission information but how do I SET 
those permissions?

Thank you

Your culpritNr1




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] (no subject)

2009-03-10 Thread ARNAUD_MOSNIER

Dear R users,

I have a table with the following form

STATION    X      Y
1          -70    30
1          -70    30
1          -70    30
2          -72    29
2          -72    29
2          -72    29
2          -72    29

I want to extract the unique values for those columns ... I am sure it is very
simple, but I cannot find the correct way!

I want to obtain something like

STATION    X      Y
1          -70    30
2          -72    29

Thanks !!!

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] reliability, scale scores in the psych package

2009-03-10 Thread Ken Knoblauch
Doran, Harold HDoran at air.org writes:

 
 Ista
 
 There are several functions in the MiscPsycho package that can be sued
 for classical item analysis. 
 

Since when is classical item analysis a crime?

No wonder the USA is considered such a litigious society!

Ken

-- 
Ken Knoblauch
Inserm U846
Stem-cell and Brain Research Institute
Department of Integrative Neurosciences
18 avenue du Doyen Lépine
69500 Bron
France
tel: +33 (0)4 72 91 34 77
fax: +33 (0)4 72 91 34 61
portable: +33 (0)6 84 10 64 10
http://www.sbri.fr/members/kenneth-knoblauch.html

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] using chm help files under linux

2009-03-10 Thread Prof Brian Ripley

On Tue, 10 Mar 2009, Jose Quesada wrote:


Hi,

Chm (compiled help) is a microsoft invention. It's the default help
system under windows, but not so under linux.
I found that (at times) I like better how chm help looks.
Since there are chm viewers under linux, using chm help files should
be possible.


You don't just need a viewer, you need a help compiler.  We were 
unable to find a complete one other than from Microsoft, and the 
latter seems no longer to be under development (and has a serious 
security alert on it).  So the future of CHM help even for R under 
Windows is uncertain.



Has anybody tried to set R so it opens chm by default? I'm sure
there's some flag or Rprofile var that could get this done.


There is for R for Windows, if that is what you are asking.  And you
can run Windows R under WINE.


Personally, I like the Mac OS X compiled help system better than the 
Windows one, but we do not provide that even on Mac OS X.  And these 
things *are* a matter of personal preference.



Thanks,

--
-Jose
--
Jose Quesada, PhD
http://josequesada.name



--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] File permissions

2009-03-10 Thread Prof Brian Ripley

On Tue, 10 Mar 2009, ig2ar-s...@yahoo.co.uk wrote:



Hello fellow R-ists,

How do I change file permissions?

I know that file.access can display permission information but how 
do I SET those permissions?


Well, file.info is better at displaying permission information, and
Sys.chmod sets them in the format used by file.info.
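
A small sketch of the round trip (mode strings are octal, as in file.info()$mode):

f <- tempfile()
file.create(f)
Sys.chmod(f, mode = "0644")   # rw-r--r--
file.info(f)$mode             # shows the octal mode just set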



Thank you

Your culpritNr1


--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] popular R packages

2009-03-10 Thread Max Kuhn
If it is easy to get the download numbers, we should do it and deal with
the interpretation issues. I'd like to know the numbers so I can
understand which (of my) packages have the most usage.

One other complication about # downloads: I suspect that a package
being on the depends/suggests/imports list of another package might be
a big driver of how many times it gets downloaded.

If I remember correctly, about 5 years ago Bioconductor asked for
volunteers to review packages to get detailed, specific feedback from
people who use the package (and should be fairly R proficient). I
think that this is pretty important, and something like Crantastic is a
good interface. I personally got a lot out of the comments that a JSS
reviewer had for a package.

-- 

Max

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] (no subject)

2009-03-10 Thread Usuario R
Hi Arnaud, it is very simple:


 unique( datos)
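
For instance, with made-up data shaped like the table below (a sketch; the flattened rows are read as STATION, X, Y):

datos <- data.frame(STATION = c(1, 1, 1, 2, 2, 2, 2),
                    X = c(-70, -70, -70, -72, -72, -72, -72),
                    Y = c( 30,  30,  30,  29,  29,  29,  29))
unique(datos)   # keeps one copy of each distinct row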


Regards


2009/3/10 arnaud_mosn...@uqar.qc.ca


 Dear R users,

 I have a table with the following form

 STATION  X   Y
 1-7030
 1-7030
 1-7030
 2-7229
 2-7229
 2-7229
 2-7229

 How want to extract unique value for those columns ... I am sure it is very
 simple, but I can not achieve to find the correct way !

 I want to obtain something like

 STATIONX Y
 1-7030
 2-7229

 Thanks !!!

[[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Re : (no subject)

2009-03-10 Thread justin bem
see ?unique

 Justin BEM
BP 1917 Yaoundé
Tél (237) 99597295
(237) 22040246





De : arnaud_mosn...@uqar.qc.ca arnaud_mosn...@uqar.qc.ca
À : r-help@r-project.org
Envoyé le : Mardi, 10 Mars 2009, 17h15mn 57s
Objet : [R] (no subject)


Dear R users,

I have a table with the following form

STATION  X   Y
1-7030
1-7030
1-7030
2-7229
2-7229
2-7229
2-7229

I want to extract the unique values for those columns ... I am sure it is
very simple, but I cannot find the correct way!

I want to obtain something like

STATIONX Y
1-7030
2-7229

Thanks !!!

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Lattice: Customizing point-sizes with groups

2009-03-10 Thread Paul C. Boutros
Hi Sundar,

Thanks for your help!  Unfortunately your code seems to give the same
result.  Compare this:

temp <- data.frame(
    x = 1:10,
    y = 1:10,
    cex = rep( c(1,3), 5),
    col = c( rep("blue", 5), rep("red", 5) ),
    groups = c( rep("A", 5), rep("B", 5) )
    );

xyplot(y ~ x, temp, groups = groups,
    par.settings = list(
      superpose.symbol = list(
        cex = c(1, 3),
        pch = 19,
        col = c("blue", "red"))))

And this:
xyplot(y ~ x, temp, cex = temp$cex, col = temp$col, pch = 19);

Once I introduce groups, I lose the ability to customize individual
data-points and seem only to be able to customize entire groups.

Paul

-Original Message-
From: Sundar Dorai-Raj [mailto:sdorai...@gmail.com] 
Sent: Tuesday, March 10, 2009 5:49 AM
To: paul.bout...@utoronto.ca
Cc: r-help@r-project.org
Subject: Re: [R] Lattice: Customizing point-sizes with groups

Try this:

xyplot(y ~ x, temp, groups = groups,
   par.settings = list(
 superpose.symbol = list(
   cex = c(1, 3),
   pch = 19,
   col = c(blue, red

See:

str(trellis.par.get())

for other settings you might want to change.

Also, you should drop the ; from all your scripts.

HTH,

--sundar

On Mon, Mar 9, 2009 at 6:49 PM, Paul Boutros paul.bout...@utoronto.ca
wrote:
 Hello,

 I am creating a scatter-plot in lattice, and I would like to customize the
 size of each point so that some points are larger and others smaller.
  Here's a toy example:

 library(lattice);

 temp - data.frame(
        x = 1:10,
        y = 1:10,
        cex = rep( c(1,3), 5),
        groups = c( rep(A, 5), rep(B, 5) )
        );

 xyplot(y ~ x, temp, cex = temp$cex, pch = 19);

 This works just fine if I create a straight xy-plot, without groups.
  However when I introduce groupings the cex argument specifies the
 point-size for the entire group.  For example:

 xyplot(y ~ x, temp, cex = temp$cex, pch = 19, group = groups);

 Is it possible to combine per-spot sizing with groups in some way?  One
 work-around is to manually specify all graphical parameters, but I thought
 there might be a better way than this:

 temp$col - rep(blue, 10);
 temp$col[temp$groups == B] - red;
 xyplot(y ~ x, temp, cex = temp$cex, pch = 19, col = temp$col);

 Any suggestions/advice is much appreciated!
 Paul

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] nonmetric clustering

2009-03-10 Thread Roberta Carabalona
Hi all,
does anybody know where it is possible to find the Riffle package?

Thank you
R


  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Help installing Kernlab: cannot find -lgfortran

2009-03-10 Thread azege






I am trying to install the package kernlab on a Linux machine. After downloading
and unpacking, the installation goes through a number of C code compiles and ends
with code linking, where an error is generated as follows:


g++ -shared -Bdirect,--hash-stype=both,-Wl,-O1 -o kernlab.so brweight.o 
ctable.o cweight.o dbreakpt.o dcauchy.o dgpnrm.o dgpstep.o dprecond.o dprsrch.o 
dspcg.o dtron.o dtrpcg.o dtrqsol.o esa.o expdecayweight.o inductionsort.o 
kspectrumweight.o lcp.o misc.o msufsort.o solvebqp.o stringk.o stringkernel.o 
svm.o wkasailcp.o wmsufsort.o -L/usr/lib64/R/lib -lRblas -lgfortran -lm 
-L/usr/lib64/R/lib -lRlapack  -L/usr/lib64/R/lib -lR


/usr/bin/ld: cannot find -lgfortran
collect2: ld returned 1 exit status

I suppose the linker cannot find libgfortran, which is in /usr/lib64/ in my case.
Does anyone know where I can specify this, or how I pass this info to R? I suppose
there must be some UNIX environment variable or argument to install.packages.
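
A hedged guess at a fix (an assumption, not verified on this system): this error often means that only the runtime library /usr/lib64/libgfortran.so.1 is installed and the unversioned libgfortran.so link that -lgfortran looks for is missing. Installing the gfortran/libgfortran development package, or creating the link, usually clears it:

## sketch only -- check the actual file name under /usr/lib64 first, and
## run with permission to write there (or create the link as root in a shell)
file.symlink("/usr/lib64/libgfortran.so.1", "/usr/lib64/libgfortran.so")
install.packages("kernlab")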

Thanks, 
Andre

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



Re: [R] reliability, scale scores in the psych package

2009-03-10 Thread William Revelle

Ista,
  As you figured out, psych reverses items by subtracting from the
maximum + minimum possible for each item (i.e., for items going
from 1 to 4, it reverses items by subtracting from 5).


If all of the items have the same potential range, then you can just
let it figure out the range by itself.  If they differ in their
ranges (some items are 0-1 items, some are 1-9 items, etc.), then
you need to give it the maximum and minimum vectors to use.


The min and max are figured out from all the items used in an 
inventory, rather than just the items used in a particular scale. 
This makes particular sense when you are scoring multiple scales from 
the same inventory.
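
A toy sketch of that point (the keys matrix and the min/max arguments are as I understand them from this thread; treat the exact call as an assumption):

library(psych)
## two 1-5 items; item 2 is keyed negatively, so it is scored as 6 - item2
d    <- data.frame(item1 = c(1, 4, 5), item2 = c(5, 2, 1))
keys <- matrix(c(1, -1), ncol = 1, dimnames = list(names(d), "scale"))
score.items(keys, d, min = 1, max = 5)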


In answer to your first question (what packages do I tend to use for 
scale construction?), the answer is that I tend to use the psych 
package for basic analysis, and then the sem package for structural 
equation analysis.


Bill





At 10:45 AM -0400 3/10/09, Ista Zahn wrote:

snip

 Second question: I spent some time with the psych package trying to
 figure out how to use the score.items() function, and it's become
 clear to me that I don't understand what it's doing. I assumed that
 setting a key equal to -1 would result in the item being reverse
 scored, but I get weird results, as shown below. When I try to reverse
 score (by setting a value of -1 in the key), I get scale scores that
 don't add up (e.g., the mean score is reported as being larger than
 the maximum item score). How is the score.items() function intended to
 be used? Do I need to reverse score items before using score.items()?


I did it again--it seems like I always figure out the answer just
after I ask for help. The score.items() function needs to know the
maximum of the scale in order to reverse score. For some reason, the
maximum appears to be calculated from all the scores, not just scores
that have a 1 or a -1 in the key. On a hunch I set the max argument to
a vector of scale maxima, and it worked. I'm still interested in
responses to question 1 though.

Thanks again,
Ista

snip



--
William Revelle http://personality-project.org/revelle.html
Professor   http://personality-project.org/personality.html
Department of Psychology http://www.wcas.northwestern.edu/psych/
Northwestern University http://www.northwestern.edu/
Use R for psychology   http://personality-project.org/r

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Lattice: Customizing point-sizes with groups

2009-03-10 Thread Paul C. Boutros
Yup, that would be my work-around.

I was hoping for a cleaner way of doing this, though, because I am
calculating cex based on other properties of the data-points, so that it
becomes a continuous variable.
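
For what it is worth, a sketch of one way to keep cex continuous (not from this thread; it indexes the data frame inside a custom panel function, assumes temp is visible there, and bypasses the automatic group legend):

library(lattice)
temp <- data.frame(x = 1:10, y = 1:10,
                   cex    = rep(c(1, 3), 5),
                   groups = rep(c("A", "B"), each = 5))

xyplot(y ~ x, data = temp, pch = 19,
       panel = function(x, y, subscripts, ...) {
         cols <- c(A = "blue", B = "red")
         panel.xyplot(x, y,
                      cex = temp$cex[subscripts],                        # per-point size
                      col = cols[as.character(temp$groups[subscripts])], # per-point colour
                      ...)
       })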

-Original Message-
From: Sundar Dorai-Raj [mailto:sdorai...@gmail.com] 
Sent: Tuesday, March 10, 2009 12:01 PM
To: Paul C. Boutros
Cc: r-help@r-project.org
Subject: Re: [R] Lattice: Customizing point-sizes with groups

Sorry, I missed your point the first time. Why not create a group for
each subset then?

xyplot(y ~ x, temp, groups = interaction(cex, groups),
   par.settings = list(
 superpose.symbol = list(
   cex = c(1, 2, 3, 4),
   pch = 19,
   col = c(blue, red, green, purple


On Tue, Mar 10, 2009 at 8:11 AM, Paul C. Boutros
paul.bout...@utoronto.ca wrote:
 Hi Sundar,

 Thanks for your help!  Unfortunately your code seems to give the same
 result.  Compare this:

 temp - data.frame(
   x = 1:10,
   y = 1:10,
   cex = rep( c(1,3), 5),
   col = c( rep(blue, 5), rep(red, 5) ),
   groups = c( rep(A, 5), rep(B, 5) )
   );

 xyplot(y ~ x, temp, groups = groups,
   par.settings = list(
 superpose.symbol = list(
   cex = c(1, 3),
   pch = 19,
   col = c(blue, red

 And this:
 xyplot(y ~ x, temp, cex = temp$cex, col = temp$col, pch = 19);

 Once I introduce groups, I lose the ability to customize individual
 data-points and seem only to be able to customize entire groups.

 Paul

 -Original Message-
 From: Sundar Dorai-Raj [mailto:sdorai...@gmail.com]
 Sent: Tuesday, March 10, 2009 5:49 AM
 To: paul.bout...@utoronto.ca
 Cc: r-help@r-project.org
 Subject: Re: [R] Lattice: Customizing point-sizes with groups

 Try this:

 xyplot(y ~ x, temp, groups = groups,
   par.settings = list(
 superpose.symbol = list(
   cex = c(1, 3),
   pch = 19,
   col = c(blue, red

 See:

 str(trellis.par.get())

 for other settings you might want to change.

 Also, you should drop the ; from all your scripts.

 HTH,

 --sundar

 On Mon, Mar 9, 2009 at 6:49 PM, Paul Boutros paul.bout...@utoronto.ca
 wrote:
 Hello,

 I am creating a scatter-plot in lattice, and I would like to customize
the
 size of each point so that some points are larger and others smaller.
  Here's a toy example:

 library(lattice);

 temp - data.frame(
x = 1:10,
y = 1:10,
cex = rep( c(1,3), 5),
groups = c( rep(A, 5), rep(B, 5) )
);

 xyplot(y ~ x, temp, cex = temp$cex, pch = 19);

 This works just fine if I create a straight xy-plot, without groups.
  However when I introduce groupings the cex argument specifies the
 point-size for the entire group.  For example:

 xyplot(y ~ x, temp, cex = temp$cex, pch = 19, group = groups);

 Is it possible to combine per-spot sizing with groups in some way?  One
 work-around is to manually specify all graphical parameters, but I
thought
 there might be a better way than this:

 temp$col - rep(blue, 10);
 temp$col[temp$groups == B] - red;
 xyplot(y ~ x, temp, cex = temp$cex, pch = 19, col = temp$col);

 Any suggestions/advice is much appreciated!
 Paul

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How to stop loop inside status ? ( haplo.stats package )

2009-03-10 Thread Nash
How do I stop haplo.score() when it gets stuck in an internal loop?

require(haplo.stats)

## normal status
y = rep(c(0,1), each=50)
geno = as.data.frame(matrix(sample(c("A","G","T","C"), 600, replace=T), 100, 6))

hs <- haplo.score(y, geno, trait.type="binomial", offset = NA, x.adj = NA,
                  min.count=5,
                  locus.label=NA, miss.val=c(0,NA), haplo.effect="additive",
                  eps.svd=1e-5, simulate=TRUE,
                  sim.control=score.sim.control(min.sim=200, max.sim=500))

hs$score.global.p.sim

## this one gets stuck in an internal loop, and I can't stop it!
geno = as.data.frame(matrix("G", 100, 6))
hs <- haplo.score(y, geno, trait.type="binomial", offset = NA, x.adj = NA,
                  min.count=5,
                  locus.label=NA, miss.val=c(0,NA), haplo.effect="additive",
                  eps.svd=1e-5, simulate=TRUE,
                  sim.control=score.sim.control(min.sim=200, max.sim=500))

hs$score.global.p.sim

Again: how do I stop it when it gets stuck like this?

--
Nash - morri...@ibms.sinica.edu.tw

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] File permissions

2009-03-10 Thread culpritNr1

Great. I tried chmod (which does not exist) but I didn't know that there was
Sys.chmod.

Thank you.



Prof Brian Ripley wrote:
 
 On Tue, 10 Mar 2009, ig2ar-s...@yahoo.co.uk wrote:
 

 Hello fellow R-ists,

 How do I change file permissions?

 I know that file.access can display permission information but how 
 do I SET those permissions?
 
 Well, file.info is better are displaying permission information, and 
 Sys.chmod set them in the format used by file.info.
 
 Thank you

 Your culpritNr1
 
 -- 
 Brian D. Ripley,  rip...@stats.ox.ac.uk
 Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
 University of Oxford, Tel:  +44 1865 272861 (self)
 1 South Parks Road, +44 1865 272866 (PA)
 Oxford OX1 3TG, UKFax:  +44 1865 272595
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
 
 

-- 
View this message in context: 
http://www.nabble.com/File-permissions-tp22437684p22439004.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Plots of different aspect ratios on one page, base aligned(trellis.print)

2009-03-10 Thread Saptarshi Guha
Hello,
I have an example of a 2 paneled plot, with two different aspect
ratios displayed on one page.
An example would help

n = 20
x1 <- cumsum(runif(n))
x2 <- cumsum(runif(n))
d <- data.frame(val = c(x1, x2), id = c(1:n, 1:n), nt = c(rep("A", n), rep("B", n)))
u1 <- xyplot(val ~ id | nt, data = d, aspect = 1, layout = c(1, 2))
u2 <- xyplot(val ~ id | nt, data = d, aspect = 0.5, layout = c(1, 2))
postscript("~/k.ps", colormodel = "rgb", paper = "letter", horiz = T)
print(u1, position = c(0, 0, 1/3, 1), more = T, newpage = T)
print(u2, position = c(1/3, 0, 1, 1), more = F, newpage = F)
dev.off()


The two figures are not base aligned. I would like them to share the same
baseline and the same height; if necessary the paper width and height
can be adjusted
(I tried setting the paper width and height, to no avail).

Is there a way to base-align the two figures? Do I have to go down to the grid level?

Regards
Saptarshi Guha

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] popular R packages

2009-03-10 Thread Dylan Beaudette
On Tuesday 10 March 2009, Frank E Harrell Jr wrote:
 Gabor Grothendieck wrote:
  On Tue, Mar 10, 2009 at 6:14 AM, Jim Lemon j...@bitwrit.com.au wrote:
  Gabor Grothendieck wrote:
  R-Forge already has this but I don't think its used much.  R-Forge
  does allow authors to opt out which seems sensible lest it deter
  potential authors from submitting packages.
 
  I think objective quality metrics are better than ratings, e.g. does
  package
  have a vignette, has package had a release within the last year,
  does package have free software license, etc.  That would have
  the advantage that authors might react to increase their package's
  quality assessment resulting in an overall improvement in quality on
  CRAN that would result in more of a pro-active cycle whereas ratings
  are reactive
  and don't really encourage improvement.
 
  I beg to offer an alternative assessment of quality. Do users download
  the package and find it useful? If so, they are likely to download it
  again when it is updated.
 
  I was referring to motivating authors, not users, so that CRAN improves.
 
  Much as I appreciate the convenience of vignettes, regular
  updates and the absolute latest GPL license, a perfectly dud package can
  have all of these things. If a package is downloaded upon first release
  and
 
  These are nothing but the usual  FUD against quality improvement, i.e.
  the quality metrics are not measuring what you want but the fact is that
  quality metrics can work and have had huge successes.  Also I think
  objective measures would be more accepted by authors than ratings. No one
  is going to be put off that their package has no vignette when obviously
  it doesn't and the authors are free to add one and instantly improve
  their package's rating.
 
  not much thereafter, the maintainer might be motivated to attend to its
  shortcomings of utility rather than incrementing the version number
  every month or so. Downloads, as many have pointed out, are not a direct
  assessment of quality, but if I saw a package that just kept getting
  downloaded, version after version, I would be much more likely to check
  it out myself and perhaps even write a review for Hadley's neat site.
  Which I will try to do tonight.
 
  I was arguing for objective metrics rather than ratings. Downloading is
  not a rating but is objective although there are measurement problems as
  has been pointed out.  Also, the worst feature is that it does not react
  to changes in quality very quickly making it anti-motivating.

 Gabor I think your approach will have more payoff in the long run.  I
 would suggest one other metric: the number of lines of code in the
 'examples' section of all the package's help files.

 Frank

Absolutely. From the perspective of a user, not an expert, packages with a 
good vignette and lots of examples are by far my favorite and most used.

Dylan

-- 
Dylan Beaudette
Soil Resource Laboratory
http://casoilresource.lawr.ucdavis.edu/
University of California at Davis
530.754.7341

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] popular R packages

2009-03-10 Thread Ajay ohri
Pricing each download at 99 cents (the same as a song from iTunes) would
measure users more accurately.
That's my 2 cents anyway.

On Tue, Mar 10, 2009 at 9:54 PM, Max Kuhn mxk...@gmail.com wrote:

 If is easy to get the download numbers, we should do it and deal with
 the interpretation issues. I'd like to know the numbers so I can
 understand which (of my) packages have the most usage.

 One other compication about # downloads: I suspect that a package
 being on teh depends/suggests/imports list of another package might be
 a big driver with respect to how many times that it was downloaded.

 If I remember correctly, about 5 years ago Bioconductor asked for
 volunteers to review packages to get detailed, specific feedback by
 people who use the package (and should be fairly R proficient). I
 think that this is pretty important and something like Crantastic is a
 good interface. I personally got a lot out of the comments the a JSS
 reviewer had for a package.

 --

 Max

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How to color certain area under curve

2009-03-10 Thread guox
For a given random variable rv, for instance, rv = rnorm(1000),
I plot its density curve and calculate some quantiles:
plot(density(rv))
P10P50P90 = quantile(rv, probs = c(10,50,90)/100)
I would like to color the area between P10 and P90 and under the curve
and mark the P50 on the curve.

 rv = rnorm(1000)
 plot(density(rv))
 P10P50P90 = quantile(rv, probs = c(10,50,90)/100)

Could you please teach me how to do these using R?
Thanks,
-james
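
One way to do it with base graphics (a sketch using the quantiles computed above):

set.seed(1)
rv <- rnorm(1000)
d  <- density(rv)
q  <- quantile(rv, probs = c(0.1, 0.5, 0.9))

plot(d)
keep <- d$x >= q[1] & d$x <= q[3]
polygon(c(q[1], d$x[keep], q[3]), c(0, d$y[keep], 0),
        col = "grey80", border = NA)                      # area between P10 and P90
lines(d)                                                  # redraw the curve on top
points(q[2], approx(d$x, d$y, xout = q[2])$y, pch = 19)   # mark P50 on the curve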

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] help structuring mixed model using lmer()

2009-03-10 Thread Douglas Bates
On Tue, Mar 10, 2009 at 10:15 AM, Simon Pickett simon.pick...@bto.org wrote:

 This is partly a statistical question as well as a question about R, but I am 
 stumped!

 I have count data from various sites across years. (Not all of the sites in 
 the study appear in all years). Each site has its own habitat score habitat 
 that remains constant across all years.

 I want to know if counts declined faster on sites with high habitat scores.

 I can construct a model that tests for the effect of habitat as a main 
 effect, controlling for year

 model1 <- lmer(count ~ habitat + yr + (1 | site), family = quasibinomial, data = m)
 model2 <- lmer(count ~ yr + (1 | site), family = quasibinomial, data = m)
 anova(model1,model2)

I'm curious as to why you use the quasibinomial family for count data.
 When you say count data do you mean just presence/absence or an
actual count of the number present?  Generally the binomial and
quasibinomial families are used when you have a binary response, and
the poisson or quasipoisson family are used for responses that are
counts.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] (no subject)

2009-03-10 Thread Shuying Yang

Dear Members,

 

I have a question about using R2WinBUGS to obtain the WinBUGS results. 

 

By default, when R2WinBUGS returns summary stats, I get the mean, sd, 2.5%, 25%,
median, 75% and 97.5% quantiles.  Could anyone tell me how to modify the code to
obtain 5% and 95% summary results?

 

Many thanks

 

Alice
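
One route that needs no change to the WinBUGS call (a sketch; fit is a placeholder for the object returned by bugs()): compute the extra quantiles from the stored simulations.

## all saved draws are kept in fit$sims.matrix, one column per monitored quantity
apply(fit$sims.matrix, 2, quantile, probs = c(0.05, 0.95))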

 

 

_
[[elided Hotmail spam]]

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Alternative to interp.surface() offered

2009-03-10 Thread Waichler, Scott R
I wanted a simple function for bilinear interpolation on a 2-D grid, and
interp.surface() in the fields package didn't quite suit my needs.  In
particular, it requires uniform spacing between grid points.  It also
didn't have the visual reference frame I was looking for.  Here is an
alternative function, followed by an example.

# A function for bilinear interpolation on a 2-d grid, based on
# interp.surface() from the fields package and code by Steve Koehler.
# The points of the grid do not have to be uniformly spaced.  Looking at the 2-d
# grid in plan view, the origin is at upper left, so the y (row) index increases
# downward.  findInterval() is used to locate the new coordinates on the grid.
#
# Scott Waichler, scott.waich...@pnl.gov, 03/10/09.

my.interp.surface <- function(obj, loc) {
  # obj is a surface object like the list for contour or image.
  # loc is a matrix of (x, y) locations
  x <- obj$x
  y <- obj$y
  x.new <- loc[,1]
  y.new <- loc[,2]
  z <- obj$z

  ind.x <- findInterval(x.new, x, all.inside=T)
  ind.y <- findInterval(y.new, y, all.inside=T)

  ex <- (x.new - x[ind.x]) / (x[ind.x + 1] - x[ind.x])
  ey <- (y.new - y[ind.y]) / (y[ind.y + 1] - y[ind.y])

  # set weights for out-of-bounds locations to NA
  ex[ex < 0 | ex > 1] <- NA
  ey[ey < 0 | ey > 1] <- NA

  return(
    z[cbind(ind.y,     ind.x    )] * (1 - ex) * (1 - ey) +  # upper left
    z[cbind(ind.y + 1, ind.x    )] * (1 - ex) * ey       +  # lower left
    z[cbind(ind.y + 1, ind.x + 1)] * ex       * ey       +  # lower right
    z[cbind(ind.y,     ind.x + 1)] * ex       * (1 - ey)    # upper right
  )
}

## # An example.
## # z matrix, y index increasing downwards
## #   4 5 6 7 8
## #   3 4 5 6 7
## #   2 3 4 5 6
## #   1 2 3 4 5
## z.vec <- c(4,5,6,7,8,3,4,5,6,7,2,3,4,5,6,1,2,3,4,5)  # read in the data for the matrix
## x.mat <- 1:5                      # x coordinates of the z values
## y.mat <- seq(100, 400, by=100)    # y coordinates of the z values
## obj <- list(x = x.mat, y = y.mat, z = matrix(z.vec, ncol=5, byrow=T))  # grid you want to interpolate on
## x.out <- round(runif(6, min = min(x.mat), max = max(x.mat)), 2)  # x for points you want to interpolate to
## y.out <- round(runif(6, min = min(y.mat), max = max(y.mat)), 2)  # y for points you want to interpolate to
## loc <- cbind(x.out, y.out)
## z.out <- my.interp.surface(obj, loc)
## cat(file="", "x.out = ", loc[,1], "\n", "y.out = ", loc[,2], "\n", "z.out = ", round(z.out, 2), "\n")

Regards,
Scott Waichler
Pacific Northwest National Laboratory
Richland, WA   99352USA
scott.waich...@pnl.gov

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Changing factor to numeric

2009-03-10 Thread Tal Galili
By the way, it could be that one of your numbers has a space in it, in
which case R tends to turn the entire vector into a factor. Try opening the
file in a spreadsheet like Excel, do a search-and-replace of " " (space) with "" (nothing),
and see how many it catches.
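
Once the column is clean, the usual idiom for the conversion itself (a small sketch; as.numeric() on the factor returns the internal level codes, which is what you are seeing):

f <- factor(c("5.735", "4.759"))
as.numeric(f)                  # 2 1  -- the internal level codes, not the values
as.numeric(as.character(f))    # 5.735 4.759
as.numeric(levels(f))[f]       # same values; only the levels get converted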


Tal





On Tue, Mar 10, 2009 at 7:25 AM, ojal john owino 
ojal.johnow...@googlemail.com wrote:

 Dear Users,
 I have a variable in my dataset which is of type factor. But it actually
 contains numeric entries which look like 5.735 and 4.759. This is because the
 data was read from a CSV file into R and this variable contained other
 characters which were not numeric. I have now dropped the records with the
 characters which are not numeric for this variable and want to change it to
 numeric storage type.

 I have tried using as.numeric() function but it changes the values in the
 variable to what I think are the ranks of the individual values of the
 varible in the dataset. For example if 5.735 is the current content in the
 field, then the new object created by as.numeric will contain a value like
 680 if the 5.735 was the highest value for the varible and the dataset had
 680 records.


 How can I change the storage type without changing the contents of this
 variable in this case?

 Thanks for your consideration.



 --
 Ojal John Owino
 P.O Box 230-80108
 Kilifi, Kenya.
 Mobile:+254 728 095 710

[[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
--


My contact information:
Tal Galili
Phone number: 972-50-3373767
FaceBook: Tal Galili
My Blogs:
www.talgalili.com
www.biostatistics.co.il

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] a general question

2009-03-10 Thread culpritNr1

Hello Bogdan,

Put in those terms, option b looks more defensible. It sounds like a test of
two proportions, sometimes called a z-test. The problem is that, for that test
to be used, you must be sampling from a large population.

You know that with regular ChIP-seq sequencing we are lucky if we get 10
reads in a particular region of interest. So we are sampling from a population
that is itself small, and the z-test does not look applicable. Google for
z-test and you'll see the conditions.

What to do then?

Well, forget comparing signal to background and, instead, directly compare the
number of reads in the experiment versus the number of reads in the control.
Actually, you simplify your algorithm by not having to define (and defend)
an arbitrary area to call background.

An example?

Sure! Check out "PeakSeq enables systematic scoring of ChIP-seq experiments
relative to controls", Joel Rozowsky, Ghia Euskirchen, Raymond K. Auerbach,
Zhengdong D. Zhang, Theodore Gibson, Robert Bjornson, Nicholas Carriero,
Michael Snyder and Mark B. Gerstein. Nature Biotechnology, 2009.

Take a look at it and let us know what you think.
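
To make the compare-directly-to-control idea concrete, a toy sketch with made-up read counts (not from any real experiment):

## 5 of 8 mapped reads fall in the region under treatment, 6 of 9 in the control
prop.test(x = c(5, 6), n = c(8, 9))            # two-proportion test; warns that counts are small
fisher.test(matrix(c(5, 3, 6, 3), nrow = 2))   # exact alternative for counts this small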

Your culpritNr1

PS: next time, please go for a more descriptive subject such as ChIP-seq.
That would help in the future when we need to go over old messages. Don't you
think?




Bogdan Tanasa wrote:
 
 Hi everyone,
 
 Although this question is more related to ChIP and ChIP-seq, it could be
 probably anchored in a more general statistical context.
 
 The question is : what method is better  to assess the significance of the
 change  in a signal (the signal can be DNA binding, for instance) given
 the
 background and 2 conditions.
 
 . condition1 (eg no treatment) :  background = 1;
 signal = 5;
 
 . condition2 (eg hormonal treatment) : background = 3;
signal = 6.
 
 The methods can be :
 
 a. substract the background : i.e. (signal_treatment -
 background_treatment)
 / (signal_no_treatment - background_no_treatment)
 
 b. calculate the fold change: i.e. (signal_treatment /
 background_treatment)
 / (signal_no_treatment / background_no_treatment)
 
 c. any other method ? i.e. (signal_treatment - signal_no_treatment)  / (
 background_treatment - background_no_treatment)
 
 Thank you very much.
 
 Bogdan
 
   [[alternative HTML version deleted]]
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
 
 

-- 
View this message in context: 
http://www.nabble.com/a-general-question-tp22382289p22440722.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] puzzled by math on date-time objects

2009-03-10 Thread William Dunlap
Your problem arises in R 2.8.1 (and 2.9.0-devel, but not 2.7.0) when
length(POSIXct object) is even, because median(POSIXct object)
passes a POSIXct object to median.default, which calls
sum() in the even length case.

  median( as.POSIXct(Sys.time()))
[1] 2009-03-10 10:28:46 PDT
  median( as.POSIXct(rep(Sys.time(),2)))
Error in Summary.POSIXct(c(1236706132.54740, 1236706132.54740), na.rm =
FALSE) :
  'sum' not defined for POSIXt objects
 traceback()
4: stop(gettextf("'%s' not defined for \"POSIXt\" objects", .Generic),
   domain = NA)
3: Summary.POSIXct(c(1236706132.54740, 1236706132.54740), na.rm = FALSE)
2: median.default(as.POSIXct(rep(Sys.time(), 2)))
1: median(as.POSIXct(rep(Sys.time(), 2)))
 version
   _
platform   i686-pc-linux-gnu
arch   i686
os linux-gnu
system i686, linux-gnu
status
major  2
minor  8.1
year   2008
month  12
day22
svn rev47281
language   R
version.string R version 2.8.1 (2008-12-22)

Bill Dunlap
TIBCO Software Inc - Spotfire Division
wdunlap tibco.com 


[R] puzzled by math on date-time objects

Denis Chabot chabotd at globetrotter.net 
Tue Mar 10 16:44:07 CET 2009
Hi,

I don't understand the following. When I create a small artificial set  
of date information in class POSIXct, I can calculate the mean and the  
median:

a = as.POSIXct(Sys.time())
a = a + 60*0:10; a

  [1] 2009-03-10 11:30:16 EDT 2009-03-10 11:31:16 EDT 2009-03-10  
11:32:16 EDT
  [4] 2009-03-10 11:33:16 EDT 2009-03-10 11:34:16 EDT 2009-03-10  
11:35:16 EDT
  [7] 2009-03-10 11:36:16 EDT 2009-03-10 11:37:16 EDT 2009-03-10  
11:38:16 EDT
[10] 2009-03-10 11:39:16 EDT 2009-03-10 11:40:16 EDT

median(a)
[1] 2009-03-10 11:35:16 EDT
mean(a)
[1] 2009-03-10 11:35:16 EDT


But for real data (for this post, a short subset is in object c)  that  
I have converted into a POSIXct object, I cannot calculate the median  
with median(), though I do get it with summary():

c
  [1] 2009-02-24 14:51:18 EST 2009-02-24 14:51:19 EST 2009-02-24  
14:51:19 EST
  [4] 2009-02-24 14:51:20 EST 2009-02-24 14:51:20 EST 2009-02-24  
14:51:21 EST
  [7] 2009-02-24 14:51:21 EST 2009-02-24 14:51:22 EST 2009-02-24  
14:51:22 EST
[10] 2009-02-24 14:51:22 EST

class(c)
[1] POSIXt  POSIXct

median(c)
Erreur dans Summary.POSIXct(c(1235505080.6, 1235505081.1), na.rm =  
FALSE) :
   'sum' not defined for POSIXt objects

One difference is that in my own date-time series, some events are  
repeated (the original data contained fractions of seconds). But then,  
why can I get a median through summary()?

summary(c)
                   Min.                 1st Qu.                  Median 
2009-02-24 14:51:18 EST 2009-02-24 14:51:19 EST 2009-02-24 14:51:20 EST 
                   Mean                 3rd Qu.                    Max. 
2009-02-24 14:51:20 EST 2009-02-24 14:51:21 EST 2009-02-24 14:51:22 EST 

Thanks in advance,


Denis Chabot

sessionInfo()
R version 2.8.1 Patched (2009-01-19 r47650)
i386-apple-darwin9.6.0

locale:
fr_CA.UTF-8/fr_CA.UTF-8/C/C/fr_CA.UTF-8/fr_CA.UTF-8

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

other attached packages:
[1] doBy_3.7 chron_2.3-30

loaded via a namespace (and not attached):
[1] Hmisc_3.5-2 cluster_1.11.12 grid_2.8.1  lattice_0.17-20  
tools_2.8.1

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] puzzled by math on date-time objects

2009-03-10 Thread William Dunlap
median.default was changed between 2.7.1 and 2.8.1 to
call sum(...)/2 instead of mean(...) and that causes
the problem for POSIXct objects (sum fails but mean
works for them).
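
Until median.default is changed again, one possible workaround (just a sketch,
not an official fix) is to do the median arithmetic on the underlying numeric
seconds and convert back:

x <- as.POSIXct(rep(Sys.time(), 2))          # even length reproduces the error
as.POSIXct(median(as.numeric(x)), origin = "1970-01-01")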

Bill Dunlap
TIBCO Software Inc - Spotfire Division
wdunlap tibco.com  

 -Original Message-
 From: William Dunlap 
 Sent: Tuesday, March 10, 2009 11:37 AM
 To: 'r-help@r-project.org'
 Subject: Re: [R] puzzled by math on date-time objects
 
 Your problem arises in R 2.8.1 (and 2.9.0-devel, but not 2.7.0) when
 length(POSIXct object) is even, because median(POSIXct object)
 passes a POSIXct object to median.default, which calls
 sum() in the even length case.
 
   median( as.POSIXct(Sys.time()))
 [1] 2009-03-10 10:28:46 PDT
   median( as.POSIXct(rep(Sys.time(),2)))
 Error in Summary.POSIXct(c(1236706132.54740, 
 1236706132.54740), na.rm = FALSE) :
   'sum' not defined for POSIXt objects
  traceback()
 4: stop(gettextf('%s' not defined for \POSIXt\ objects, .Generic),
domain = NA)
 3: Summary.POSIXct(c(1236706132.54740, 1236706132.54740), 
 na.rm = FALSE)
 2: median.default(as.POSIXct(rep(Sys.time(), 2)))
 1: median(as.POSIXct(rep(Sys.time(), 2)))
  version
_
 platform   i686-pc-linux-gnu
 arch   i686
 os linux-gnu
 system i686, linux-gnu
 status
 major  2
 minor  8.1
 year   2008
 month  12
 day22
 svn rev47281
 language   R
 version.string R version 2.8.1 (2008-12-22)
 
 Bill Dunlap
 TIBCO Software Inc - Spotfire Division
 wdunlap tibco.com 
 
 
 [R] puzzled by math on date-time objects
 
 Denis Chabot chabotd at globetrotter.net 
 Tue Mar 10 16:44:07 CET 2009
 Previous message: [R] nonmetric clustering
 Next message: [R] perform subgroup meta-analysis and create 
 forest plot   displaying subgroups
 Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
 Hi,
 
 I don't understand the following. When I create a small 
 artificial set  
 of date information in class POSIXct, I can calculate the 
 mean and the  
 median:
 
 a = as.POSIXct(Sys.time())
 a = a + 60*0:10; a
 
   [1] 2009-03-10 11:30:16 EDT 2009-03-10 11:31:16 EDT 
 2009-03-10  
 11:32:16 EDT
   [4] 2009-03-10 11:33:16 EDT 2009-03-10 11:34:16 EDT 
 2009-03-10  
 11:35:16 EDT
   [7] 2009-03-10 11:36:16 EDT 2009-03-10 11:37:16 EDT 
 2009-03-10  
 11:38:16 EDT
 [10] 2009-03-10 11:39:16 EDT 2009-03-10 11:40:16 EDT
 
 median(a)
 [1] 2009-03-10 11:35:16 EDT
 mean(a)
 [1] 2009-03-10 11:35:16 EDT
 
 
 But for real data (for this post, a short subset is in object 
 c)  that  
 I have converted into a POSIXct object, I cannot calculate 
 the median  
 with median(), though I do get it with summary():
 
 c
   [1] 2009-02-24 14:51:18 EST 2009-02-24 14:51:19 EST 
 2009-02-24  
 14:51:19 EST
   [4] 2009-02-24 14:51:20 EST 2009-02-24 14:51:20 EST 
 2009-02-24  
 14:51:21 EST
   [7] 2009-02-24 14:51:21 EST 2009-02-24 14:51:22 EST 
 2009-02-24  
 14:51:22 EST
 [10] 2009-02-24 14:51:22 EST
 
 class(c)
 [1] POSIXt  POSIXct
 
 median(c)
 Erreur dans Summary.POSIXct(c(1235505080.6, 1235505081.1), na.rm =  
 FALSE) :
'sum' not defined for POSIXt objects
 
 One difference is that in my own date-time series, some events are  
 repeated (the original data contained fractions of seconds). 
 But then,  
 why can I get a median through summary()?
 
 summary(c)
   Min.   1st  
 Qu.Median
 2009-02-24 14:51:18 EST 2009-02-24 14:51:19 EST 2009-02-24  
 14:51:20 EST
   Mean   3rd  
 Qu.  Max.
 2009-02-24 14:51:20 EST 2009-02-24 14:51:21 EST 2009-02-24  
 14:51:22 EST
 
 Thanks in advance,
 
 
 Denis Chabot
 
 sessionInfo()
 R version 2.8.1 Patched (2009-01-19 r47650)
 i386-apple-darwin9.6.0
 
 locale:
 fr_CA.UTF-8/fr_CA.UTF-8/C/C/fr_CA.UTF-8/fr_CA.UTF-8
 
 attached base packages:
 [1] stats graphics  grDevices utils datasets  methods   base
 
 other attached packages:
 [1] doBy_3.7 chron_2.3-30
 
 loaded via a namespace (and not attached):
 [1] Hmisc_3.5-2 cluster_1.11.12 grid_2.8.1  lattice_0.17-20  
 tools_2.8.1
 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] help structuring mixed model using lmer()

2009-03-10 Thread Simon Pickett

Cheers,

Actually I was using quasipoisson for my models, but for the purposes of my 
example, it doesn't really matter.


I am trying to work out a way of quantifying whether the slopes (for years) 
covary with habitat scores.


The more I think about it, the more I am convinced that it isn't possible to 
do that using a glm approach. I think I have to run separate models for each 
site, calculate the gradient, then do an lm with gradient explained by 
habitat score.


Thanks, Simon.




On Tue, Mar 10, 2009 at 10:15 AM, Simon Pickett simon.pick...@bto.org 
wrote:


This is partly a statistical question as well as a question about R, but 
I am stumped!


I have count data from various sites across years. (Not all of the sites 
in the study appear in all years). Each site has its own habitat score 
habitat that remains constant across all years.


I want to know if counts declined faster on sites with high habitat 
scores.


I can construct a model that tests for the effect of habitat as a main 
effect, controlling for year



model1-lmer(count~habitat+yr+(1|site), family=quasibinomial,data=m)
model2-lmer(count~yr+(1|site), family=quasibinomial,data=m)
anova(model1,model2)


I'm curious as to why you use the quasibinomial family for count data.
When you say count data do you mean just presence/absence or an
actual count of the number present?  Generally the binomial and
quasibinomial families are used when you have a binary response, and
the poisson or quasipoisson families are used for responses that are
counts.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] ordering

2009-03-10 Thread aaron wells

Hello, I would like to order a matrix by a specific column. For instance:

 

 test
  [,1] [,2] [,3]
 [1,]1  100   21
 [2,]23   22
 [3,]3  100   23
 [4,]4   60   24
 [5,]5   55   25
 [6,]6   45   26
 [7,]7   75   27
 [8,]8   12   28
 [9,]9   10   29
[10,]   10   22   30


 test[order(test[,2]),]
  [,1] [,2] [,3]
 [1,]23   22
 [2,]9   10   29
 [3,]8   12   28
 [4,]   10   22   30
 [5,]6   45   26
 [6,]5   55   25
 [7,]4   60   24
 [8,]7   75   27
 [9,]1  100   21
[10,]3  100   23


This works well and good in the above example matrix.  However in the matrix 
that I actually want to sort (derived from a function that I wrote) I get 
something like this:

 

 test[order(as.numeric(test[,2])),] ### First column is row.names


 f con f.1 cov f.2 minimum f.3 maximum f.4   cl
asahi* 100   *   1   * 0.1   *   2   * test
castet   * 100   *   2   * 0.1   *   5   * test
clado* 100   *   1   * 0.7   *   2   * test
aulac*  33   *   0   * 0.1   * 0.1   * test
buell*  33   *   0   * 0.1   * 0.1   * test
camlas   *  33   *   0   * 0.1   * 0.1   * test
carbig   *  33   *   1   *   1   *   1   * test
poaarc   *  67   *   0   * 0.1   * 0.1   * test
polviv   *  67   *   0   * 0.1   * 0.1   * test


 

where R interprets 100 to be the lowest value and orders increasing from there. 
 

 

 is.numeric(test[,2])
[1] FALSE
 is.double(test[,2])
[1] FALSE
 is.integer(test[,2])
[1] FALSE
 is.real(test[,2])
[1] FALSE


 

My questions are:  Why is this happening? and How do I fix  it? 

 

Thanks in advance!

 

  Aaron Wells

_


cns!503D1D86EBB2B53C!2285.entry?ocid=TXT_TAGLM_WL_UGC_Contacts_032009
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] ordering

2009-03-10 Thread Peter Alspach
Kia ora Aaron

As you have identified, test[,2] is not numeric - it is probably factor.
Your function must have made the conversion, so you may want to modify
that.  Alternatively, try:

test[order(as.numeric(as.character(test[,2]))),] 

BTW, str(test) is a good way to find out more about the structure of
your object.
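
To see the difference, here is a small self-contained sketch (the toy column
below is invented, not your data):

con <- factor(c("100", "3", "100", "33", "67"))  # numbers stored as text, then as factor
as.numeric(con)                  # 1 2 1 3 4 : internal level codes ("100" sorts first)
as.numeric(as.character(con))    # 100 3 100 33 67 : the actual values
## applied to your matrix: test[order(as.numeric(as.character(test[,2]))), ]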

HTH 

Peter Alspach




 -Original Message-
 From: r-help-boun...@r-project.org 
 [mailto:r-help-boun...@r-project.org] On Behalf Of aaron wells
 Sent: Wednesday, 11 March 2009 8:30 a.m.
 To: r-help@r-project.org
 Subject: [R] ordering
 
 
 Hello, I would like to order a matrix by a specific column. 
 For instance:
 
  
 
  test
   [,1] [,2] [,3]
  [1,]1  100   21
  [2,]23   22
  [3,]3  100   23
  [4,]4   60   24
  [5,]5   55   25
  [6,]6   45   26
  [7,]7   75   27
  [8,]8   12   28
  [9,]9   10   29
 [10,]   10   22   30
 
 
  test[order(test[,2]),]
   [,1] [,2] [,3]
  [1,]23   22
  [2,]9   10   29
  [3,]8   12   28
  [4,]   10   22   30
  [5,]6   45   26
  [6,]5   55   25
  [7,]4   60   24
  [8,]7   75   27
  [9,]1  100   21
 [10,]3  100   23
 
 
 This works well and good in the above example matrix.  
 However in the matrix that I actually want to sort (derived 
 from a function that I wrote) I get something like this:
 
  
 
  test[order(as.numeric(test[,2])),] ### First column is row.names
 
 
  f con f.1 cov f.2 minimum f.3 maximum f.4   cl
 asahi* 100   *   1   * 0.1   *   2   * test
 castet   * 100   *   2   * 0.1   *   5   * test
 clado* 100   *   1   * 0.7   *   2   * test
 aulac*  33   *   0   * 0.1   * 0.1   * test
 buell*  33   *   0   * 0.1   * 0.1   * test
 camlas   *  33   *   0   * 0.1   * 0.1   * test
 carbig   *  33   *   1   *   1   *   1   * test
 poaarc   *  67   *   0   * 0.1   * 0.1   * test
 polviv   *  67   *   0   * 0.1   * 0.1   * test
 
 
  
 
 where R interprets 100 to be the lowest value and orders 
 increasing from there.  
 
  
 
  is.numeric(test[,2])
 [1] FALSE
  is.double(test[,2])
 [1] FALSE
  is.integer(test[,2])
 [1] FALSE
  is.real(test[,2])
 [1] FALSE
 
 
  
 
 My questions are:  Why is this happening? and How do I fix  it? 
 
  
 
 Thanks in advance!
 
  
 
   Aaron Wells
 
 _
 
 
 cns!503D1D86EBB2B53C!2285.entry?ocid=TXT_TAGLM_WL_UGC_Contacts_032009
   [[alternative HTML version deleted]]
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide 
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
 

The contents of this e-mail are confidential and may be subject to legal 
privilege.
 If you are not the intended recipient you must not use, disseminate, 
distribute or
 reproduce all or any part of this e-mail or attachments.  If you have received 
this
 e-mail in error, please notify the sender and delete all material pertaining 
to this
 e-mail.  Any opinion or views expressed in this e-mail are those of the 
individual
 sender and may not represent those of The New Zealand Institute for Plant and
 Food Research Limited.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Sparse PCA in R

2009-03-10 Thread joris meys
Dear all,

I would like to perform a sparse PCA, but I didn't find any library offering
me this in R. Is there one available, or do I have to write the functions
myself?

Kind regards
Joris Meys

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to color certain area under curve

2009-03-10 Thread Matthieu Dubois
 guox at ucalgary.ca writes:

 
 For a given random variable rv, for instance, rv = rnorm(1000),
 I plot its density curve and calculate some quantiles:
 plot(density(rv))
 P10P50P90 = quantile(rv, probs = c(10,50,90)/100)
 I would like to color the area between P10 and P90 and under the curve
 and mark the P50 on the curve.
 
  rv = rnorm(1000)
  plot(density(rv))
  P10P50P90 = quantile(rv, probs = c(10,50,90)/100)
 
 Could you please teach me how to do these using R?
 Thanks,
 -james
 

see ?polygon

Here after is an example of the use of polygon to solve your problem: 
rv <- rnorm(1000)
drv <- density(rv)
plot(drv)

# further steps: 
# 1. compute quantiles
# 2. determine the x and y of the area that must be drawn
# 3. draw the area
# 4. add the median (q.5) as a vertical line
qrv <- quantile(rv, prob=c(0.1, 0.9))
select <- qrv[1] <= drv$x & drv$x <= qrv[2]
polygon(x = c(qrv[1], drv$x[select], qrv[2]), 
        y = c(0, drv$y[select], 0, col='blue')
abline(v = quantile(rv, p=0.5), lty=2)

Hope this will help. 

Matthieu

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] ordering

2009-03-10 Thread David Winsemius
A) I predict that if you apply the str function to the second test  
that you will find that con is not numeric but rather of class  
character or factor. And the second test is probably not a matrix but  
rather a dataframe. Matrices in R need to have all their elements of  
the same class.


B) Read the FAQ ... find the one that tells you how to convert factors  
to numeric.
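
For reference, the conversion that FAQ entry recommends looks like this (toy
factor, not the poster's data):

f <- factor(c("100", "33", "67"))
as.numeric(f)              # 1 2 3 : the internal codes, not the values
as.numeric(levels(f))[f]   # 100 33 67 : index the character levels by the factor, then coerce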


--
David Winsemius

On Mar 10, 2009, at 3:29 PM, aaron wells wrote:



Hello, I would like to order a matrix by a specific column. For  
instance:





test

 [,1] [,2] [,3]
[1,]1  100   21
[2,]23   22
[3,]3  100   23
[4,]4   60   24
[5,]5   55   25
[6,]6   45   26
[7,]7   75   27
[8,]8   12   28
[9,]9   10   29
[10,]   10   22   30




test[order(test[,2]),]
 [,1] [,2] [,3]
[1,]23   22
[2,]9   10   29
[3,]8   12   28
[4,]   10   22   30
[5,]6   45   26
[6,]5   55   25
[7,]4   60   24
[8,]7   75   27
[9,]1  100   21
[10,]3  100   23


This works well and good in the above example matrix.  However in  
the matrix that I actually want to sort (derived from a function  
that I wrote) I get something like this:





test[order(as.numeric(test[,2])),] ### First column is row.names



f con f.1 cov f.2 minimum f.3 maximum f.4   cl
asahi* 100   *   1   * 0.1   *   2   * test
castet   * 100   *   2   * 0.1   *   5   * test
clado* 100   *   1   * 0.7   *   2   * test
aulac*  33   *   0   * 0.1   * 0.1   * test
buell*  33   *   0   * 0.1   * 0.1   * test
camlas   *  33   *   0   * 0.1   * 0.1   * test
carbig   *  33   *   1   *   1   *   1   * test
poaarc   *  67   *   0   * 0.1   * 0.1   * test
polviv   *  67   *   0   * 0.1   * 0.1   * test




where R interprets 100 to be the lowest value and orders increasing  
from there.





is.numeric(test[,2])

[1] FALSE

is.double(test[,2])

[1] FALSE

is.integer(test[,2])

[1] FALSE

is.real(test[,2])

[1] FALSE




My questions are:  Why is this happening? and How do I fix  it?



Thanks in advance!



 Aaron Wells

_


cns!503D1D86EBB2B53C!2285.entry?ocid=TXT_TAGLM_WL_UGC_Contacts_032009
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] ordering

2009-03-10 Thread aaron wells

Thanks Peter, that did the trick.  I'll modify my function so that the numeric 
conversion is done automatically, thus saving me the extra step of converting 
later on.

 

  Aaron Wells
 
 Subject: RE: [R] ordering
 Date: Wed, 11 Mar 2009 08:41:50 +1300
 From: palsp...@hortresearch.co.nz
 To: awell...@hotmail.com; r-help@r-project.org
 
 Kia ora Aaron
 
 As you have identified, test[,2] is not numeric - it is probably factor.
 Your function must have made the conversion, so you may want to modify
 that. Alternative, try:
 
 test[order(as.numeric(as.character(test[,2]))),] 
 
 BTW, str(test) is a good way to find out more about the structure of
 your object.
 
 HTH 
 
 Peter Alspach
 
 
 
 
  -Original Message-
  From: r-help-boun...@r-project.org 
  [mailto:r-help-boun...@r-project.org] On Behalf Of aaron wells
  Sent: Wednesday, 11 March 2009 8:30 a.m.
  To: r-help@r-project.org
  Subject: [R] ordering
  
  
  Hello, I would like to order a matrix by a specific column. 
  For instance:
  
  
  
   test
  [,1] [,2] [,3]
  [1,] 1 100 21
  [2,] 2 3 22
  [3,] 3 100 23
  [4,] 4 60 24
  [5,] 5 55 25
  [6,] 6 45 26
  [7,] 7 75 27
  [8,] 8 12 28
  [9,] 9 10 29
  [10,] 10 22 30
  
  
  test[order(test[,2]),]
  [,1] [,2] [,3]
  [1,] 2 3 22
  [2,] 9 10 29
  [3,] 8 12 28
  [4,] 10 22 30
  [5,] 6 45 26
  [6,] 5 55 25
  [7,] 4 60 24
  [8,] 7 75 27
  [9,] 1 100 21
  [10,] 3 100 23
  
  
  This works well and good in the above example matrix. 
  However in the matrix that I actually want to sort (derived 
  from a function that I wrote) I get something like this:
  
  
  
   test[order(as.numeric(test[,2])),] ### First column is row.names
  
  
  f con f.1 cov f.2 minimum f.3 maximum f.4 cl
  asahi * 100 * 1 * 0.1 * 2 * test
  castet * 100 * 2 * 0.1 * 5 * test
  clado * 100 * 1 * 0.7 * 2 * test
  aulac * 33 * 0 * 0.1 * 0.1 * test
  buell * 33 * 0 * 0.1 * 0.1 * test
  camlas * 33 * 0 * 0.1 * 0.1 * test
  carbig * 33 * 1 * 1 * 1 * test
  poaarc * 67 * 0 * 0.1 * 0.1 * test
  polviv * 67 * 0 * 0.1 * 0.1 * test
  
  
  
  
  where R interprets 100 to be the lowest value and orders 
  increasing from there. 
  
  
  
   is.numeric(test[,2])
  [1] FALSE
   is.double(test[,2])
  [1] FALSE
   is.integer(test[,2])
  [1] FALSE
   is.real(test[,2])
  [1] FALSE
  
  
  
  
  My questions are: Why is this happening? and How do I fix it? 
  
  
  
  Thanks in advance!
  
  
  
  Aaron Wells
  
  _
  
  
  cns!503D1D86EBB2B53C!2285.entry?ocid=TXT_TAGLM_WL_UGC_Contacts_032009
  [[alternative HTML version deleted]]
  
  __
  R-help@r-project.org mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide 
  http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible code.
  
 
 The contents of this e-mail are confidential and may be subject to legal 
 privilege.
 If you are not the intended recipient you must not use, disseminate, 
 distribute or
 reproduce all or any part of this e-mail or attachments. If you have received 
 this
 e-mail in error, please notify the sender and delete all material pertaining 
 to this
 e-mail. Any opinion or views expressed in this e-mail are those of the 
 individual
 sender and may not represent those of The New Zealand Institute for Plant and
 Food Research Limited.

_



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Nesting order for mixed models

2009-03-10 Thread Jon Zadra

Hello,

I am confused about the order of nesting in mixed models using functions 
like aov(), lme(), lmer().


I have the following data:
n subjects in either condition A or B
each subject tested at each of 3 numerical values (distance = 
40,50,60), repeated 4 times for each of the 3 numerical values (trial 
= 1,2,3,4)


Variable summary:
Condition: 2 level factor
Distance: numerical (but only 3 values) in the same units as y
Trial: 4 level factor

I expect the subjects' data to differ due to condition and distance, and 
am doing repeated measurements to reduce any variability due to 
measurement error.


Currently I'm using this model:

lme(y ~ Condition + Distance, random = ...)

the question is how do I organize the random statement?  Is it:
random = ~1 | Subject
random = ~1 | Subject/Trial
random = ~1 | Trial/Subject
random = ~1 | Condition/Distance/Subject/Trial
...etc, or something else entirely?

Mostly I'm unclear about whether the Trials should be grouped under 
subject because I expect the trials to be more similar within a subject 
than across subjects, or whether subjects should be grouped under trials 
because the trials are going to differ depending on the subject.  If 
trials should be grouped under subjects, then do the condition or 
distance belong as well, since the trials will be most similar within 
each distance within each subject?


Thanks in advance!

- Jon


--
Jon Zadra
Department of Psychology
University of Virginia
P.O. Box 400400
Charlottesville VA 22904
(434) 982-4744
email: za...@virginia.edu
http://www.google.com/calendar/embed?src=jzadra%40gmail.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to color certain area under curve

2009-03-10 Thread Matthieu Dubois

Just a small typo. I forgot a ) in the polygon function. 
The code must be: 
polygon(x = c(qrv[1], drv$x[select], qrv[2]), 
y = c(0, drv$y[select], 0), col='blue')

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Sparse PCA in R

2009-03-10 Thread Christos Hatzis
Take a look at the elasticnet package.

-Christos 
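
A minimal sketch of how a call might look, based on the spca() interface in
elasticnet's documentation (the penalty values below are purely illustrative):

library(elasticnet)
data(pitprops)                              # example correlation matrix shipped with the package
fit <- spca(pitprops, K = 3, type = "Gram", # Gram/correlation-matrix input
            sparse = "penalty", para = c(0.06, 0.16, 0.1))
fit$loadings                                # sparse loadings, one column per component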

 -Original Message-
 From: r-help-boun...@r-project.org 
 [mailto:r-help-boun...@r-project.org] On Behalf Of joris meys
 Sent: Tuesday, March 10, 2009 3:43 PM
 To: R-help Mailing List
 Subject: [R] Sparse PCA in R
 
 Dear all,
 
 I would like to perform a sparse PCA, but I didn't find any 
 library offering me this in R. Is there one available, or do 
 I have to write the functions myself?
 
 Kind regards
 Joris Meys
 
   [[alternative HTML version deleted]]
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide 
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
 


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] system() not accepting strings from cat()

2009-03-10 Thread ig2ar-saf1

Hi again R-ists,

How do you construct a string that you can pass to system()?

For instance. Say I do

 system("echo Hello!")
Hello!

That works. Now the alternative: I need to construct the string like this

 a <- "echo"
 b <- "Hello!"
 c <- "\n"
 cat(a, b, c)
echo Hello!

Looks nice... but see what happens when I try to use it

 system(cat(a, b, c))
echo Hello! 
Error in system(command, intern) : non-empty character argument expected

I have googled extensively in and out of r-lists but I can't find a solution.

Can anybody help?
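
PS: one thing I am wondering about (an untested guess) is whether paste() is
what I actually need here, since paste() returns the assembled string while
cat() only prints it and returns NULL:

a <- "echo"
b <- "Hello!"
cmd <- paste(a, b)   # "echo Hello!" as a character string
system(cmd)          # should work, unlike system(cat(...)) above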




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] help structuring mixed model using lmer()

2009-03-10 Thread Mark Difford

Hi Simon,

Have a look at Chap. 11 of An Introduction to R (one of R's manuals),
which explains the different ways of specifying models using formulae.

Briefly, y ~ x1 * x2 expands to y ~ x1 + x2 + x1:x2, where the last term
(interaction term) amounts to a test of slope. Normally you would read its
significance from F/chisq/p-value. Many practitioners consider the L.Ratio
test to be a better option. For the fixed effects part in lmer() do:

mod1 <- y ~ x1 + x2      ## i.e. y ~ x1 + x2
mod2 <- y ~ x1 * x2      ## expands to y ~ x1 + x2 + x1:x2

anova(mod1, mod2)

This will tell you if you need to worry about interaction or whether slopes
are parallel.
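
In terms of Simon's own variables (the names 'count', 'habitat', 'yr', 'site'
and the data frame 'm' are taken from the models quoted below; pick whatever
family you settle on), the comparison might look something like this:

library(lme4)
mod1 <- lmer(count ~ habitat + yr + (1 | site), data = m)
mod2 <- lmer(count ~ habitat * yr + (1 | site), data = m)
anova(mod1, mod2)   # is the habitat:yr interaction (difference in slopes) needed?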

Regards, Mark.


Simon Pickett-4 wrote:
 
 Cheers,
 
 Actually I was using quasipoisson for my models, but for the puposes of my 
 example, it doesnt really matter.
 
 I am trying to work out a way of quantifying whether the slopes (for
 years) 
 are covary with habitat scores.
 
 The more I think about it, the more I am convinced that it isnt possible
 do 
 to that using a glm approach. I think I have to run separate models for
 each 
 site, calculate the gradient, then do a lm with gradient explained by 
 habitat score
 
 Thanks, Simon.
 
 
 
 
 On Tue, Mar 10, 2009 at 10:15 AM, Simon Pickett simon.pick...@bto.org 
 wrote:

 This is partly a statistical question as well as a question about R, but 
 I am stumped!

 I have count data from various sites across years. (Not all of the sites 
 in the study appear in all years). Each site has its own habitat score 
 habitat that remains constant across all years.

 I want to know if counts declined faster on sites with high habitat 
 scores.

 I can construct a model that tests for the effect of habitat as a main 
 effect, controlling for year

 model1-lmer(count~habitat+yr+(1|site), family=quasibinomial,data=m)
 model2-lmer(count~yr+(1|site), family=quasibinomial,data=m)
 anova(model1,model2)

 I'm curious as to why you use the quasibinomial family for count data.
 When you say count data do you mean just presence/absence or an
 actual count of the number present?  Generally the binomial and
 quasibinomial families are used when you have a binary response, and
 the poisson or quasipoisson family are used for responses that are
 counts.

 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
 
 

-- 
View this message in context: 
http://www.nabble.com/help-structuring-mixed-model-using-lmer%28%29-tp22436596p22441985.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

