Re: [R] Authoring a book

2006-08-25 Thread Stefan Grosse

 http://people.ee.ethz.ch/~oetiker/lshort/lshort.pdf

 It really was a not too short intro.  I'll have a look at it.

Yes, definitely not too short. But it does claim to teach LaTeX in 133 minutes ...

Seems to be for Linux only.  My server is Windows, even if I have the
rest of the components.

Hm, on the project's homepage it's stated that it is OS independent:
http://sourceforge.net/projects/uniwakka/

I also don't see any hint in the installation notes that it is Linux dependent.

Maybe you think it's Linux stuff because of the *.tar.gz ending? You can
unpack those files on Windows as well; it's just an archive format.
For unpacking on a Windows machine use either 7-Zip:
http://www.7-zip.org/ or Total Commander: http://www.ghisler.com .
The first is a free archiving program and the second is the best
Windows file manager ever (to open the archive there, just double-click
on it).

Stefan Grosse

The unpacked installation notes:

Wakka Installation

Not much to it (as long as it works, ahem). Unpack/upload the
distribution files
into a directory that can be accessed via the web. Then go to the
corresponding URL.
A web-based installer will walk you through the rest.

Example:

If your website, say, http://www.mysite.com, is mapped to the directory
/home/jdoe/www/,
and you place the Wakka distribution files into /home/jdoe/www/wakka/,
you should go to
http://www.mysite.com/wakka/.

Note that Wakka distributions normally unpack into directories that
include the version
in their name; you'll probably want to rename those to just wakka --
or, if you're
on a unixoid system, set up a symbolic link.

During first installs, the installer will try to create a file called
wakka.config.php
in your Wakka directory. In order to do this, you will need to either
make the Wakka
directory writable by the web server, or create a new (empty) file called
wakka.config.php which is writable by the web server. If the installer
still fails to
create the file, it will dump the file's contents which you can then
upload manually.

IMPORTANT: for installing or upgrading Wakka, do NOT access any of the
files contained
in the setup/ subdirectory. They're used by the web-based
installer/updater, but you
should really just access the Wakka directory itself, and it will (or at
least should)
work perfectly.

Detailed instructions are available at
http://www.wakkawiki.com/WakkaInstallation.

- Hendrik Mans [EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] tktoplevel tkgetSaveFile options

2006-08-25 Thread Manel Salamero
Dear list, 

Previously I posted these questions to R-SIG-GUI. Perhaps this is a better place.

1. Is there some option for maintaining the tktoplevel window always on top?
(A sketch follows after this list.)
2. Is there some option to eliminate the border icons (maximize, minimize and
close) of a tktoplevel window?
3. Is it possible to avoid the warning message (or change its contents) in
tkgetSaveFile when the file to save already exists?
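
A hedged sketch touching only the first question, assuming Tk's "wm attributes
-topmost" option is available on the platform in use (untested here):

library(tcltk)
tt <- tktoplevel()
tcl("wm", "attributes", tt, "-topmost", 1)   # keep this window above all others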

Thanks,

Manel

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] xyplot tick marks and line thickness

2006-08-25 Thread Dieter Menne
Deepayan Sarkar deepayan.sarkar at gmail.com writes:

 Right, the grid call shouldn't be necessary; axis.line controls the
 panel borders. And tck can be a vector, to get rid of the ugly bumps
 on top:
 
 xyplot(x ~ x | g, type = "l", lwd = lwd,
        scales = list(tck = c(-1, 0)),
        par.settings =
            list(axis.line = list(lwd = lwd),
                 strip.border = list(lwd = lwd)))
 

The tick marks look strange for me (Windows, R 2.3.1, all packages updated today).

See http://www.menne-biomed.de/uni/gridit.png.

This was created on screen, saved as a bitmap, with the lower portion enlarged 3x.

Dieter

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] R in Nature

2006-08-25 Thread Simon Blomberg
Hi all,

We've just had a paper accepted for publication in Nature. We used R for 
95% of our analyses (one of my co-authors sneaked in some GenStat when I 
wasn't looking.). The preprint is available from the Nature web site, in 
the open peer-review trial section. I searched Nature for previous 
references to R Development Core Team, and I received no hits. So I 
tentatively conclude that our paper is the first Nature paper to cite R.

A great many thanks to the R Development Core Team for R, and Prof. 
Bates for lmer.

Cheers,

Simon.
(I'm off to the pub to celebrate.)

-- 
Simon Blomberg, B.Sc.(Hons.), Ph.D, M.App.Stat. 
Centre for Resource and Environmental Studies
The Australian National University  
Canberra ACT 0200   
Australia   
T: +61 2 6125 7800 email: Simon.Blomberg_at_anu.edu.au
F: +61 2 6125 0757
CRICOS Provider # 00120C

The combination of some data and an aching desire for 
an answer does not ensure that a reasonable answer 
can be extracted from a given body of data.
- John Tukey.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] exact Wilcoxon signed rank test with ties and the no longer under development exactRanksumTests package

2006-08-25 Thread Stefan Grosse
Dear List,

after updating the exactRankTests package I receive a warning that
the package is not developed any further and that one should consider
the coin package.

I don't find the signed rank test in the coin package, only the Wilcoxon
Mann-Whitney U-test. I only found a signed rank test in the stats
package (wilcox.test) which is able to calculate the exact p-values, but
unfortunately the procedure cannot calculate the exact values with ties.

Is there any other package that provides a similar test? Or is
there an easy workaround to take the ties into account? (Or a chance
that the correction will be added to the stats package?)

Stefan Grosse

Take the following example from Bortz/Lienert/Boehnke:

 x1 <- c(9,14,8,11,14,10,8,14,12,14,13,9,15,12,9)
 x2 <- c(13,15,9,12,16,10,8,13,12,16,9,10,16,12,9)

# exactRankTests package:

 wilcox.exact(x1,x2,paired=TRUE)

Exact Wilcoxon signed rank test

data:  x1 and x2
V = 13, p-value = 0.1367
alternative hypothesis: true mu is not equal to 0

# wilcox.test by stats package:

 wilcox.test(x1,x2,paired=TRUE,exact=TRUE)

Wilcoxon signed rank test with continuity correction

data:  x1 and x2
V = 13, p-value = 0.1436
alternative hypothesis: true mu is not equal to 0

Warning messages:
1: cannot compute exact p-value with ties in: wilcox.test.default(x1,
x2, paired = TRUE, exact = TRUE)
2: cannot compute exact p-value with zeroes in: wilcox.test.default(x1,
x2, paired = TRUE, exact = TRUE)

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] xyplot tick marks and line thickness

2006-08-25 Thread Karl Ove Hufthammer
Piet Bell wrote:

 The publisher of our paper has requested:
 
 1. all tick marks should point inwards instead of outwards.

Point him to William S. Cleveland’s excellent book /The Elements of Graphing
Data/, where Cleveland strongly recommends that tick marks should point
*outwards* ‘because ticks that point inward can obscure data’. See the
discussion on pages 31–35, and especially figures 2.12 and 2.13.

 2. All lines should be thicker (lines, axes, boxes, etc. Everything).
 Lines are easy... I used lwd=1.5, but what about the lines of the axes,
 and the lines that make up the plot itself?

I find that

library(Hmisc)
setps(filename) # Or setpdf. You might also want to add 'color=TRUE'.
... plotting commands ...
dev.off()

usually gives much better-looking plots, and with thicker lines.

-- 
Karl Ove Hufthammer
E-mail and Jabber: [EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Check values in colums matrix

2006-08-25 Thread Muhammad Subianto
Dear all,
I would like to thank everybody who replied for their useful 
suggestions. Maybe I will go through a statistics book to teach
(refresh) myself.
Wish you have a nice weekend.

Regards, Muhammad Subianto
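
Referring to the question quoted below, a minimal sketch of the kind of check
that was suggested (assuming the matrix obj.tr from that message; "constant" is
just an illustrative name):

constant <- apply(obj.tr, 2, function(col) length(unique(col)) == 1)
which(constant)    # columns 2, 3, 4, 6, 7 and 9 hold a single repeated value
which(!constant)   # columns 1, 5 and 8 vary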


On this day 24/08/2006 18:59, Muhammad Subianto wrote:
 Dear all,
 I apologize if my question is quite simple.
 I have a dataset (20 columns and 1000 rows) in which
 some of the columns have the same value and the others
 have different values.
 Here are some piece of my dataset:
 obj <- cbind(c(1,1,1,4,0,0,1,4,-1),
  c(0,1,1,4,1,0,1,4,-1),
  c(1,1,1,4,2,0,1,4,-1),
  c(1,1,1,4,3,0,1,4,-1),
  c(1,1,1,4,6,0,1,5,-1),
  c(1,1,1,4,6,0,1,6,-1),
  c(1,1,1,4,6,0,1,7,-1),
  c(1,1,1,4,6,0,1,8,-1))
 obj.tr <- t(obj)
 obj.tr
       [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
  [1,]    1    1    1    4    0    0    1    4   -1
  [2,]    0    1    1    4    1    0    1    4   -1
  [3,]    1    1    1    4    2    0    1    4   -1
  [4,]    1    1    1    4    3    0    1    4   -1
  [5,]    1    1    1    4    6    0    1    5   -1
  [6,]    1    1    1    4    6    0    1    6   -1
  [7,]    1    1    1    4    6    0    1    7   -1
  [8,]    1    1    1    4    6    0    1    8   -1
 
 How can I check that columns 2,3,4,6,7 and 9 have
 the same value, and that columns 1,5 and 8 have different values?
 
 Best, Muhammad Subianto
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Authoring a book

2006-08-25 Thread Ramon Diaz-Uriarte
Dear Tom,

To add a few things to explore:

- I'd definitely go with LaTeX. Depending on how much formatting control you 
want, though, and if your coworkers are reluctant to jump into LaTeX, you 
might start with reStructuredText (http://docutils.sourceforge.net/rst.html) 
or txt2tags (http://txt2tags.sourceforge.net/). With both, you can produce 
LaTeX, but initially at least they allow you to write structured text 
using markup that is a lot simpler than LaTeX.


- I'd definitely use a version control system. Instead of CVS or SVN, though, 
I'd suggest you take a look at some of the distributed ones, in particular 
Bazaar-NG (http://bazaar-vcs.org), Mercurial 
(http://www.selenic.com/mercurial/wiki/index.cgi) or Darcs 
(http://abridgegame.org/darcs/). 

These three are probably among the most mature ones (though opinions will 
vary, of course; I have some notes and links at: 
http://www.ligarto.org/rdiaz/VersionControl.html). 

What I like about any of these is that I think they provide you essentially 
all SVN can provide (except for the user-base and years of existence of SVN) 
plus a lot more. For instance, if you often work without access to the remote 
repository, with any of these three systems you can enjoy all the benefits of 
version control. Cherry-picking is easier with any of these than with 
CVS/SVN, and Darcs in particular excels at it.


- For bibliography, I find CiteULike (http://www.citeulike.org/) fabulous. 
It needs internet access, and might not work with the journals/databases that 
you use, though. It can export as BibTeX.


- If you find outliners useful (or absolutely essential) then you might want 
to look at Leo (http://webpages.charter.net/edreamleo/front.html). Leo is 
agnostic regarding whether you write LaTeX, plain text, or R code (though it 
has great support for some languages such as Python or rst), and you can use 
Leo and still edit files in your editor of choice (I use Leo for working with 
fairly large latex files that I edit under Emacs). However, for this to work, 
all of you should agree to use Leo (or at least not disturb the sentinel 
lines that Leo uses).


Hope this helps (or at least provides entertaining links :-).

R.


On Thursday 24 August 2006 21:10, Tom Backer Johnsen wrote:
 Stefan Grosse wrote:
  I think Peter Dalgaard is right.
 
  Since you are able to use R I believe you will be very fast in learning
  LaTeX.
 
  I think it needs less than a week to learn the most common LaTeX
  commands. And setting up a wiki and then trying to convert this into a
  printable document format, plus learning the wiki syntax, is probably more
  time consuming. Besides this, R works perfectly together with
  LaTeX: it creates LaTeX output and produces excellent graphics in the
  EPS/PS format.
 
  The best introduction for LaTeX is the not so short introduction:
  http://people.ee.ethz.ch/~oetiker/lshort/lshort.pdf

 It really was a not too short intro.  I'll have a look at it.

  If you still are not convinced have a look at UniWakkaWiki:
  http://uniwakka.sourceforge.net/HomePage
 
  It is a Wiki for Science and University purposes and claims to be able
  to export to Openoffice as well as to LaTeX.

 Looks interesting and I really like the concept, but how stable is it?
   It looks rather fresh from the web page, but I may be wrong.  A
 bibliography function is really a big advantage, so ... perhaps.

 Tom

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html and provide commented, minimal,
 self-contained, reproducible code.

-- 
Ramón Díaz-Uriarte
Bioinformatics 
Centro Nacional de Investigaciones Oncológicas (CNIO)
(Spanish National Cancer Center)
Melchor Fernández Almagro, 3
28029 Madrid (Spain)
Fax: +-34-91-224-6972
Phone: +-34-91-224-6900

http://ligarto.org/rdiaz
PGP KeyID: 0xE89B3462
(http://ligarto.org/rdiaz/0xE89B3462.asc)




__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] exact Wilcoxon signed rank test with ties and the no longerunder development exactRanksumTests package

2006-08-25 Thread Torsten Hothorn

On Fri, 25 Aug 2006, Stefan Grosse wrote:

 Dear List,


Stefan,

 after updating the exactRankTests package I receive a warning that
 the package is not developed any further and that one should consider
 the coin package.

 I don't find the signed rank test in the coin package, only the Wilcoxon
 Mann-Whitney U-test. I only found a signed rank test in the stats
 package (wilcox.test) which is able to calculate the exact p-values, but
 unfortunately the procedure cannot calculate the exact values with ties.


indeed, this is the only gap to be filled in `coin', I just haven't had 
the time to implement this. And of course, `exactRankTests' is still 
available and will be available in the future. So you _can_ use it!

The message just means that I'm not going to add _new_ features
to `exactRankTests' and that we as the authors of both packages
believe that the `coin'-way of doing things is more appropriate.

Hope that helps, and sorry for the confusion!

Torsten

ps: please cc emails about packages to the maintainer.

 Is there any other package that provides a similar test? Or is
 there an easy workaround to take the ties into account? (Or a chance
 that the correction will be added to the stats package?)

 Stefan Grosse

 Take the following example from Bortz/Lienert/Boehnke:

 x1 <- c(9,14,8,11,14,10,8,14,12,14,13,9,15,12,9)
 x2 <- c(13,15,9,12,16,10,8,13,12,16,9,10,16,12,9)

 # exactRankTests package:

 wilcox.exact(x1,x2,paired=TRUE)

Exact Wilcoxon signed rank test

 data:  x1 and x2
 V = 13, p-value = 0.1367
 alternative hypothesis: true mu is not equal to 0

 # wilcox.test by stats package:

 wilcox.test(x1,x2,paired=TRUE,exact=TRUE)

Wilcoxon signed rank test with continuity correction

 data:  x1 and x2
 V = 13, p-value = 0.1436
 alternative hypothesis: true mu is not equal to 0

 Warning messages:
 1: cannot compute exact p-value with ties in: wilcox.test.default(x1,
 x2, paired = TRUE, exact = TRUE)
 2: cannot compute exact p-value with zeroes in: wilcox.test.default(x1,
 x2, paired = TRUE, exact = TRUE)

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.



__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R in Nature

2006-08-25 Thread Andrej Kastrin
Simon Blomberg wrote:
 Hi all,

 We've just had a paper accepted for publication in Nature. We used R for 
 95% of our analyses (one of my co-authors sneaked in some GenStat when I 
 wasn't looking.). The preprint is available from the Nature web site, in 
 the open peer-review trial section. I searched Nature for previous 
 references to R Development Core Team, and I received no hits. So I 
 tentatively conclude that our paper is the first Nature paper to cite R.

 A great many thanks to the R Development Core Team for R, and Prof. 
 Bates for lmer.

 Cheers,

 Simon.
 (I'm off to the pub to celebrate.)

   
Congratulations,
but I cannot find your article on
http://blogs.nature.com/nature/peerreview/trial
Can you post a valid link to it? Thanks in advance.

Andrej

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R in Nature

2006-08-25 Thread Peter Dalgaard
Andrej Kastrin [EMAIL PROTECTED] writes:

 Simon Blomberg wrote:
  Hi all,
 
  We've just had a paper accepted for publication in Nature. We used R for 
  95% of our analyses (one of my co-authors sneaked in some GenStat when I 
  wasn't looking.). The preprint is available from the Nature web site, in 
  the open peer-review trial section. I searched Nature for previous 
  references to R Development Core Team, and I received no hits. So I 
  tentatively conclude that our paper is the first Nature paper to cite R.
 
  A great many thanks to the R Development Core Team for R, and Prof. 
  Bates for lmer.
 
  Cheers,
 
  Simon.
  (I'm off to the pub to celebrate.)
 

 Congratulations
 but I cannot find your article on 
 http://blogs.nature.com/nature/peerreview/trial
 Can you post valid link to it? Thanks in advance.
 
 Andrej

Found it in the Ecology section. 

http://blogs.nature.com/nature/peerreview/trial/Fisher%20manuscript.pdf

(BTW: The year in the reference is wrong, R 2.3.1 is from 2006.)

-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] extremely slow recursion in R?

2006-08-25 Thread Joerg van den Hoff
Thomas Lumley wrote:
 On Thu, 24 Aug 2006, Jason Liao wrote:
 
 I recently coded a recursion algorithm in R and it ran a few days
 without returning any result. So I decided to try a simple case of
 computing the binomial coefficient using the recursive relationship

 choose(n,k) = choose(n-1, k)+choose(n-1,k-1)

 I implemented in R and Fortran 90 the same algorithm (code follows).
 The R code finishes in 31 minutes and the Fortran 90 program finishes in 6
 seconds. So the Fortran program is 310 times faster. I thus wondered if
 there is room for speeding up recursion in R. Thanks.

 
 Recursive code that computes the same case many times can often be sped up 
 by memoization, eg
 
 memo <- new.env(hash=TRUE)
 chewse <- function(n,k) {
     if (n==k) return(1)
     if (k==1) return(n)

     if (exists(paste(n,k), memo, inherits=FALSE))
         return(get(paste(n,k), memo))
     rval <- chewse(n-1,k) + chewse(n-1,k-1)
     assign(paste(n,k), rval, envir=memo)
     return(rval)
 }
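
A quick, illustrative check of the memoized version against the built-in
(an editorial sketch, not part of the original message):

chewse(30, 10) == choose(30, 10)   # TRUE; repeated calls reuse the cached values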
 
 This idea was discussed in an early Programmers' Niche article by Bill 
 Venables in R News.
 
 However, I'm surprised that you're surprised that compiled Fortran 90 is 
 310 times faster than interpreted R.  That would be about what I would 
 expect for code that isn't making use of vectorized functions in R.
 
 
   -thomas


maybe someone's interested:
I made the same observation of seemingly very slow recursion recently: 
just for fun I used the (in)famously inefficient

fib <- function(n = 1) {
    if (n < 2)
        fn <- 1
    else
        fn <- fib(n - 1) + fib(n - 2)
    fn
}

for calculating the fibonacci numbers and compared `fib(30)' (about 
1.3e6 recursive function calls ...) to some other languages (times in sec):

language     time
=================
C            0.034   (compiled, using integers)
Ocaml        0.041   (compiled, using integers)
Ocaml        0.048   (interpreted, using integers)
C            0.059   (compiled, using floats)
Lua          1.1
ruby         3.4
R           21
octave     120

apart from octave (which seems to have a _real_ issue with recursive 
function calls), R is by far the slowest in this list and still a factor 
7-20 slower than the interpreter-based Lua and Ruby. The speed loss 
compared to C or Ocaml is about a factor of 350-600 here (Ocaml keeps 
its speed more or less in this simple example even in 'script mode', 
which is remarkable, I think, and it usually loses only a factor of 
about 7 or so in script mode compared to the compiled variant).

for the specialists the bad performance of R in this situation might not 
be surprising, but I was not aware that recursive function calls are 
seemingly as expensive as explicit loops (where the execution time ratio 
of R to C again is of the same order, i.e. =~ 400).

of course, such comparisons don't make too much sense: the reason to use 
R will definitely not be its speed (which, luckily, often does not 
matter), but the combination of flexible language, the number of 
available libraries and the good 2D graphics.



joerg

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Using a 'for' loop : there should be a better way in R

2006-08-25 Thread John Kane

--- Gabor Grothendieck [EMAIL PROTECTED]
wrote:

 Use cbind to create a two column matrix, mat,
 and multiply that by the appropriate inflation
 factors.
 Then use rowsum to sum the rows according to the
 id grouping factor.
 
 inf.fac <- list(year1 = 1, year2 = 5, year3 = 10)
 mat <- cbind(s1 = df1$cat1 + df1$cat2, s2 = df1$cat3 + df1$cat4)
 rowsum(mat * unlist(inf.fac[df1$years]), df1$id)

Thanks very much.  It took me a few minutes to see
what was happening but it is lovely.  I would never
have thought of using a list like that.
 
 
 On 8/24/06, John Kane [EMAIL PROTECTED] wrote:
  I need to apply a yearly inflation factor to some
  wages and supply some simple sums by work
 category.  I
  have gone at it with a brute force for loop
 approach
   which seems okay as it is a small dataset.  It
 looks
  a bit inelegant and given all the warnings in the
  Intro to R, etc, about using loops I wondered  if
  anyone could suggest something a bit simpler or
 more
  efficient?
 
  Example:
 
  cat1 <- c(1,1,6,1,1,5)
  cat2 <- c(1,2,3,4,5,6)
  cat3 <- c(5,4,6,7,8,8)
  cat4 <- c(1,2,1,2,1,2)
  years <- c('year1', 'year2', 'year3', 'year3', 'year1', 'year1')
  id <- c('a','a','b','c','c','a')
  df1 <- data.frame(id, years, cat1, cat2, cat3, cat4)

  nn <- levels(df1$id)    # levels for outer loop
  hh <- levels(df1$years) # levels for inner loop

  mp <- c(1, 5, 10)       # inflation factor

  tt <- data.frame(matrix(NA, length(nn), 2))
  names(tt) <- c("s1", "s2")
  rownames(tt) <- nn

  for (i in 1:length(nn)) {
      scat <- data.frame(matrix(NA, length(hh), 2))
      dd1 <- subset(df1, id == nn[i])
      for (j in 1:length(hh)) {
          dd2 <- subset(dd1, dd1$years == hh[j])
          s1 <- sum(dd2$cat1, dd2$cat2, na.rm = TRUE)
          s2 <- sum(dd2$cat3, dd2$cat4, na.rm = TRUE)
          scat[j, ] <- c(s1, s2) * mp[j]  # multiply by the inflation factor
      }
      crush <- apply(scat, 2, sum)
      tt[i, ] <- crush
  }
  tt
 
  __
  R-help@stat.math.ethz.ch mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained,
 reproducible code.
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] plot question

2006-08-25 Thread Catherine Carter
Hi everyone,

I have what may appear to be a newbie question, but I have looked 
everywhere I can think to look and I cannot find an answer. On page 35 
of An Introduction to R the following command appears: 
plot(ecdf(eruptions), do.points=FALSE, verticals=TRUE). What is the 
do.points argument? I know what it does (suppresses printing of the 
points) but where can I find help on it? I want to be able to explain it 
fully to my students.

Thanks for your help,
Cathy

-- 
Dr. Cathy Carter
Department of Geography
University of Maryland
A LeFrak Hall
301.405.4620

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] plot question

2006-08-25 Thread Roger D. Peng
Take a look at ?plot.stepfun.

'ecdf()' returns an object of class "ecdf" inheriting from class "stepfun", and 
'plot.ecdf()' calls 'plot.stepfun'.

-roger

Catherine Carter wrote:
 Hi everyone,
 
 I have what may appear to be a newbie question, but I have looked 
 everywhere I can think to look and I cannot find an answer. On page 35 
 of An Introduction to R the following command appears: 
 plot(ecdf(eruptions), do.points=FALSE, verticals=TRUE). What is the 
 do.points argument? I know what it does (suppresses printing of the 
 points) but where can I find help on it? I want to be able to explain it 
 fully to my students.
 
 Thanks for your help,
 Cathy
 

-- 
Roger D. Peng  |  http://www.biostat.jhsph.edu/~rpeng/

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] extremely slow recursion in R?

2006-08-25 Thread Jason Liao

Thank you, Thomas and Joerg! Joerg's example is extremely useful. In
fact I had wanted to compare R with other interpreting language myself.
I heard that Ruby is not fast among scripting languages. I thus believe
that there is room to improve R's handling of recursive function calls.

Jason
 
--- Joerg van den Hoff [EMAIL PROTECTED] wrote:
 maybe someone's interested:
 I made the same observation of seemingly very slow recursion
 recently: 
 just for fun I used the (in)famously inefficient
 
 fib <- function(n = 1) {
     if (n < 2)
         fn <- 1
     else
         fn <- fib(n - 1) + fib(n - 2)
     fn
 }
 
 for calculating the fibonacci numbers and compared `fib(30)' (about 
 1.3e6 recursive function calls ...) to some other languages (times in
 sec):
 
 language     time
 =================
 C            0.034   (compiled, using integers)
 Ocaml        0.041   (compiled, using integers)
 Ocaml        0.048   (interpreted, using integers)
 C            0.059   (compiled, using floats)
 Lua          1.1
 ruby         3.4
 R           21
 octave     120
 
 apart from octave (which seems to have a _real_ issue with recursive 
 function calls), R is by far the slowest in this list and still a
 factor 
 7-20 slower than the interpreter based Lua and ruby. the speed loss 
 compared to C or Ocaml is about a factor of 350-600 here (Ocaml keeps
 
 its speed more or less in this simple example even in 'script mode', 
 which is remarkable, I think (and it usually loses only a factor of 
 about 7 or so in script mode compared to the compiled variant)
 
 for the specialists the bad performance of R in this situation might
 not 
 be surprising, but I was not aware that recursive function calls are 
 seemingly as expensive as explicit loops (where the execution time
 ratio 
 of R to C again is of the same order, i.e. =~ 400).
 
 of course, such comparsions don't make too much sense: the reason to
 use 
 R will definitely not be its speed (which, luckily, often does not 
 matter), but the combination of flexible language, the number of 
 available libraries and the good 2D graphics.
 
 
 
 joerg
 


Jason Liao, http://www.geocities.com/jg_liao
Department of Epidemiology and Biostatistics
Drexel University School of Public Health
245 N. 15th Street, Mail Stop 660
Philadelphia, PA 19102-1192
phone 215-762-3934

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] plot question

2006-08-25 Thread Marc Schwartz
On Fri, 2006-08-25 at 09:08 -0400, Catherine Carter wrote:
 Hi everyone,
 
 I have what may appear to be a newbie question, but I have looked 
 everywhere I can think to look and I cannot find an answer. On page 35 
 of An Introduction to R the following command appears: 
 plot(ecdf(eruptions), do.points=FALSE, verticals=TRUE). What is the 
 do.points argument? I know what it does (suppresses printing of the 
 points) but where can I find help on it? I want to be able to explain it 
 fully to my students.
 
 Thanks for your help,
 Cathy

A couple of options:

1. If you are aware of how R uses method dispatch, then you might know
that the plot() function is a generic method and that plot.ecdf() is the
specific method that is called when the initial argument is of class
ecdf, as is the case above. 

Thus, using ?plot.ecdf will get you to the help page, where there is a
notation in the Arguments section that the '...' arguments are then
passed to plot.stepfun(). 'do.points' is passed in this fashion, so
using ?plot.stepfun will then get you to the help page where 'do.points'
is defined as:

  logical; if true, also draw points at the (xlim restricted)
  knot locations.


2. Using:

  RSiteSearch("do.points", restrict = "functions")

will search the online function documentation, bringing up a browser
window, where the first link gets you to ?plot.stepfun.
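
For completeness, a small illustrative example (assuming the eruptions vector
comes from the faithful data set, as in the manual):

plot(ecdf(faithful$eruptions), do.points = FALSE, verticals = TRUE)  # steps only
plot(ecdf(faithful$eruptions), do.points = TRUE)   # points drawn at the knots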

HTH,

Marc Schwartz

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] plot question

2006-08-25 Thread Catherine Carter
Thank you! That is exactly what I needed.

Roger D. Peng wrote:

 Take a look at ?plot.stepfun.

 'ecdf()' returns an object of class ecdf inheriting from class 
 stepfun and 'plot.ecdf()' calls 'plot.stepfun'.

 -roger

 Catherine Carter wrote:

 Hi everyone,

 I have what may appear to be a newbie question, but I have looked 
 everywhere I can think to look and I cannot find an answer. On page 
 35 of An Introduction to R the following command appears: 
 plot(ecdf(eruptions), do.points=FALSE, verticals=TRUE). What is the 
 do.points argument? I know what it does (suppresses printing of the 
 points) but where can I find help on it? I want to be able to explain 
 it fully to my students.

 Thanks for your help,
 Cathy



-- 
Dr. Cathy Carter
Department of Geography
University of Maryland
A LeFrak Hall
301.405.4620

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] correlation between 3 vectors

2006-08-25 Thread Bianca Vieru-Dimulescu
Hi R-users,

I am trying to calculate the correlation between 3 vectors of numbers.
Is there any solution other than using the kendall.w function, which
needs ranks as input?

Thanks,
Bianca Dimulescu
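
A hedged, illustrative sketch of the pairwise approach with cor(), which does
the ranking internally when a rank-based method is requested (x, y and z are
made-up data, not from the original question):

x <- rnorm(20); y <- x + rnorm(20); z <- rnorm(20)
cor(cbind(x, y, z))                       # 3 x 3 Pearson correlation matrix
cor(cbind(x, y, z), method = "kendall")   # rank-based alternative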

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] extremely slow recursion in R?

2006-08-25 Thread Damien Moore

I tried fib(30):

R: 8.99s
Python (interpreted): 0.96s
Python (interpreted but using psyco library): 0.03s
C++: 0.015s

cheers
Damien

 --- On Fri 08/25, Joerg van den Hoff  [EMAIL PROTECTED]  wrote:
From: Joerg van den Hoff [mailto: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED], r-help@stat.math.ethz.ch
Date: Fri, 25 Aug 2006 13:52:04 +0200
Subject: Re: [R] extremely slow recursion in R?

[HTML-mangled copy of the earlier messages from Thomas Lumley and Joerg van den Hoff omitted; see above.]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Total (un)standardized effects in SEM?

2006-08-25 Thread John Fox
Dear Rense,

(This question was posted a few days ago when I wasn't reading my email.)

So-called effect decompositions are simple functions of the structural
coefficients of the model, which in a model fit by sem() are contained in
the $A component of the returned object. (See ?sem.) One approach,
therefore, would be to put the coefficients in the appropriate locations of
the estimated Beta, Gamma, Lambda-x, and Lambda-y matrices of the LISREL
model, and then to compute the effects in the usual manner.

It should be possible to do this for the RAM formulation of the model as
well, simply by distinguishing exogenous from endogenous variables. Here's
an illustration using model C in the LISREL 7 Manual, pp. 169-177, for the
Wheaton et al. stability of alienation data (a common example--I happen to
have an old LISREL manual handy):

> S.wh <- matrix(c(
+    11.834,  0,      0,      0,      0,     0,
+     6.947,  9.364,  0,      0,      0,     0,
+     6.819,  5.091, 12.532,  0,      0,     0,
+     4.783,  5.028,  7.495,  9.986,  0,     0,
+    -3.839, -3.889, -3.841, -3.625,  9.610, 0,
+    -2.190, -1.883, -2.175, -1.878,  3.552, 4.503),
+   6, 6)
>
> rownames(S.wh) <- colnames(S.wh) <-
+   c('Anomia67','Powerless67','Anomia71','Powerless71','Education','SEI')

 
> model.wh <- specify.model()
1: Alienation67   ->  Anomia67,      NA,     1
2: Alienation67   ->  Powerless67,   lam1,   NA
3: Alienation71   ->  Anomia71,      NA,     1
4: Alienation71   ->  Powerless71,   lam2,   NA
5: SES            ->  Education,     NA,     1
6: SES            ->  SEI,           lam3,   NA
7: SES            ->  Alienation67,  gam1,   NA
8: Alienation67   ->  Alienation71,  beta,   NA
9: SES            ->  Alienation71,  gam2,   NA
10: Anomia67      <-> Anomia67,      the1,   NA
11: Anomia71      <-> Anomia71,      the3,   NA
12: Powerless67   <-> Powerless67,   the2,   NA
13: Powerless71   <-> Powerless71,   the4,   NA
14: Education     <-> Education,     thd1,   NA
15: SEI           <-> SEI,           thd2,   NA
16: Anomia67      <-> Anomia71,      the13,  NA
17: Alienation67  <-> Alienation67,  psi1,   NA
18: Alienation71  <-> Alienation71,  psi2,   NA
19: SES           <-> SES,           phi,    NA
20:
Read 19 records
 
> sem.wh <- sem(model.wh, S.wh, 932)
>
> summary(sem.wh)

 Model Chisquare =  6.3349   Df =  5 Pr(>Chisq) = 0.27498
 Chisquare (null model) =  17973   Df =  15
 Goodness-of-fit index =  0.99773
 Adjusted goodness-of-fit index =  0.99046
 RMSEA index =  0.016934   90 % CI: (NA, 0.05092)
 Bentler-Bonnett NFI =  0.99965
 Tucker-Lewis NNFI =  0.99978
 Bentler CFI =  0.3
 BIC =  -27.852 

 Normalized Residuals
 Min.   1st Qu.Median  Mean   3rd Qu.  Max. 
-9.57e-01 -1.34e-01 -4.24e-02 -9.17e-02  6.43e-05  5.47e-01 

 Parameter Estimates
       Estimate Std Error  z value  Pr(>|z|)
lam1   1.02656 0.053424   19.2152 0.0000e+00 Powerless67 <--- Alienation67
lam2   0.97089 0.049608   19.5712 0.0000e+00 Powerless71 <--- Alienation71
lam3   0.51632 0.042247   12.2214 0.0000e+00 SEI <--- SES
gam1  -0.54981 0.054298  -10.1258 0.0000e+00 Alienation67 <--- SES
beta   0.61732 0.049486   12.4746 0.0000e+00 Alienation71 <--- Alienation67
gam2  -0.21151 0.049862   -4.2419 2.2164e-05 Alienation71 <--- SES
the1   5.06546 0.373464   13.5635 0.0000e+00 Anomia67 <--> Anomia67
the3   4.81176 0.397345   12.1098 0.0000e+00 Anomia71 <--> Anomia71
the2   2.21438 0.319740    6.9256 4.3423e-12 Powerless67 <--> Powerless67
the4   2.68322 0.331274    8.0997 4.4409e-16 Powerless71 <--> Powerless71
thd1   2.73051 0.517737    5.2739 1.3353e-07 Education <--> Education
thd2   2.66905 0.182260   14.6442 0.0000e+00 SEI <--> SEI
the13  1.88739 0.241627    7.8112 5.7732e-15 Anomia71 <--> Anomia67
psi1   4.70477 0.427511   11.0050 0.0000e+00 Alienation67 <--> Alienation67
psi2   3.86642 0.343971   11.2406 0.0000e+00 Alienation71 <--> Alienation71
phi    6.87948 0.659208   10.4360 0.0000e+00 SES <--> SES

 Iterations =  58 
 
> A <- sem.wh$A  # structural coefficients
> exog <- apply(A, 1, function(x) all(x == 0))
> endog <- !exog
>
> (B <- A[endog, endog, drop=FALSE])  # direct effects, endogenous -> endogenous
             Anomia67 Powerless67 Anomia71 Powerless71 Education SEI
Anomia67            0           0        0           0         0   0
Powerless67         0           0        0           0         0   0
Anomia71            0           0        0           0         0   0
Powerless71         0           0        0           0         0   0
Education           0           0        0           0         0   0
SEI                 0           0        0           0         0   0
Alienation67        0           0        0           0         0   0
Alienation71        0           0        0
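
A minimal sketch of the "usual manner" of computing total effects referred to
above, continuing from the objects A, B, endog and exog already shown (the
names G, tot.endo, tot.exog, ind.endo and ind.exog are illustrative, not part
of the sem package):

G <- A[endog, exog, drop = FALSE]   # direct effects, exogenous -> endogenous
I <- diag(nrow(B))
tot.endo <- solve(I - B) - I        # total effects among endogenous variables
tot.exog <- solve(I - B) %*% G      # total effects of exogenous on endogenous
ind.endo <- tot.endo - B            # indirect effects
ind.exog <- tot.exog - G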

Re: [R] extremely slow recursion in R?

2006-08-25 Thread Thomas Lumley
On Fri, 25 Aug 2006, Joerg van den Hoff wrote:

 maybe someone's interested:
 I made the same observation of seemingly very slow recursion recently: just 
 for fun I used the (in)famously inefficient

 fib - function(n = 1) {
   if (n  2)
  fn - 1
   else
  fn - fib(n - 1) + fib(n - 2)
   fn
 }

 for calculating the fibonacci numbers and compared `fib(30)' (about 1.3e6 
 recursive function calls ...) to some other languages (times in sec):

 language  time
 ==
 C  0.034  (compiled, using integers)
 Ocaml  0.041  (compiled, using integers)
 Ocaml  0.048  (interpreted, using integers)
 C  0.059  (compiled, using floats)
 Lua1.1
 ruby   3.4
 R  21
 octave120

 apart from octave (which seems to have a _real_ issue with recursive function 
 calls), R is by far the slowest in this list and still a factor 7-20 slower 
 than the interpreter based Lua and ruby. the speed loss compared to C or 
 Ocaml is about a factor of 350-600 here (Ocaml keeps its speed more or less 
 in this simple example even in 'script mode', which is remarkable, I think 
 (and it usually loses only a factor of about 7 or so in script mode compared 
 to the compiled variant)

 for the specialists the bad performance of R in this situation might not be 
 surprising, but I was not aware that recursive function calls are seemingly 
 as expensive as explicit loops (where the execution time ratio of R to C 
 again is of the same order, i.e. =~ 400).

Recursive function calls are likely to be *more* expensive than explicit 
loops, because of the memory and function call overhead.  This is true in 
most languages, and is why tail recursion optimization is a basic feature 
of compilers for functional languages.
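
As an editorial aside, a minimal iterative sketch of the loop alternative
(same convention as the fib above, fib(0) = fib(1) = 1; fib_iter is an
illustrative name):

fib_iter <- function(n) {
    if (n < 2) return(1)
    a <- 1; b <- 1
    for (i in 2:n) { tmp <- a + b; a <- b; b <- tmp }
    b
}
fib_iter(30)   # 1346269, the same value as fib(30), in a fraction of the time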

 of course, such comparsions don't make too much sense: the reason to use R 
 will definitely not be its speed (which, luckily, often does not matter), but 
 the combination of flexible language, the number of available libraries and 
 the good 2D graphics.

The benchmarks are interesting (and reinforce my feeling that I need to
look at ocaml sometime).  They might be more interesting on a problem for 
which recursion was a sensible solution -- something simple like printing 
the nodes of a tree, or a more complicated problem like depth-first search 
in a graph.

It's certainly true that people don't pick R over other minority languages 
like Ocaml for its blazing speed, but for other reasons. As programming 
blogger Steve Yegge has observed when comparing advantages and 
disadvantages of various languages: "ML and Haskell are both from Planet 
Weird. And the people of that planet evidently do not believe in 
libraries."

More to the point, the ratio of speeds is much lower for many uses. For 
example, I recently translated to C an R program that computed the 
conditional score estimator for a Cox model with measurement error, and 
the C version ran about five times faster.  This was about the same speed 
increase as I had already obtained in the interpreted code by replacing 
a bisection search with uniroot().

In some cases the ratio may even be less than 1 -- since R can use 
optimized BLAS routines you might expect that some matrix operations would 
be faster in interpreted R than in C code written by the average user.  I 
have a second-hand report from a student here that this was actually the 
case -- he wrote compiled matrix routines and his code got slower.

It would certainly be nice if there were a faster statistical language 
with nice features, whether by speeding up R or by adding statistical 
features to something that compiles to native code, like Common Lisp or 
ocaml, or by taking better advantage of multicore computers as they 
proliferate.  These problems do need more people to work on them 
(especially as Bell Labs isn't in the statistical computing business any 
more).  There's a conference in New Zealand in February and abstract 
submissions are still open.


-thomas

Thomas Lumley   Assoc. Professor, Biostatistics
[EMAIL PROTECTED]   University of Washington, Seattle

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] increasing the # of socket connections

2006-08-25 Thread Marc Kirchner
Dear R-helpers,

using snow on socket connections, I ran into the following error

  cl <- makeSOCKcluster(hosts)
Error in socketConnection(port = port, server = TRUE, 
blocking = TRUE : all connections are in use

with showConnections(all=T) showing 50 open connections.

As - for administrative reasons - I would prefer to use snow's
SOCK capabilities (instead of MPI and the like), I wonder if there is a
way to increase the number of simultaneous open connections allowed in
an R session (~100 would be sufficient).

Any help/hints are greatly appreciated,
best regards,
Marc

-- 

Dipl. Inform. Med. Marc Kirchner
Interdisciplinary Centre for Scientific Computing (IWR)
Multidimensional Image Processing
INF 368
University of Heidelberg
D-69120 Heidelberg
Tel: ++49-6221-54 87 97
Fax: ++49-6221-54 88 50
[EMAIL PROTECTED]



__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] [R-pkgs] biglm 0.4

2006-08-25 Thread Thomas Lumley

biglm fits linear and generalized linear models to large data sets, using 
bounded memory.

What's New:  generalized linear models.
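
A hedged usage sketch (chunked fitting with the built-in trees data; an
editorial illustration, not part of the announcement):

library(biglm)
chunk1 <- trees[1:15, ]
chunk2 <- trees[16:31, ]
fit <- biglm(log(Volume) ~ log(Girth) + log(Height), data = chunk1)
fit <- update(fit, chunk2)   # add further rows without holding all data in memory
summary(fit)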


-thomas

Thomas Lumley   Assoc. Professor, Biostatistics
[EMAIL PROTECTED]   University of Washington, Seattle

___
R-packages mailing list
R-packages@stat.math.ethz.ch
https://stat.ethz.ch/mailman/listinfo/r-packages

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Plot y ~ x under condition of variable a and b

2006-08-25 Thread jennystadt


Hi All,

I want to plot y ~ x conditional on variables a and b. Following is the 
dataset:

 plotid lndenlnvol source
369  9037.0 10.419002 -4.101039226  S
370  9037.0  9.840548 -2.432385723  S
371  9037.0  8.973351 -1.374842169  S
372  9037.0  8.242756 -0.813800113  S
373  9037.0  8.006368 -0.366743413  S
374  9037.0  7.396335 -0.041375532  S
375  9037.0  6.194405  0.744573249  S
376  9038.0 10.417209 -2.938129138  S
377  9038.0  9.709296 -1.906228589  S
378  9038.0  8.581107 -1.187441385  S
379  9038.0  7.539027 -0.748873856  S
380  9038.0  6.866933 -0.228547521  S
381  9038.0  6.672033  0.222818889  S
382  9038.0  6.380123  0.863026089  S
11003.1  7.281089  5.563470357  P
21003.1  7.165854  5.587837467  P
31003.1  7.126938  5.604757978  P
41003.1  6.833951  5.709078555  P
560 3.1  6.634462  5.678818058  P
610 3.2  7.052830  5.534234273  P
710 3.2  6.905777  5.559511276  P
810 3.2  6.885776  5.590614404  P
910 3.2  6.685106  5.716040812  P
10103.2  6.495349  5.631784504  P
11103.3  6.697376  5.414815010  P
12103.3  6.553336  5.441823472  P
13103.3  6.581116  5.455788329  P
14103.3  6.279641  5.543868038  P
15103.3  6.119298  5.528003301  P
16103.4  7.035589  5.783924732  P
17103.4  6.875624  5.798852319  P
18103.4  6.812445  5.807787244  P

I used par.plot(lnvol~lnden|source, data=dat, sub=as.factor(plotid), col=T); it 
gave good plots, but it put the different data sources in separate graphs, 
i.e. S and P. What I want is to plot them on the same graph. If anyone has 
experience with this kind of plotting, please kindly give me some hints. 
Thanks!

Jen.

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Plot y ~ x under condition of variable a and b [Broadcast ]

2006-08-25 Thread Wiener, Matthew
It's the |source in your formula that tells lattice to separate them.

If you drop that, you'll get all points without S and P distinguished at
all.  If you add a groups argument, you should get them presented with
different colors/symbols/etc. depending on your trellis settings (warning:
untested code):

par.plot(lnvol~lnden, groups = source,data=dat,sub=as.factor(plotid),col=T)

Hope this helps,

Matt Wiener 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of jennystadt
Sent: Friday, August 25, 2006 11:28 AM
To: r-help@stat.math.ethz.ch
Subject: [R] Plot y ~ x under condition of variable a and b [Broadcast]



Hi All,

I want to plot y~ x under the condition of variable a and b. Followed is the
dataset:

 plotid lndenlnvol source
369  9037.0 10.419002 -4.101039226  S
370  9037.0  9.840548 -2.432385723  S
371  9037.0  8.973351 -1.374842169  S
372  9037.0  8.242756 -0.813800113  S
373  9037.0  8.006368 -0.366743413  S
374  9037.0  7.396335 -0.041375532  S
375  9037.0  6.194405  0.744573249  S
376  9038.0 10.417209 -2.938129138  S
377  9038.0  9.709296 -1.906228589  S
378  9038.0  8.581107 -1.187441385  S
379  9038.0  7.539027 -0.748873856  S
380  9038.0  6.866933 -0.228547521  S
381  9038.0  6.672033  0.222818889  S
382  9038.0  6.380123  0.863026089  S
11003.1  7.281089  5.563470357  P
21003.1  7.165854  5.587837467  P
31003.1  7.126938  5.604757978  P
41003.1  6.833951  5.709078555  P
560 3.1  6.634462  5.678818058  P
610 3.2  7.052830  5.534234273  P
710 3.2  6.905777  5.559511276  P
810 3.2  6.885776  5.590614404  P
910 3.2  6.685106  5.716040812  P
10103.2  6.495349  5.631784504  P
11103.3  6.697376  5.414815010  P
12103.3  6.553336  5.441823472  P
13103.3  6.581116  5.455788329  P
14103.3  6.279641  5.543868038  P
15103.3  6.119298  5.528003301  P
16103.4  7.035589  5.783924732  P
17103.4  6.875624  5.798852319  P
18103.4  6.812445  5.807787244  P

I used  par.plot(lnvol~lnden|source,data=dat,sub=as.factor(plotid),col=T);
It gave good plots, but it put the different data sources to separated
graphs, i.e. S and P. What I want is to plot them on the same graph. If
anyone has the experience in doing plotting like this, please kindly give me
some hints. Thanks!

Jen.

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



--
Notice:  This e-mail message, together with any attachments,...{{dropped}}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] fitting a gaussian to some x,y data

2006-08-25 Thread Michael Koppelman
I apologize if this is redundant. I've been Googling, searching the  
archive and reading the help all morning and I am not getting closer  
to my goal.

I have a series of data (xi, yi). It is not evenly sampled and it is  
messy (meaning that there is a lot of scatter in the data). I want to  
fit a normal distribution (i.e. a gaussian) to the data in order to  
find the center. (The data has a loose normal look to it.) I have  
done this with polynomial fitting in R with nls but want to try it  
with a Gaussian (I am a student astronomer and have not a lot of  
experience in statistics).

In looking at the fitdistr function, it seems to take as input a  
bunch of x values and it will fit a gaussian to the histogram. That  
is not what I need to do, I want to fit a normal distribution to the  
x,y values and get out the parameters of the fit. I'm fooling with  
nls but it seems to want the data in some other format or something  
because it is complaining about a "singular gradient".

I'm sure this isn't hard and the answer must be out there somewhere  
but I can't find it. I appreciate any assistance.

Cheers,
Michael



filepath <- system.file("data", "infile", package="datasets")
mm <- read.table(filepath)
mm
dmk <- data.frame(x=mm$V1, y=mm$V2)
attach(dmk)
plot(x,y)
#ymk <- nls(y~c*x^2+b*x+a, start=list(a=1,b=1,c=1), trace=TRUE)
ymk <- nls(y~a*exp(-x^2/sig), start=list(a=1,sig=1), trace=TRUE)
co = coef(ymk)
cmk <- list(a=co[[1]], b=co[[2]], c=co[[3]])
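
A hedged sketch of a three-parameter Gaussian fit that also estimates the
centre (simulated data; a, mu and sig are illustrative parameter names, not
from the original code):

set.seed(1)
x <- sort(runif(100, -5, 5))                         # unevenly sampled
y <- 3 * exp(-(x - 1)^2 / 2) + rnorm(100, sd = 0.3)  # messy, roughly Gaussian
fit <- nls(y ~ a * exp(-(x - mu)^2 / (2 * sig^2)),
           start = list(a = max(y), mu = x[which.max(y)], sig = 1))
coef(fit)["mu"]   # the fitted centre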

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] increasing the # of socket connections

2006-08-25 Thread Thomas Lumley
On Fri, 25 Aug 2006, Marc Kirchner wrote:


 As - for administrative reasons - I would prefer to use snow's
 SOCK capabilities (instead of MPI and the like), I wonder if there is a
 way to increase the number of simultaneous open connections allowed in
 an R session (~100 would be sufficient).


You need to change
   #define NCONNECTIONS 50
at the top of src/main/connections.c and recompile.

-thomas

Thomas Lumley   Assoc. Professor, Biostatistics
[EMAIL PROTECTED]   University of Washington, Seattle

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] increasing the # of socket connections

2006-08-25 Thread Prof Brian Ripley
On Fri, 25 Aug 2006, Marc Kirchner wrote:

 Dear R-helpers,
 
 using snow on socket connections, I ran into the following error
 
  cl <- makeSOCKcluster(hosts)
   Error in socketConnection(port = port, server = TRUE, 
   blocking = TRUE : all connections are in use
 
 with showConnections(all=T) showing 50 open connections.
 
 As - for administrative reasons - I would prefer to use snow's
 SOCK capabilities (instead of MPI and the like), I wonder if there is a
 way to increase the number of simultaneous open connections allowed in
 an R session (~100 would be sufficient).

If you really need them open at once (and are not just forgetting to close 
them), then change the constant in the file and recompile.

 Any help/hints are greatly appreciated,

Is this the appropriate list: not on my reading of the posting guide!

 best regards,
 Marc

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] xyplot with different symbols and colors?

2006-08-25 Thread Stefan Grosse
Dear List,

I am trying to make an xyplot with different colors and symbols; I got this far:

library(DAAG)
xyplot(csoa ~ it | sex * agegp, data = tinting, groups = target,
       pch = list(1, 2), auto.key = list(space = "right"))


this produces a plot with different colors and symbols, but unfortunately
the legend only follows the color scheme and not the different symbols.

Any suggestions on what to change?

And how do I change or switch off the background color of each panel's strip (title bar)?
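
A hedged sketch of one common lattice idiom (not a confirmed answer from this
thread): setting superpose.symbol in par.settings keeps the panels and the
auto.key in sync, and strip.background controls the strip color.

library(lattice)
library(DAAG)
xyplot(csoa ~ it | sex * agegp, data = tinting, groups = target,
       auto.key = list(space = "right"),
       par.settings = list(superpose.symbol = list(pch = c(1, 2)),
                           strip.background = list(col = "transparent")))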

Stefan Grosse

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to get back POSIXct format after calculating with hist() results

2006-08-25 Thread Prof Brian Ripley
On Fri, 25 Aug 2006, [EMAIL PROTECTED] wrote:

 Hi,
 
 I have a casting/formatting question on hist.POSIXt:
 
 The histogram plot from POSIXct works perfect (with help of Prof. Ripley
 -thanks!).
 When processing the hist(plot=FALSE) output and then plotting the
 results over the x-axis (bins) coming from hist(), I lose the date/time
 labels, getting instead integers displayed.
 
 Trying to cast the $breaks with as.POSIXct gives silly results with
 StarTrek-like years.

But you actually called as.POSIXct(as.Date()) on them, which was `silly'.

class(login_bins$mids) <- class(logins$DateTime)

should do the trick.
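
A self-contained sketch of the same idea (simulated data rather than the
original logins, and "days" breaks just to keep it small):

x <- as.POSIXct("2006-08-01", tz = "GMT") + sort(runif(200, 0, 14 * 24 * 3600))
h <- hist(x, "days", plot = FALSE)
h$mids[1:2]                # plain numbers (seconds since 1970)
class(h$mids) <- class(x)  # copy the date-time class back
h$mids[1:2]                # printed as date-times again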

 What is the proper way to get back the old YY-MM-DD hh:mm:ss labels ?
 
 Peter  
 
 
 
  str(logins)
 `data.frame':   25792 obs. of  1 variable:
  $ DateTime:'POSIXct', format: chr  2006-08-01 00:00:02 2006-08-01
 00:00:25 2006-08-01 00:00:41 ...
 
  login_bins <- hist(samples$DateTime, "hours", freq=TRUE, plot=FALSE)
 
  str(login_bins)
 List of 7
  $ breaks : num [1:337] 1.15e+09 1.15e+09 1.15e+09 1.15e+09 1.15e+09
 ...
  $ counts : int [1:336] 6 10 45 25 40 87 257 356 309 214 ...
  $ intensities: num [1:336] 0.000233 0.000388 0.001744 0.000969 0.001550
 ...
  $ density: num [1:336] 6.46e-08 1.08e-07 4.85e-07 2.69e-07 4.31e-07
 ...
  $ mids   : num [1:336] 1.15e+09 1.15e+09 1.15e+09 1.15e+09 1.15e+09
 ...
  $ xname  : chr logins$DateTime
  $ equidist   : logi TRUE
  - attr(*, class)= chr histogram
 
 ... Then I do some calculations with the login_bins$counts, resulting in
 a new vector:
 
 str(userabs)
  int [1:336] 1 -3 34 39 68 127 347 641 844 943 ...
 
 When I now plot this y over the x-axis coming from the previous
 histogram x-axis with
 
 plot(login_bins$mids, userabs) 
 
 I get the x-axis of labelled with the nums 115440, 115480...
 (number of seconds since 1970 ?)
 
 Trying to convert this gives funny results.
 
 as.POSIXct(as.Date(login_bins$mids[1]))
 [1] 2568-10-12 02:00:00 W. Europe Daylight Time
 
  as.POSIXct(as.Date(login_bins$mids[2]))
 [1] 2578-08-21 02:00:00 W. Europe Daylight Time
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
 

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to get back POSIXct format after calculating with hist() results

2006-08-25 Thread Peter.Marx
yep, it works!

impressingly elegant. R seems worth to dig into the details. I'll try harder.

Thank You very much

Peter

-Ursprüngliche Nachricht-
Von: Prof Brian Ripley [mailto:[EMAIL PROTECTED] 
Gesendet: Freitag, 25. August 2006 17:53
An: Marx Peter
Cc: r-help@stat.math.ethz.ch
Betreff: Re: [R] How to get back POSIXct format after calculating with hist() 
results


On Fri, 25 Aug 2006, [EMAIL PROTECTED] wrote:

 Hi,
 
 I have a casting/formatting question on hist.POSIXt:
 
 The histogram plot from POSIXct works perfect (with help of Prof. 
 Ripley -thanks!). When processing the hist(plot=FALSE) output and then 
 plotting the results over the x-axis (bins) coming from hist(), I lose 
 the date/time labels, getting instead integers displayed.
 
 Trying to cast the $breaks with as.POSIXct gives silly results with 
 StarTrek-like years.

But you actually called as.POSIXct(as.Date()) on them, which was `silly'.

class(login_bins$mids) <- class(logins$DateTime)

should do the trick.

 What is the proper way to get back the old YY-MM-DD hh:mm:ss labels ?
 
 Peter
 
 
 
  str(logins)
 `data.frame':   25792 obs. of  1 variable:
  $ DateTime:'POSIXct', format: chr  2006-08-01 00:00:02 2006-08-01 
 00:00:25 2006-08-01 00:00:41 ...
 
  login_bins <- hist(samples$DateTime, "hours", freq=TRUE, plot=FALSE)
 
  str(login_bins)
 List of 7
  $ breaks : num [1:337] 1.15e+09 1.15e+09 1.15e+09 1.15e+09 1.15e+09
 ...
  $ counts : int [1:336] 6 10 45 25 40 87 257 356 309 214 ...
  $ intensities: num [1:336] 0.000233 0.000388 0.001744 0.000969 
 0.001550 ...
  $ density: num [1:336] 6.46e-08 1.08e-07 4.85e-07 2.69e-07 4.31e-07
 ...
  $ mids   : num [1:336] 1.15e+09 1.15e+09 1.15e+09 1.15e+09 1.15e+09
 ...
  $ xname  : chr logins$DateTime
  $ equidist   : logi TRUE
  - attr(*, class)= chr histogram
 
 ... Then I do some calculations with the login_bins$counts, resulting 
 in a new vector:
 
 str(userabs)
  int [1:336] 1 -3 34 39 68 127 347 641 844 943 ...
 
 When I now plot this y over the x-axis coming from the previous 
 histogram x-axis with
 
 plot(login_bins$mids, userabs)
 
 I get the x-axis of labelled with the nums 115440, 115480... 
 (number of seconds since 1970 ?)
 
 Trying to convert this gives funny results.
 
 as.POSIXct(as.Date(login_bins$mids[1]))
 [1] 2568-10-12 02:00:00 W. Europe Daylight Time
 
  as.POSIXct(as.Date(login_bins$mids[2]))
 [1] 2578-08-21 02:00:00 W. Europe Daylight Time
 
 __
 R-help@stat.math.ethz.ch mailing list 
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide 
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.
 

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] xyplot with different symbols and colors?

2006-08-25 Thread Sebastian P. Luque
On Fri, 25 Aug 2006 17:51:24 +0200,
Stefan Grosse [EMAIL PROTECTED] wrote:

 Dear List, I try to make a xyplot with different colors and symbols, I
 came this far:

 library(DAAG)
 xyplot(csoa~it|sex*agegp, data=tinting, groups=target, pch=list(1,2),
        auto.key=list(space = "right"))


 this produces a plot with different colors and symbols but unfortunately
 the legend does only follow the color scheme and not the different
 symbols.

 Any suggestions what to change?


I think that 'pch' is passed only to the panel function, and the key uses
whatever the current trellis.par.set() is, if you don't define it
explicitly (all explained in ?xyplot).  I find it easier to call
trellis.par.set before the xyplot:


R> trellis.par.set(superpose.symbol=list(pch=c(1, 2)))
Warning message:
Note: The default device has been opened to honour attempt to modify trellis
settings in: trellis.par.set(superpose.symbol = list(pch = c(1, 2)))
R> xyplot(csoa~it|sex*agegp, data=tinting, groups=target,
          auto.key=list(space="right"))


Cheers,

-- 
Seb

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Calculating critical values

2006-08-25 Thread Chuck Cleland
Adam D. I. Kramer wrote:
 Hello,
 
   I have an HLM-like situation: a data frame with rows representing
 individuals, and columns representing some statistics for the individual
 (e.g., a mean and variance for that individual, plus the number of
 observations for that person).
 
   I'm interested in computing a confidence interval around each
 individual's mean, but to do that, I would need to know the critical t-value
 for each individual.
 
   Ergo, is there any sort of function that returns a critical t-value
 for a given p/alpha and df?

?qt

qt(p=.05, df=100)
[1] -1.66023
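
A sketch of how the critical values could then be used for the per-individual
intervals (assuming a data frame 'dat' with columns 'mean', 'var' and 'n' --
hypothetical names, not from the original post):

dat <- data.frame(mean = c(10, 12, 9), var = c(4, 2.5, 3), n = c(8, 15, 5))
se    <- sqrt(dat$var / dat$n)
tcrit <- qt(0.975, df = dat$n - 1)   # one critical t per individual's df
cbind(lower = dat$mean - tcrit * se,
      upper = dat$mean + tcrit * se)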

 Thanks,
 Adam Kramer
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

-- 
Chuck Cleland, Ph.D.
NDRI, Inc.
71 West 23rd Street, 8th floor
New York, NY 10010
tel: (212) 845-4495 (Tu, Th)
tel: (732) 512-0171 (M, W, F)
fax: (917) 438-0894

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Plot y ~ x under condition of variable a and b[Broadcast ]

2006-08-25 Thread Jenny Stadt
Thanks Matthew. This does help. But from the description of the function and
examples, we are not able to use the groups argument and the trellis settings
you mentioned.

Jen.
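
For what it's worth, a sketch with lattice's xyplot() rather than par.plot()
(assuming the posted data are in a data frame 'dat' with columns lnden, lnvol
and source) puts both sources on one panel and distinguishes them in the key:

library(lattice)
xyplot(lnvol ~ lnden, data = dat, groups = source,
       auto.key = list(space = "right"))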


- Original Message -

From: Wiener, Matthew,  [EMAIL PROTECTED]
Sent: 2006-08-25,  09:46:44
To: jennystadt; r-help@stat.math.ethz.ch, [EMAIL PROTECTED]; 
r-help@stat.math.ethz.ch
Subject:  RE: [R] Plot y ~ x under condition of variable a and b[Broadcast ]
  
It's the |source in your formula that tells lattice to separate them.

If you drop that, you'll get all points without S and P distinguished at
all.  If you add a groups argument, you should get them presented with
different colors/symbols/etc. depending on your trellis settings (warning:
untested code):

par.plot(lnvol~lnden, groups = source,data=dat,sub=as.factor(plotid),col=T)

Hope this helps,

Matt Wiener 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of jennystadt
Sent: Friday, August 25, 2006 11:28 AM
To: r-help@stat.math.ethz.ch
Subject: [R] Plot y ~ x under condition of variable a and b [Broadcast]



Hi All,

I want to plot y~ x under the condition of variable a and b. Followed is the
dataset:

 plotid lndenlnvol source
369  9037.0 10.419002 -4.101039226  S
370  9037.0  9.840548 -2.432385723  S
371  9037.0  8.973351 -1.374842169  S
372  9037.0  8.242756 -0.813800113  S
373  9037.0  8.006368 -0.366743413  S
374  9037.0  7.396335 -0.041375532  S
375  9037.0  6.194405  0.744573249  S
376  9038.0 10.417209 -2.938129138  S
377  9038.0  9.709296 -1.906228589  S
378  9038.0  8.581107 -1.187441385  S
379  9038.0  7.539027 -0.748873856  S
380  9038.0  6.866933 -0.228547521  S
381  9038.0  6.672033  0.222818889  S
382  9038.0  6.380123  0.863026089  S
11003.1  7.281089  5.563470357  P
21003.1  7.165854  5.587837467  P
31003.1  7.126938  5.604757978  P
41003.1  6.833951  5.709078555  P
560 3.1  6.634462  5.678818058  P
610 3.2  7.052830  5.534234273  P
710 3.2  6.905777  5.559511276  P
810 3.2  6.885776  5.590614404  P
910 3.2  6.685106  5.716040812  P
10103.2  6.495349  5.631784504  P
11103.3  6.697376  5.414815010  P
12103.3  6.553336  5.441823472  P
13103.3  6.581116  5.455788329  P
14103.3  6.279641  5.543868038  P
15103.3  6.119298  5.528003301  P
16103.4  7.035589  5.783924732  P
17103.4  6.875624  5.798852319  P
18103.4  6.812445  5.807787244  P

I used  par.plot(lnvol~lnden|source,data=dat,sub=as.factor(plotid),col=T);
It gave good plots, but it put the different data sources to separated
graphs, i.e. S and P. What I want is to plot them on the same graph. If
anyone has the experience in doing plotting like this, please kindly give me
some hints. Thanks!

Jen.

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



--
Notice:  This e-mail message, together with any attachments,...{{dropped}}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Check values in colums matrix

2006-08-25 Thread Petr Pikal
Hi

and if speed is an issue and object is numeric matrix something like

function(x) which(colSums(abs(diff(x)))==0)

is a little bit quicker

Cheers
Petr
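
For example, on the (numeric) matrix from the original post this picks out the
constant columns directly:

obj <- cbind(c(1,1,1,4,0,0,1,4,-1), c(0,1,1,4,1,0,1,4,-1),
             c(1,1,1,4,2,0,1,4,-1), c(1,1,1,4,3,0,1,4,-1),
             c(1,1,1,4,6,0,1,5,-1), c(1,1,1,4,6,0,1,6,-1),
             c(1,1,1,4,6,0,1,7,-1), c(1,1,1,4,6,0,1,8,-1))
obj.tr <- t(obj)
which(colSums(abs(diff(obj.tr))) == 0)   # 2 3 4 6 7 9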


On 25 Aug 2006 at 13:39, [EMAIL PROTECTED] wrote:

Date sent:  Fri, 25 Aug 2006 13:39:48 +1000
From:   [EMAIL PROTECTED]
To: [EMAIL PROTECTED], [EMAIL PROTECTED],
[EMAIL PROTECTED]
Copies to:  r-help@stat.math.ethz.ch
Subject:Re: [R] Check values in colums matrix

 As a minor footnote to both of these, I would add that both assume
 that all the columns of the dataset are numeric.  It doesn't cost much
 to generalize it to cover any matrix structure, of any mode:
 
 constantColumns <- function(Xmat)
     which(apply(Xmat, 2, function(z) length(unique(z)) == 1))
 
 -----Original Message-----
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Berton Gunter
 Sent: Friday, 25 August 2006 9:37 AM
 To: 'Gabor Grothendieck'; 'Muhammad Subianto'
 Cc: r-help@stat.math.ethz.ch
 Subject: Re: [R] Check values in colums matrix

 Absolutely. But do note that if the values in obj are the product of
 numerical computations then columns of equal values may turn out to be only
 **nearly** equal and so the sd may turn out to be **nearly** 0 and not
 exactly 0. This is a standard issue in numerical computation, of course, and
 has been commented on in this list at least dozens of times, but it's still
 a gotcha for the unwary (so now dozens +1).

 -- Bert Gunter
 Genentech Non-Clinical Statistics
 South San Francisco, CA

 -----Original Message-----
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Gabor Grothendieck
 Sent: Thursday, August 24, 2006 4:28 PM
 To: Muhammad Subianto
 Cc: r-help@stat.math.ethz.ch
 Subject: Re: [R] Check values in colums matrix

 Try sd(obj.tr) which will give a vector of standard deviations, one per
 column. A column's entry will be zero if and only if all values in the
 column are the same.

 On 8/24/06, Muhammad Subianto [EMAIL PROTECTED] wrote:
   Dear all,
   I apologize if my question is quite simple.
   I have a dataset (20 columns  1000 rows) which some of columns have the
   same value and the others have different values.
   Here are some piece of my dataset:
   obj <- cbind(c(1,1,1,4,0,0,1,4,-1), c(0,1,1,4,1,0,1,4,-1),
                c(1,1,1,4,2,0,1,4,-1), c(1,1,1,4,3,0,1,4,-1),
                c(1,1,1,4,6,0,1,5,-1), c(1,1,1,4,6,0,1,6,-1),
                c(1,1,1,4,6,0,1,7,-1), c(1,1,1,4,6,0,1,8,-1))
   obj.tr <- t(obj)
   obj.tr
        [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
   [1,]    1    1    1    4    0    0    1    4   -1
   [2,]    0    1    1    4    1    0    1    4   -1
   [3,]    1    1    1    4    2    0    1    4   -1
   [4,]    1    1    1    4    3    0    1    4   -1
   [5,]    1    1    1    4    6    0    1    5   -1
   [6,]    1    1    1    4    6    0    1    6   -1
   [7,]    1    1    1    4    6    0    1    7   -1
   [8,]    1    1    1    4    6    0    1    8   -1
   How can I do to check columns 2,3,4,6,7 and 9 have the same value, and
   columns 1,5 and 8 have different values.
   Best, Muhammad Subianto
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html and provide commented,
 minimal, self-contained, reproducible code.

Petr Pikal
[EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Need help with difficulty loading page www.bioconductor.org

2006-08-25 Thread Martin Morgan
I didn't see a response to this, and am not an expert, but wanted to
point you in the right direction.

You should ask this question on the bioconductor mailing list.

The site is very much alive at this end (though I'm very close to it,
physically), so you'll need to provide some more information about the
problems you're experiencing. At a minimum it would help to include
the output of the R command

sessionInfo()

If you're using Windows you might also look at mailing list threads
dealing with server proxies. Searchable archives are at

http://dir.gmane.org/gmane.science.biology.informatics.conductor

this link

http://article.gmane.org/gmane.science.biology.informatics.conductor/2152/match=internet2

and searching for internet2 might be a good start.

Martin


Debashis Bhattacharya [EMAIL PROTECTED] writes:

 The page is either too busy, or there is something seriously wrong with 
 access to this page.

 Most of the time, trying to reach www.bioconductor.org results in 
 failure. Only once in a
 blue moon, do I get through.

 In fact, thus far, I have not been able to install bioconductor, since 
 the first source(...)
 command from the R command window -- following instruction on 
 www.bioconductor.org
 page, that I did manage to reach, one time -- has failed, every time.

 Please help.



 Debashis Bhattacharya.

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] xyplot with different symbols and colors?

2006-08-25 Thread Stefan Grosse
R> trellis.par.set(superpose.symbol=list(pch=c(1, 2)))
Warning message:
Note: The default device has been opened to honour attempt to modify trellis
settings in: trellis.par.set(superpose.symbol = list(pch = c(1, 2)))
R> xyplot(csoa~it|sex*agegp, data=tinting, groups=target,
          auto.key=list(space="right"))



Thanks that did it!

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] xyplot with different symbols and colors?

2006-08-25 Thread Deepayan Sarkar
On 8/25/06, Sebastian P. Luque [EMAIL PROTECTED] wrote:
 On Fri, 25 Aug 2006 17:51:24 +0200,
 Stefan Grosse [EMAIL PROTECTED] wrote:

  Dear List, I try to make a xyplot with different colors and symbols, I
  came this far:

  library(DAAG)
  xyplot(csoa~it|sex*agegp, data=tinting, groups=target, pch=list(1,2),
         auto.key=list(space = "right"))


  this produces a plot with different colors and symbols but unfortunately
  the legend does only follow the color scheme and not the different
  symbols.

  Any suggestions what to change?


 I think that 'pch' is passed only to the panel function, and the key uses
 whatever the current trellis.par.set() is, if you don't define it
 explicitly (all explained in ?xyplot).  I find it easier to call
 trellis.par.set before the xyplot:


 R> trellis.par.set(superpose.symbol=list(pch=c(1, 2)))
 Warning message:
 Note: The default device has been opened to honour attempt to modify trellis
 settings in: trellis.par.set(superpose.symbol = list(pch = c(1, 2)))
 R> xyplot(csoa~it|sex*agegp, data=tinting, groups=target,
           auto.key=list(space="right"))

Also, see the entry for 'par.settings' in help(xyplot).
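
A sketch of that alternative, which keeps the symbol setting local to the one
plot instead of changing the session defaults:

library(DAAG)
library(lattice)
xyplot(csoa ~ it | sex * agegp, data = tinting, groups = target,
       par.settings = list(superpose.symbol = list(pch = c(1, 2))),
       auto.key = list(space = "right"))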

-Deepayan

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] R.squared in Weighted Least Square using the Lm Function

2006-08-25 Thread Charles
Hello all,
I am using the function lm to do my weighted least
square regression.

model <- lm(Y~X1+X2, weight=w)

What I am confused is the r.squared.
It does not seem that the r.squared for the weighted
case is an ordinary 1-RSS/TSS.
What is that precisely?
Is the r.squared measure comparable to that obtained
by the ordinary least square?

I also notice that
model$res is the unweighted residual while
summary(model)$res  is the weighted residual


Thank you in advance for helping!

Charles

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] fitting a gaussian to some x,y data

2006-08-25 Thread Rolf Turner
Michael Koppelman wrote:

 I apologize if this is redundant. I've been Googling, searching the
 archive and reading the help all morning and I am not getting closer
 to my goal.
 
 I have a series of data( xi, yi). It is not evenly sampled and it is
 messy (meaning that there is a lot of scatter in the data). I want to
 fit a normal distribution (i.e. a gaussian) to the data in order to
 find the center. (The data has a loose normal look to it.) I have
 done this with polynomial fitting in R with nls but want to try it
 with a Gaussian (I am a student astronomer and have not a lot of
 experience in statistics).
 
 In looking at the fitdistr function, it seems to take as input a
 bunch of x values and it will fit a gaussian to the histogram. That
 is not what I need to do, I want to fit a normal distribution to the
 x,y values and get out the parameters of the fit. I'm fooling with
 nls but it seems to want the data in some other format or something
 because it is complaining about singular gradient.
 
 I'm sure this isn't hard and the answer must be out there somewhere  
 but I can't find it. I appreciate any assistance.

Fitting a distribution simply means estimating
the parameters of that distribution.

For a Gaussian distribution this problem is trivial.
The parameters are the mean vector mu and the
covariance matrix Sigma.

The (optimal; maximum-likelihood-adjusted-to-be-unbiased)
estimates of these are the obvious ones --- the sample
mean and the sample covariance.  In R you most easily
get these by

o cbinding your ``x'' and ``y'' vectors into
matrix:

 xy <- cbind(x,y)

o applying mean() to this matrix:

 mu.hat <- apply(xy,2,mean)

o calling upon var():

 Sigma.hat <- var(xy)

That is all there is to it.

If all you really want is an estimate of the ``centre''
then this estimate is just mu.hat; you don't need to
``fit a distribution'' at all.

From your description of the problem there may well be
other issues --- lack of independence of your data,
biased sample, outliers, skewness, god knows what. Such
issues should be dealt with as carefully as possible.
It may be the case that you should be doing some sort
of robust estimation.  Or it may be the case that this
data set is hopeless and should be thrown away and a
new data set collected in a manner that will actually
yield some information.

But ``fitting a distribution'' is *NOT* the issue. 

I don't mean to be rude, but this question illustrates the
point that you really shouldn't be *doing* statistics unless
you *understand* some statistics.  At the very best you'll
waste a great deal of time.

cheers,

Rolf Turner
[EMAIL PROTECTED]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R.squared in Weighted Least Square using the Lm Function

2006-08-25 Thread Prof Brian Ripley
On Fri, 25 Aug 2006, Charles wrote:

 Hello all,
 I am using the function lm to do my weighted least
 square regression.
 
 model <- lm(Y~X1+X2, weight=w)
 
 What I am confused is the r.squared.

What r.squared?  There is no r.squared in that object, but it is 
calculated by the summary method.

 It does not seem that the r.squared for the weighted
 case is an ordinary 1-RSS/TSS.
 What is that precisely?

Precisely that, with weights in the SS.  The code is

    r <- z$residuals
    f <- z$fitted
    w <- z$weights
    if (is.null(w)) {
        mss <- if (attr(z$terms, "intercept"))
            sum((f - mean(f))^2)
        else sum(f^2)
        rss <- sum(r^2)
    } else {
        mss <- if (attr(z$terms, "intercept")) {
            m <- sum(w * f/sum(w))
            sum(w * (f - m)^2)
        }
        else sum(w * f^2)
        rss <- sum(w * r^2)
        r <- sqrt(w) * r
    }
    ans$r.squared <- mss/(mss + rss)

That's the great thing about R: you can answer your own question by 
reading the code for yourself.
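
As a quick numerical check of that formula (a small made-up example, not from
the original question):

set.seed(1)
x1 <- rnorm(20); x2 <- rnorm(20); w <- runif(20)
y  <- 1 + 2 * x1 - x2 + rnorm(20)
fit <- lm(y ~ x1 + x2, weights = w)
f <- fitted(fit); r <- residuals(fit)
mss <- sum(w * (f - sum(w * f) / sum(w))^2)   # weighted SS of fitted values
rss <- sum(w * r^2)                           # weighted residual SS
all.equal(summary(fit)$r.squared, mss / (mss + rss))   # TRUE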

 Is the r.squared measure comparable to that obtained
 by the ordinary least square?
 
 I also notice that
 model$res is the unweighted residual while
 summary(model)$res  is the weighted residual

Yes, as documented with added emphasis in ?summary.lm .

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Modifying the embed-results

2006-08-25 Thread Atte Tenkanen
Hi,

Here is a vector and the result from the embed-command:

VECTOR=c(0,3,6,3,11,2,4,3,7,6,4,5,10,2,3,5,8)

 embed(VECTOR, dimension=5)
  [,1] [,2] [,3] [,4] [,5]
 [1,]   113630
 [2,]2   11363
 [3,]42   1136
 [4,]342   113
 [5,]7342   11
 [6,]67342
 [7,]46734
 [8,]54673
 [9,]   105467
[10,]2   10546
[11,]32   1054
[12,]532   105
[13,]8532   10

Is there a way to little modify the algorithm so that the result looks
like this:

[1]  0  3  6 11  2 - beginning from the first number of the VECTOR
[1]  3  6 11  2  4 - beginning from the second number of the VECTOR etc
[1]  6  3 11  2  4 
[1]  3 11  2  4  7
[1] 11  2  4  3  7
[1] 2 4 3 7 6
[1] 4 3 7 6 5
[1] 3 7 6 4 5
[1]  7  6  4  5 10
[1]  6  4  5 10  2
[1]  4  5 10  2  3
[1]  5 10  2  3  8
[1] 10  2  3  5  8

Every row consists of the next five unique(!) members of the VECTOR. I made
this example result with a time-consuming algorithm which uses for-loops
and whiles.

How to do this better??

Thanks in advance!

Atte Tenkanen
University of Turku
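
One possible vectorised sketch (not from the thread): for each starting
position take the first five unique values of the rest of the vector, keeping
only the positions where five unique values are still available:

VECTOR <- c(0,3,6,3,11,2,4,3,7,6,4,5,10,2,3,5,8)
n   <- 5
win <- function(i) unique(VECTOR[i:length(VECTOR)])[1:n]
ok  <- sapply(seq_along(VECTOR),
              function(i) length(unique(VECTOR[i:length(VECTOR)])) >= n)
t(sapply(which(ok), win))   # 13 rows, matching the example above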

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] issues with Sweave and inclusion of graphics in a document

2006-08-25 Thread Thomas Harte
--- Prof Brian Ripley [EMAIL PROTECTED] wrote:

 savePlot is just an internal version of dev.copy, part of the support for 
 the menus on the windows() graphics device.
 
 It is described in `An Introduction to R' (the most basic R manual).

the most basic R manual doesn't quite answer my question. 

by itself, dev.copy doesn't copy the width and height of the device whereas 
savePlot
copies whatever is displayed on the screen giving me 
what-you-see-is-what-you-save
capabilities (but only under the Windows OS). 

i can get pretty close to this in linux by writing a function to save the
plot to a pdf device:
<<label=first.ar.1, results=hide>>=
# no savePlot in Linux ... so write my own function
savePlotAsPdf <- function(pdfname, from=dev.cur()) {
    from <- from
    pdf(pdfname, width=width, height=height)
    to <- dev.cur()
    dev.set(from)
    dev.copy(which=to)
    dev.off()
}
# a long AR process is best viewed in a wide window ...
# width & height are now variables
width <- 20; height <- 5
x11(width=width, height=height)
sp <- make.ar.1(alpha=.5, n=800)
plot(sp, type="l", col="blue")
# width & height via dynamic scoping in savePlotAsPdf
savePlotAsPdf("ar.pdf")
@

the only (minor) inconvenience is that i have to specify width and height as 
variables to
take advantage of dynamic scoping in order to minimize mess.

while this is a workaround, via dev.copy, as you pointed out, it still doesn't 
answer why
Sweave doesn't like x11() devices (or windows() devices under the Windows OS 
for that
matter) within figure environments. 

perhaps this is a question for the package maintainer? but i was hoping that 
all the avid
Sweave users would pitch in with how they work with graphics in practice.

here is a revised .Rnw example illustrating the above:

%  example-linux2.Rnw
\documentclass[a4paper]{article}

\begin{document}

\noindent This is an example of how I can use \texttt{Sweave} under Linux. 

<<echo=false,results=hide>>=
# create a simple AR process:
make.ar.1 <- function(alpha=1, n=300) {
    Z <- rnorm(n);
    Y <- numeric(n);
    Y[1] <- Z[1];
    for (i in 2:n) Y[i] <- alpha*Y[i-1] + Z[i];
    return(Y)
}
@

\texttt{Sweave} doesn't like the [EMAIL PROTECTED](width=width, height=height)@ 
command 
in the following code chunk if it is placed within a \texttt{figure} 
environment.
Instead, I have to save the plot to a file and then I use [EMAIL PROTECTED]@ in 
the \texttt{figure} environment. This isn't a bad thing, as it allows me to 
fine-tune
the \LaTeX\ graphics placement.
<<label=first.ar.1, results=hide>>=
# no savePlot in Linux ... so write my own function
savePlotAsPdf <- function(pdfname, from=dev.cur()) {
    from <- from
    pdf(pdfname, width=width, height=height)
    to <- dev.cur()
    dev.set(from)
    dev.copy(which=to)
    dev.off()
}
# a long AR process is best viewed in a wide window ...
# width & height are now variables
width <- 20; height <- 5
x11(width=width, height=height)
sp <- make.ar.1(alpha=.5, n=800)
plot(sp, type="l", col="blue")
# width & height via dynamic scoping in savePlotAsPdf
savePlotAsPdf("ar.pdf")
@
\begin{figure}
\begin{center}
%   i retain direct control over graphics in LaTeX; i can fine-tune the 
%   the graphics placement as much as i want:
\includegraphics[width=14.5cm]{./ar.pdf}
\caption{An AR(1) process of length~\protect\Sexpr{length(sp)} 
is best viewed in a wide window.}
\end{center}
\end{figure}

Under X, then,
\begin{itemize}
\item Why doesn't \texttt{Sweave} like [EMAIL PROTECTED](width=width, 
height=height)@?
\end{itemize}
There are, however, advantages to doing things this way:
\begin{itemize}
\item I can save the plot to a file without writing any other code;
\item I can include the saved plot in my \LaTeX\ figure, allowing me to 
fine-tune with the [EMAIL PROTECTED]@ command.
\end{itemize}

\end{document}
%  example-linux2.Rnw




 
 On Sat, 19 Aug 2006, Thomas Harte wrote:
 
  the problem is a little hard to explain; the .Rnw files (below)
  probably do a better job, but here goes ...
  
  Sweave doesn't like it when i size a graphical device in a code
  chunk using either, e.g.:
  
  windows(width=20, height=5)
  
  in Windows, or, e.g.
  
  x11(width=20, height=5)
  
  under X, when i then plot something in said device and try to 
  include this graphical output in the resulting document.
  
  Sweave does not object to my writing code chunks in the above
  manner, so long as i do not wish to include the code in a LaTeX 
  figure environment.
  
  oftentimes i want to do precisely what Sweave doesn't appear
  to allow. for example, with time-series data, i want to see a 

[R] tcltk command to figure out which widget is on focus (or clicked)

2006-08-25 Thread Vladislav Petyuk
Hi,
I'm making an interface, where a Tcl/Tk window have few listbox widgets.
I need to select separate parameters from separate listboxes.
It is clear how to get cursor selection value, once you know which listbox 
widget you clicked.
The problem is I can't figure out which one tcltk command to use to get an 
information which listbox widget is in focus (or clicked).
Thank you,
Vlad

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to contrast with factorial experiment

2006-08-25 Thread szhan
Hello, Ted,
Thank you for your help!
So I can not contrast the mean yields between sections 1-8 and 9-11  
under Trt but I can contrast mean yields for sections 1-3 and 6-11  
because there exists significant interaction between two factors  
(Trt:section4, Trt:section5). Could I use the commands below to test  
the difference between sections 1-3 and 6-11 ?
 contrasts(section) <- c(-2,-2,-2,0,0,1,1,1,1,1,1)
 newobj <- lm(log2(yield) ~ treat*section)
 summary(newobj)

Call:
lm(formula = log2(yield) ~ treat * section)

Residuals:
  Min   1Q   Median   3Q  Max
-0.49647 -0.14913 -0.01521  0.17471  0.51105

Coefficients:
                    Estimate Std. Error t value Pr(>|t|)
(Intercept)          6.28840    0.05003 125.682  < 2e-16 ***
treatTrt             1.22122    0.07076  17.259  < 2e-16 ***
section1             0.17831    0.03911   4.559 4.08e-05 ***
section2            -0.23102    0.16595  -1.392  0.17087
section3             2.38170    0.16595  14.352  < 2e-16 ***
section4             3.36834    0.16595  20.298  < 2e-16 ***
section5            -1.56873    0.16595  -9.453 3.67e-12 ***
section6            -0.41522    0.16595  -2.502  0.01613 *
section7            -0.89943    0.16595  -5.420 2.38e-06 ***
section8             0.09522    0.16595   0.574  0.56901
section9            -0.78784    0.16595  -4.748 2.21e-05 ***
section10            0.74821    0.16595   4.509 4.79e-05 ***
treatTrt:section1    0.10101    0.05532   1.826  0.07461 .
treatTrt:section2    0.27270    0.23468   1.162  0.25151
treatTrt:section3   -1.22210    0.23468  -5.207 4.85e-06 ***
treatTrt:section4   -1.39187    0.23468  -5.931 4.26e-07 ***
treatTrt:section5   -0.76137    0.23468  -3.244  0.00225 **
treatTrt:section6    0.07320    0.23468   0.312  0.75658
treatTrt:section7    0.33108    0.23468   1.411  0.16535
treatTrt:section8   -0.13686    0.23468  -0.583  0.56276
treatTrt:section9    0.22086    0.23468   0.941  0.35180
treatTrt:section10  -0.14476    0.23468  -0.617  0.54054
---
Signif. codes:  0 `***' 0.001 `**' 0.01 `*' 0.05 `.' 0.1 ` ' 1

Residual standard error: 0.2874 on 44 degrees of freedom
Multiple R-Squared: 0.973,  Adjusted R-squared: 0.9601
F-statistic: 75.55 on 21 and 44 DF,  p-value: < 2.2e-16

How can I infer that there is significant difference between sections  
1-3 and sections 6-11 for the Trt from the output above?

Joshua
Quoting (Ted Harding) [EMAIL PROTECTED]:

 On 24-Aug-06 [EMAIL PROTECTED] wrote:
 Hello, R users,
 I have two factors (treat, section) anova design experiment where
 there are 3 replicates. The objective of the experiment is to test if
 there is significant difference of yield between top (section 9 to 11)
 and bottom (section 9 to 11)
 [I think you mean sections 1 to 8]

 of the fruit tree under treatment. I found that there are interaction
 between two factors. I wonder if I can contrast means from levels of
 one factor (section) under another factor (treat)? if so, how to do
 it in R and how to interpret the output?

 I think you would be well advised to look at a plot of the data.
 For example, let Y stand for yield, R for replicate, T for treat
 and S for section.

   ix <- (T=="Trt"); plot(S[ix], Y[ix], col="red", ylim=c(0,1000))
   ix <- (T=="Ctl"); points(S[ix], Y[ix], col="blue")

 From this it is clear that sections 4 and 5 are in a class of
 their own. Also, in sections 1-3 and 6-11 the Ctl yields
 are not only lower, but have smaller (in some cases hardly any)
 variance, compared with the Trt yields. The variances for
 sections 7,8,9,10,11 are greater than for 1,2,3,6 without
 great change in mean value.

 While there is an evident difference between Trt yields and
 Ctrl yields for sections 1-3 and 6-11, this is not so for
 sections 4 and 5.

 This sort of behaviour no doubt provides some reasons for the
 interaction you observed. You seem to have a quite complex
 phenomenon here!

 To some extent the problems with variance can be diminished by
 working with logarithms. Compare the previous plot with

   ix <- (T=="Trt"); plot(S[ix], log10(Y[ix]), col="red", ylim=c(0,3))
   ix <- (T=="Ctl"); points(S[ix], log10(Y[ix]), col="blue")

 (you have used log2() in your commands). The above observations
 can be seen reflected in R if you look at the output of

   summary(obj)

 where in particular:

 treatTrt:section2  -1.11691    0.33189  -3.365 0.001595 **
 treatTrt:section3  -0.45634    0.33189  -1.375 0.176099
 treatTrt:section4  -1.56627    0.33189  -4.719 2.42e-05 ***
 treatTrt:section5  -1.73604    0.33189  -5.231 4.48e-06 ***
 treatTrt:section6  -0.91311    0.33189  -2.751 0.008588 **
 treatTrt:section7  -0.07853    0.33189  -0.237 0.814055
 treatTrt:section8   0.17935    0.33189   0.540 0.591654
 treatTrt:section9  -0.28859    0.33189  -0.870 0.389277
 treatTrt:section10  0.06913    0.33189   0.208 0.835972
 treatTrt:section11 -0.29649    0.33189  -0.893 0.376543

 which, precisely, contrasts means from levels of one factor
 (section) under another factor (treat), and shows that most
 of the interaction arises in sections 4 and 5.

 

[R] How to iteratively extract elements out of a list

2006-08-25 Thread xpRt.wannabe
Dear List,

The following code produces a list, which is what I want:

set.seed(123)
tmpf <- function() {
    x <- rpois(rpois(1,4), 2)
}
n <- 3
m <- replicate(n, tmpf())
m

[[1]]
[1] 3 2 4

[[2]]
[1] 0 2 4 2 2 5 2

[[3]]
[1] 2 0 4 1 0


Now I need something that would extract iteratively (or as many times as the
size of 'n') the values that are greater than 2 in each component of 'm' into
another list, such that the sub-list would be:

[[1]]
[1] 3 4

[[2]]
[1] 4 5

[[3]]
[1] 4

Below is what I tried:

for(i in 1:3)
    sub.list <- lapply(m, subset, m[[i]] > 2)

 sub.list

which gives me something different from what I want:

[[1]]
[1] 4

[[2]]
[1] 4

[[3]]
[1] 4

Any help would be appreciated.
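
One straightforward sketch is to apply the condition to each component itself
rather than to m[[i]] inside a loop:

sub.list <- lapply(m, function(v) v[v > 2])
sub.list
# [[1]]
# [1] 3 4
#
# [[2]]
# [1] 4 5
#
# [[3]]
# [1] 4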

 version
 _
platform i386-pc-mingw32
arch i386
os   mingw32
system   i386, mingw32
status
major2
minor2.1
year 2005
month12
day  20
svn rev  36812
language R

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] tick.number for date in xyplot

2006-08-25 Thread Benjamin Tyner
I would like a tick mark for each month; for example,

xyplot(runif(365) ~ I(as.Date("1999-01-01") + 1:365),
       scales=list(x=list(format="%b %Y", tick.number=12)))

I know I could make x numeric and use 'at' and 'labels', but I was
wondering if there is a more direct route I'm missing. (In particular,
one that doesn't have to be modified for new data).

Thanks,
Ben

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Quickie : unload library

2006-08-25 Thread Horace Tso
Dear list,

I know it must be obvious and I did my homework. (In fact I've
RSiteSearched with keyword remove AND library but got timed
out.(why?))

How do I unload a library? I don't mean getting ride of it permanently
but just to unload it for the time being.

A related problem : I have some libraries loaded at startup in .First()
which I have in .Rprofile. Now, I exited R and commented out the lines
in .First(). Next time I launch R the same libraries are loaded again.
I.e. there seems to be a memory of the old .First() somewhere which
refuses to die.

Thanks in adv.

Horace

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] fitting a gaussian to some x,y data

2006-08-25 Thread Michael Koppelman
Thank you. Yes, I do feel that I am under-qualified to even ask  
questions of y'all. Plus I'm an astronomer, which doesn't help! ;)  
I'll try again.


I have two columns of data, the first column (x) is a distance (or  
length or separation) and the second column (y) is a flux (or number  
of counts or brightness) at that distance. Thus, when you plot y vs.  
x you get a spatial profile which shows how bright this thing is as a  
function of position. (See the small, attached PNG file. You can see  
there is a vague gaussian shape to the data.) This is measured data  
from a bizarre technique which yields data that is not evenly-spaced  
in x and it does not represent a pure mathematical function (i.e. it  
is not a point spread function or something like that), it represents  
the actual, non-uniform shape of an astronomical object. We are  
making the assumption that the shape of this object can be roughly  
represented with a gaussian.


I want to fit a gaussian to this with the purpose of determining  
systematically the center of the normal-like shape of the spatial  
feature. I have successfully done so in R with a polynomial but I  
can't figure out how to do it with a gaussian.


Does that make sense?

Thanks!
Michael


On Aug 25, 2006, at 2:04 PM, MARK LEEDS wrote:


hi : i'm not clear on what you mean but someone else might be ? if you
say ( x,y), then it sounds like you are talking about a bivariate  
normal

distribution. to fit a regular one dimensional gaussian distribution,
you can't have 2 dimensional data. i honestly don't mean to sound  
rude but
i think you should explain what you want to do  more clearly  
because I don't think

I am the only one that will be confused by what you said.
send out a clearer email and you will get quite a response because
there are a lot of really smart people ( compared to me ) on this  
list  that love to help.

it's an amazing list in that sense.




__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] rgl: exporting to pdf or png does not work

2006-08-25 Thread Duncan Murdoch
On 8/24/2006 4:34 PM, Gaspard Lequeux wrote:
 Hej,
 
 On Thu, 24 Aug 2006, Duncan Murdoch wrote:
 
 On 8/24/2006 11:19 AM, Gaspard Lequeux wrote:

 On Wed, 23 Aug 2006, Duncan Murdoch wrote:

 On 8/23/2006 5:15 PM, Gaspard Lequeux wrote:

 When exporting a image from rgl, the following error is encountered:

 rgl.postscript('testing.pdf', fmt="pdf")
 RGL: ERROR: can't bind glx context to window
 RGL: ERROR: can't bind glx context to window
 Warning messages:
 1: X11 protocol error: GLXBadContextState
 2: X11 protocol error: GLXBadContextState

 The pdf file is created and is readable, but all the labels are gone.

 Taking a snapshot (to png) gives 'failed' and no file is created.

 Version of rgl used: 0.67-2 (2006-07-11)
 Version of R used: R 2.3.1; i486-pc-linux-gnu; 2006-07-13 01:31:16;
 Running Debian GNU/Linux testing (Etch).

 That looks like an X11 error to me, not something that I'm very likely
 to be able to fix.  If you can debug the error, it would be helpful.

 Actually after upgrading everything (debian testing (etch)) and restarting
 X, I didn't get that error anymore. It was worse: R crashed:

 library(rgl); triangles3d(c(1,2,3), c(1,2,4), c(1,3,5)); rgl.postscript('testing.pdf','pdf')
 X Error of failed request:  GLXBadContextState
Major opcode of failed request:  142 (GLX)
Minor opcode of failed request:  5 (X_GLXMakeCurrent)
Serial number of failed request:  85
Current serial number in output stream:  85
 [EMAIL PROTECTED]:~/seqanal$


 I downloaded the source package (debian testing (etch), rgl-0.67-2).

 In rgl-0.67-2/src/ I changed the following files:

 rglview.cpp, around line 587. Commenting the function call gl2psBeginPage
 removed the crash (but also no pdf output...)

 I enabled this function again and went to gl2ps.c, to the function
 gl2psBeginPage. At the end of that function, around line 4426, commenting
 out the line
 glRenderMode(GL_FEEDBACK);
 removes the R crash, but of course still no pdf output (well, only the
 background).

 GL_FEEDBACK is defined in /usr/include/GL/gl.h as:

 /* Render Mode */
 #define GL_FEEDBACK 0x1C01
 #define GL_RENDER   0x1C00
 #define GL_SELECT   0x1C02

 Trying glRenderMode(GL_RENDER) removed the crash, but still only the
 background in the pdf.

 If someone has some suggestions about what to do next...

 gl2ps is a separate project, whose source has been included into rgl.
 You can see the gl2ps project page at http://www.geuz.org/gl2ps/.

 We're using version 1.2.2, which is a couple of years old.  The current
 stable release of gl2ps is 1.3.1.  It might fix your problem.

 I don't know if we modified gl2ps.c or gl2ps.h when they were included,
 but they haven't been modified since.  (Daniel put them in, based on a
 patch from Albrecht Gebhardt, according to the log.)

 It would be helpful to know:

 1.  Is the rgl source identical to 1.2.2?
 
 Yes. The version of gl2ps in rgl is identical to gl2ps version 1.2.2.
 
 2.  Does rgl work if 1.3.1 is dropped in instead?
 
 No:
 
 In version 1.3.1:
 
 #define GL2PS_PS  0
 #define GL2PS_EPS 1
 #define GL2PS_TEX 2
 #define GL2PS_PDF 3
 #define GL2PS_SVG 4
 #define GL2PS_PGF 5
 
 while in version 1.2.2:
 
 #define GL2PS_PS  1
 #define GL2PS_EPS 2
 #define GL2PS_TEX 3
 #define GL2PS_PDF 4
 
 Thus rgl.postscript('probeer.pdf','tex') should be used to generate a pdf. 
 The pdf has still no characters (axes annotations).
 
 In R/enum.R
 
 The last line (line 54)
 
 rgl.enum (postscripttype, ps=1, eps=2, tex=3, pdf=4)
 
 should be
 
 rgl.enum (postscripttype, ps=0, eps=1, tex=2, pdf=3)
 
 and mayebe add svg and pgf...
 
 
 3.  Does 1.3.1 fix the bug you're seeing?
 
 No. Same error.
 
 The error occurs also on ubuntu dapper. On that ubuntu machine, when 
 installing the libgl1-mesa-swrast, the packages libgl1-mesa 
 libgl1-mesa-dri and x-window-system-core are removed. rgl.postscript 
 doesn't produce any errors anymore, the pdf is created but no text (axes 
 decorations) is written to the pdf.
 
 On debian testing, libgl1-mesa-swx11 can be installed. This removes the 
 follwing packages:
 
 freeglut3-dev libgl1-mesa-dev libgl1-mesa-dri libgl1-mesa-glx 
 libglitz-glx1-dev libglitz1-dev libglu1-mesa-dev libglui-dev libglut3-dev 
 x-window-system-core xlibmesa-gl-dev xorg
 
 but R doesn't crash anymore and the figure is written to file (still 
 without axes annotations).
 
 Reinstal libgl1-mesa-glx removes libgl1-mesa-swx11 and the R crash 
 returns.
 
 So it seems the bug is really triggered by libgl1-mesa. I filled in a bug 
 report for the debian package libgl1-mesa-glx.
 
 I'll look into these at some point, but probably not this week.
 
 Thanks. No hurry however, as I can still use the classical screenshots. 
 The figures will probable not have to be published, as the expected 
 results are not attained.

Thanks for your work on this.  I'll put the gl2ps 1.3.1 code into rgl 
with the changes you found.

Duncan Murdoch


[R] horizontal direct product

2006-08-25 Thread Luke Keele
II am translating some gauss code into R, and gauss has a matrix  
product function called the horizontal direct product (*~), which is  
some sort of variant on the Kronecker product.

For example if x is 2x2 and y is 2x2

the horizontal direct product, z, of x and y is defined (in the Gauss  
manual) as:

row 1 = x11*y11 x11*y12 x12*y11 x12*y12
row 2 = x21*y21 x21*y22 x22*y21 x22*y22

Or in R code if:

x <- matrix(seq(1,4,by=1), 2, 2, byrow=TRUE)
y <- matrix(seq(5,8,by=1), 2, 2, byrow=TRUE)

The resulting matrix, if I had an operator, would be the following  
matrix z, here formed in a contrived manner:

z.1 <- c(5, 6, 10, 12)
z.2 <- c(21,24,28,32)
z <- rbind(z.1,z.2)

I realize that this is just the first and last row of x%*%y when x  
and y are two by two but this  won't generalize with larger  
matrices.  Any ideas about whether this can be done with existing R  
functions in a general way short of writing my own function?
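
For what it's worth, the horizontal direct product is just the row-wise
Kronecker product, so one short general sketch (checked only against the
2 x 2 example above) is:

hdp <- function(x, y) {
    stopifnot(nrow(x) == nrow(y))
    t(sapply(seq_len(nrow(x)), function(i) kronecker(x[i, ], y[i, ])))
}
hdp(x, y)   # rows: 5 6 10 12 and 21 24 28 32, as above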

   Thanks

Luke




Luke Keele
Department of Political Science
Ohio State University
[EMAIL PROTECTED]




[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Problem in library.dynam problems on Linux

2006-08-25 Thread Sinnwell, Jason P.
I don't know if mount -l shows something interesting below.  

We're wondering if there is an issue with the gcc compiler, which has been 
updated since we last updated R.  Do you see anything that could be wrong with 
the info below? 

We plan on updating to R 2.3.1, hoping that will clear things up. 

Thanks for your help, Jason

[EMAIL PROTECTED] haplo.stats]$ which gcc
/usr/bin/gcc
[EMAIL PROTECTED] haplo.stats]$ ls -al /usr/bin/gcc
-rwxr-xr-x  2 root root 103336 Mar  8 15:18 /usr/bin/gcc
[EMAIL PROTECTED] haplo.stats]$ ls -al /usr/local/biotools/r
total 36
drwxr-xr-x   4 magee rcf  4096 May  1 13:17 .
drwxr-xr-x  23 magee rcf  4096 Jun 21 15:25 ..
lrwxrwxrwx   1 root  root7 May  1 13:17 current - R-2.2.1
drwxr-xr-x   5 magee rcf  4096 Jan  3  2005 R-2.0.1
drwxr-xr-x   5 magee rcf  4096 Feb 22  2006 R-2.2.1
[EMAIL PROTECTED] rpack]$ gcc -dumpversion
3.4.5
[EMAIL PROTECTED] rpack]$ mount -l
/dev/sda5 on / type ext3 (rw) [/]
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/sda1 on /boot type ext3 (rw) [/boot]
none on /dev/shm type tmpfs (rw)
/dev/sda10 on /local1 type ext3 (rw) [/local1]
/dev/sda9 on /tmp type ext3 (rw) [/tmp]
/dev/sda7 on /usr type ext3 (rw) [/usr]
/dev/sda8 on /var type ext3 (rw) [/var]
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
rcfcluster-nfs2:/home on /home type nfs 
(rw,rsize=8192,wsize=8192,intr,timeo=15,addr=172.23.70.7)
rcfcluster-nfs2:/data on /data type nfs 
(rw,rsize=8192,wsize=8192,intr,timeo=15,addr=172.23.70.7)
rcfcluster-nfs2:/scratch on /scratch type nfs 
(rw,rsize=8192,wsize=8192,intr,timeo=15,addr=172.23.70.7)

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On 
 Behalf Of Peter Dalgaard
 Sent: Thursday, August 24, 2006 6:02 PM
 To: Sinnwell, Jason P.
 Cc: 'r-help@stat.math.ethz.ch'
 Subject: Re: [R] Problem in library.dynam problems on Linux
 
 Sinnwell, Jason P. [EMAIL PROTECTED] writes:
 
  We have R 2.2.1 installed on a Linux cluster that seems to 
 have problems loading either of our shared object libraries 
 for packages.  This seems to be happening on both local and 
 global versions of packages that we install.  However, we 
 have only noticed this problem in the past 3 months on this R 
 installation, whereas some users had success before then.  It 
 could be that something on our system changed, but I am not 
 an admin so I wouldn't know where to look.  Can anyone help 
 with this problem?
  
  ## A LOCAL INSTALLATION OF HAPLO.STATS APPEARS SUCCESSFUL 
  [EMAIL PROTECTED] rpack]$ R CMD INSTALL -l 
 /home/sinnwell/rdir/tmplib 
  haplo.stats
  * Installing *source* package 'haplo.stats' ...
  ** libs
  make: `haplo.stats.so' is up to date.
  ** R
  ** data
  ** demo
  ** inst
  ** preparing package for lazy loading
  
  ** help
Building/Updating help pages for package 'haplo.stats'
   Formats: text html latex example
  ** building package indices ...
  * DONE (haplo.stats)
  
  ## TRY AND LOAD THE LOCALLY INSTALLED LIBRARY, YET THE 
 SYSTEM COMMAND 
  SHOWS THE .so FILE IS THERE
  
  R> library(haplo.stats, lib.loc="/home/sinnwell/rdir/tmplib/")
  Error in dyn.load(x, as.logical(local), as.logical(now)) : 
  unable to load shared library 
 '/home/sinnwell/rdir/tmplib/haplo.stats/libs/haplo.stats.so':

 /home/sinnwell/rdir/tmplib/haplo.stats/libs/haplo.stats.so: cannot 
  open shared object file: No such file or directory Error in 
 library(haplo.stats, lib.loc = /home/sinnwell/rdir/tmplib/) :
  .First.lib failed for 'haplo.stats'
  R system('ls -al /home/sinnwell/rdir/tmplib/haplo.stats/libs')
  total 88
  drwxr-xr-x   2 sinnwell sinnwell  4096 Aug 24 15:14 .
  drwxr-xr-x  13 sinnwell sinnwell  4096 Aug 24 15:14 ..
  -rwxr-xr-x   1 sinnwell sinnwell 61566 Aug 24 15:14 haplo.stats.so
 
 H.. Does mount -l show something interesting on the 
 filesystem containing the .so file? (e.g., option noexec).
 
 -- 
O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
   c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
  (*) \(*) -- University of Copenhagen   Denmark  Ph:  
 (+45) 35327918
 ~~ - ([EMAIL PROTECTED])  FAX: 
 (+45) 35327907


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Quickie : unload library

2006-08-25 Thread Sachin J
see ?detach 
  

Horace Tso [EMAIL PROTECTED] wrote:
  Dear list,

I know it must be obvious and I did my homework. (In fact I've
RSiteSearched with keyword remove AND library but got timed
out.(why?))

How do I unload a library? I don't mean getting ride of it permanently
but just to unload it for the time being.

A related problem : I have some libraries loaded at startup in .First()
which I have in .Rprofile. Now, I exited R and commented out the lines
in .First(). Next time I launch R the same libraries are loaded again.
I.e. there seems to be a memory of the old .First() somewhere which
refuses to die.

Thanks in adv.

Horace

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



-

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] polychor error

2006-08-25 Thread John Fox
Dear Janet,

Performing a traceback after the error gives a hint:

 tmp.pcc <- polychor(tmp.mat, ML=T, std.err=T)
 traceback()
8: stop("at least one element of ", sQuote("lower"), " is larger than ",
   sQuote("upper"))
7: checkmvArgs(lower = lower, upper = upper, mean = mean, corr = corr, 
   sigma = sigma)
6: pmvnorm(lower = c(row.cuts[i], col.cuts[j]), upper = c(row.cuts[i + 
   1], col.cuts[j + 1]), corr = R)
5: binBvn(rho, row.cuts, col.cuts)
4: fn(par, ...)
3: function (par) 
   fn(par, ...)(c(0.422816748044139, -2.20343446037221, -2.2163627792244, 
   -1.79075055316993, -1.11077161663679, -0.323037731981826,
0.664036943094355, 
   -2.71305188847272, -1.58338678422633, -0.534011853182102,
0.65365619155084
   ))
2: optim(c(optimise(f, interval = c(-1, 1))$minimum, rc, cc), f, 
   control = control, hessian = std.err)
1: polychor(tmp.mat, ML = T, std.err = T)

The parameters are in the order correlation, row thresholds (of which there
are 6), column thresholds (of which there are 4), and at this point in the
maximization of the likelihood the first row threshold (-2.20343446037221)
is *above* the second threshold (-2.2163627792244), causing pmvnorm() to
complain. This can happen when the problem is ill-conditioned, since optim()
doesn't constrain the order of the thresholds.

So, why is the problem ill-conditioned? Here is your contingency table:

 tmp.mat
  3 4  5  6  7
1 0 2  0  0  0
2 0 0  1  1  0
3 0 0  5  1  1
4 0 5 10 12  2
5 0 5 27 29 11
6 1 3 20 57 31
7 0 1  9 34 32

There are only two observations in the first row, two in the second row, and
one in the first column.  You're expecting a lot out of ML to get estimates
of the first couple of thresholds for rows and the first for columns.

One approach would be to eliminate one or more sparse rows or columns; e.g.,

 polychor(tmp.mat[-1,], ML = TRUE, std.err = TRUE)

Polychoric Correlation, ML est. = 0.3932 (0.05719)
Test of bivariate normality: Chisquare = 14.21, df = 19, p = 0.7712

  Row Thresholds
  Threshold Std.Err.
1   -2.4410  0.24270
2   -1.8730  0.14280
3   -1.1440  0.09236
4   -0.3353  0.07408
50.6636  0.07857


  Column Thresholds
  Threshold Std.Err.
1   -2.6500  0.30910
2   -1.6420  0.12100
3   -0.5494  0.07672
40.6508  0.07833
 polychor(tmp.mat[,-1], ML = TRUE, std.err = TRUE)

Polychoric Correlation, ML est. = 0.4364 (0.05504)
Test of bivariate normality: Chisquare = 14.85, df = 17, p = 0.6062

  Row Thresholds
  Threshold Std.Err.
1   -2.4940  0.26020
2   -2.2080  0.19160
3   -1.7850  0.13400
4   -1.1090  0.09113
5   -0.3154  0.07371
60.6625  0.07821


  Column Thresholds
  Threshold Std.Err.
1   -1.6160  0.11970
2   -0.5341  0.07639
30.6507  0.07800
 

A more defensible alternative would be to collapse sparse rows or columns.

BTW, you *can* get estimated standard-errors from polychor() for your
original table for the two-step estimator:

 polychor(tmp.mat, ML = FALSE, std.err = TRUE)

Polychoric Correlation, 2-step est. = 0.4228 (0.05298)
Test of bivariate normality: Chisquare = 19.22, df = 23, p = 0.6883

That's because the two-step estimator estimates the thresholds from the
marginal distributions of the variables rather than from their joint
distribution.

So, is this a bug or a feature? I suppose that it's a bug to allow the
thresholds to get out of order, though to constrain the optimization to
prevent that from happening is probably not worth the effort and could cause
some strange results. On the other hand, the error tells you something about
the data, so maybe it's a feature.

I noticed that you posted another version of this question two days before
this one. I apologize for the slow response -- I wasn't able to read my
email for a few days and it's taken me most of today to catch up.

Regards,
 John

 original message ---

Janet Rosenbaum jrosenba at rand.org
Thu Aug 24 00:41:16 CEST 2006

Hi.

Does anyone know whether the following error is a result of a bug or a 
feature?

I can eliminate the error by making ML=F, but I would like to see the 
values of the cut-points and their variance.  Is there anything that I 
can do?


tmp.vec <- c(0, 0, 0, 0, 0, 1, 0, 2, 0, 0, 5, 5, 3, 1, 0,
             1, 5, 10, 27, 20, 9, 0, 1, 1, 12, 29, 57, 34, 0, 0, 1, 2, 11,
             31, 32)
tmp.mat <- matrix(tmp.vec, nrow=7)
rownames(tmp.mat) <- 1:7
colnames(tmp.mat) <- 3:7
tmp.pcc <- polychor(tmp.mat, ML=T, std.err=T)
Error in checkmvArgs(lower = lower, upper = upper, mean = mean, corr = 
corr,  :
 at least one element of 'lower' is larger than 'upper'

Thanks,

Janet


John Fox
Department of Sociology
McMaster University
Hamilton, Ontario
Canada L8S 4M4
905-525-9140x23604
http://socserv.mcmaster.ca/jfox

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Quickie : unload library

2006-08-25 Thread Sachin J
try detach(package:zoo)
   
  Sachin

Horace Tso [EMAIL PROTECTED] wrote:
  Sachin,

I did try that, ex

detach(zoo)

Error in detach(zoo) : invalid name

detach(zoo)

Error in detach(zoo) : invalid name

But zoo has been loaded,

sessionInfo()
Version 2.3.1 (2006-06-01) 
i386-pc-mingw32 

attached base packages:
[1] methods datasets stats tcltk utils 
graphics 
[7] grDevices base 

other attached packages:
tseries quadprog zoo MASS Rpad 
0.10-1 1.4-8 1.2-0 7.2-27.1 1.1.1 

Thks,

H.


 Sachin J 8/25/2006 12:56 PM 
see ?detach 


Horace Tso wrote:
Dear list,

I know it must be obvious and I did my homework. (In fact I've
RSiteSearched with keyword remove AND library but got timed
out.(why?))

How do I unload a library? I don't mean getting rid of it permanently
but just to unload it for the time being.

A related problem : I have some libraries loaded at startup in
.First()
which I have in .Rprofile. Now, I exited R and commented out the lines
in .First(). Next time I launch R the same libraries are loaded again.
I.e. there seems to be a memory of the old .First() somewhere which
refuses to die.

Thanks in adv.

Horace

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help 
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html 
and provide commented, minimal, self-contained, reproducible code.



__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Quickie : unload library

2006-08-25 Thread Horace Tso
Sachin,

I did try that, ex

detach(zoo)

Error in detach(zoo) : invalid name

detach(zoo)

Error in detach(zoo) : invalid name

But zoo has been loaded,

sessionInfo()
Version 2.3.1 (2006-06-01) 
i386-pc-mingw32 

attached base packages:
[1] methods   datasets  stats tcltk utils
graphics 
[7] grDevices base 

other attached packages:
   tseries   quadprogzoo   MASS   Rpad 
  0.10-11.4-81.2-0 7.2-27.11.1.1 

Thks,

H.


 Sachin J [EMAIL PROTECTED] 8/25/2006 12:56 PM 
see ?detach 
  

Horace Tso [EMAIL PROTECTED] wrote:
  Dear list,

I know it must be obvious and I did my homework. (In fact I've
RSiteSearched with keyword remove AND library but got timed
out.(why?))

How do I unload a library? I don't mean getting rid of it permanently
but just to unload it for the time being.

A related problem : I have some libraries loaded at startup in
.First()
which I have in .Rprofile. Now, I exited R and commented out the lines
in .First(). Next time I launch R the same libraries are loaded again.
I.e. there seems to be a memory of the old .First() somewhere which
refuses to die.

Thanks in adv.

Horace

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help 
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html 
and provide commented, minimal, self-contained, reproducible code.




__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Quickie : unload library

2006-08-25 Thread Prof Brian Ripley
On Fri, 25 Aug 2006, Horace Tso wrote:

 Dear list,
 
 I know it must be obvious and I did my homework. (In fact I've
 RSiteSearched with keyword remove AND library but got timed
 out.(why?))

Probably because the site is offline.

 How do I unload a library? I don't mean getting rid of it permanently
 but just to unload it for the time being.

Do you mean 'package' (see detach and unloadNamespace) or 'library' (see
dyn.unload and library.dynam.unload)?

 A related problem : I have some libraries loaded at startup in .First()
 which I have in .Rprofile. Now, I exited R and commented out the lines
 in .First(). Next time I launch R the same libraries are loaded again.
 I.e. there seems to be a memory of the old .First() somewhere which
 refuses to die.

Are you restoring a workspace containing .First?  Always try with 
--vanilla to check.
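
A minimal sketch of the first of those cases (added here for reference;
"zoo" is only an example package name):

detach("package:zoo")     # remove the package from the search path
unloadNamespace("zoo")    # then unload its namespace as well, if it has one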

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK            Fax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Quickie : unload library

2006-08-25 Thread Horace Tso
Aah, that works. The missing package:...

H.

 Sachin J [EMAIL PROTECTED] 8/25/2006 1:16 PM 
try detach(package:zoo)
   
  Sachin

Horace Tso [EMAIL PROTECTED] wrote:
  Sachin,

I did try that, ex

detach(zoo)

Error in detach(zoo) : invalid name

detach(zoo)

Error in detach(zoo) : invalid name

But zoo has been loaded,

sessionInfo()
Version 2.3.1 (2006-06-01) 
i386-pc-mingw32 

attached base packages:
[1] methods datasets stats tcltk utils 
graphics 
[7] grDevices base 

other attached packages:
tseries quadprog zoo MASS Rpad 
0.10-1 1.4-8 1.2-0 7.2-27.1 1.1.1 

Thks,

H.


 Sachin J 8/25/2006 12:56 PM 
see ?detach 


Horace Tso wrote:
Dear list,

I know it must be obvious and I did my homework. (In fact I've
RSiteSearched with keyword remove AND library but got timed
out.(why?))

How do I unload a library? I don't mean getting rid of it permanently
but just to unload it for the time being.

A related problem : I have some libraries loaded at startup in
.First()
which I have in .Rprofile. Now, I exited R and commented out the lines
in .First(). Next time I launch R the same libraries are loaded again.
I.e. there seems to be a memory of the old .First() somewhere which
refuses to die.

Thanks in adv.

Horace

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help 
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html 
and provide commented, minimal, self-contained, reproducible code.




__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] horizontal direct product

2006-08-25 Thread Dimitrios Rizopoulos
maybe something like:

"%*~%" <- function(x, y){
    n <- nrow(x)
    out <- matrix(0, n, ncol(x) * ncol(y))
    for(i in 1:n)
        out[i, ] <- c(y[i, ] %o% x[i, ])
    out
}

x <- matrix(1:4, 2, 2, TRUE)
y <- matrix(5:8, 2, 2, TRUE)

x %*~% y


I hope it helps.

Best,
Dimitris


Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven

Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/(0)16/336899
Fax: +32/(0)16/337015
Web: http://med.kuleuven.be/biostat/
  http://www.student.kuleuven.be/~m0390867/dimitris.htm


Quoting Luke Keele [EMAIL PROTECTED]:

 II am translating some gauss code into R, and gauss has a matrix
 product function called the horizontal direct product (*~), which is
 some sort of variant on the Kronecker product.

 For example if x is 2x2 and y is 2x2

 the horizontal direct product, z, of x and y is defined (in the Gauss
 manual) as:

 row 1 = x11*y11 x11*y12 x12*y11 x12*y12
 row 2 = x21*y21 x21*y22 x22*y21 x22*y22

 Or in R code if:

 x <- matrix(seq(1,4,by=1),2,2, byrow=TRUE)
 y <- matrix(seq(5,8,by=1),2,2, byrow=TRUE)

 The resulting matrix, if I had an operator, would be the following
 matrix z, here formed in a contrived manner:

 z.1 <- c(5, 6, 10, 12)
 z.2 <- c(21,24,28,32)
 z <- rbind(z.1,z.2)

 I realize that this is just the first and last row of x%*%y when x
 and y are two by two but this  won't generalize with larger
 matrices.  Any ideas about whether this can be done with existing R
 functions in a general way short of writing my own function?

Thanks

 Luke




 Luke Keele
 Department of Political Science
 Ohio State University
 [EMAIL PROTECTED]





 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.





Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] increasing the # of socket connections

2006-08-25 Thread Luke Tierney
On Fri, 25 Aug 2006, Prof Brian Ripley wrote:

 On Fri, 25 Aug 2006, Marc Kirchner wrote:

 Dear R-helpers,

 using snow on socket connections, I ran into the following error

   cl <- makeSOCKcluster(hosts)
  Error in socketConnection(port = port, server = TRUE,
  blocking = TRUE : all connections are in use

 with showConnections(all=T) showing 50 open connections.

 As - for administrative reasons - I would prefer to use snow's
 SOCK capabilities (instead of MPI and the like), I wonder if there is a
 way to increase the number of simultaneous open connections allowed in
 an R session (~100 would be sufficient).

 If you really need them open at once (and are not just forgetting to close
 them), then change the constant in the file and recompile.

 Any help/hints are greatly appreciated,

 Is this the appropriate list: not on my reading of the posting guide!


Would you have asked that question if the answer had been
options(max.connections=100)?  I for one would not.

Best,

luke

-- 
Luke Tierney
Chair, Statistics and Actuarial Science
Ralph E. Wareham Professor of Mathematical Sciences
University of Iowa  Phone: 319-335-3386
Department of Statistics andFax:   319-335-3017
Actuarial Science
241 Schaeffer Hall  email:  [EMAIL PROTECTED]
Iowa City, IA 52242 WWW:  http://www.stat.uiowa.edu

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Quickie : unload library

2006-08-25 Thread Sarah Goslee
Hi,

There are two ways that I know of:
detach(package:zoo)
or use detach() with the number of that package's
place in the search list.

 library(zoo)
 search()
 [1] .GlobalEnvpackage:zoo   package:methods
 [4] package:graphics  package:grDevices package:utils
 [7] package:datasets  package:ecoutils  package:ecodist
[10] package:stats package:gtkDevice Autoloads
[13] package:base
 detach(package:zoo)
 search()
 [1] .GlobalEnvpackage:methods   package:graphics
 [4] package:grDevices package:utils package:datasets
 [7] package:ecoutils  package:ecodist   package:stats
[10] package:gtkDevice Autoloads package:base

OR

 library(zoo)
 search()
 [1] .GlobalEnvpackage:zoo   package:methods
 [4] package:graphics  package:grDevices package:utils
 [7] package:datasets  package:ecoutils  package:ecodist
[10] package:stats package:gtkDevice Autoloads
[13] package:base
 detach(2)
 search()
 [1] .GlobalEnvpackage:methods   package:graphics
 [4] package:grDevices package:utils package:datasets
 [7] package:ecoutils  package:ecodist   package:stats
[10] package:gtkDevice Autoloads package:base

Sarah

-- 
Sarah Goslee
http://www.stringpage.com

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] fitting a gaussian to some x,y data

2006-08-25 Thread MARK LEEDS
Hi Michael: (I stupidly didn't send the initial email to the whole list, so
they will have to read from the bottom.) It's clearer now, but fitting a
gaussian still may not be the right thing to do in this case. Someone else
can answer better, but it sounds to me like you want to do some kind of
regression (I don't know whether it would be linear or glm or what) using
distance versus brightness.
I have no clue about astronomy, but if you think there is a relationship
between them then this is probably the more appropriate approach to take.
Again, don't take my word as set in stone; someone else
will likely reply.
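
In case it helps, a minimal sketch (an assumption about the setup, not code
from the thread) of fitting a gaussian profile with nls() to locate its
centre; x is the distance column, y the flux column, and the start values
are just rough guesses taken from the data:

fit <- nls(y ~ A * exp(-(x - mu)^2 / (2 * sigma^2)),
           data = data.frame(x = x, y = y),
           start = list(A = max(y), mu = x[which.max(y)], sigma = sd(x)/2))
coef(fit)["mu"]   # estimated centre of the profile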

- Original Message - 
From: Michael Koppelman [EMAIL PROTECTED]
To: r-help@stat.math.ethz.ch
Sent: Friday, August 25, 2006 3:35 PM
Subject: Re: [R] fitting a gaussian to some x,y data


 Thank you. Yes, I do feel that I am under-qualified to even ask
 questions of y'all. Plus I'm an astronomer, which doesn't help! ;)
 I'll try again.

 I have two columns of data, the first column (x) is a distance (or
 length or separation) and the second column (y) is a flux (or number
 of counts or brightness) at that distance. Thus, when you plot y vs.
 x you get a spatial profile which shows how bright this thing is as a
 function of position. (See the small, attached PNG file. You can see
 there is a vague gaussian shape to the data.) This is measured data
 from a bizarre technique which yields data that is not evenly-spaced
 in x and it does not represent a pure mathematical function (i.e. it
 is not a point spread function or something like that), it represents
 the actual, non-uniform shape of an astronomical object. We are
 making the assumption that the shape of this object can be roughly
 represented with a gaussian.

 I want to fit a gaussian to this with the purpose of determining
 systematically the center of the normal-like shape of the spatial
 feature. I have successfully done so in R with a polynomial but I
 can't figure out how to do it with a gaussian.

 Does that make sense?

 Thanks!
 Michael


 On Aug 25, 2006, at 2:04 PM, MARK LEEDS wrote:

 hi : i'm not clear on what you mean but someone else might be ? if you
 say ( x,y), then it sounds like you are talking about a bivariate
 normal
 distribution. to fit a regular one dimensional gaussian distribution,
 you can't have 2 dimensional data. i honestly don't mean to sound
 rude but
 i think you should explain what you want to do  more clearly
 because I don't think
 I am the only one that will be confused by what you said.
 send out a clearer email and you will get quite a response because
 there are a lot of really smart people ( compared to me ) on this
 list  that love to help.
 it's an amazing list in that sense.








 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide 
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] tick.number for date in xyplot

2006-08-25 Thread jim holtman
Try this:


xyplot(runif(365) ~ I(as.Date("1999-01-01") + 1:365),
       scales = list(x = list(at = seq(as.Date('1999-1-1'), length = 12,
                                       by = '1 month'), labels = NULL)))
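
A small variant of the same idea (a sketch; it assumes the lattice package is
attached) that keeps one tick per month but formats the labels instead of
suppressing them:

at <- seq(as.Date("1999-01-01"), length = 12, by = "1 month")
xyplot(runif(365) ~ I(as.Date("1999-01-01") + 1:365),
       scales = list(x = list(at = at, labels = format(at, "%b %Y"))))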



On 8/25/06, Benjamin Tyner [EMAIL PROTECTED] wrote:
 I would like a tick mark for each month; for example,

 xyplot(runif(365) ~ I(as.Date("1999-01-01") + 1:365),
   scales = list(x = list(format = "%b %Y", tick.number = 12)))

 I know I could make x numeric and use 'at' and 'labels', but I was
 wondering if there is a more direct route I'm missing. (In particular,
 one that doesn't have to be modified for new data).

 Thanks,
 Ben

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.



-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem you are trying to solve?

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to iteratively extract elements out of a list

2006-08-25 Thread jim holtman
try this:

> set.seed(123)
> tmpf <- function() {
+ x <- rpois(rpois(1,4),2)
+ }
> n <- 3
> m <- replicate(n,tmpf())
> m
[[1]]
[1] 3 2 4

[[2]]
[1] 0 2 4 2 2 5 2

[[3]]
[1] 2 0 4 1 0

> lapply(m, function(x) x[x > 2])
[[1]]
[1] 3 4

[[2]]
[1] 4 5

[[3]]
[1] 4



On 8/25/06, xpRt.wannabe [EMAIL PROTECTED] wrote:
 Dear List,

 The following code produces a list, which is what I what:

 set.seed(123)
 tmpf <- function() {
 x <- rpois(rpois(1,4),2)
 }
 n <- 3
 m <- replicate(n,tmpf())
 m

 [[1]]
 [1] 3 2 4

 [[2]]
 [1] 0 2 4 2 2 5 2

 [[3]]
 [1] 2 0 4 1 0


 Now I need something that would extract iteratively (or as many
 times as
 the size of 'n') the values that are greater than 2 in each component
 of
 'm' into another list such that the sub-list would be:

 [[1]]
 [1] 3 4

 [[2]]
 [1] 4 5

 [[3]]
 [1] 4

 Below is what I tried:

 for(i in 1:3)
 sub.list <- lapply(m, subset, m[[i]] > 2)

  sub.list

 which gives me something different from what I want:

 [[1]]
 [1] 4

 [[2]]
 [1] 4

 [[3]]
 [1] 4

 Any help would be appreciated.

  version
 _
 platform i386-pc-mingw32
 arch i386
 os   mingw32
 system   i386, mingw32
 status
 major2
 minor2.1
 year 2005
 month12
 day  20
 svn rev  36812
 language R

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.



-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem you are trying to solve?

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Problem with geeglm

2006-08-25 Thread Rick Bilonick
event.nab.2 is 0/1 and I dichotomized va to get va.2 to see if I could
get geeglm to work. glm has no problem with the data but geeglm chokes.
Each subject (patient.id) has at most 2 observations and more than 3/4
of the subjects have 2 observations. I have even worse problems trying
to use glmmPQL from MASS and worse still trying to use lmer from lme4.
But I figured a marginal model would work. (geeglm seems to work OK with
most of the explanatory factors in the data set I have but a couple of
them show similar problems.)

> summary(glm(event.nab.2 ~ va.2, family = binomial(link = "logit"), data = test))

Call:
glm(formula = event.nab.2 ~ va.2, family = binomial(link = "logit"),
    data = test)

Deviance Residuals:
Min   1Q   Median   3Q  Max
-0.3787  -0.3787  -0.2246  -0.2246   2.7177

Coefficients:
                Estimate Std. Error z value Pr(>|z|)
(Intercept)      -2.5993     0.1804  -14.41  < 2e-16 ***
va.2(84, Inf]    -1.0685     0.3435   -3.11  0.00187 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

Null deviance: 363.19  on 958  degrees of freedom
Residual deviance: 352.28  on 957  degrees of freedom
AIC: 356.28

Number of Fisher Scoring iterations: 6


summary(geeglm(event.nab.2 ~ va.2, family = binomial(link = "logit"),
               id = patient.id, cor = "exch", data = test))
Error in geese.fit(xx, yy, id, offset, soffset, w, waves = waves,
zsca,  :
nrow(zsca) and length(y) not match


 head(test)
  patient.id event.nab.2  va.2  
1  1   0 (-Inf,84] 
2  1   0 (-Inf,84] 
3  2   0 (84, Inf] 
4  2   0 (84, Inf] 
5  3   0 (84, Inf] 
6  3   0 (84, Inf] 

I'm using R 2.3.1 and the latest version of geepack. I get a similar
error message if I use va which is continuous.

I don't know what the error message from geeglm/geese means.

Rick B.
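
One possible culprit worth checking (a guess added here, not a diagnosis from
the thread): geese.fit() can end up with mismatched pieces when the data
contain missing values or are not ordered by the id variable, because glm()
silently drops incomplete rows while the GEE machinery does not. A hedged
sketch, using the column names from the post:

library(geepack)
test.cc <- na.omit(test[, c("patient.id", "event.nab.2", "va.2")])
test.cc <- test.cc[order(test.cc$patient.id), ]   # geeglm wants data ordered by id
summary(geeglm(event.nab.2 ~ va.2, family = binomial(link = "logit"),
               id = patient.id, corstr = "exchangeable", data = test.cc))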

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] horizontal direct product

2006-08-25 Thread Thomas Lumley
On Fri, 25 Aug 2006, Luke Keele wrote:

 II am translating some gauss code into R, and gauss has a matrix
 product function called the horizontal direct product (*~), which is
 some sort of variant on the Kronecker product.

 For example if x is 2x2 and y is 2x2

 the horizontal direct product, z, of x and y is defined (in the Gauss
 manual) as:

 row 1 = x11*y11 x11*y12 x12*y11 x12*y12
 row 2 = x21*y21 x21*y22 x22*y21 x22*y22


It looks as though
"%~%" <- function (A, B)
{
    m <- ncol(A)
    n <- ncol(B)
    A[, rep(1:m, each = n)] * B[, rep(1:n, m)]
}

would do it.
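
A quick check against the toy example from the original post suggests this
matches the Gauss definition:

x <- matrix(1:4, 2, 2, byrow = TRUE)
y <- matrix(5:8, 2, 2, byrow = TRUE)
x %~% y
#      [,1] [,2] [,3] [,4]
# [1,]    5    6   10   12
# [2,]   21   24   28   32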


-thomas

Thomas Lumley   Assoc. Professor, Biostatistics
[EMAIL PROTECTED]   University of Washington, Seattle

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R in Nature

2006-08-25 Thread simon blomberg
AAh. Then my hypothesis has been rejected. Oh well!

Cheers,

Simon.


Simon,

Congratulations!

It used to be that

   R Ihaka and R Gentleman
   R: A Language for Data Analysis and Graphics
   Journal of Computational and Graphical Statistics, 1996

was used to cite R.

I see a handful of these in Nature from around 2003.

Chuck

On Fri, 25 Aug 2006, Simon Blomberg wrote:

Hi all,

We've just had a paper accepted for publication in Nature. We used R for
95% of our analyses (one of my co-authors sneaked in some GenStat when I
wasn't looking.). The preprint is available from the Nature web site, in
the open peer-review trial section. I searched Nature for previous
references to R Development Core Team, and I received no hits. So I
tentatively conclude that our paper is the first Nature paper to cite R.

A great many thanks to the R Development Core Team for R, and Prof.
Bates for lmer.

Cheers,

Simon.
(I'm off to the pub to celebrate.)

--
Simon Blomberg, B.Sc.(Hons.), Ph.D, M.App.Stat.
Centre for Resource and Environmental Studies
The Australian National University
Canberra ACT 0200
Australia
T: +61 2 6125 7800 email: Simon.Blomberg_at_anu.edu.au
F: +61 2 6125 0757
CRICOS Provider # 00120C

The combination of some data and an aching desire for
an answer does not ensure that a reasonable answer
can be extracted from a given body of data.
- John Tukey.





Charles C. Berry(858) 534-2098
  Dept of Family/Preventive Medicine
E mailto:[EMAIL PROTECTED]  UC San Diego
http://biostat.ucsd.edu/~cberry/ La Jolla, San Diego 92093-0717


-- 
Simon Blomberg, B.Sc.(Hons.), Ph.D, M.App.Stat.
Centre for Resource and Environmental Studies
The Australian National University
Canberra ACT 0200
Australia

T: +61 2 6125 7800
F: +61 2 6125 0757

CRICOS Provider # 00120C

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] generating an expression for a formula automatically

2006-08-25 Thread Maria Montez
Thank you for your answers yesterday. I now have another question!

Suppose that instead of creating a formula for a regression model I 
simply wanted to add the variables. I believe I cannot use the 
as.formula anymore. Again I tried to use expression to no avail. I get 
an expression but I can't use it.

fit.sum <- function(x) {
    fo <- expression(paste(x, collapse = "+"))
    eval(fo)
}
fit.sum(c("x1", "x2"))

Basically what I need is to learn how to use variables when what is 
given to me are their names (character list).

Thanks again, Maria




Sundar Dorai-Raj wrote:


 Maria Montez wrote:

 Hi!

 I would like to be able to create formulas automatically. For 
 example, I want to be able to create a function that takes on two 
 values: resp and x, and then creates the proper formula to regress 
 resp on x.

 My code:

 fit.main <- function(resp, x) {
     form <- expression(paste(resp, " ~ ", paste(x, sep = "", collapse = " + "), sep = ""))
     z <- lm(eval(form))
     z
 }
 main <- fit.main("y", c("x1", "x2", "x3", "x4"))

 and I get this error:
 Error in terms.default(formula, data = data) :
 no terms component

 Any suggestions?

 Thanks, Maria

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide 
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


 Hi, Maria,

 Try

 regr <- paste(x, collapse = "+")
 form <- as.formula(sprintf("%s ~ %s", resp, regr))

 HTH,

 --sundar


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] generating an expression for a formula automatically

2006-08-25 Thread Gabor Grothendieck
Try this and note we must pay attention to which environment
the expression is evaluated in.


fit.sum <- function(x, env = parent.frame())
    eval(parse(text = paste(x, collapse = "+")), env = env)

# test
x1 <- x2 <- 1
fit.sum(c("x1", "x2"))
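
An alternative sketch (not from the original reply) that avoids parse() by
looking the variables up by their names and summing them:

fit.sum2 <- function(x, env = parent.frame()) {
    vals <- lapply(x, get, envir = env)   # fetch each variable by its name
    out <- vals[[1]]
    for (v in vals[-1]) out <- out + v
    out
}
fit.sum2(c("x1", "x2"))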



On 8/25/06, Maria Montez [EMAIL PROTECTED] wrote:
 Thank you for your answers yesterday. I now have another question!

 Suppose that instead of creating a formula for a regression model I
 simply wanted to add the variables. I believe I cannot use the
 as.formula anymore. Again I tried to use expression to no avail. I get
 an expression but I can't use it.

 fit.sum - function(x) {
fo - expression(paste(x, collapse = +))
   eval( fo)
 }
 fit.sum(c(x1,x2))

 Basically what I need is to learn how to use variables when what is
 given to me are their names (character list).

 Thanks again, Maria




 Sundar Dorai-Raj wrote:

 
  Maria Montez wrote:
 
  Hi!
 
  I would like to be able to create formulas automatically. For
  example, I want to be able to create a function that takes on two
  values: resp and x, and then creates the proper formula to regress
  resp on x.
 
  My code:
 
  fit.main - function(resp,x) {
   form - expression(paste(resp, ~ ,paste(x,sep=,collapse= +
  ),sep=))
z - lm(eval(form))
   z
  }
  main - fit.main(y,c(x1,x2,x3,x4))
 
  and I get this error:
  Error in terms.default(formula, data = data) :
  no terms component
 
  Any suggestions?
 
  Thanks, Maria
 
  __
  R-help@stat.math.ethz.ch mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide
  http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible code.
 
 
  Hi, Maria,
 
  Try
 
  regr - paste(x, collapse = +)
  form - as.formula(sprintf(%s ~ %s, resp, regr))
 
  HTH,
 
  --sundar
 

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Quickie : unload library

2006-08-25 Thread Gabor Grothendieck
You likely want the answer that Sarah has already given but in
addition you might also want to look at the thread below,
the point being that detaching a package still leaves portions:

https://www.stat.math.ethz.ch/pipermail/r-help/2006-July/109056.html


On 8/25/06, Sarah Goslee [EMAIL PROTECTED] wrote:
 Hi,

 There are two ways that I know of:
 detach(package:zoo)
 or use detach() with the number of that package's
 place in the search list.

  library(zoo)
  search()
  [1] .GlobalEnvpackage:zoo   package:methods
  [4] package:graphics  package:grDevices package:utils
  [7] package:datasets  package:ecoutils  package:ecodist
 [10] package:stats package:gtkDevice Autoloads
 [13] package:base
  detach(package:zoo)
  search()
  [1] .GlobalEnvpackage:methods   package:graphics
  [4] package:grDevices package:utils package:datasets
  [7] package:ecoutils  package:ecodist   package:stats
 [10] package:gtkDevice Autoloads package:base

 OR

  library(zoo)
  search()
  [1] .GlobalEnvpackage:zoo   package:methods
  [4] package:graphics  package:grDevices package:utils
  [7] package:datasets  package:ecoutils  package:ecodist
 [10] package:stats package:gtkDevice Autoloads
 [13] package:base
  detach(2)
  search()
  [1] .GlobalEnvpackage:methods   package:graphics
  [4] package:grDevices package:utils package:datasets
  [7] package:ecoutils  package:ecodist   package:stats
 [10] package:gtkDevice Autoloads package:base

 Sarah

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Memory usage decreases drastically after save workspace, quit, restart, load workspace

2006-08-25 Thread Klaus Thul
Dear all,

I have the following problem:

  - I have written a program in R which runs out of physical memory  
on my MacBook Pro with 1.5 GB RAM

  - The memory usage is much higher than I would expect from the
actual data in my global environment

  - When I save the workspace, quit R, restart R and load the  
workspace again, memory usage is much less than before
(~50 MB instead of ~1.5 GB), although nothing seems to be missing.

  - It doesn't seem to be possible to reduce the amount of memory  
used by calling gc()

  - the behavior is independent of if I use R in command line or from  
GUI, so I can exclude a problem of the Mac GUI

Any suggestions what the problem might be or how I could debug this?
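
A hedged sketch of one way to see how much of that memory the visible objects
actually account for (versus memory the R process holds for other reasons):

sort(sapply(ls(), function(nm) object.size(get(nm))), decreasing = TRUE)
gc()   # what R's allocator currently reports as used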

Thanks in advance for any help.

Best regards,
Klaus


platform   i386-apple-darwin8.6.1
arch   i386
os darwin8.6.1
system i386, darwin8.6.1
status
major  2
minor  3.1
year   2006
month  06
day01
svn rev38247
language   R
version.string Version 2.3.1 (2006-06-01)

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to iteratively extract elements out of a list

2006-08-25 Thread xpRt.wannabe
Jim and Patrick,

Both of you made the same suggestion, which works great!

A follow-up question: Suppose I change the condition 'x > 2' in 'lapply'
to 'x > 4', as follows:

set.seed(123)
tmpf <- function() {
x <- rpois(rpois(1,4),2)
}
n <- 3
m <- replicate(n,tmpf())
m
sub.m <- lapply(m, function(x) x[x > 4])  # was x > 2

As a result, I'd get:

 sub.m

[[1]]
numeric(0)

[[2]]
[1] 5

[[3]]
numeric(0)

However, what would I need to do such that 'sub.m' contains only the
non-zero length component; namely, the 'sub.m[[2]]'?  In essence, I'd
like to drop all the components of zero length such that 'sub.m'
results in:

[[1]]
[1] 5

My best effort was to use 'lapply' again:

lapply(sub.m, function(x) x[length(x) > 0])

which still gives me:

[[1]]
numeric(0)

[[2]]
[1] 5

[[3]]
numeric(0)

Again, any help would be greatly appreciated.

p.s. Sorry to bug you again.   I should have thought through a little
more prior to composing an example that would represent all possible
scenarios.
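
For completeness, one way that seems to do it (a sketch using only base
functions) is to keep just the components whose length is positive:

sub.m[sapply(sub.m, length) > 0]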

On 8/25/06, jim holtman [EMAIL PROTECTED] wrote:
 try this:

  set.seed(123)
  tmpf - function() {
 + x - rpois(rpois(1,4),2)
 + }
  n - 3
  m - replicate(n,tmpf())
  m
 [[1]]
 [1] 3 2 4

 [[2]]
 [1] 0 2 4 2 2 5 2

 [[3]]
 [1] 2 0 4 1 0

  lapply(m, function(x)x[x2])
 [[1]]
 [1] 3 4

 [[2]]
 [1] 4 5

 [[3]]
 [1] 4

 

 On 8/25/06, xpRt.wannabe [EMAIL PROTECTED] wrote:
  Dear List,
 
  The following code produces a list, which is what I what:
 
  set.seed(123)
  tmpf - function() {
  x - rpois(rpois(1,4),2)
  }
  n - 3
  m - replicate(n,tmpf())
  m
 
  [[1]]
  [1] 3 2 4
 
  [[2]]
  [1] 0 2 4 2 2 5 2
 
  [[3]]
  [1] 2 0 4 1 0
 
 
  Now I need something that would to extract iteratively (or as many
  times as
  the size of 'n') the values that are greater than 2 in each component
  of
  'm' into another list such that the sub-list would be:
 
  [[1]]
  [1] 3 4
 
  [[2]]
  [1] 4 5
 
  [[3]]
  [1] 4
 
  Below is what I tried:
 
  for(i in 1:3)
  sub.list - lapply(m,subset,m[[i]]2)
 
   sub.list
 
  which gives me something different from what I want:
 
  [[1]]
  [1] 4
 
  [[2]]
  [1] 4
 
  [[3]]
  [1] 4
 
  Any help would be appreciated.
 
   version
  _
  platform i386-pc-mingw32
  arch i386
  os   mingw32
  system   i386, mingw32
  status
  major2
  minor2.1
  year 2005
  month12
  day  20
  svn rev  36812
  language R
 
  __
  R-help@stat.math.ethz.ch mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible code.
 


 --
 Jim Holtman
 Cincinnati, OH
 +1 513 646 9390

 What is the problem you are trying to solve?


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to iteratively extract elements out of a list

2006-08-25 Thread xpRt.wannabe
Yet another question:

Let's say I do the following:

set.seed(123)
tmpf <- function(){
x <- rpois(rpois(1,4),2)
}
n <- 3
m <- replicate(n, tmpf())
m
sub.m <- lapply(m, function(x) x[x > 2])

'sub.m' gives me:

[[1]]
[1] 3 4

[[2]]
[1] 4 5

[[3]]
[1] 4

The question is:  What do I need to do such that I can extract
components of length 2 in 'sub.m' into another sublist, which would
look like this:

[[1]]
[1] 3 4

[[2]]
[1] 4 5

I think that's all the questions I can think of -- for now.

Many, many thanks!!!
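
One way that appears to work (a minimal sketch along the same lines as the
previous step) is to keep only the components whose length equals 2:

sub.m[sapply(sub.m, length) == 2]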

On 8/25/06, xpRt. wannabe [EMAIL PROTECTED] wrote:
 Jim and Patrick,

 Both of you made the same suggestion, which works great!

 A follow-up question: Suppose I change the condition 'x2' in 'lapply'
 to 'x4', as follows:

 set.seed(123)
 tmpf - function() {
 x - rpois(rpois(1,4),2)
 }
 n - 3
 m - replicate(n,tmpf())
 m
 sub.m - lapply(m, function(x)x[x4])  # was x2

 As a result, I'd get:

  sub.m

 [[1]]
 numeric(0)

 [[2]]
 [1] 5

 [[3]]
 numeric(0)

 However, what would I need to do such that 'sub.m' contains only the
 non-zero length component; namely, the 'sub.m[[2]]'?  In essence, I'd
 like to drop all the components of zero length such that 'sub.m'
 results in:

 [[1]]
 [1] 5

 My best effort was to use 'lapply' again:

 lapply(sub.m, function(x)x[length(x)0])

 which still gives me:

 [[1]]
 numeric(0)

 [[2]]
 [1] 5

 [[3]]
 numeric(0)

 Again, any help would be greately appreciated.

 p.s. Sorry to bug you again.   I should have thought through a little
 more prior to composing an example that would represent all possible
 scenarios.

 On 8/25/06, jim holtman [EMAIL PROTECTED] wrote:
  try this:
 
   set.seed(123)
   tmpf - function() {
  + x - rpois(rpois(1,4),2)
  + }
   n - 3
   m - replicate(n,tmpf())
   m
  [[1]]
  [1] 3 2 4
 
  [[2]]
  [1] 0 2 4 2 2 5 2
 
  [[3]]
  [1] 2 0 4 1 0
 
   lapply(m, function(x)x[x2])
  [[1]]
  [1] 3 4
 
  [[2]]
  [1] 4 5
 
  [[3]]
  [1] 4
 
  
 
  On 8/25/06, xpRt.wannabe [EMAIL PROTECTED] wrote:
   Dear List,
  
   The following code produces a list, which is what I what:
  
   set.seed(123)
   tmpf - function() {
   x - rpois(rpois(1,4),2)
   }
   n - 3
   m - replicate(n,tmpf())
   m
  
   [[1]]
   [1] 3 2 4
  
   [[2]]
   [1] 0 2 4 2 2 5 2
  
   [[3]]
   [1] 2 0 4 1 0
  
  
   Now I need something that would to extract iteratively (or as many
   times as
   the size of 'n') the values that are greater than 2 in each component
   of
   'm' into another list such that the sub-list would be:
  
   [[1]]
   [1] 3 4
  
   [[2]]
   [1] 4 5
  
   [[3]]
   [1] 4
  
   Below is what I tried:
  
   for(i in 1:3)
   sub.list - lapply(m,subset,m[[i]]2)
  
sub.list
  
   which gives me something different from what I want:
  
   [[1]]
   [1] 4
  
   [[2]]
   [1] 4
  
   [[3]]
   [1] 4
  
   Any help would be appreciated.
  
version
   _
   platform i386-pc-mingw32
   arch i386
   os   mingw32
   system   i386, mingw32
   status
   major2
   minor2.1
   year 2005
   month12
   day  20
   svn rev  36812
   language R
  
   __
   R-help@stat.math.ethz.ch mailing list
   https://stat.ethz.ch/mailman/listinfo/r-help
   PLEASE do read the posting guide 
   http://www.R-project.org/posting-guide.html
   and provide commented, minimal, self-contained, reproducible code.
  
 
 
  --
  Jim Holtman
  Cincinnati, OH
  +1 513 646 9390
 
  What is the problem you are trying to solve?
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.