Re: [R] How to append to a data.frame?

2003-12-09 Thread David Kreil

Dear Prof. Ripley,

Thank you very much for your fast and helpful reply!

Is there a canonical way of doing this without a kludge or is the below 
adequate?

  d <- as.data.frame(matrix(nrow=1000))

With many thanks again for your help,

Yours sincerely,

David Kreil.



Dr David Philip Kreil (`-''-/).___..--''`-._
Research Fellow`6_ 6  )   `-.  ( ).`-.__.`)
University of Cambridge(_Y_.)'  ._   )  `._ `. ``-..-'
++44 1223 764107, fax 333992 _..`--'_..-_/  /--'_.' ,'
www.inference.phy.cam.ac.uk/dpk20   (il),-''  (li),'  ((!.-'

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] How to append to a data.frame?

2003-12-09 Thread Prof Brian Ripley
On Tue, 9 Dec 2003, David Kreil wrote:

 
 Dear Prof. Ripley,
 
 Thank you very much for your fast and helpful reply!
 
 Is there a canonical way of doing this without a kludge or is the below 
 adequate?
 
   d <- as.data.frame(matrix(nrow=1000))

You need to set up the columns as you need them to be eventually.

 
 With many thanks again for your help,
 
 Yours sincerely,
 
 David Kreil.
 
 
 
 Dr David Philip Kreil (`-''-/).___..--''`-._
 Research Fellow`6_ 6  )   `-.  ( ).`-.__.`)
 University of Cambridge(_Y_.)'  ._   )  `._ `. ``-..-'
 ++44 1223 764107, fax 333992 _..`--'_..-_/  /--'_.' ,'
 www.inference.phy.cam.ac.uk/dpk20   (il),-''  (li),'  ((!.-'
 
 
 
 

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Font problem

2003-12-09 Thread John Dougherty
Some plots fail due to a problem with the X11 fonts.  I get a message that 
X11 font at size 22 could not be loaded.  The demo() graphics routine for 
instance dies during the third chart.  The graphics demo calls font.main=1 
and that seems to be where the error is.  I believe this is due to a 
configuration problem on my system, however I can't find where in the 
environment font.main looks for the font to use.

I am running SuSE 9.0 and use the KDE desktop.  However, I have also 
replicated this in GNOME and WindowMaker.  Varying the fonts used by the 
console does not affect the result.

Thanks,
John Dougherty

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] How to append to a data.frame?

2003-12-09 Thread David Kreil
Yes, once I've named the first column, I can add further ones by saying 
d[c("x","y","z")]=NA or such. I was just wondering whether that was the way to 
do it or whether there was a more elegant approach. Preallocation was the 
critical clue I needed.

Thanks again for your help,

David.



Dr David Philip Kreil (`-''-/).___..--''`-._
Research Fellow`6_ 6  )   `-.  ( ).`-.__.`)
University of Cambridge(_Y_.)'  ._   )  `._ `. ``-..-'
++44 1223 764107, fax 333992 _..`--'_..-_/  /--'_.' ,'
www.inference.phy.cam.ac.uk/dpk20   (il),-''  (li),'  ((!.-'

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] axes that meet

2003-12-09 Thread Uwe Ligges
Hank Stevens wrote:
R v. 1.7.1, Windows 2000.
A particular journal wants me to provide scatter plots with no box, but 
with axes that meet in the lower left corner. It seems as though there 
must be an easy way of doing this, but my reading the help on 
plot.default, axis, and box have not provided any clues. I would be most 
appreciative of any feedback.
Thank you,
Hank Stevens

Dr. Martin Henry H. Stevens, Assistant Professor
338 Pearson Hall
Botany Department
Miami University
Oxford, OH 45056
Office: (513) 529-4206
Lab: (513) 529-4262
FAX: (513) 529-4243
http://www.cas.muohio.edu/botany/bot/henry.html
http://www.muohio.edu/ecology
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
See argument bty in ?par, e.g.:

  par(bty = "l")
  plot(1:10)
Uwe Ligges

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Interfacing R and Python in MS Windows

2003-12-09 Thread Uwe Ligges
Héctor Villafuerte D. wrote:

Hi all,
I need the power of R from within some of my Python programs...
I use debian linux (woody) at home and windows XP at work (the
latter is where I need to get things done!)
These are my packages:
R 1.8.0
Python 2.3
RSPython 0.5-3
This is what I've done:
(1) Since the Windows Binary of RSPython is compiled against
Python 2.2 I downloaded the tarball
(2) Followed the instructions in INSTALL.win (with pexports and
everything)
(3) In the RGUI Install package(s) from local zip files...
If you have downloaded the tarball of RSPython, you have to install the 
package using Rcmd INSTALL, and you cannot use the RGUI which is 
designed to install binary packages.

Uwe Ligges


(4) NO errors reported during this process
(5) When I try to Load package in R it shows this error:
  local({pkg <- select.list(sort(.packages(all.available = TRUE)))
+ if(nchar(pkg)) library(pkg, character.only=TRUE)})
Error in testRversion(descfields) : This package has not been installed 
properly
See the Note in ?library

(6) In Python
  import RS
Traceback (most recent call last):
 File stdin, line 1, in ?
ImportError: No module named RS
Please help me to get these excellent tools going in Windows.
Thanks in advance,
Hector
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] How to append to a data.frame?

2003-12-09 Thread Prof Brian Ripley
On Tue, 9 Dec 2003, David Kreil wrote:

 Yes, once I've named the first column, I can add further ones by saying 
 d[c("x","y","z")]=NA or such. I was just wondering whether that was the way to 
 do it or whether there was a more elegant approach. Preallocation was the 
 critical clue I needed.

Use an initial data.frame call naming all the columns.
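
For instance, a minimal sketch along those lines (the column names and types
here are only illustrative):

  d <- data.frame(x = numeric(1000), y = numeric(1000), z = logical(1000))
  d[1, ] <- list(3.14, 2.71, TRUE)   # then fill rows in place as results arrive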

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] How to append to a data.frame?

2003-12-09 Thread David Kreil
Ok, how can I both allocate storage and specify column names in a data.frame 
call, please? Apologies if I'm being slow here.

With many thanks again,

David.

  Yes, once I've named the first column, I can add further ones by saying 
  d[c("x","y","z")]=NA or such. I was just wondering whether that was the way to 
  do it or whether there was a more elegant approach. Preallocation was the 
  critical clue I needed.
 
 Use an initial data.frame call naming all the columns.
 
 -- 
 Brian D. Ripley,  [EMAIL PROTECTED]
 Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
 University of Oxford, Tel:  +44 1865 272861 (self)
 1 South Parks Road, +44 1865 272866 (PA)
 Oxford OX1 3TG, UKFax:  +44 1865 272595
 



Dr David Philip Kreil (`-''-/).___..--''`-._
Research Fellow`6_ 6  )   `-.  ( ).`-.__.`)
University of Cambridge(_Y_.)'  ._   )  `._ `. ``-..-'
++44 1223 764107, fax 333992 _..`--'_..-_/  /--'_.' ,'
www.inference.phy.cam.ac.uk/dpk20   (il),-''  (li),'  ((!.-'

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] p-value from chisq.test working strangely on 1.8.1

2003-12-09 Thread Torsten Hothorn

 Hello everybody,

 I'm seeing some strange behavior on R 1.8.1 on Intel/Linux compiled
 with gcc 3.2.2.  The p-value calculated from the chisq.test function is
 incorrect for some input values:


   chisq.test(matrix(c(0, 1, 1, 12555), 2, 2), simulate.p.value=TRUE)

  Pearson's Chi-squared test with simulated p-value (based on 2000
  replicates)

 data:  matrix(c(0, 1, 1, 12555), 2, 2)
 X-squared = 1e-04, df = NA, p-value = 1

   chisq.test(matrix(c(0, 1, 1, 12556), 2, 2), simulate.p.value=TRUE)
 [...]
 data:  matrix(c(0, 1, 1, 12556), 2, 2)
  X-squared = 1e-04, df = NA, p-value < 2.2e-16

   chisq.test(matrix(c(0, 1, 1, 12557), 2, 2), simulate.p.value=TRUE)
 [...]
 data:  matrix(c(0, 1, 1, 12557), 2, 2)
 X-squared = 1e-04, df = NA, p-value = 1


this does not happen with R-1.8.1 and gcc 2.95.4 on Debian stable:

R chisq.test(matrix(c(0, 1, 1, 12555), 2, 2),
simulate.p.value=TRUE)$p.value
[1] 1
R chisq.test(matrix(c(0, 1, 1, 12556), 2, 2),
simulate.p.value=TRUE)$p.value
[1] 1
R chisq.test(matrix(c(0, 1, 1, 12557), 2, 2),
simulate.p.value=TRUE)$p.value
[1] 1

neither with R-1.9.0 (unstable). Is this reproducible without using
`set.seed' on your system?

Best,

Torsten


 In these three calls to chisq.test, I'm varying the input matrix by
 only 1 observation, but the p-value changes by 16 orders of magnitude.
 This is reproducible on my system.  Please let me know if any other
 information would be useful.

 chisq.test works properly for these inputs on Mac OS X 10.3.1 with R
 1.8.0.  I don't know if the problem is with Linux or 1.8.1.

 This bug looks very similar to bug 4718, which was reported in R 1.8.0
 and fixed in R 1.8.1.  They may be related.
 http://r-bugs.biostat.ku.dk/cgi-bin/R/Analyses-fixed?id=4718;
 user=guest;selectid=4718

 Jeff

 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-help



__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] The spdep package

2003-12-09 Thread Osmo Kolehmainen
Hi,

Here is a listw object z corresponding to the matrix W. I understand n, 
nn,  S0, S1 and S2 in the weights constants summary. Is it simply so that 
n1 = n-1, n2 = n-2 and n3 = n-3? If this is true, where are they needed?

Just wondering

Osmo

 W
  [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
 [1,]    0    1    0    1    0    0    0    0    0
 [2,]    1    0    1    0    1    0    0    0    0
 [3,]    0    1    0    0    0    1    0    0    0
 [4,]    1    0    0    0    1    0    1    0    0
 [5,]    0    1    0    1    0    1    0    1    0
 [6,]    0    0    1    0    1    0    0    0    1
 [7,]    0    0    0    1    0    0    0    1    0
 [8,]    0    0    0    0    1    0    1    0    1
 [9,]    0    0    0    0    0    1    0    1    0
 z=mat2listw(W);z
Characteristics of weights list object:
Neighbour list object:
Number of regions: 9
Number of nonzero links: 24
Percentage nonzero weights: 29.62963
Average number of links: 2.67
Weights style: M
Weights constants summary:
  n n1 n2 n3 nn S0 S1  S2
M 9  8  7  6 81 24 48 272
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] R: the spdep package

2003-12-09 Thread Osmo Kolehmainen
Hi,
Here is a listw object z corresponding to the matrix W. I understand n, nn, 
S0, S1 and S2 in the weights constants summary. Is it simply so that n1 = 
n-1, n2 = n-2 and n3 = n-3? If this is true, where are they needed?
Just wondering
Osmo

 W
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
[1,] 0 1 0 1 0 0 0 0 0
[2,] 1 0 1 0 1 0 0 0 0
[3,] 0 1 0 0 0 1 0 0 0
[4,] 1 0 0 0 1 0 1 0 0
[5,] 0 1 0 1 0 1 0 1 0
[6,] 0 0 1 0 1 0 0 0 1
[7,] 0 0 0 1 0 0 0 1 0
[8,] 0 0 0 0 1 0 1 0 1
[9,] 0 0 0 0 0 1 0 1 0
 z=mat2listw(W);z
Characteristics of weights list object:
Neighbour list object:
Number of regions: 9
Number of nonzero links: 24
Percentage nonzero weights: 29.62963
Average number of links: 2.67
Weights style: M
Weights constants summary:
n n1 n2 n3 nn S0 S1 S2
M 9 8 7 6 81 24 48 272
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Alpha

2003-12-09 Thread Poizot Emmanuel
Hello,
I just install red-hat 7.2 for alpha station.
I downloaded R-1.8.1-1.alpha.rpm, but unable to install it.
I need the libblas.so.3 library. I did have a look around and did not find 
where to get that library for linux alpha version.

-- 
regards

Emmanuel POIZOT
Cnam/Intechmer
Digue de Collignon
50110 Tourlaville
Tél : (33)(0)2 33 88 73 42
Fax : (33)(0)2 33 88 73 39

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Alpha

2003-12-09 Thread alessandro . semeria

Try to install ATLAS (improved version of BLAS libraries):
 sources form http://www.netlib.org/atlas/index.html#software

Best regards
A.S.



Alessandro Semeria
Models and Simulations Laboratory
Montecatini Environmental Research Center (Edison Group),
Via Ciro Menotti 48,
48023 Marina di Ravenna (RA), Italy
Tel. +39 544 536811
Fax. +39 544 538663
E-mail: [EMAIL PROTECTED]



   
 
  From: Poizot Emmanuel <[EMAIL PROTECTED]>
  Sent by: [EMAIL PROTECTED]
  To: [EMAIL PROTECTED]
  Subject: [R] Alpha
  Date: 09-12-2003 10:07
  Please respond to: poizot




Hello,
I just install red-hat 7.2 for alpha station.
I downloaded R-1.8.1-1.alpha.rpm, but unable to install it.
I need the libblas.so.3 library. I did have a look around and did not find
where to get that library for linux alpha version.

--
regards

Emmanuel POIZOT
Cnam/Intechmer
Digue de Collignon
50110 Tourlaville
Tél : (33)(0)2 33 88 73 42
Fax : (33)(0)2 33 88 73 39

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] axes that meet

2003-12-09 Thread Pikounis, Bill
Hank,
I think the graphical parameter you are looking for is bty, as in

  par(bty = "l")

Details on the ?par help topic.

Hope that helps.

Bill


Bill Pikounis, Ph.D.

Biometrics Research Department
Merck Research Laboratories
PO Box 2000, MailDrop RY33-300  
126 E. Lincoln Avenue
Rahway, New Jersey 07065-0900
USA

Phone: 732 594 3913
Fax: 732 594 1565


 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Hank Stevens
 Sent: Tuesday, December 09, 2003 1:18 AM
 To: [EMAIL PROTECTED]
 Subject: [R] axes that meet
 
 
 R v. 1.7.1, Windows 2000.
 A particular journal wants me to provide scatter plots with 
 no box, but 
 with axes that meet in the lower left corner. It seems as 
 though there must 
 be an easy way of doing this, but my reading the help on 
 plot.default, 
 axis, and box have not provided any clues. I would be most 
 appreciative of 
 any feedback.
 Thank you,
 Hank Stevens
 
 Dr. Martin Henry H. Stevens, Assistant Professor
 338 Pearson Hall
 Botany Department
 Miami University
 Oxford, OH 45056
 
 Office: (513) 529-4206
 Lab: (513) 529-4262
 FAX: (513) 529-4243
 http://www.cas.muohio.edu/botany/bot/henry.html
 http://www.muohio.edu/ecology
 
 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 


__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] How to append to a data.frame?

2003-12-09 Thread Jason Turner
David Kreil wrote:

Ok, how can I both allocate storage and specify column names in a data.frame 
call, please? Apologies if I'm being slow here.

With many thanks again,

David.

Something like (UNTESTED code follows)

templateColumn <- rep(NA, 1000) # for 1000 rows

foo <- data.frame(x = templateColumn,
                  y = templateColumn,
                  z = templateColumn)  # or however many columns you need
bar <- foo

for (ii in your.iterative.sequence) {
  if (ii > nrow(bar)) {
    bar <- rbind(bar, foo)   # grow by another block of rows when needed
  }
  bar[ii, ] <- your.function()
}
Cheers

Jason
--
Indigo Industrial Controls Ltd.
http://www.indigoindustrial.co.nz
64-21-343-545
[EMAIL PROTECTED]
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] How to append to a data.frame?

2003-12-09 Thread Peter Dalgaard
David Kreil [EMAIL PROTECTED] writes:

 Ok, how can I both allocate storage and specify column names in a data.frame 
 call, please? Apologies if I'm being slow here.

It gets a little tricky. I'd try something along the lines of

data.frame(age=as.numeric(NA), sex=factor(NA, levels=c("m","f")))[rep(1,20),]

or 

data.frame(age=0, sex=factor("m", levels=c("m","f")))[rep(NA,20),]

and of course the brute force way is

data.frame(age=rep(as.numeric(NA),20),
           sex=factor(rep(NA,20), levels=c("m","f"))
          )

Also, 

(a) there's no point in ensuring that you're filling with NA if they
are all going to be changed anyway, and
(b) recycling works so that you only need to specify the length of one
variable, so 

data.frame(age=numeric(20), sex=factor("", levels=c("m","f")) )

works too.

Extending a data frame can be as simple as

mydata <- mydata[1:newlength,]
 
(plus fixup of row names later on).

 With many thanks again,
 
 David.
 
   Yes, once I've named the first column, I can add further ones by saying 
   d[c("x","y","z")]=NA or such. I was just wondering whether that was the way to 
   do it or whether there was a more elegant approach. Preallocation was the 
   critical clue I needed.
  
  Use an initial data.frame call naming all the columns.


-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Font problem

2003-12-09 Thread Peter Dalgaard
John Dougherty [EMAIL PROTECTED] writes:

 Some plots fail due to a problem with the X11 fonts.  I get a message that 
 X11 font at size 22 could not be loaded.  The demo() graphics routine for 
 instance dies during the third chart.  The graphics demo calls font.main=1 
 and that seems to be where the error is.  I believe this is due to a 
 configuration problem on my system, however I can't find where in the 
 environment font.main looks for the font to use.
 
 I am running SuSE 9.0 and use the KDE desktop.  However, I have also 
 replicated this in GNOME and WindowMaker.  Varying the fonts used by the 
 console does notb effect the result.

I think that in principle the bug is in R, but as far as I remember,
the workaround is to ensure that you either have scalable PostScript
fonts or have non-scalable versions in both 100 and 75 dpi. This in
turn is assured by configuring the font server.

On RedHat (dunno about SuSE), /etc/X11/fs/config needs to have
something like

catalogue = /usr/X11R6/lib/X11/fonts/misc:unscaled,
/usr/X11R6/lib/X11/fonts/75dpi:unscaled,
/usr/X11R6/lib/X11/fonts/100dpi:unscaled,
/usr/X11R6/lib/X11/fonts/misc,
/usr/X11R6/lib/X11/fonts/Type1,
...

or lose the :unscaled, but that tends to look horrible.
-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] trouble with predict.l1ce

2003-12-09 Thread Martin Maechler
 clayton == clayton springer [EMAIL PROTECTED]
 on Mon, 8 Dec 2003 13:48:11 -0500 writes:

clayton Dear R-help, I am having trouble with the predict
clayton function in lasso2. For example:

  data(Iowa)
  l1c.I <- l1ce(Yield ~ ., Iowa, bound = 10, absolute.t=TRUE)
  predict(l1c.I)        # this works fine
  predict(l1c.I, Iowa)
clayton Error in eval(expr, envir, enclos) : couldn't find
clayton function "Yield"


clayton And I have similar trouble whenever I use the
clayton newdata argument in prediction.

yes.
This is something that needs to be added to the lasso2 package.

Volunteers are sought ...

Martin Maechler [EMAIL PROTECTED] http://stat.ethz.ch/~maechler/
Seminar fuer Statistik, ETH-Zentrum  LEO C16Leonhardstr. 27
ETH (Federal Inst. Technology)  8092 Zurich SWITZERLAND
phone: x-41-1-632-3408  fax: ...-1228   

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R]: global and local variables

2003-12-09 Thread allan clark

   Hi all

   I have a problem pertaining to local and global variables.

   Say I have a function defined as follows:

   a <- function(x)
   {y <- x^2}

   i.e
   a(2)
   [1] 4

   function b is now defined to take the value of y and do some
   manipulation with it. As it stands I don't know how to store the
   variable y such that other functions are able to reference its value.

   I know that I can simply put the operations found in b simply into a
   but this is not what I want.

   I would like to have stand alone functions say

   a, b and c which could be run independently as well as have a function
   called say

   control that can run a, b and c.

   i.e.

   control- function( x)
   {
   a(x)
   b(x)
   c(x)
   }

   I hope that you guys understand what I'm trying to do.

   Cheers
   Allan
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R]: global and local variables

2003-12-09 Thread Uwe Ligges
allan clark wrote:

   Hi all

   I have a problem pertaining to local and global variables.

   Say I have a function defined as follows:

   a <- function(x)
   {y <- x^2}
   i.e
   a(2)
   [1] 4
The function a specified above won't return 4!



   function b is now defined to take the value of y and do some
   manipulation with it. As it stands I dont know how to store the
   variable y such that other functions are able to reference its value.
   I know that I can simply put the operations found in b simply into a
   but this is not what I want.
   I would like to have stand alone functions say

   a, b and c which could be run independently as well as have a function
   called say
   control that can run a, b and c.

   i.e.

   control- function( x)
   {
   a(x)
   b(x)
   c(x)
   }
   I hope that you guys understand what I'm trying to do.
Have you tried to read "An Introduction to R"? If not, please try!
What you really want to do is use assignments and return()
statements, as in:

a <- function(x) return(x^2)

foo <- function(x) {
  y <- a(x)
  z <- b(x)
  return(list(y=y, z=z))
}
foo(.)

Uwe Ligges


   Cheers
   Allan
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] levelplot parameters

2003-12-09 Thread Joerg Schaber
Hi,

I have a levelplot with one panel. I just can't find out how I can 
manipulate the size of the axis lables. e.g. scales.cex doesn't work,  
the usual par-parameters either.
Any hint?

joerg

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] axes that meet

2003-12-09 Thread partha_bagchi
It might also mean that he is looking for par(xaxs = "i", yaxs = "i")
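
Putting the two suggestions together, a minimal sketch:

  par(bty = "l", xaxs = "i", yaxs = "i")  # L-shaped box, axes spanning exactly the data range
  plot(1:10)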





Pikounis, Bill [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
12/09/2003 05:23 AM

 
To: 'Hank Stevens' [EMAIL PROTECTED], [EMAIL PROTECTED]
cc: 
Subject:RE: [R] axes that meet


Hank,
I think the graphical parameter you are looking for is bty, as in

par(bty = "l")

Details on the ?par help topic.

Hope that helps.

Bill


Bill Pikounis, Ph.D.

Biometrics Research Department
Merck Research Laboratories
PO Box 2000, MailDrop RY33-300
126 E. Lincoln Avenue
Rahway, New Jersey 07065-0900
USA

Phone: 732 594 3913
Fax: 732 594 1565


 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Hank Stevens
 Sent: Tuesday, December 09, 2003 1:18 AM
 To: [EMAIL PROTECTED]
 Subject: [R] axes that meet


 R v. 1.7.1, Windows 2000.
 A particular journal wants me to provide scatter plots with
 no box, but
 with axes that meet in the lower left corner. It seems as
 though there must
 be an easy way of doing this, but my reading the help on
 plot.default,
 axis, and box have not provided any clues. I would be most
 appreciative of
 any feedback.
 Thank you,
 Hank Stevens

 Dr. Martin Henry H. Stevens, Assistant Professor
 338 Pearson Hall
 Botany Department
 Miami University
 Oxford, OH 45056

 Office: (513) 529-4206
 Lab: (513) 529-4262
 FAX: (513) 529-4243
 http://www.cas.muohio.edu/botany/bot/henry.html
 http://www.muohio.edu/ecology

 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-help



__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help

--
This message has been scanned for viruses and
dangerous content by MailScanner, and is
believed to be clean.

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R]: global and local variables

2003-12-09 Thread allan clark

   Hi

   Thanks to those who responded to my problem. In my previous email I
   tried to ask a general question and probably didn't explain myself
   correctly.  I wanted to avoid sending this long email. My apologies.

   This is my actual problem.

   I have a regression problem. I am writing some R code in order to
   calculate some collinearity diagnostics. The diagnostics all rely on a
   function named preprocess. I've written the different diagnostics as
   separate functions so that they may be evaluated separately if
   required.

   The two functions are named mci and vif. (I will be writing some
   others later)

   mci calculates the mixed condition index as well as the condition
   indices of a given X matrix while
   vif calculates the variance inflation factors of the X matrix.

   Another function named colldiag has been written. This function will
   calculate all of the collinearity diagnostics by simply calling the
   separate functions defined previously.

   I've attached the code of the different functions as well as a data
   file (say a2) below.

   The functions mci and vif work perfectly.

   i.e.

mci(a2)
   [1] DATA MATRIX CENTERED AND SCALED
   [1] CENTERED AND SCALED MATRIX = $data
   [1] MEANS OF XDATA = $means
   [1] STDS OF XDATA = $stds
   [1] THE CONDITION NUMBER AND THE CONDITION INDICES
   $CN
   [1] 27.34412

   $CI
   [1]  1.00  1.615690 27.344123

   $MCI
     Principal.Component Singular.Values Condition.Index
   1                   1       1.4720680            1.00
   2                   2       0.9111078        1.615690
   3                   3       0.0538349       27.344123

vif(a2)
   [1] DATA MATRIX CENTERED AND SCALED
   [1] CENTERED AND SCALED MATRIX = $data
   [1] MEANS OF XDATA = $means
   [1] STDS OF XDATA = $stds
   [1] THE VARIANCE INFLATION FACTORS
   $vif
 x1   x2   x3
   169.3542 175.6667   1.6875

   The output from colldiag is as follows:

colldiag(a2)
   [1] DATA MATRIX CENTERED AND SCALED
   [1] CENTERED AND SCALED MATRIX = $data
   [1] MEANS OF XDATA = $means
   [1] STDS OF XDATA = $stds
   [1] THE CONDITION NUMBER AND THE CONDITION INDICES
   $CN
   [1] 27.34412

   $CI
   [1]  1.00  1.615690 27.344123

   $MCI
     Principal.Component Singular.Values Condition.Index
   1                   1       1.4720680            1.00
   2                   2       0.9111078        1.615690
   3                   3       0.0538349       27.344123

   [1] DATA MATRIX CENTERED AND SCALED
   [1] CENTERED AND SCALED MATRIX = $data
   [1] MEANS OF XDATA = $means
   [1] STDS OF XDATA = $stds
   [1] THE VARIANCE INFLATION FACTORS
   $vif
 x1   x2   x3
   169.3542 175.6667   1.6875




   Once you check the colldiag code below you will see that it calls mci
   and vif, and both of those functions call preprocess. This is
   unnecessary. How can I write the code so that R calls preprocess only
   once? (One possible restructuring is sketched after the code below.)

   ONCE AGAIN I APOLOGIZE FOR THE LENGTH OF THIS EMAIL!!!


   Cheers
   Allan





   The data file:

  x1 x2 x3
   1  20 -4  5
   2  21 -4  4
   3  22 -3  3
   4  23 -2  2
   5  24 -1  1
   6  25  0  2
   7  26  1  3
   8  27  2  4
   9  28  3  5
   10 29  4  6
   11 20 -4  5
   12 21 -4  4
   13 22 -3  3
   14 23 -2  2
   15 24 -1  1
   16 25  0  2
   17 26  1  3
   18 27  2  4
   19 28  3  5
   20 29  4  6

   preprocess <- function (xdata, center=1, scale=1)
   {
   if(center==1 && scale==1)
   {
   means <- apply(xdata,2,mean)
   stds <- apply(xdata,2, function(x) sqrt(var(x)))
   scalefactor <- ((nrow(xdata)-1)^.5)*stds
   data.centsca <- sweep(sweep(xdata,2,means,"-"),2,scalefactor,"/")
   print("DATA MATRIX CENTERED AND SCALED")
   print("CENTERED AND SCALED MATRIX = $data")
   print("MEANS OF XDATA = $means")
   print("STDS OF XDATA = $stds")
   list(data=data.centsca,means=means,stds=stds,prep=1)
   }

   else if(center==1 && scale==0)
   {
   means <- apply(xdata,2,mean)
   data.cen <- sweep(xdata,2,means,"-")
   print("DATA MATRIX CENTERED")
   list(data=data.cen,means=means,prep=1)
   }

   else if(center==0 && scale==1)
   {
   stds <- apply(xdata,2, function(x) sqrt(var(x)))
   scalefactor <- ((nrow(xdata)-1)^.5)*stds
   data.sca <- sweep(xdata,2,scalefactor,"/")
   print("DATA MATRIX SCALED")
   list(data=data.sca,stds=stds,prep=1)
   }

   else
   {
   print("YOU HAVE TO SPECIFY WHETHER YOU WANT TO SCALE OR CENTER THE MATRIX")
   print("THE preprocess FUNCTION HAS THREE ARGUMENTS, i.e. preprocess(xdata,center,scale)")
   print("xdata IS THE MATRIX TO BE TRANSFORMED")
   print("TO CENTER SPECIFY center=1")
   print("TO SCALE SPECIFY scale=1")
   }

   # A matrix is standardised as follows:
   # X*(i,j) = ( X(i,j)- XBAR(j) ) / (   sqrt(n-1)* STD(j)   )

   }

   mci <- function (xdata)
   {
   a <- preprocess(xdata)
   b <- svd(a$data)
   cn <- (b$d)[1]/(b$d)[ncol(a$data)]
   ci <- (b$d)[1]/(b$d)[1:ncol(a$data)]

   #paste("THE CONDITION NUMBER = ",cn)

   #the following produces a table in order to output the mci values
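
   One possible restructuring, as mentioned above (a minimal sketch with
   simplified, hypothetical bodies, not Allan's posted functions): give each
   diagnostic an optional preprocessed argument and let colldiag() compute it
   exactly once.

   preprocess2 <- function(xdata) {
     # center and scale as in preprocess(), keeping only what gets reused
     list(data = scale(xdata) / sqrt(nrow(xdata) - 1))
   }
   mci2 <- function(xdata, a = preprocess2(xdata)) {
     d <- svd(a$data)$d
     list(CN = d[1] / d[length(d)], CI = d[1] / d)
   }
   vif2 <- function(xdata, a = preprocess2(xdata)) {
     diag(solve(cor(a$data)))
   }
   colldiag2 <- function(xdata) {
     a <- preprocess2(xdata)                 # preprocess runs only once here
     list(mci = mci2(a = a), vif = vif2(a = a))
   }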
   

Re: [R] p-value from chisq.test working strangely on 1.8.1

2003-12-09 Thread Marc Schwartz
On Tue, 2003-12-09 at 02:23, Torsten Hothorn wrote:
  Hello everybody,
 
  I'm seeing some strange behavior on R 1.8.1 on Intel/Linux compiled
  with gcc 3.2.2.  The p-value calculated from the chisq.test function is
  incorrect for some input values:
 
 
chisq.test(matrix(c(0, 1, 1, 12555), 2, 2), simulate.p.value=TRUE)
 
   Pearson's Chi-squared test with simulated p-value (based on 2000
   replicates)
 
  data:  matrix(c(0, 1, 1, 12555), 2, 2)
  X-squared = 1e-04, df = NA, p-value = 1
 
chisq.test(matrix(c(0, 1, 1, 12556), 2, 2), simulate.p.value=TRUE)
  [...]
  data:  matrix(c(0, 1, 1, 12556), 2, 2)
  X-squared = 1e-04, df = NA, p-value < 2.2e-16
 
chisq.test(matrix(c(0, 1, 1, 12557), 2, 2), simulate.p.value=TRUE)
  [...]
  data:  matrix(c(0, 1, 1, 12557), 2, 2)
  X-squared = 1e-04, df = NA, p-value = 1
 
 
 this does not happen with R-1.8.1 and gcc 2.95.4 on Debian stable:
 
 R chisq.test(matrix(c(0, 1, 1, 12555), 2, 2),
 simulate.p.value=TRUE)$p.value
 [1] 1
 R chisq.test(matrix(c(0, 1, 1, 12556), 2, 2),
 simulate.p.value=TRUE)$p.value
 [1] 1
 R chisq.test(matrix(c(0, 1, 1, 12557), 2, 2),
 simulate.p.value=TRUE)$p.value
 [1] 1
 
 neither with R-1.9.0 (unstable). Is this reproducible without using
 `set.seed' on your system?
 
 Best,
 
 Torsten


snip

Confirmed on Fedora Core 1 with R Version 1.8.1 Patched (2003-12-07)
compiled using gcc (GCC) 3.3.2 20031107 (Red Hat Linux 3.3.2-2).


 chisq.test(matrix(c(0, 1, 1, 12555), 2, 2), simulate.p.value=TRUE)
...
X-squared = 1e-04, df = NA, p-value = 1

 chisq.test(matrix(c(0, 1, 1, 12556), 2, 2), simulate.p.value=TRUE)

X-squared = 1e-04, df = NA, p-value < 2.2e-16
...
 chisq.test(matrix(c(0, 1, 1, 12557), 2, 2), simulate.p.value=TRUE)
...
X-squared = 1e-04, df = NA, p-value = 1


HTH,

Marc Schwartz

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Importing TIFF files into a R matrix

2003-12-09 Thread peter . hagedorn
Hi

I am facing a problem where I would like to import a TIFF image (of spots on a nylon
filter) into R (into a matrix, for example). When plotting the matrix using e.g.
scatterplot3d I would then be able to see how the pixel intensities are distributed in
the spot areas on the filter - which would be very helpful.

Does anyone know of a way to do this?

Best regards,

Peter Hagedorn

...
Peter Hagedorn

Risø National Laboratory
Plant Research Department
Building PRD-330
P.O. Box 49
Frederiksborgvej 399
DK-4000 Roskilde 
Denmark

Phone +45 4677 4293
Fax +45 4677 4109
e-mail  [EMAIL PROTECTED]
web http://www.risoe.dk/pbk/staff_uk/phah.htm

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Font problem

2003-12-09 Thread Marc Schwartz
On Tue, 2003-12-09 at 08:07, Peter Dalgaard wrote:
 Marc Schwartz [EMAIL PROTECTED] writes:

snip

 Actually, I think you can leave the :unscaled on. Seems to work for me.

Could be. That is, I believe, the default on a new install. It was on
FC1 for me (I have not had font problems so far) and I recall the same
default on RH 8.0 and RH 9. Though I do have recollections of font
problems on 8.0, solved by adding the two additional lines without the
':unscaled', which is why this issue sticks in my mind. 

Then again, it may just be old age and the snow this morning...

It may be as simple as being sure that both the 75 dpi and 100 dpi fonts
are loaded as the SUSE query earlier this year seemed to indicate that
the 100 dpi fonts were not installed initially.

  I believe that a restart of X may be required to make the change, but a
  restart of the X font server may suffice using:
  
  /sbin/service xfs restart
 
 Restarting xfs is certainly necessary (and, I believe, not implied by
 restarting X) but beware that it may freeze currently running X
 applications (that used to be the case anyway), so don't do it
 with important stuff running on your desktop. 

Good point  :-)

Marc

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] PROC MIXED vs. lme()

2003-12-09 Thread Manuel A. Morales
I'm trying to learn how to do a repeated measures ANOVA in R using lme().

A data set that comes from the book Design and Analysis has the following
structure: Measurements (DV) were taken on 8 subjects (SUB) with two
experimental levels (GROUP) at four times (TRIAL).

In SAS, I use the code:

PROC MIXED DATA=[data set below];
  CLASS sub group trial;
  MODEL dv = group trial group*trial;
  REPEATED trial / SUBJECT=sub TYPE=CS;
run;

which gives the results:

Tests of Fixed Effects

Source        NDF   DDF  Type III F  Pr > F
GROUP           1     6        2.51  0.1645
TRIAL           3    18       22.34  0.0001
GROUP*TRIAL     3    18        0.58  0.6380

In R, I'm trying the code:

results.cs - lme(DV ~ factor(GROUP)*factor(TRIAL), data=[data set below],
random= ~factor(TRIAL)|SUB, correlation=corCompSymm() )
anova(results.cs)

which gives the results:

                            numDF denDF  F-value p-value
(Intercept)                     1    18 3383.953  <.0001
factor(GROUP)                   1     6    4.887  0.0691
factor(TRIAL)                   3    18  239.102  <.0001
factor(GROUP):factor(TRIAL)     3    18    1.283  0.3103

Why are these results different? I'm a newbie to R, have the book Mixed
Effects Models in S and S-Plus, but can't seem to get this analysis to
work. Any suggestions?

Thanks!

Manuel

Data:
SUB GROUP   DV  TRIAL
1   1   3   1
1   1   4   2
1   1   7   3
1   1   3   4
2   1   6   1
2   1   8   2
2   1   12  3
2   1   9   4
3   1   7   1
3   1   13  2
3   1   11  3
3   1   11  4
4   1   0   1
4   1   3   2
4   1   6   3
4   1   6   4
5   2   5   1
5   2   6   2
5   2   11  3
5   2   7   4
6   2   10  1
6   2   12  2
6   2   18  3
6   2   15  4
7   2   10  1
7   2   15  2
7   2   15  3
7   2   14  4
8   2   5   1
8   2   7   2
8   2   11  3
8   2   9   4
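
For what it is worth, a sketch (an illustration, not a reply from the list) of
the nlme specifications usually suggested for a compound-symmetry fit comparable
to the SAS REPEATED ... TYPE=CS model, assuming the data above have been read
into a data frame called dat with the same column names:

  library(nlme)
  # a random intercept per subject induces a compound-symmetry covariance
  fit1 <- lme(DV ~ factor(GROUP) * factor(TRIAL), data = dat, random = ~ 1 | SUB)
  anova(fit1)
  # or model the within-subject correlation directly, with no random effects
  fit2 <- gls(DV ~ factor(GROUP) * factor(TRIAL), data = dat,
              correlation = corCompSymm(form = ~ 1 | SUB))
  anova(fit2)

Specifying random = ~factor(TRIAL)|SUB together with corCompSymm() fits a much
richer covariance structure than TYPE=CS, which is one reason the two tables can
disagree.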

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] levelplot parameters

2003-12-09 Thread Liaw, Andy
See the scales argument in ?xyplot, which has:

  scales: list determining how the x- and y-axes (tick marks and
  labels) are drawn. The list contains parameters in name=value
  form, and may also contain two other lists called 'x' and 'y'
  of the same form (described below). Components of 'x' and 'y'
  affect the respective axes only, while those in 'scales'
  affect both. (When parameters are specified in both lists,
  the values in 'x' or 'y' are used.) The components are :
[...]
  cex: factor to control character sizes for axis labels. Can
  be a vector of length 2, to control left/bottom and right/top
  separately.
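
For a one-panel levelplot that would be, for example (a minimal sketch using the
built-in volcano matrix):

  library(lattice)
  levelplot(volcano, scales = list(cex = 1.5))   # bigger axis tick labels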

HTH,
Andy
 

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] 
 On Behalf Of Joerg Schaber
 Sent: Tuesday, December 09, 2003 7:12 AM
 To: [EMAIL PROTECTED]
 Subject: [R] levelplot parameters
 
 
 Hi,
 
 I have a levelplot with one panel. I just can't find out how I can 
 manipulate the size of the axis lables. e.g. scales.cex 
 doesn't work,  
 the usual par-parameters either.
 Any hint?
 
 joerg
 
 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 


__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] Importing TIFF files into a R matrix

2003-12-09 Thread Henrik Bengtsson
Hi, 

I do not know of any free TIFF readers for R, so I suggest that you
use an external TIFF-to-Portable Pixmap coverter and then use the
pixmap package available on CRAN. I recommend ImageMagick's convert
program available for Unix, Linux, Windows, Windows/Cygwin etc at
http://www.imagemagick.org/. 

(If you have one already installed, be careful not to work with an old
version; I ran into a problem converting 16-bit TIFFs with a two-year-old
convert, which silently turned them into 8-bit images without warnings. That
should not be a problem now.)

The Portable Pixmap format includes i) RGB images (PPM), ii) grayscale
images (PGM) and iii) monochrome images (PBM). In your case (I assume)
you're working with 16-bit grayscale TIFF images, so you should
convert to PGM.

If you have your PATH setup correctly an example would then be:

  library(pixmap)
  system("convert foo.tiff foo.pgm")
  img <- read.pnm("foo.pgm")

and then work from there.
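
A possible follow-up from there (assuming a pixmapGrey object, whose grey slot
holds the intensities in [0,1] as a plain matrix):

  m <- img@grey
  library(scatterplot3d)
  scatterplot3d(as.vector(row(m)), as.vector(col(m)), as.vector(m), pch = ".")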

Hope this helps...

Henrik Bengtsson

Dept. of Mathematical Statistics @ Centre for Mathematical Sciences
Lund Institute of Technology/Lund University, Sweden 
(Sweden +1h UTC, Melbourne +11 UTC, Calif. -8h UTC)
+46 708 909208 (cell), +46 46 320 820 (home), 
+1 (508) 464 6644 (global fax),
+46 46 2229611 (off), +46 46 2224623 (dept. fax)
h b @ m a t h s . l t h . s e, http://www.maths.lth.se/~hb/



 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of 
 [EMAIL PROTECTED]
 Sent: den 9 december 2003 15:16
 To: [EMAIL PROTECTED]
 Subject: [R] Importing TIFF files into a R matrix
 
 
 Hi
 
 I am facing a problem where I would like to import a TIFF 
 image (of spots on a nylon filter) into R (into a matrix for 
 example). When plotting the matrix using fx. scatterplot3d I 
 would then be able to see how the pixel-intensities are 
 distributed in spot-areas on the filter - which would be 
 very helpful.
 
 Does anynone know of a way to do this?
 
 Best regards,
 
 Peter Hagedorn
 
 ...
 Peter Hagedorn
 
 Risø National Laboratory
 Plant Research Department
 Building PRD-330
 P.O. Box 49
 Frederiksborgvej 399
 DK-4000 Roskilde 
 Denmark
 
 Phone +45 4677 4293
 Fax +45 4677 4109
 e-mail  [EMAIL PROTECTED]
 web http://www.risoe.dk/pbk/staff_uk/phah.htm
 
 __
 [EMAIL PROTECTED] mailing list 
 https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 


__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Font problem

2003-12-09 Thread Peter Dalgaard
Marc Schwartz [EMAIL PROTECTED] writes:

 It may be as simple as being sure that both the 75 dpi and 100 dpi fonts
 are loaded as the SUSE query earlier this year seemed to indicate that
 the 100 dpi fonts were not installed initially.

I actually wrote the bug, so I'm supposed to know what it is about...
It is just that it was so difficult to write that I have been
reluctant to try and fix it.

The exact issue is that the X11 driver jumps through a few hoops to
help you get the real Adobe-designed fonts rather that some ugly
rescaled ones. These exist in pixel sizes

8,10,11,12,14,17,18,20,24,25,34 

(of which 10/11 and 24/25 are actually identical).

If you use unscaled fonts, the logic inside RLoadFont will try to give
you one of the above, by choosing the one closest to the one you
specified. If you request a 22 pixel font, the system will load the 20
pixel font. The catch is that e.g. the 20 pixel font only exists in
100 dpi

blueberry:~/xlsfonts -fn '-adobe-helvetica-medium-r-*-*-20-*-*-*-*-*-*-*'
-adobe-helvetica-medium-r-normal--20-140-100-100-p-100-iso10646-1
-adobe-helvetica-medium-r-normal--20-140-100-100-p-100-iso8859-1

(actually, on my machine, only the 12 pixel version exists in both 100
and 75 dpi).

I.e. if you use unscaled fonts, you need to have both dpi sets
installed. With scaled fonts you don't run into this issue, but some
screen fonts may (will!) be ugly. Of course the default SuSE install
gives you only one dpi set, unscaled.

The bug is that there is a gap in the font-substitution logic so that
if you have only one set of fonts, then RLoadFont may end up returning
NULL, rather than the fixed font or a suitable fallback size.

-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Interfacing R and Python in MS Windows

2003-12-09 Thread Héctor Villafuerte D.
Uwe Ligges wrote:

Héctor Villafuerte D. wrote:

Hi all,
I need the power of R from within some of my Python programs...
I use debian linux (woody) at home and windows XP at work (the
latter is where I need to get things done!)
This are my packages:
R 1.8.0
Python 2.3
RSPython 0.5-3
This is what I've done:
(1) Since the Windows Binary of RSPython is compiled against
Python 2.2 I downloaded the tarball
(2) Followed the instructions in INSTALL.win (with pexports and
everything)
(3) In the RGUI Install package(s) from local zip files...


If you have downloaded the tarball of RSPython, you have to install 
the package using Rcmd INSTALL, and you cannot use the RGUI which is 
designed to install binary packages.

Uwe Ligges


(4) NO errors reported during this process
(5) When I try to Load package in R it show this error:
  local({pkg - select.list(sort(.packages(all.available = TRUE)))
+ if(nchar(pkg)) library(pkg, character.only=TRUE)})
Error in testRversion(descfields) : This package has not been 
installed properly
See the Note in ?library

(6) In Python
  import RS
Traceback (most recent call last):
 File stdin, line 1, in ?
ImportError: No module named RS
Please help me to get this excelent tools going on in Windows.
Thanks in advance,
Hector 

Thanks for your help guys.
Tim: I had already seen RPy; but its Windows binary is compiled against
Python 2.2 (and I have 2.3) so it didn't work.
Uwe:
(1) I installed Active Perl (it seems to be needed by Rcmd INSTALL)
(2) I then created a tar.gz with the modifications found in INSTALL.WIN
(3) Here's what I got:
E:\to_do> Rcmd INSTALL -c e:/to_do/RSPython.tar.gz

-- Making package RSPython 

  **
  WARNING: this package has a configure script
It probably needs manual configuration
  **
 installing inst files
A package must contain a DESCRIPTION file
make[1]: *** [frontmatter] Error 27
make: *** [pkg-RSPython] Error 2
*** Installation of RSPython failed ***
make --no-print-directory DLLNM= RHOME=C:/PROGRA~1/R/rw1080 BUILD=MINGW \
 -C E:/to_do/R.INSTALL/RSPython PKG=RSPython -f 
C:/PROGRA~1/R/rw1080/src/gnuwin32/MakePkg clean
make: *** [pkgclean-RSPython] Error 255

What's a DESCRIPTION file? Any suggestions for compiling this?
Thanks again in advance,
Hector
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] axes that meet

2003-12-09 Thread Hank Stevens
Thank you to all who replied.
par(bty) and par(xaxs, yaxs) gave me all I need (and more!) for specifying 
axes in the fashion required.
Thanks,
Hank
At 07:28 AM 12/9/2003, [EMAIL PROTECTED] wrote:
It might also mean that he is looking for par(xaxs = "i", yaxs = "i")





Pikounis, Bill [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
12/09/2003 05:23 AM
To: 'Hank Stevens' [EMAIL PROTECTED], 
[EMAIL PROTECTED]
cc:
Subject:RE: [R] axes that meet

Hank,
I think the graphical parameter you are looking for is bty, as in
par(bty = "l")

Details on the ?par help topic.

Hope that helps.

Bill


Bill Pikounis, Ph.D.
Biometrics Research Department
Merck Research Laboratories
PO Box 2000, MailDrop RY33-300
126 E. Lincoln Avenue
Rahway, New Jersey 07065-0900
USA
Phone: 732 594 3913
Fax: 732 594 1565
 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Hank Stevens
 Sent: Tuesday, December 09, 2003 1:18 AM
 To: [EMAIL PROTECTED]
 Subject: [R] axes that meet


 R v. 1.7.1, Windows 2000.
 A particular journal wants me to provide scatter plots with
 no box, but
 with axes that meet in the lower left corner. It seems as
 though there must
 be an easy way of doing this, but my reading the help on
 plot.default,
 axis, and box have not provided any clues. I would be most
 appreciative of
 any feedback.
 Thank you,
 Hank Stevens

 Dr. Martin Henry H. Stevens, Assistant Professor
 338 Pearson Hall
 Botany Department
 Miami University
 Oxford, OH 45056

 Office: (513) 529-4206
 Lab: (513) 529-4262
 FAX: (513) 529-4243
 http://www.cas.muohio.edu/botany/bot/henry.html
 http://www.muohio.edu/ecology

 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-help


__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
--
This message has been scanned for viruses and
dangerous content by MailScanner, and is
believed to be clean.
Dr. Martin Henry H. Stevens, Assistant Professor
338 Pearson Hall
Botany Department
Miami University
Oxford, OH 45056
Office: (513) 529-4206
Lab: (513) 529-4262
FAX: (513) 529-4243
http://www.cas.muohio.edu/botany/bot/henry.html
http://www.muohio.edu/ecology
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Design functions after Multiple Imputation

2003-12-09 Thread Frank E Harrell Jr
On Mon, 08 Dec 2003 21:23:40 +
Umberto Maggiore [EMAIL PROTECTED] wrote:

 I am a new user of R for Windows, enthusiast about the many functions
 of the Design and Hmisc libraries.
 I combined the results of a Cox regression model after multiple
 imputation(of missing values in some covariates).
 Now I got my vector of coefficients (and of standard errors).
 My question is: How could I use directly that vector to run programs
 such as 'nomogram', 'calibrate', 'validate.cph' which, in contrast, call
 for the saved results from 'cph'?
 I did not use 'aregImpute' for multiple imputation. However, even if
 I did it, 'fit.mult.impute'  seems not to allow specifying the option
 'surv=TRUE' (essential to get a nomogram) or 'x=TRUE, y=TRUE' (which
 are essential for 'calibrate' and 'validate.cph'. Therefore, I don't
 see how I could get a nomogram or run other Design functions after
 'aregImpute'.
 
 thank you so much in advance
 Umberto

Good questions Umberto.  The Design package handles multiple imputation
and model validation, but currently not at the same time.  But model
descriptions such as those provided by summary.Design, nomogram.Design,
contrast.Design are fully operational after the variance-covariance matrix
is corrected for multiple imputation by fit.mult.impute.  There is one
caveat.  When the nomogram not only has predicted relative hazards but
absolute survival estimates (probabilities, quantiles, restricted mean
life), fit.mult.impute gets the baseline survival estimates from surv=T
from the first imputation.  We need to extend that to average baseline
survival estimates over all multiple imputations.

You should have had no difficulty using nomogram after fit.mult.impute
(after aregImpute); only fully trust the relative hazard estimates from
the resulting nomogram.

I have put out a note on the IMPUTE e-mail list (see
http://hesweb1.med.virginia.edu/biostat/rms for subscription information)
about how to develop a reasonable algorithm for simultaneous multiple
imputation and bootstrap model validation/calibration.  So far no takers.

Frank

---
Frank E Harrell JrProfessor and ChairSchool of Medicine
  Department of BiostatisticsVanderbilt University

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] p-value from chisq.test working strangely on 1.8.1

2003-12-09 Thread John Dougherty
It happens with suse 9.0 as well.

...
chisq.test(matrix(c(0, 1, 1, 12556), 2, 2), simulate.p.value=TRUE)

Pearson's Chi-squared test with simulated p-value (based on 2000
replicates)

data:  matrix(c(0, 1, 1, 12556), 2, 2)
X-squared = 1e-04, df = NA, p-value = 5e-04
.

JWDougherty

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] FW: Symposium COMPSTAT 2004

2003-12-09 Thread Rau, Roland
Dear all,

I just received the following message, and I think it might be of interest
to the R-list.

Cordially,
Roland


 - all people interested in COMPSTAT 2004 Symposium
 
Prague December 8, 2004
 
 Dear colleague,
 
 thank you very much for your interest in COMPSTAT 2004 Symposium. This
 is to remind you that the deadline for delivering your manuscript in
 electronic form to the hands of the editors is approaching very fast.
 
 Due to a series of questions to us concerning the selection of papers
 for the proceedings, we repeat (especially for those who participated in
 the previous COMPSTATs) that the choice will be made based on
 complete papers and not on abstracts, as in the past.
 
 Submission of Papers for inclusion in the Proceedings Volume (subject
 to selection by SPC) is February 2, 2004 !!! For details see:
 
  http://www.compstat2004.cuni.cz
 
 Please, be in time.
 Thanks a lot in advance
 
 Looking forward to seeing you next year for COMPSTAT 2004 in Prague
 
 We remain sincerely yours
 LOC + editors
 
 Please forgive us if you have received this e-mail more than once due
 to a duplication in different databases.
 
 
 PS Most frequently asked questions.
 
 1) I HAVE PARTICIPATED IN SEVERAL PREVIOUS COMPSTAT SYMPOSIA. WHAT IS
THE MOST IMPORTANT ORGANIZATIONAL CHANGE?
 
During the previous COMPSTATs the selection of papers and their
division into short and standard contributed papers was based on
the submitted two-page abstract. This scheme has been
abandoned for COMPSTAT 2004. Instead, the participants are asked to
submit the full papers (up to 8 pages) not just an abstract. The
SPC will make a selection of the best of these for inclusion in the
published Proceedings Volume. These will be reviewed and subject to
revision before publication. The other papers that are deemed to be
acceptable will be, after reviewing, included on a CD that will
form the integral part of the Proceedings Volume. The time given
for the presentation may reflect the status of the paper as
assessed by the SPC - the allocation will depend on the time
pressures in putting together the final programme. In the case that
it will not be possible to schedule all oral contributions, SPC has
the right to only offer a poster presentation to some of them.
 
 2) WHY SHOULD I SUBMIT ALSO THE ABSTRACT IF MY PAPER WILL BE AVAILABLE
DURING THE SYMPOSIUM?
 
The abstracts should inform the participants about the content of
his/her lecture. All abstract will be included in the Abstracts
Volume produced by the LOC, distributed during the symposium and
mounted before the symposium on its web page. Notice also that not
all people have with them always a notebook to check abstracts on
the CD. Both SPC and LOC hope it will be more easy for the
participants to orient oneself and to chose the session where to
go. Latest day for the authors by which abstracts of submitted
contributions and invited papers must be delivered to the LOC in
Prague in electronic format is May 1, 2004!!!
 
 3) WHAT IS THE LATEST DAY FOR DELIVERING MANUSCRIPTS?
 
Latest day for the authors by which complete manuscripts of
submitted contributions and invited papers must be delivered to the
LOC in Prague in electronic format is February 2, 2004!!!


+
This mail has been sent through the MPI for Demographic Research.  Should you receive 
a mail that is apparently from a MPI user without this text displayed, then the 
address has most likely been faked.   If you are uncertain about the validity of this 
message, please check the mail header or ask your system administrator for assistance.

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] problem with pls(x, y, ..., ncomp = 16): Error in inherits(x, data.frame) : subscript out of bounds

2003-12-09 Thread ryszard . czerminski
When I try to use ncomp parameter in pls procedure  I get following error:

 library(pls.pcr)
 m <- pls(x, y, validation = "CV", niter = 68, ncomp = 16)
Error in inherits(x, "data.frame") : subscript out of bounds

Without ncomp parameter everything seems to work OK

 dim(x)
[1]  68 116
 dim(y)
[1] 68  1
 m <- pls(x, y, validation = "CV", niter = 68)
 length(m$ncomp)
[1] 67

Ryszard

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] p-value from chisq.test working strangely on 1.8.1

2003-12-09 Thread Jeffrey Chang
On Dec 9, 2003, at 9:37 AM, Peter Dalgaard wrote:

Marc Schwartz [EMAIL PROTECTED] writes:

Confirmed on Fedora Core 1 with R Version 1.8.1 Patched (2003-12-07)
compiled using gcc (GCC) 3.3.2 20031107 (Red Hat Linux 3.3.2-2).

chisq.test(matrix(c(0, 1, 1, 12555), 2, 2), simulate.p.value=TRUE)
...
X-squared = 1e-04, df = NA, p-value = 1
chisq.test(matrix(c(0, 1, 1, 12556), 2, 2), simulate.p.value=TRUE)
X-squared = 1e-04, df = NA, p-value < 2.2e-16
...
chisq.test(matrix(c(0, 1, 1, 12557), 2, 2), simulate.p.value=TRUE)
...
X-squared = 1e-04, df = NA, p-value = 1
Ditto on RH8 with Martyn's RPM of 1.8.0 (yeah, I know...) and ditto
with a reasonably current r-devel (gcc 3.2)
Anyways, it is yet another fudge-factor issue: If you debug to the
point in chisq.test where it calculates
PVAL <- sum(tmp$results >= STATISTIC)/B

you'll find that

Browse[1]> any(diff(tmp$result))
[1] FALSE
Browse[1]> tmp$result[1]
[1] 7.96432e-05
Browse[1]> STATISTIC
[1] 7.96432e-05
Browse[1]> tmp$result[1] - STATISTIC
[1] -1.355253e-20
so PVAL becomes zero and yaddayaddayadda

The obvious fix would seem to be

PVAL <- sum(tmp$results >= (1-1e-10)*STATISTIC)/B
Yes, this is also the behavior that I am seeing.  I do not know enough about 
numerical programming to comment on the fix, but it would solve the 
problem in my case.
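
A small illustration of the effect, with made-up numbers of the same magnitude
as above (not the chisq.test internals):

  STATISTIC <- 7.96432e-05
  results   <- rep(STATISTIC - 3e-20, 2000)       # equal to STATISTIC up to rounding
  sum(results >= STATISTIC)/2000                  # 0, which prints as p-value < 2.2e-16
  sum(results >= (1 - 1e-10) * STATISTIC)/2000    # 1, with the proposed tolerance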

Thanks to everyone who has looked into this!

Jeff

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] Matrix to dates

2003-12-09 Thread Gabor Grothendieck


z <- matrix( c(1960,1960,1961,1,9,6), 3, 2 )

1. Using chron:

 require(chron)
 chron( paste( z[,2], 1, z[,1], sep="/" ) )

2. Using POSIXct:

 ISOdate( z[,1], z[,2], 1 )  # relative to GMT time zone

or

 ISOdate( z[,1], z[,2], 1, tz="" )   # relative to current time zone



---
Date: Mon, 8 Dec 2003 15:48:20 -0600 
From: Erin Hodgess [EMAIL PROTECTED]
To: [EMAIL PROTECTED] 
Subject: [R] Matrix to dates 

 
 
Let's try again!

I have a matrix in which the first column is a four digit year, and the 
second column is a 2 digit month.

How do I convert the matrix to a date function, please?

Thanks,
Erin
Version 1.8.0
mailto: [EMAIL PROTECTED]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] R: the spdep package

2003-12-09 Thread Roger Bivand
On Tue, 9 Dec 2003, Osmo Kolehmainen wrote:

 Hi,
 Here is a listw object z corresponding to the matrix W. I understand n, nn, 
 S0, S1 and S2 in the weights constants summary. Is it simply so that n1 = 
 n-1, n2 = n-2 and n3 = n-3? If this is true where they are needed?

Yes; they are used in calculating the variance of the statistics (Moran,
Geary) elsewhere, and are returned by spweights.constants(). They will be
hidden in summary.listw() in the next release.
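
For example (a sketch reusing the z from the post above):

  library(spdep)
  spweights.constants(z)   # returns n, n1, n2, n3, nn, S0, S1 and S2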

Roger

 Just wondering
 Osmo
 
   W
 [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
 [1,] 0 1 0 1 0 0 0 0 0
 [2,] 1 0 1 0 1 0 0 0 0
 [3,] 0 1 0 0 0 1 0 0 0
 [4,] 1 0 0 0 1 0 1 0 0
 [5,] 0 1 0 1 0 1 0 1 0
 [6,] 0 0 1 0 1 0 0 0 1
 [7,] 0 0 0 1 0 0 0 1 0
 [8,] 0 0 0 0 1 0 1 0 1
 [9,] 0 0 0 0 0 1 0 1 0
   z=mat2listw(W);z
 Characteristics of weights list object:
 Neighbour list object:
 Number of regions: 9
 Number of nonzero links: 24
 Percentage nonzero weights: 29.62963
 Average number of links: 2.67
 Weights style: M
 Weights constants summary:
 n n1 n2 n3 nn S0 S1 S2
 M 9 8 7 6 81 24 48 272
 
 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 

-- 
Roger Bivand
Economic Geography Section, Department of Economics, Norwegian School of
Economics and Business Administration, Breiviksveien 40, N-5045 Bergen,
Norway. voice: +47 55 95 93 55; fax +47 55 95 93 93
e-mail: [EMAIL PROTECTED]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Windows Memory Issues

2003-12-09 Thread Benjamin . STABLER
I would also like some clarification about R memory management.  Like Doug,
I didn't find anything about consecutive calls to gc() to free more memory.
We run into memory limit problems every now and then and a better
understanding of R's memory management would go a long way.  I am interested
in learning more and was wondering if there is any specific R documentation
that explains R's memory usage?  Or maybe some good links about memory and
garbage collection.  Thanks.

Benjamin Stabler
Transportation Planning Analysis Unit
Oregon Department of Transportation
555 13th Street NE, Suite 2
Salem, OR 97301  Ph: 503-986-4104

---

Message: 21
Date: Mon, 8 Dec 2003 09:51:12 -0800 (PST)
From: Douglas Grove [EMAIL PROTECTED]
Subject: Re: [R] Windows Memory Issues
To: Prof Brian Ripley [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Message-ID:
[EMAIL PROTECTED]
Content-Type: TEXT/PLAIN; charset=US-ASCII

On Sat, 6 Dec 2003, Prof Brian Ripley wrote:

 I think you misunderstand how R uses memory.  gc() does not free up all 
 the memory used for the objects it frees, and repeated calls will free 
 more.  Don't speculate about how memory management works: do your 
 homework!

Are you saying that consecutive calls to gc() will free more memory than
a single call, or am I misunderstanding?   Reading ?gc and ?Memory I don't
see anything about this mentioned.  Where should I be looking to find 
more comprehensive info on R's memory management??  I'm not writing any
packages, just would like to have a better handle on efficiently using
memory as it is usually the limiting factor with R.  FYI, I'm running
R1.8.1 and RedHat9 on a P4 with 2GB of RAM in case there is any platform
specific info that may be applicable.

Thanks,

Doug Grove
Statistical Research Associate
Fred Hutchinson Cancer Research Center


 In any case, you are using an outdated version of R, and your first
 course of action should be to compile up R-devel and try that, as there 
 has been improvements to memory management under Windows.  You could also 
 try compiling using the native malloc (and that *is* described in the 
 INSTALL file) as that has different compromises.
 
 
 On Sat, 6 Dec 2003, Richard Pugh wrote:
 
  Hi all,
   
  I am currently building an application based on R 1.7.1 (+ compiled
  C/C++ code + MySql + VB).  I am building this application to work on 2
  different platforms (Windows XP Professional (500mb memory) and Windows
  NT 4.0 with service pack 6 (1gb memory)).  This is a very memory
  intensive application performing sophisticated operations on large
  matrices (typically 5000x1500 matrices).
   
  I have run into some issues regarding the way R handles its memory,
  especially on NT.  In particular, R does not seem able to recollect some
  of the memory used following the creation and manipulation of large data
  objects.  For example, I have a function which receives a (large)
  numeric matrix, matches against more data (maybe imported from MySql)
  and returns a large list structure for further analysis.  A typical call
  may look like this .
   
   myInputData <- matrix(sample(1:100, 750, T), nrow=5000)
   myPortfolio <- createPortfolio(myInputData)
   
  It seems I can only repeat this code process 2/3 times before I have to
  restart R (to get the memory back).  I use the same object names
  (myInputData and myPortfolio) each time, so I am not create more large
  objects ..
   
  I think the problems I have are illustrated with the following example
  from a small R session .
   
   # Memory usage for Rgui process = 19,800
   testData <- matrix(rnorm(1000), 1000) # Create big matrix
   # Memory usage for Rgui process = 254,550k
   rm(testData)
   # Memory usage for Rgui process = 254,550k
   gc()
   used (Mb) gc trigger  (Mb)
  Ncells 369277  9.9 667722  17.9
  Vcells  87650  0.7   24286664 185.3
   # Memory usage for Rgui process = 20,200k
   
  In the above code, R cannot recollect all memory used, so the memory
  usage increases from 19.8k to 20.2.  However, the following example is
  more typical of the environments I use .
   
   # Memory 128,100k
   myTestData <- matrix(rnorm(1000), 1000)
   # Memory 357,272k
   rm(myTestData)
   # Memory 357,272k
   gc()
used (Mb) gc trigger  (Mb)
  Ncells  478197 12.8 818163  21.9
  Vcells 9309525 71.1   31670210 241.7
   # Memory 279,152k
   
  Here, the memory usage increases from 128.1k to 279.1k
   
  Could anyone point out what I could do to rectify this (if anything), or
  generally what strategy I could take to improve this?
   
  Many thanks,
  Rich.
   
  Mango Solutions
  Tel : (01628) 418134
  Mob : (07967) 808091
   
  
  [[alternative HTML version deleted]]
  
  __
  [EMAIL PROTECTED] mailing list
  https://www.stat.math.ethz.ch/mailman/listinfo/r-help
  
  
 
 -- 
 Brian D. Ripley,  [EMAIL PROTECTED]
 Professor of Applied 

Re: [R] How to append to a data.frame?

2003-12-09 Thread David Kreil

Dear Peter Dalgaard,

Thank you for these examples, they are very neat!
I really like the
  data.frame(x=as.numeric(NA),y=factor(NA))[rep(NA,1000),]
trick.
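
A minimal sketch of how the preallocated frame might then be filled row by
row (the column names and factor levels here are purely illustrative):

d <- data.frame(x = as.numeric(NA),
                y = factor(NA, levels = c("low", "high")))[rep(NA, 1000), ]
rownames(d) <- NULL            # tidy up the NA-derived row names
d[1, ] <- list(3.14, "low")    # assign into rows instead of growing the object
d[2, ] <- list(2.72, "high")
head(d, 3)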

With best regards,

David.



Dr David Philip Kreil (`-''-/).___..--''`-._
Research Fellow`6_ 6  )   `-.  ( ).`-.__.`)
University of Cambridge(_Y_.)'  ._   )  `._ `. ``-..-'
++44 1223 764107, fax 333992 _..`--'_..-_/  /--'_.' ,'
www.inference.phy.cam.ac.uk/dpk20   (il),-''  (li),'  ((!.-'

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Windows Memory Issues

2003-12-09 Thread Prof Brian Ripley
On Tue, 9 Dec 2003 [EMAIL PROTECTED] wrote:

 I would also like some clarification about R memory management.  Like Doug,
 I didn't find anything about consecutive calls to gc() to free more memory.

It was a statement about Windows, and about freeing memory *to Windows*.
Douglas Grove apparently had misread both the subject line and the 
sentence.

 We run into memory limit problems every now and then and a better
 understanding of R's memory management would go a long way.  I am interested
 in learning more and was wondering if there is any specific R documentation
 that explains R's memory usage?  Or maybe some good links about memory and
 garbage collection.  Thanks.

There are lots of comments in the source files.  And as I already said 
(but has been excised below), this is not relevant to the next version of 
R anyway.

BTW, the message below has been selectively edited, so please consult the 
original.

 Message: 21
 Date: Mon, 8 Dec 2003 09:51:12 -0800 (PST)
 From: Douglas Grove [EMAIL PROTECTED]
 Subject: Re: [R] Windows Memory Issues
 To: Prof Brian Ripley [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Message-ID:
   [EMAIL PROTECTED]
 Content-Type: TEXT/PLAIN; charset=US-ASCII
 
 On Sat, 6 Dec 2003, Prof Brian Ripley wrote:
 
  I think you misunderstand how R uses memory.  gc() does not free up all 
  the memory used for the objects it frees, and repeated calls will free 
  more.  Don't speculate about how memory management works: do your 
  homework!
 
 Are you saying that consecutive calls to gc() will free more memory than
 a single call, or am I misunderstanding?   Reading ?gc and ?Memory I don't
 see anything about this mentioned.  Where should I be looking to find 
 more comprehensive info on R's memory management??  I'm not writing any
 packages, just would like to have a better handle on efficiently using
 memory as it is usually the limiting factor with R.  FYI, I'm running
 R1.8.1 and RedHat9 on a P4 with 2GB of RAM in case there is any platform
 specific info that may be applicable.
 
 Thanks,
 
 Doug Grove
 Statistical Research Associate
 Fred Hutchinson Cancer Research Center
 
 
  In any case, you are using an outdated version of R, and your first
  course of action should be to compile up R-devel and try that, as there 
  has been improvements to memory management under Windows.  You could also 
  try compiling using the native malloc (and that *is* described in the 
  INSTALL file) as that has different compromises.
  
  
  On Sat, 6 Dec 2003, Richard Pugh wrote:
  
   Hi all,

   I am currently building an application based on R 1.7.1 (+ compiled
   C/C++ code + MySql + VB).  I am building this application to work on 2
   different platforms (Windows XP Professional (500mb memory) and Windows
   NT 4.0 with service pack 6 (1gb memory)).  This is a very memory
   intensive application performing sophisticated operations on large
   matrices (typically 5000x1500 matrices).

   I have run into some issues regarding the way R handles its memory,
   especially on NT.  In particular, R does not seem able to recollect some
   of the memory used following the creation and manipulation of large data
   objects.  For example, I have a function which receives a (large)
   numeric matrix, matches against more data (maybe imported from MySql)
   and returns a large list structure for further analysis.  A typical call
   may look like this .

myInputData - matrix(sample(1:100, 750, T), nrow=5000)
myPortfolio - createPortfolio(myInputData)

   It seems I can only repeat this code process 2/3 times before I have to
   restart R (to get the memory back).  I use the same object names
   (myInputData and myPortfolio) each time, so I am not create more large
   objects ..

   I think the problems I have are illustrated with the following example
   from a small R session .

# Memory usage for Rui process = 19,800
testData - matrix(rnorm(1000), 1000) # Create big matrix
# Memory usage for Rgui process = 254,550k
rm(testData)
# Memory usage for Rgui process = 254,550k
gc()
used (Mb) gc trigger  (Mb)
   Ncells 369277  9.9 667722  17.9
   Vcells  87650  0.7   24286664 185.3
# Memory usage for Rgui process = 20,200k

   In the above code, R cannot recollect all memory used, so the memory
   usage increases from 19.8k to 20.2.  However, the following example is
   more typical of the environments I use .

# Memory 128,100k
myTestData - matrix(rnorm(1000), 1000)
# Memory 357,272k
rm(myTestData)
# Memory 357,272k
gc()
 used (Mb) gc trigger  (Mb)
   Ncells  478197 12.8 818163  21.9
   Vcells 9309525 71.1   31670210 241.7
# Memory 279,152k

   Here, the memory usage increases from 128.1k to 279.1k

   Could anyone point out what I could do to rectify this (if anything), or
   generally what strategy I could take to improve this?

   Many thanks,
   Rich.

RE: [R] Windows Memory Issues

2003-12-09 Thread Pikounis, Bill

 [snipped]  Or maybe some good links 
 about memory and
 garbage collection. 

As is mentioned time-to-time on this list when the above subject comes up,
Windows memory is a complicated topic.  One open-source utility I have found
helpful to monitor memory when I work under XP is called RAMpage, authored
by John Fitzgibbon, and is available at 

http://www.jfitz.com/software/RAMpage/


In its FAQ / Help, it touches on a lot of general memory and resource
issues, which I found helpful to learn about.  

http://www.jfitz.com/software/RAMpage/RAMpage_FAQS.html

(Though the author clearly warns that its usefulness for freeing memory
may not be any more than cosmetic on NT / 2000 / XP systems.)

Hope that helps.

Bill


Bill Pikounis, Ph.D.

Biometrics Research Department
Merck Research Laboratories
PO Box 2000, MailDrop RY33-300  
126 E. Lincoln Avenue
Rahway, New Jersey 07065-0900
USA

Phone: 732 594 3913
Fax: 732 594 1565


 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of 
 [EMAIL PROTECTED]
 Sent: Tuesday, December 09, 2003 12:09 PM
 To: [EMAIL PROTECTED]
 Subject: Re: [R] Windows Memory Issues
 
 
 I would also like some clarification about R memory 
 management.  Like Doug,
 I didn't find anything about consecutive calls to gc() to 
 free more memory.
 We run into memory limit problems every now and then and a better
 understanding of R's memory management would go a long way.  
 I am interested
 in learning more and was wondering if there is any specific R 
 documentation
 that explains R's memory usage?  Or maybe some good links 
 about memory and
 garbage collection.  Thanks.
 
 Benjamin Stabler
 Transportation Planning Analysis Unit
 Oregon Department of Transportation
 555 13th Street NE, Suite 2
 Salem, OR 97301  Ph: 503-986-4104
 
 ---
 
 Message: 21
 Date: Mon, 8 Dec 2003 09:51:12 -0800 (PST)
 From: Douglas Grove [EMAIL PROTECTED]
 Subject: Re: [R] Windows Memory Issues
 To: Prof Brian Ripley [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Message-ID:
   [EMAIL PROTECTED]
 Content-Type: TEXT/PLAIN; charset=US-ASCII
 
 On Sat, 6 Dec 2003, Prof Brian Ripley wrote:
 
  I think you misunderstand how R uses memory.  gc() does not 
 free up all 
  the memory used for the objects it frees, and repeated 
 calls will free 
  more.  Don't speculate about how memory management works: do your 
  homework!
 
 Are you saying that consecutive calls to gc() will free more 
 memory than
 a single call, or am I misunderstanding?   Reading ?gc and 
 ?Memory I don't
 see anything about this mentioned.  Where should I be looking to find 
 more comprehensive info on R's memory management??  I'm not 
 writing any
 packages, just would like to have a better handle on efficiently using
 memory as it is usually the limiting factor with R.  FYI, I'm running
 R1.8.1 and RedHat9 on a P4 with 2GB of RAM in case there is 
 any platform
 specific info that may be applicable.
 
 Thanks,
 
 Doug Grove
 Statistical Research Associate
 Fred Hutchinson Cancer Research Center
 
 
  In any case, you are using an outdated version of R, and your first
  course of action should be to compile up R-devel and try 
 that, as there 
  has been improvements to memory management under Windows.  
 You could also 
  try compiling using the native malloc (and that *is* 
 described in the 
  INSTALL file) as that has different compromises.
  
  
  On Sat, 6 Dec 2003, Richard Pugh wrote:
  
   Hi all,

   I am currently building an application based on R 1.7.1 
 (+ compiled
   C/C++ code + MySql + VB).  I am building this application 
 to work on 2
   different platforms (Windows XP Professional (500mb 
 memory) and Windows
   NT 4.0 with service pack 6 (1gb memory)).  This is a very memory
   intensive application performing sophisticated operations 
 on large
   matrices (typically 5000x1500 matrices).

   I have run into some issues regarding the way R handles 
 its memory,
   especially on NT.  In particular, R does not seem able to 
 recollect some
   of the memory used following the creation and 
 manipulation of large data
   objects.  For example, I have a function which receives a (large)
   numeric matrix, matches against more data (maybe imported 
 from MySql)
   and returns a large list structure for further analysis.  
 A typical call
   may look like this .

myInputData - matrix(sample(1:100, 750, T), nrow=5000)
myPortfolio - createPortfolio(myInputData)

   It seems I can only repeat this code process 2/3 times 
 before I have to
   restart R (to get the memory back).  I use the same object names
   (myInputData and myPortfolio) each time, so I am not 
 create more large
   objects ..

   I think the problems I have are illustrated with the 
 following example
   from a small R session .

# Memory usage for Rui process = 19,800
testData - matrix(rnorm(1000), 1000) # 

[R] Contour plots

2003-12-09 Thread Qinghua Song
I am drawing several contour plots on one page. I use image() and get
several contour plots, but I don't know how to control the colors across more
than one plot: the same color corresponds to different dependent values in
different plots. For example, yellow means z=100 (the highest value)
in plot 1, but yellow means z=40 (the highest value) in plot 2 -- this is
not what I want. I want the same color to correspond to the same dependent
value in each plot; say, in all plots yellow means z=100, red means
z=40, etc. I was wondering if someone can tell me which function I can
use, because it seems to me that the image function doesn't have an option
to control that.
Thank you.


-
Qinghua Song
Department of Statistics
UW-Madison
office phone:262-8181

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Contour plots

2003-12-09 Thread Bill Simpson
 I am drawing several contour plots in one page. I use image, and get
 several contours. But I don't know how to control the color in more than one
 plots. For example, same color corresponds to different dependent values
 in different plots. For example, yellow means z=100(the highest value)
 in plot1, but yellow means z=40(the highest value) in plot2--this is
 not what I want. In fact,I want the same color corresponds the same depedent
 value in each plot. Say, in all plots, yellow means z=100,red means
 z=40, etc. I was wondering if someone can tell me which function I can
 use. Because it seems to me that image function doesn't have such option
 to control that.
 Thank you.

I think you need to use the zlim argument in image().

For example
image(x,y,z,zlim=c(0,100))
will make sure that all the plots are drawn using the same values for z 
(between 0 and 100 here).
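
A short sketch with made-up data: fixing zlim (and the palette) makes the
colour-to-value mapping identical across panels.

x <- y <- seq(0, 1, length = 50)
z1 <- outer(x, y) * 100               # values up to about 100
z2 <- outer(x, y) * 40                # values up to about 40
zr <- range(z1, z2)                   # common range for both plots
op <- par(mfrow = c(1, 2))
image(x, y, z1, zlim = zr, col = heat.colors(12), main = "max 100")
image(x, y, z2, zlim = zr, col = heat.colors(12), main = "max 40")
par(op)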

Bill

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] R Interface handholding

2003-12-09 Thread Pat Gunn
Hello,
I need a bit of handholding with R, specifically, with writing
packages for it. I'm a systems programmer, and am, on the request
of several users of our software, working on generating R interfaces.
For starters, I've written the following C function for R (which compiles):

SEXP myincr(SEXP Rinput)
{ // Returns input integer incremented by one
int input;
SEXP returner;

PROTECT(Rinput = AS_NUMERIC(Rinput));

input = * INTEGER(Rinput);
input++;
PROTECT(returner = NEW_INTEGER(input));
Rprintf("Hey there\n");
return returner;
}

I've made this into a package, by dropping it into a stub directory
along with something called init.c:

#include "areone.h"
#include <R_ext/Rdynload.h>
#include <Rinternals.h>

R_NativePrimitiveArgType myincr_t[1] = {INTSXP};

static const R_CMethodDef cMethods[] =
{
{"myincr", (DL_FUNC) myincr, 1, myincr_t}
};

void R_init_myincr(DllInfo* dll)
{
R_registerRoutines(dll, cMethods, NULL, NULL, NULL);
}

R is happy to install this for me, but after doing a library(myincr),
the function doesn't seem to be available, so I presume I'm missing
something. Does R normally call, at library load, R_init_$MODULENAME() ?


My other question is.. our software produces data structures (we call datsets)
which resemble limited database tables, and I'd like some advice on exposing
them to R -- columns, in our scheme, either hold doubles or strings,
the columns have names, and we need the ability to export these into
appropriate R structures as well as populate them from R. I notice that
the R DBI, at least as according to its documentation, uses the database
to hold the data (presumably in temporary tables) and returns parts of it
as requested, via R functions. I could use a static global pointer (in C) into a
storage space of datsets, and write bridge functions exporting them as R arrays,
or I could attempt to find an appropriate native format and export to it..
any advice?

It might be worth mentioning that all of the code to do this will
eventually be auto-generated -- we already have code to do this for C
that doesn't need to expose people linking our code to all of our
custom structures and stuff.

-- 
Pat Gunn
Research/Systems Programmer, Auton Group, CMU

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] Windows Memory Issues

2003-12-09 Thread Benjamin . STABLER
Thanks for the reply.  So are you saying that multiple calls to gc() frees
up memory to Windows and then other processes can use that newly freed
memory?  So multiple calls to gc() does not actually make more memory
available to new R objects that I might create.  The reason I ask is because
I want to know how to use all the available memory that I can to store
object in R.  ?gc says that garbage collection is run without user
intervention so there is really nothing I can do to improve memory under
Windows except increase the --max-mem-size at startup (which is limited to
1.7GB under the current version of R for Windows and will be greater ~3GB
for R 1.9).  Thanks again.

Ben Stabler

-Original Message-
From: Prof Brian Ripley [mailto:[EMAIL PROTECTED]
Sent: Tuesday, December 09, 2003 9:29 AM
To: STABLER Benjamin
Cc: [EMAIL PROTECTED]
Subject: Re: [R] Windows Memory Issues


On Tue, 9 Dec 2003 [EMAIL PROTECTED] wrote:

 I would also like some clarification about R memory 
management.  Like Doug,
 I didn't find anything about consecutive calls to gc() to 
free more memory.

It was a statement about Windows, and about freeing memory *to 
Windows*.
Douglas Grove apparently had misread both the subject line and the 
sentence.

 We run into memory limit problems every now and then and a better
 understanding of R's memory management would go a long way.  
I am interested
 in learning more and was wondering if there is any specific 
R documentation
 that explains R's memory usage?  Or maybe some good links 
about memory and
 garbage collection.  Thanks.

There are lots of comments in the source files.  And as I already said 
(but has been excised below), this is not relevant to the next 
version of 
R anyway.

BTW, the message below has been selectively edited, so please 
consult the 
original.

 Message: 21
 Date: Mon, 8 Dec 2003 09:51:12 -0800 (PST)
 From: Douglas Grove [EMAIL PROTECTED]
 Subject: Re: [R] Windows Memory Issues
 To: Prof Brian Ripley [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Message-ID:
  [EMAIL PROTECTED]
 Content-Type: TEXT/PLAIN; charset=US-ASCII
 
 On Sat, 6 Dec 2003, Prof Brian Ripley wrote:
 
  I think you misunderstand how R uses memory.  gc() does 
not free up all 
  the memory used for the objects it frees, and repeated 
calls will free 
  more.  Don't speculate about how memory management works: do your 
  homework!
 
 Are you saying that consecutive calls to gc() will free more 
memory than
 a single call, or am I misunderstanding?   Reading ?gc and 
?Memory I don't
 see anything about this mentioned.  Where should I be 
looking to find 
 more comprehensive info on R's memory management??  I'm not 
writing any
 packages, just would like to have a better handle on 
efficiently using
 memory as it is usually the limiting factor with R.  FYI, I'm running
 R1.8.1 and RedHat9 on a P4 with 2GB of RAM in case there is 
any platform
 specific info that may be applicable.
 
 Thanks,
 
 Doug Grove
 Statistical Research Associate
 Fred Hutchinson Cancer Research Center
 
 
  In any case, you are using an outdated version of R, and your first
  course of action should be to compile up R-devel and try 
that, as there 
  has been improvements to memory management under Windows.  
You could also 
  try compiling using the native malloc (and that *is* 
described in the 
  INSTALL file) as that has different compromises.
  
  
  On Sat, 6 Dec 2003, Richard Pugh wrote:
  
   Hi all,

   I am currently building an application based on R 1.7.1 
(+ compiled
   C/C++ code + MySql + VB).  I am building this 
application to work on 2
   different platforms (Windows XP Professional (500mb 
memory) and Windows
   NT 4.0 with service pack 6 (1gb memory)).  This is a very memory
   intensive application performing sophisticated 
operations on large
   matrices (typically 5000x1500 matrices).

   I have run into some issues regarding the way R handles 
its memory,
   especially on NT.  In particular, R does not seem able 
to recollect some
   of the memory used following the creation and 
manipulation of large data
   objects.  For example, I have a function which receives a (large)
   numeric matrix, matches against more data (maybe 
imported from MySql)
   and returns a large list structure for further analysis. 
 A typical call
   may look like this .

myInputData - matrix(sample(1:100, 750, T), nrow=5000)
myPortfolio - createPortfolio(myInputData)

   It seems I can only repeat this code process 2/3 times 
before I have to
   restart R (to get the memory back).  I use the same object names
   (myInputData and myPortfolio) each time, so I am not 
create more large
   objects ..

   I think the problems I have are illustrated with the 
following example
   from a small R session .

# Memory usage for Rui process = 19,800
testData - matrix(rnorm(1000), 1000) # Create big matrix
# Memory usage for Rgui process = 254,550k
rm(testData)
# 

[R] C++: SET_LENGTH() Over Many Iterations?

2003-12-09 Thread Jim Java
In a C++ extension to R (v 1.8.1), I've been experimenting with a
generic push back function to tack one value at a time onto the end
of an R vector created within the extension. After calling this
function a certain number of times Rgui.exe (I'm writing in Windows
using Visual Studio .NET 2003) will fail with an Access Violation,
which doesn't happen when I pre-allocate the R-vector memory and write
to the reserved slots; i.e., I'm not trying to create an R object too
big to be handled by R within the context of my OS's available memory.
Here's some simple test code I've been running:

<CPP Code>
  #define PUSH_BACK_INTEGER(v, x) \
do {\
  UNPROTECT_PTR(v);\
  SET_LENGTH(v, GET_LENGTH(v) + 1);\
  PROTECT(v);\
  INTEGER_POINTER(v)[GET_LENGTH(v) - 1] = x;\
}\
while (false)

  SEXP R_SimplePushBackTest(SEXP args)
  {
SEXP arg1, arg2, int_vect;

PROTECT(arg1 = AS_INTEGER(CADR(args)));
int n_reps = INTEGER_POINTER(arg1)[0];
PROTECT(arg2 = AS_LOGICAL(CADDR(args)));
bool full_alloc = (LOGICAL_POINTER(arg2)[0] ? true : false);
if (full_alloc)
  PROTECT(int_vect = NEW_INTEGER(n_reps));
else
  PROTECT(int_vect = NEW_INTEGER(0));

    for (int i = 0; i < n_reps; ++i) {
      Rprintf("  ** Iteration %d:\n", i + 1);
  if (full_alloc)
INTEGER_POINTER(int_vect)[i] = i;
  else
PUSH_BACK_INTEGER(int_vect, i);
}

SEXP out, names, cls;

PROTECT(out = NEW_LIST(1));
SET_VECTOR_ELT(out, 0, int_vect);

PROTECT(names = NEW_CHARACTER(1));
    SET_STRING_ELT(names, 0, COPY_TO_USER_STRING("integer.vector"));
SET_NAMES(out, names);

PROTECT(cls = NEW_CHARACTER(1));
    SET_STRING_ELT(cls, 0, COPY_TO_USER_STRING("pushback"));
classgets(out, cls);

UNPROTECT(6);
return out;
  }
</CPP Code>

<R Code>
  nreps=5
  allocate=FALSE
  sink("pushback_test.txt")
  test.pushback=.External("R_SimplePushBackTest", as.integer(nreps), 
as.logical(allocate))
  print(test.pushback)
  sink()
</R Code>

If allocate=TRUE (vector memory is pre-allocated in the extension),
the code proceeds normally on my system; if allocate=FALSE, Rgui.exe
eventually crashes from an Access Violation. I've gathered only enough
information from the R source code so far to think that R is
ultimately calling realloc() through the SET_LENGTH macro; that would
make my code rather inefficient, but I'm trying for genericness here.
In C++, is there a better way than what I'm doing to concatenate
values onto the end of an R vector of arbitrary length, especially
over many iterations?

Thanks for taking the time to read this.

System specs: Pentium 4 2.5 GHz, 512 MB RAM, 40 GB hard drive, Win XP

Yrs etc.,

Jim Java

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R]: global and local variables

2003-12-09 Thread Karl Knoblick
Hi!

I'm no guru in R. But I can think of 2 ways (have to
be tried):

1) As Uwe Ligges said: just save the stored
variable a <- preprocess(xdata) and return it (if you
want to return more than one item, use a list) and give
this variable to the next function. Example:

func1 <- function(x) 
{
y <- x^2
# for testing:
print(paste("func1", x, y))
return(y) 
}

func2 <- function(x, valuefunc1=NA)
{
if (is.na(valuefunc1)) valuefunc1 <- func1(x)
# calculate things with valuefunc1
print(paste("func2", x, valuefunc1))
return(list(valuefunc1=valuefunc1))
}

func3 <- function(x, valuefunc1=NA)
{
if (is.na(valuefunc1)) valuefunc1 <- func1(x)
# calculate things with valuefunc1
print(paste("func3", x, valuefunc1))
return(list(valuefunc1=valuefunc1))
}

# use this
a <- list(NA)
names(a) <- c("valuefunc1")

a <- func2(x=3, valuefunc1=a$valuefunc1)
a <- func3(x=3, valuefunc1=a$valuefunc1) # be careful, this only makes
# sense if x is equal to the one in the former function call...
a <- func3(x=999, valuefunc1=a$valuefunc1) # will be the same
a <- func3(x=999) # will be different 


2) use global variables like
assign("stored.value", x, envir=.GlobalEnv)
For an example, see the reply to my posting:
RE: [R] LOCF - Last Observation Carried Forward Simon
Fear (Sat 15 Nov 2003 - 03:28:03 EST) 

HTH,
Karl

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] problem with pls(x, y, ..., ncomp = 16): Error in inherits(x, "data.frame") : subscript out of bounds

2003-12-09 Thread ryszard . czerminski
Help for pls says:

 ?pls
[...]
Arguments:
...
   ncomp: the numbers of latent variables to be assessed in the
  modelling. Default is from one to the rank of 'X'.
[...]

so my assumption was (maybe wrongly) that the idea of the ncomp parameter is to 
limit the
number of assessed latent variables.

also, if called without crossvalidation it gives the same error:

 m <- pls(x, y, ncomp = 16)
Error in inherits(x, "data.frame") : subscript out of bounds

R







Liaw, Andy [EMAIL PROTECTED]
12/09/2003 12:50 PM

 
To: Ryszard Czerminski/PH/[EMAIL PROTECTED], [EMAIL PROTECTED]
cc: 
Subject:RE: [R] problem with pls(x, y, ..., ncomp = 16): Error in 
inherit s( x, 
data.frame) : subscript out of bounds


I don't know the details of pls (in the pls.pcr package, I assume), but if
you use validation="CV", that says you want to use CV to select the best
number of components.  Then why would you specify ncomp as well?

Andy

 From: [EMAIL PROTECTED]
 
 When I try to use ncomp parameter in pls procedure  I get 
 following error:
 
  library(pls.pcr)
  m <- pls(x, y, validation = "CV", niter = 68, ncomp = 16)
 Error in inherits(x, "data.frame") : subscript out of bounds
 
 Without ncomp parameter everything seems to work OK
 
  dim(x)
 [1]  68 116
  dim(y)
 [1] 68  1
  m <- pls(x, y, validation = "CV", niter = 68)
  length(m$ncomp)
 [1] 67
 
 Ryszard
 
 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 


__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Interfacing R and Python in MS Windows

2003-12-09 Thread Héctor Villafuerte D.
I got farther this time, but it wasn't enough though...
does anyone know what these errors are?
Thanks!
Hector
E:\to_do> Rcmd INSTALL e:/to_do/RSPython.tar.gz

-- Making package RSPython 

  **
  WARNING: this package has a configure script
It probably needs manual configuration
  **
 installing inst files
 adding build stamp to DESCRIPTION
 making DLL ...
making GeneralConverters.d from GeneralConverters.c
making PythonCall.d from PythonCall.c
making PythonFunctionConverters.d from PythonFunctionConverters.c
making PythonReferences.d from PythonReferences.c
making PythonReflectance.d from PythonReflectance.c
making RCall.d from RCall.c
making RPythonConverters.d from RPythonConverters.c
making RPythonReferences.d from RPythonReferences.c
making UserConverters.d from UserConverters.c
gcc  -I../inst/include -Ic:/Python23/include -D_R_=1 -DUSE_R=1 
-IC:/PROGRA~1/R/rw1080/src/include -Wall -O2   -c General
Converters.c -o GeneralConverters.o
In file included from c:/Python23/include/Python.h:75,
from ../inst/include/RPythonModule.h:4,
from ../inst/include/UserConverters.h:4,
from GeneralConverters.c:1:
c:/Python23/include/intobject.h:41: parse error before 
PyInt_AsUnsignedLongLongMask
c:/Python23/include/intobject.h:41: warning: type defaults to `int' in 
declaration of `PyInt_AsUnsignedLongLongMask'
c:/Python23/include/intobject.h:41: warning: data definition has no type 
or storage class
In file included from c:/Python23/include/Python.h:77,
from ../inst/include/RPythonModule.h:4,
from ../inst/include/UserConverters.h:4,
from GeneralConverters.c:1:
c:/Python23/include/longobject.h:37: warning: parameter names (without 
types) in function declaration
c:/Python23/include/longobject.h:39: parse error before PyLong_AsLongLong
c:/Python23/include/longobject.h:39: warning: type defaults to `int' in 
declaration of `PyLong_AsLongLong'
c:/Python23/include/longobject.h:39: warning: data definition has no 
type or storage class
c:/Python23/include/longobject.h:40: parse error before 
PyLong_AsUnsignedLongLong
c:/Python23/include/longobject.h:40: warning: type defaults to `int' in 
declaration of `PyLong_AsUnsignedLongLong'
c:/Python23/include/longobject.h:40: warning: data definition has no 
type or storage class
c:/Python23/include/longobject.h:41: parse error before 
PyLong_AsUnsignedLongLongMask
c:/Python23/include/longobject.h:41: warning: type defaults to `int' in 
declaration of `PyLong_AsUnsignedLongLongMask'
c:/Python23/include/longobject.h:41: warning: data definition has no 
type or storage class
make[2]: *** [GeneralConverters.o] Error 1
make[1]: *** [srcDynlib] Error 2
make: *** [pkg-RSPython] Error 2
*** Installation of RSPython failed ***



Liaw, Andy wrote:

Check the Writing R Extensions manual for requirements on the DESCRIPTION
file.
HTH,
Andy
 

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] Windows Memory Issues

2003-12-09 Thread Thomas Lumley
On Tue, 9 Dec 2003 [EMAIL PROTECTED] wrote:

 Thanks for the reply.  So are you saying that multiple calls to gc() frees
 up memory to Windows and then other processes can use that newly freed
 memory?

No.  You typically can't free memory back to Windows (or many other OSes).


So multiple calls to gc() does not actually make more memory
 available to new R objects that I might create.

Yes and no. It makes more memory available, but only memory that would
have been made available in any case if you had tried to use it.  R calls
the garbage collector before requesting more memory from the operating
system and before running out of memory.


 The reason I ask is because
 I want to know how to use all the available memory that I can to store
 object in R.  ?gc says that garbage collection is run without user
 intervention so there is really nothing I can do to improve memory under
 Windows except increase the --max-mem-size at startup

You can't do anything else to make more memory available, only to use
less.
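
A small illustrative session of that point (the sizes in the comments are
only indicative; object.size() and gc() are standard functions):

x <- numeric(5e6)   # roughly 40 Mb worth of Vcells
object.size(x)      # size of the object itself
gc()                # "used" Vcells now include x
rm(x)
gc()                # "used" drops; later allocations reuse this space within R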

-thomas

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] Windows Memory Issues

2003-12-09 Thread Prof Brian Ripley
On Tue, 9 Dec 2003 [EMAIL PROTECTED] wrote:

 Thanks for the reply.  So are you saying that multiple calls to gc() frees
 up memory to Windows and then other processes can use that newly freed
 memory?  So multiple calls to gc() does not actually make more memory

That is what I said.  Why do people expect me to repeat myself?

 available to new R objects that I might create.

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] Windows Memory Issues

2003-12-09 Thread Prof Brian Ripley
On Tue, 9 Dec 2003, Thomas Lumley wrote:

 On Tue, 9 Dec 2003 [EMAIL PROTECTED] wrote:
 
  Thanks for the reply.  So are you saying that multiple calls to gc() frees
  up memory to Windows and then other processes can use that newly freed
  memory?
 
 No.  You typically can't free memory back to Windows (or many other OSes).

At least using R under Windows NT/2000/XP you can.  I've watched it do so 
whilst fixing memory leaks.

There is another complication here: R for Windows uses a third-party 
malloc, and you can free memory back to that if not to the OS.  The reason 
Windows is special is the issue of fragmentation, which OSes using mmap
(and R-devel under Windows) typically do not suffer.


-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Interfacing R and Python in MS Windows

2003-12-09 Thread Prof Brian Ripley
I believe RSPython is from Omegahat, so why not ask on the mailing list 
for Omegahat?

On Tue, 9 Dec 2003, Héctor Villafuerte D. wrote:

 I got farther this time, but it wasn't enough though...
 does anyone know what this errors are?

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] packages for ecologists

2003-12-09 Thread Martin Wegmann
Hello R-user, 

sorry for this very off-topic question. 

But I shall present R to my dept. (pro's and con's and what it can do).
The pro's and con's are easy but not what R can do (additional to the normal 
statistics).
I looked through the packages, but the enormous amount of packages makes it 
very difficult for me to decide which one is worth mentioning.

I used only a small part of all R packages (mainly recommended packages and 
grasper) and would like to know which packages for ecologists ought to be 
mentioned. 

I would greatly appreciate it if you could tell me which packages you think are 
very useful for ecological research in R, e.g. vegan, ade4, ... 

thanks in advance, regards Martin

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] packages for ecologists

2003-12-09 Thread Herzog, Mark
Definitely take a look at WiSP. 

http://www.ruwpa.st-and.ac.uk/estimating.abundance/WiSP/

Mark
==

Mark Herzog
Post Doctoral Researcher
Dept. of Natural Resources and Environmental Science
University of Nevada Reno
Reno, NV  89512
(775) 784-6984 (office)
(775) 784-4583 (fax)
[EMAIL PROTECTED] 

==



-Original Message-
From: Martin Wegmann [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 09, 2003 12:26 PM
To: R-list
Subject: [R] packages for ecologists


Hello R-user, 

sorry for this very off-topic question. 

But I shall present R to my dept. (pro's and con's and what it can do).
The pro's and con's are easy but not what R can do (additional to the
normal 
statistics).
I looked through the packages, but the enormous amount of packages makes
it 
very difficult for me to decide which one is worth mentioning.

I used only a small part of all R packages (mainly recommended packages
and 
grasper) and would like to know which package for ecologist has to be 
mentioned. 

I would greatly appreciate if you can tell me which packages you think
are 
very useful for ecolgical research in R e.g. vegan, ade4, ... 

thanks in advance, regards Martin

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] histogram density division

2003-12-09 Thread Mathieu Drapeau
Hi,
I already sent this query but I didn't receive any answers.
I try another time and try to explain another way my question...
I want to decrease the height of my bands in my histogram by a factor of 
5... how can I do that? Is there an argument to hist() that does the job?

Thanks,
Mathieu
Hi,
I would like to plot a histogram with modified density values.
My Y-axis represents the occurrences of the ranges (specified by the 
breaks argument) of my data (a big vector). Now, I 
would like to divide the number of occurrences in each band by X and 
plot the lower occurrences.
How can I do that?

Thanks,
Mathieu
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Johnson's Sb distribution

2003-12-09 Thread Christian Mora
Hi all;

I'm working with the library SuppDists trying to fit a Johnson's Sb
distribution to a dataset. It works fine, but I need to set one of the
location parameters (epsilon) to zero. How can I do this using the
function JohnsonFit() or any other similar? ...and Is it possible to
define the type (SN,SL,SB,SU) or the library assumes the type
automatically depending on the data?

Thanks for any hint

Christian

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] histogram density division

2003-12-09 Thread Thomas W Blackwell
Mathieu  -

That's easy.  Assign the return value of  hist() to some
variable, say fixed, then go in and hack the value of
fixed$counts however you like, and re-plot using plot(fixed).
Example code:

fixed <- hist(rnorm(2000))
fixed$counts <- fixed$counts / 5
plot(fixed)

I confess I didn't quite understand your question the first
time I saw it, so couldn't reply.

-  tom blackwell  -  u michigan medical school  -  ann arbor  -

On Tue, 9 Dec 2003, Mathieu Drapeau wrote:

 Hi,
 I already sent this query but I didn't receive any answers.
 I try another time and try to explain another way my question...

 I want to decrease the height of my bands in my histogram by a factor of
 5... how can I do that? Is there an argument to hist() that do the job?

 Thanks,
 Mathieu

  Hi,
  I would like to plot an histogram with modified density values.
  My Y-axis represent the occurences of the ranges (specified by the
  breaks argument) of my data (represented by a big vector). Now, I
  would like to divide by X the number of occurences in each bands and
  do a plot of lower occurences.
  How can I do that?
 
  Thanks,
  Mathieu


__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] R Interface handholding

2003-12-09 Thread Douglas Bates
I'm not sure exactly what you want to do but it seems you are going
about it in an overly complicated way.  If you are going to build a
package you just need to put a NAMESPACE file at the top level and
include a useDynLib specification, or write a short R function called
.First.lib that calls library.dynam and include it in the R
subdirectory.  Typically this function is simply

.First.lib = function(lib, pkg) {
library.dynam(pkg, pkg, lib)
}

but it can do other initializations.

After that you just put the source for myincr in the src subdirectory,
compile and load the package, then call the function as

.Call("myincr", x, PACKAGE = "myPackageName")
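
For completeness, the package would usually also ship a thin R-level wrapper
so users never call the external symbol directly (the names below simply
mirror the example and are illustrative):

myincr <- function(x)
    .Call("myincr", as.integer(x), PACKAGE = "myincr")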


Pat Gunn [EMAIL PROTECTED] writes:

 Hello,
 I need a bit of handholding with R, specifically, with writing
 packages for it. I'm a systems programmer, and am, on the request
 of several users of our software, working on generating R interfaces.
 For starters, I've written the following R function (which compiles):
 
 SEXP myincr(SEXP Rinput)
 { // Returns input integer incremented by one
 int input;
 SEXP returner;
 
 PROTECT(Rinput = AS_NUMERIC(Rinput));
 
 input = * INTEGER(Rinput);
 input++;
 PROTECT(returner = NEW_INTEGER(input));
 Rprintf("Hey there\n");
 return returner;
 }
 
 I've made this into a package, by dropping it into a stub directory
 along with something called init.c:
 
 #include "areone.h"
 #include <R_ext/Rdynload.h>
 #include <Rinternals.h>
 
 R_NativePrimitiveArgType myincr_t[1] = {INTSXP};
 
 static const R_CMethodDef cMethods[] =
 {
 {"myincr", (DL_FUNC) myincr, 1, myincr_t}
 };
 
 void R_init_myincr(DllInfo* dll)
 {
 R_registerRoutines(dll, cMethods, NULL, NULL, NULL);
 }
 
 R is happy to install this for me, but after doing a library(myincr),
 the function doesn't seem to be available, so I presume I'm missing
 something. Does R normally call, at library load, R_init_$MODULENAME() ?
 
 
 My other question is.. our software produces data structures (we call datsets)
 which resemble limited database tables, and I'd like some advice on exposing
 them to R -- columns, in our scheme, either hold doubles or strings,
 the columns have names, and we need the ability to export these into
 appropriate R structures as well as populate them from R. I notice that
 the R DBI, at least as according to its documentation, uses the database
 to hold the data (presumably in temporary tables) and returns parts of it
 as requested, via R functions. I could use a static global pointer (in C) into a
 storage space of datsets, and write bridge functions exporting them as R arrays,
 or I could attempt to find an appropriate native format and export to it..
 any advice?
 
 It might be worth mentioning that all of the code to do this will
 eventually be auto-generated -- we already have code to do this for C
 that doesn't need to expose people linking our code to all of our
 custom structures and stuff.
 
 -- 
 Pat Gunn
 Research/Systems Programmer, Auton Group, CMU
 
 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-help

-- 
Douglas Bates[EMAIL PROTECTED]
Statistics Department608/262-2598
University of Wisconsin - Madisonhttp://www.stat.wisc.edu/~bates/

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Key for custom lattice panel function

2003-12-09 Thread Hadley Wickham
Hi all,

I've created a custom lattice panel function for levelplot - instead of 
representing z by colour, it plots circles with radius proportional to z 
(in the style of the map plots of Jacques Bertin).  I'm happy to email 
an example graph to anyone interested.

The problem is now to create a key for the plot.  This is difficult 
because all of the other lattice plots convey information through point 
colour, shape, and texture, not point size.  For this reason (and having 
looked at the code) I don't think I can shoehorn $key or $colorkey to 
produce the type of key that I want. 

I'm using grid.circles() with native units that have been scaled in the 
same way as in panel.levelplot.

Can anyone offer any suggestions as to how to create this key?

Hadley

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] OT: BibTex year-only citation in text?

2003-12-09 Thread Jason Turner
Sorry for the off-topic question, but I know there are some talented 
LaTeX users out there. Which bibliography style gives only the year in 
text citations (e.g for further details, see Anderson (1992) )?

Thanks

Jason
--
Indigo Industrial Controls Ltd.
http://www.indigoindustrial.co.nz
64-21-343-545
[EMAIL PROTECTED]
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] Key for custom lattice panel function

2003-12-09 Thread Deepayan Sarkar
On Tuesday 09 December 2003 16:37, Hadley Wickham wrote:
 Hi all,

 I've created a custom lattice panel function for levelplot - instead of
 representing z by colour, it plot circles with radius proportional to z
 (in the style of the map plots of Jacques Bertin).  I'm happy to email
 an example graph to anyone interested.

 The problem is now to create a key for the plot.  This is difficult
 because all of the other lattice plots convey information through point
 colour, shape, and texture, not point size.  For this reason (and having
 looked at the code) I don't think I can shoehorn $key or $colorkey to
 produce the type of key that I want.

 I'm using grid.circles() with native units that have been scaled in the
 same way as in panel.levelplot.

 Can anyone offer any suggestions as to how to create this key?

Exactly how do you want your key to look? 

As things stand now, I don't think it would be possible to put arbitrary keys. 
However, this should be easy to fix. That is, we could allow the key to be an 
arbitrary grid object (as produced by draw.key and draw.colorkey), and use 
its grobheight or grobwidth to allocate the necessary space. I'll try to 
add this in the next major update.

For now, you could fake it by printing your trellis object in a grid 
viewport that's smaller than the whole screen, and drawing your key separately 
afterwards in the remaining space. 
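
A rough sketch of that "fake it" approach, assuming print.trellis() accepts
newpage = FALSE as in recent lattice versions (levelplot(volcano) stands in
for the bubble plot, and the key is drawn by hand):

library(lattice)
library(grid)
p <- levelplot(volcano)
grid.newpage()
pushViewport(viewport(x = 0, width = 0.8, just = "left"))
print(p, newpage = FALSE)          # plot in the left 80% of the device
popViewport()
pushViewport(viewport(x = 0.8, width = 0.2, just = "left"))
grid.circle(x = 0.3, y = c(0.3, 0.5, 0.7), r = unit(c(2, 4, 6), "mm"))
grid.text(c("small", "medium", "large"), x = 0.7, y = c(0.3, 0.5, 0.7))
popViewport()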

HTH,

Deepayan

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] nested analysis with lme - odd result?

2003-12-09 Thread Gary Allison
Hello!

When I simulate variance at only a single level in a nested analysis 
using lme (all levels are random effects), the results confuse me. 
Instead of lme reporting high variance in only that simulated level, 
substantial variance (10% of simulated level) often appears in other 
levels -- in some configurations, as often as 50% of the time. Usually 
this spurious variance shows up in levels of nesting above the level 
with high simulated variance.

Am I doing something wrong or should I expect such 'over-detection'? I 
was expecting near zero variance in those other levels except say, 5% of 
the time.

Here's some example code with variance simulated in level 2 and spurious 
variance appearing frequently in level 1:

library(nlme)
# 4 level nesting
simVar <- c(0,1,0,.1) # first is level one, last becomes error
nAtLevel <- c(5,5,5,5)  # number of replicates at each level
F1V <- F2V <- F3V <- residV <- numeric(0)
for (rep in 1:100){
  F1f <- F2f <- F3f <- value <- numeric(0)
  mn <- numeric(length(nAtLevel))
  # data generator
  for (F1 in 1:nAtLevel[1]) {
    mn[1] <- rnorm(1,sd=simVar[1]) # set mean for level 1
    for (F2 in 1:nAtLevel[2]) {
      mn[2] <- rnorm(1,sd=simVar[2]) # set mean for level 2
      for (F3 in 1:nAtLevel[3]) {
        mn[3] <- rnorm(1,sd=simVar[3]) # set mean for level 3
        for (F4 in 1:nAtLevel[4]) {
          mn[4] <- rnorm(1,sd=simVar[4]) # set mean for lowest level
          value <- c(value, sum(mn))
          F1f <- c(F1f,F1)
          F2f <- c(F2f,F2)
          F3f <- c(F3f,F3)
        }
      }
    }
  }
  y.lme <- lme(value ~ 1, random = ~ 1 | as.factor(F1f)/
                                         as.factor(F2f)/
                                         as.factor(F3f))
  v <- as.numeric(VarCorr(y.lme)[,2])
  v <- as.numeric(na.omit(v))
  F1V <- c(F1V,v[1])
  F2V <- c(F2V,v[2])
  F3V <- c(F3V,v[3])
  residV <- c(residV,y.lme$sigma)
}
var.df <- data.frame(F1V,F2V,F3V,residV)
par(mfrow=c(2,2))
hist(var.df$F1V,main='Level 1 StdDev')
hist(var.df$F2V,main='Level 2 StdDev')
hist(var.df$F3V,main='Level 3 StdDev')
hist(var.df$residV,main='Residual StdDev')
txt <- 'Simulated stddev:'
for (i in 1:length(simVar)) {
  txt <- paste(txt,' level ',i,'=',simVar[i],',',sep='')}
mtext(txt, outer=T,line=-1,side=3)
Summary plots for several sample sizes is at:
http://david.science.oregonstate.edu/~allisong/R/sampSize.pdf
Thanks for any suggestions!
Gary
--
Gary Allison
Department of Evolution, Ecology and Organismal Biology
Ohio State University
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] expressing functions

2003-12-09 Thread Remington, Richard
# Why does expressing one function

require(ctest)
t.test
# return only

function (x, ...)
UseMethod("t.test")
<environment: namespace:ctest>
# but expressing another function

shapiro.test

# returns more complete code?

function (x) 
{
    DNAME <- deparse(substitute(x))
    x <- sort(x[complete.cases(x)])
    n <- length(x)
    if (n < 3 || n > 5000) 
        stop("sample size must be between 3 and 5000")
    rng <- x[n] - x[1]
    if (rng == 0) 
        stop("all `x[]' are identical")
    if (rng < 1e-10) 
        x <- x/rng
    n2 <- n%/%2
    sw <- .C("swilk", init = FALSE, as.single(x), n, n1 = as.integer(n), 
        as.integer(n2), a = single(n2), w = double(1), pw = double(1), 
        ifault = integer(1), PACKAGE = "ctest")
    if (sw$ifault && sw$ifault != 7) 
        stop(paste("ifault=", sw$ifault, ". This should not happen"))
    RVAL <- list(statistic = c(W = sw$w), p.value = sw$pw, 
        method = "Shapiro-Wilk normality test", data.name = DNAME)
    class(RVAL) <- "htest"
    return(RVAL)
}
<environment: namespace:ctest>

--

Richard E. Remington III
Statistician
KERN Statistical Services, Inc.
PO Box 1046
Boise, ID 83701
Tel: 208.426.0113
KernStat.com
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] expressing functions

2003-12-09 Thread Prof Brian Ripley
On Tue, 9 Dec 2003, Remington, Richard wrote:

 # Why does expressing one function
 
 require(ctest)
 t.test
 
 # return only
 
 function (x, ...)
 UseMethod("t.test")
 <environment: namespace:ctest>
 
 # but expressing another function
 
 shapiro.test
 
 # returns more complete code?
[...]

False hypothesis: both are the complete code.

You are not understanding (S3-style) generic functions: see any good book
on R/S.

(`An Introduction to R' is based on notes that predate them, but they are
covered in more detail in the draft R Language manual.  `The reader is
referred to the official references for a complete discussion of this
mechanism.', which I think means Chambers & Hastie, 1992.)

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] expressing functions

2003-12-09 Thread Jason Turner
Remington, Richard wrote:
# Why does expressing one function

require(ctest)
t.test
# return only

function (x, ...)
UseMethod("t.test")
<environment: namespace:ctest>
# but expressing another function

shapiro.test

# returns more complete code?

function (x)
{
DNAME <- deparse(substitute(x))
x <- sort(x[complete.cases(x)])
n <- length(x)
if (n < 3 || n > 5000)
stop("sample size must be between 3 and 5000")
...

Short answer:  Unless you're programming your own functions, you don't 
need to worry about that.

Long answer:  Because the first is generic - it looks at what kind of 
data you're testing (two vectors, a formula, whatever, ...) and calls 
the appropriate sub-function.  shapiro.test does not; it just takes one 
data format, and  stops in its tracks if that's not what you've provided.

The ideas behind this are documented in Writing R Extensions 
(R-exts.pdf) which is supplied with binary R distributions, and is 
available from CRAN.  See chapter 6, Generic functions and methods, in 
the version that accompanies R-1.8.1.
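
A toy illustration of that dispatch mechanism (the function names are made
up): the generic body is just UseMethod(), and the visible code lives in the
class-specific methods.

describe <- function(x, ...) UseMethod("describe")
describe.default <- function(x, ...) c(mean = mean(x), sd = sd(x))
describe.factor  <- function(x, ...) table(x)

describe(rnorm(10))                   # dispatches to describe.default
describe(factor(c("a", "b", "a")))    # dispatches to describe.factor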

Cheers

Jason
--
Indigo Industrial Controls Ltd.
http://www.indigoindustrial.co.nz
64-21-343-545
[EMAIL PROTECTED]
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


RE: [R] OT: BibTex year-only citation in text?

2003-12-09 Thread Matt Nelson
Jason,

For many bibliography styles, the command \citeyear{key} will work.  If this
doesn't work for the style you are using, you can investigate style-specific
methods or consider other styles.  I find that natbib is good for
author-year formats.

If you use Latex more than on occasion, a good reference book is invaluable.
I like The Latex Companion by Goossens, Mittelbach, and Samarin.

Regards,
Matt

Matthew R. Nelson, Ph.D.
Director, Biostatistics
Sequenom, Inc.

 -Original Message-
 From: Jason Turner [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, December 09, 2003 4:20 PM
 To: R-Help
 Subject: [R] OT: BibTex year-only citation in text?
 
 
 Sorry for the off-topic question, but I know there are some talented 
 LaTeX users out there. Which bibliography style gives only 
 the year in 
 text citations (e.g for further details, see Anderson (1992) )?
 
 Thanks
 
 Jason
 -- 
 Indigo Industrial Controls Ltd.
 http://www.indigoindustrial.co.nz
 64-21-343-545
 [EMAIL PROTECTED]
 
 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-help


__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] expressing functions

2003-12-09 Thread Gabor Grothendieck


Perhaps what should be added to the previous answers is
that you can find out where the real work is done like
this:

require(ctest)
t.test
methods(t.test)
ctest:::t.test.default
ctest:::t.test.formula

If the class of the first argument to t.test is formula
then t.test.formula gets invoked so that's where the
real work is done; otherwise, t.test.default
gets invoked so that's where the real work is done.

--- 
 
Remington, Richard wrote:
 # Why does expressing one function
 
 require(ctest)
 t.test
 
 # return only
 
 function (x, ...)
 UseMethod("t.test")
 <environment: namespace:ctest>
 
 # but expressing another function
 
 shapiro.test
 
 # returns more complete code?
 
 function (x)
 {
 DNAME <- deparse(substitute(x))
 x <- sort(x[complete.cases(x)])
 n <- length(x)
 if (n < 3 || n > 5000)
 stop("sample size must be between 3 and 5000")

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] pvalues

2003-12-09 Thread Maya Sanders
dear all-
If I have a vector of numbers (not necessarily
normally distributed), how can I get the p-value of a
number in this distribution?  I am interested in the
inverse of 'quantile'.
thank you-
Maya
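
One common reading of the question is the empirical CDF, which acts as the
inverse of quantile(); a sketch with made-up data (in older R versions ecdf()
may live in the stepfun package, in later versions it is part of stats):

# library(stepfun)   # only needed where ecdf() is not yet part of stats
x  <- rexp(1000)     # hypothetical, non-normal sample
Fn <- ecdf(x)
Fn(1.5)              # proportion of the sample <= 1.5
1 - Fn(1.5)          # upper-tail "p-value" for the value 1.5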

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help