Re: [R] sas vs r

2004-04-28 Thread Göran Broström
On Wed, Apr 28, 2004 at 11:13:30AM +0200, Henric Nilsson wrote:
 At 05:24 2004-04-28, Liliana Forzani wrote:
 
 I have code in SAS (NLMIXED) and I am having a hard time converting it to R.
 1) It is Poisson, with a random intercept, but
 it has an offset. That is, I do not want one of the coefficients to be
 estimated. My model is
 g(mean) = beta X + Z,
 Z fixed, X fixed, and beta to be estimated.
 I am using glmmML.
 
 If I recall correctly, neither glmmML nor glmmPQL (from MASS) handles 
 offset terms. But GLMM in the lme4 package does.

glmmML handles offset terms. I am pretty sure that glmmPQL does too.
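For what it's worth, a minimal sketch of a Poisson random-intercept fit with a fixed-coefficient term supplied as an offset (the data frame and variable names here are hypothetical, and whether offset() in the formula is honoured may depend on the package version, as discussed above):

```r
library(glmmML)

## Hypothetical data: y is a Poisson count, x a covariate whose
## coefficient is estimated, z a term whose coefficient is fixed at 1
df <- data.frame(y  = rpois(100, 2),
                 x  = rnorm(100),
                 z  = rnorm(100),
                 id = gl(10, 10))

## Random intercept per id; offset(z) enters the linear predictor
## with coefficient 1 and is not estimated
fit <- glmmML(y ~ x + offset(z), family = poisson,
              data = df, cluster = id)
```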

 
 2) The same, but I have a random slope (and I think with glmmML I can use
 only a random intercept).
 
 GLMM in lme4 can do this.
 
 3) I tried to use nlme; is this equivalent to NLMIXED?
 
 No, nlme fits non-linear and linear mixed-effect models with Gaussian error 
 terms.
 
 I hope this helps,
 Henric
 
 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html

-- 
 Göran Broströmtel: +46 90 786 5223
 Department of Statistics  fax: +46 90 786 6614
 Umeå University   http://www.stat.umu.se/egna/gb/
 SE-90187 Umeå, Sweden e-mail: [EMAIL PROTECTED]



[R] Hsieh

2004-04-28 Thread Amor Messaoud


Re: [R] R hang-up using lm

2004-04-28 Thread Peter Dalgaard
Göran Broström [EMAIL PROTECTED] writes:

   set.seed(3)
   yy <- rnorm(20)
   gg <- rep(1:10, 2)
   y <- tapply(yy, gg, median)
   x <- 1:10
   z <- lm(y ~ x)  # OK
   z <- lm(x ~ y)  # crashes R

 I had to try it too: No crashes on Win2000 pro (1.8.1) or Linux (1.9.0),
 but (in both cases):
 
 
   lm(y ~ x)
 
 Call:
 lm(formula = y ~ x)
 
 Coefficients:
 (Intercept)x  
 -0.8783   0.1293  
 
 
   lm(x ~ y)
 
 Call:
 lm(formula = x ~ y)
 
 Coefficients:
 (Intercept)  
 5.5  
 
 i.e., only an intercept estimate in the second case! Surely something is
 wrong!? 

I just tried it on the 64-bit system and got Göran's result. However,
repeating the lm(x~y) bit seems to have gotten itself stuck after the
3rd time (looks like a memory runaway problem, currently at 3.6GB and
counting...). So perhaps those who couldn't reproduce should try it with
replicate(100, lm(x~y)) or so.

-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907



[R] nlm

2004-04-28 Thread Dupas Stéphane
Hi, I am performing deviance minimization on a biological model of
reproductive incompatibility due to symbiotic bacteria. I have up to 15
parameters to estimate, but even with 8 parameters,
using simulated data, I observe that my nlm procedure does not converge to
the minimum value. How can I improve it and make sure it converges?
Thanks,

Stéphane
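Some generic tactics often help nlm in cases like this: scale the parameters, watch the convergence code, and restart from several starting points. A hedged sketch (dev and p0 are hypothetical stand-ins for the real deviance function and starting values):

```r
## Hypothetical deviance function and starting values
dev <- function(p) sum((p - 1:8)^2)   # stand-in for the real deviance
p0  <- rep(0.5, 8)

## 1. Scale parameters to comparable magnitudes (typsize) and
##    let nlm report its progress (print.level)
fit <- nlm(dev, p0, typsize = rep(1, 8), print.level = 1,
           iterlim = 500)

## 2. Check the convergence code: 1 or 2 is OK, 3-5 signal trouble
fit$code

## 3. Restart from several random starting points and keep the best
starts <- replicate(10, runif(8), simplify = FALSE)
fits   <- lapply(starts, function(s) nlm(dev, s, iterlim = 500))
best   <- fits[[which.min(sapply(fits, `[[`, "minimum"))]]
```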
--
Stéphane Dupas
IRD c/o CNRS
Laboratoire Populations, Génétique et Evolution
1 av de la Terrasse
91198 Gif sur Yvette
+ 33 1 69 82 37 04 
http://www.cnrs-gif.fr/pge/index.html
http://www.cnrs-gif.fr/pge/index.php?lang=en



[R] Extracting numbers from somewhere within strings

2004-04-28 Thread Lutz Prechelt
Hello everybody,

I have a bunch of strings like this:
IBM POWER4+ 1.9GHz  
IBM RS64-III 500MHz  
IBM RS64-IV 600 MHz 
IBM RS64 IV 750MHz   
Intel Itanium 2 Processor 6M 1.5GHz 
Intel Itanium2 1 Ghz 
Intel Itanium2 1.5GHz   
Intel MP 1.6GHz   

I want to extract the processor speed.

I am using
  grep("MHz", tpc$cpu, ignore.case = TRUE)
  grep("GHz", tpc$cpu, ignore.case = TRUE)
to extract the unit, because there are only these two.

But how to extract the number before it?
(I am using R 1.8.0)

In Perl one would match a regexp such as
  /([0-9.]+) ?[MG][Hh][Zz]/
and then obtain the number as $1.
But the capability of returning $1 is apparently not
implemented in grep() or any other function I could find.

How is it best done?

Thanks in advance,

  Lutz


Prof. Dr. Lutz Prechelt;  [EMAIL PROTECTED]
Institut fuer Informatik; Freie Universitaet Berlin
Takustr. 9; 14195 Berlin; Germany
+49 30 838 75115; http://www.inf.fu-berlin.de/inst/ag-se/



Re: [R] R hang-up using lm

2004-04-28 Thread Jean-Pierre Müller
On 28 Apr 2004, at 11:18, Göran Broström wrote:

 lm(y ~ x)
Call:
lm(formula = y ~ x)
Coefficients:
(Intercept)x
-0.8783   0.1293

 lm(x ~ y)
Call:
lm(formula = x ~ y)
Coefficients:
(Intercept)
5.5
Same on RAqua 1.8.1,
and the two show:
z[]
...
$model
Error in as.data.frame.default(x[[i]], optional = TRUE) :
can't coerce array into a data.frame
And calling first
z - lm(x ~ y)
gives
Error: cannot allocate vector of size 3043328 Kb
HTH.
--
Jean-Pierre Müller
SSP / BFSH2 / UNIL / CH - 1015 Lausanne
Voice:+41 21 692 3116 / Fax:+41 21 692 3115


[R] glmmPQL

2004-04-28 Thread Liliana Forzani
when I tried the example in glmmPQL I got an error

library(nlme)
   summary(glmmPQL(y ~ trt + I(week > 2), random = ~ 1 | ID,
  family = binomial, data = bacteria))

iteration 1
iteration 2
iteration 3
iteration 4
iteration 5
iteration 6
Error: No slot of name reStruct for this object of class lme
Error in logLik([EMAIL PROTECTED]) : Unable to find the argument object in
selecting a method for function logLik




Re: [R] Extracting numbers from somewhere within strings

2004-04-28 Thread Peter Dalgaard
Lutz Prechelt [EMAIL PROTECTED] writes:

 Hello everybody,
 
 I have a bunch of strings like this:
 IBM POWER4+ 1.9GHz  
 IBM RS64-III 500MHz  
 IBM RS64-IV 600 MHz 
 IBM RS64 IV 750MHz   
 Intel Itanium 2 Processor 6M 1.5GHz 
 Intel Itanium2 1 Ghz 
 Intel Itanium2 1.5GHz   
 Intel MP 1.6GHz   
 
 I want to extract the processor speed.
 
 I am using
    grep("MHz", tpc$cpu, ignore.case = TRUE)
    grep("GHz", tpc$cpu, ignore.case = TRUE)
 to extract the unit, because there are only these two.
 
 But how to extract the number before it?
 (I am using R 1.8.0)
 
 In Perl one would match a regexp such as
   /([0-9.]+) ?[MG][Hh][Zz]/
 and then obtain the number as $1.
 But the capability of returning $1 is apparently not
 implemented in grep() or any other function I could find.
 
 How is it best done?
 
 Thanks in advance,

gsub() has \1 etc. For instance

 gsub("^.* ([0-9\\.]+) *[MG][Hh]z$", "\\1", x)
[1] 1.9 500 600 750 1.5 1   1.5 1.6

(Not exactly trivial to get that right, but neither is it in Perl...)
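As an aside, in later versions of R (newer than the 1.8 series mentioned above) the match itself can be pulled out with regexpr()/regmatches(); a hedged sketch using a few of the example strings:

```r
x <- c("IBM POWER4+ 1.9GHz", "IBM RS64-IV 600 MHz",
       "Intel Itanium2 1 Ghz")

## Extract the speed-plus-unit substring directly
m <- regexpr("[0-9.]+ ?[MG][Hh]z", x)
regmatches(x, m)

## Strip the unit and convert everything to MHz
num  <- as.numeric(sub(" ?[MG][Hh]z", "", regmatches(x, m)))
unit <- ifelse(grepl("[Gg][Hh]z", x), 1000, 1)
mhz  <- num * unit
```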

-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907



[R] connection to libraries problem

2004-04-28 Thread haleh.yasrebi
Hello All,
Although I have downloaded some libraries such as multivariate data analysis
library (multiv) and ade4, their functions such as pca or reconst are not
recognised. Should I install anything else or use some instruction so that R
could find the location of libraries?

Thanks for your quick response,

Haleh Yasrebi



Re: [R] R hang-up using lm

2004-04-28 Thread Roger Bivand
On 28 Apr 2004, Peter Dalgaard wrote:

 Göran Broström [EMAIL PROTECTED] writes:
 
    set.seed(3)
    yy <- rnorm(20)
    gg <- rep(1:10, 2)
    y <- tapply(yy, gg, median)
    x <- 1:10
    z <- lm(y ~ x)  # OK
    z <- lm(x ~ y)  # crashes R
 
  I had to try it too: No crashes on Win2000 pro (1.8.1) or Linux (1.9.0),
  but (in both cases):
  
  
lm(y ~ x)
  
  Call:
  lm(formula = y ~ x)
  
  Coefficients:
  (Intercept)x  
  -0.8783   0.1293  
  
  
lm(x ~ y)
  
  Call:
  lm(formula = x ~ y)
  
  Coefficients:
  (Intercept)  
  5.5  
  
  i.e., only an intercept estimate in the second case! Surely something is
  wrong!? 
 
 I just tried it on the 64-bit system and got Göran's result. However,
 repeating the lm(x~y) bit seems to have gotten itself stuck after the
 3rd time (looks like a memory runaway problem, currently at 3.6GB and
 counting...). So perhaps those who couldn't reproduce should try it with
 replicate(100, lm(x~y)) or so.
 

RHEL 3, R 1.9.0 seg.faults:

 set.seed(3)
 yy <- rnorm(20)
 gg <- rep(1:10, 2)
 y <- tapply(yy, gg, median)
 x <- 1:10
 z <- lm(y ~ x)  # OK
 str(y)
 num [, 1:10] -0.853 -0.712 -0.229 -0.450  0.174 ...
 - attr(*, dimnames)=List of 1
  ..$ : chr [1:10] 1 2 3 4 ...
 dim(y)
[1] 10
 z <- lm(x ~ y)
 replicate(100, lm(x~y))
Error in model.matrix.default(mt, mf, contrasts) : 
invalid type for dimname (must be a vector)
 lm(x~y)

Program received signal SIGSEGV, Segmentation fault.
Rf_getAttrib (vec=0x8f71200, name=0x81f47d8) at attrib.c:99
99  if (TAG(s) == name) {
(gdb) bt
#0  Rf_getAttrib (vec=0x8f71200, name=0x81f47d8) at attrib.c:99
#1  0x08122363 in Rf_isArray (s=0x8f71200) at util.c:413
#2  0x08069aef in Rf_dimnamesgets (vec=0x8f71200, val=0x8f627c8)
at attrib.c:581
#3  0x080cb845 in do_modelmatrix (call=0x8674200, op=0x8212f00, args=0x0, 
rho=0x8f5eaf8) at model.c:1904
#4  0x080cc6dc in do_internal (call=0x4320, op=0x8204a78, args=0x8f710e8, 
env=0x8f5eaf8) at names.c:1057
#5  0x080a74d2 in Rf_eval (e=0x86740b0, rho=0x8f5eaf8) at eval.c:375
#6  0x080a905b in do_set (call=0x8674040, op=0x82038c4, args=0x867405c, 
rho=0x8f5eaf8) at eval.c:1271
#7  0x080a74d2 in Rf_eval (e=0x8674040, rho=0x8f5eaf8) at eval.c:375
#8  0x080a88ce in do_begin (call=0x867a068, op=0x82037c8, args=0x8674008, 
rho=0x8f5eaf8) at eval.c:1046
#9  0x080a74d2 in Rf_eval (e=0x867a068, rho=0x8f5eaf8) at eval.c:375
#10 0x080a7779 in Rf_applyClosure (call=0x8f5ec80, op=0x867aaf4, 
arglist=0x8f5ee5c, rho=0x8f5ed60, suppliedenv=0x8f5ecb8) at eval.c:566
#11 0x080cc9af in applyMethod (call=0x8f5ec80, op=0x867aaf4, 
args=0x8f5ee5c, 
rho=0x8f5ed60, newrho=0x8f5ecb8) at objects.c:119
#12 0x080cd01c in Rf_usemethod (generic=0x87aaf88 model.matrix, 
obj=0x8f5a18c, call=0x867aa14, args=0x81f4998, rho=0x8f5ed60, 
callrho=0x8f3ab40, defrho=0x8b2a974, ans=0xbfffd288) at objects.c:326
#13 0x080cd4b5 in do_usemethod (call=0x867aa14, op=0x8212308, 
args=0x867aa30, env=0x8f5ed60) at objects.c:389
#14 0x080a74d2 in Rf_eval (e=0x867aa14, rho=0x8f5ed60) at eval.c:375
#15 0x080a7779 in Rf_applyClosure (call=0x875c02c, op=0x867a838, 
arglist=0x8f5ee5c, rho=0x8f3ab40, suppliedenv=0x81f4998) at eval.c:566
#16 0x080a72f5 in Rf_eval (e=0x875c02c, rho=0x8f3ab40) at eval.c:410
#17 0x080a905b in do_set (call=0x875be6c, op=0x82038c4, args=0x875bfd8, 
rho=0x8f3ab40) at eval.c:1271
#18 0x080a74d2 in Rf_eval (e=0x875be6c, rho=0x8f3ab40) at eval.c:375
#19 0x080a88ce in do_begin (call=0x875d598, op=0x82037c8, args=0x875be50, 
rho=0x8f3ab40) at eval.c:1046
#20 0x080a74d2 in Rf_eval (e=0x875d598, rho=0x8f3ab40) at eval.c:375
#21 0x080a74d2 in Rf_eval (e=0x875da20, rho=0x8f3ab40) at eval.c:375
#22 0x080a88ce in do_begin (call=0x8760a88, op=0x82037c8, args=0x875da04, 
rho=0x8f3ab40) at eval.c:1046
#23 0x080a74d2 in Rf_eval (e=0x8760a88, rho=0x8f3ab40) at eval.c:375
#24 0x080a7779 in Rf_applyClosure (call=0x8f3ad38, op=0x8760804, 
arglist=0x8f3ace4, rho=0x821546c, suppliedenv=0x81f4998) at eval.c:566
#25 0x080a72f5 in Rf_eval (e=0x8f3ad38, rho=0x821546c) at eval.c:410
#26 0x080c0eda in Rf_ReplIteration (rho=0x821546c, savestack=136268184, 
browselevel=0, state=0xbfffdeb0) at main.c:250
#27 0x080c1083 in R_ReplConsole (rho=0x821546c, savestack=0, 
browselevel=0) at main.c:298
#28 0x080c190f in run_Rmainloop () at main.c:653
#29 0x0812480c in main (ac=1, av=0xbfffe3c4) at system.c:99
(gdb) 



 

-- 
Roger Bivand
Economic Geography Section, Department of Economics, Norwegian School of
Economics and Business Administration, Breiviksveien 40, N-5045 Bergen,
Norway. voice: +47 55 95 93 55; fax +47 55 95 93 93
e-mail: [EMAIL PROTECTED]



Re: [R] glmmPQL

2004-04-28 Thread Liliana Forzani
I think I understand the problem: you cannot use glmmPQL if you have
loaded lme4.



On Wed, 28 Apr 2004, Liliana Forzani wrote:

 when I tried the example in glmmPQL I got an error

 library(nlme)
 summary(glmmPQL(y ~ trt + I(week > 2), random = ~ 1 | ID,
    family = binomial, data = bacteria))

 iteration 1
 iteration 2
 iteration 3
 iteration 4
 iteration 5
 iteration 6
 Error: No slot of name reStruct for this object of class lme
 Error in logLik([EMAIL PROTECTED]) : Unable to find the argument object in
 selecting a method for function logLik
 





[R] Emacs Speaks Statistics version 5.2.0 has been released

2004-04-28 Thread Stephen Eglen
Emacs Speaks Statistics (ESS) version 5.2 is available for download
at:

http://www.analytics.washington.edu/downloads/ess/ess-5.2.0.tar.gz
or
http://www.analytics.washington.edu/downloads/ess/ess-5.2.0.zip

Changes since 5.1.24 are listed below.

Thanks, 
The ESS Core Team.

Changes/New Features in 5.2.0:
   * ESS[BUGS]:  new info documentation!  now supports interactive
 processing thanks to Aki Vehtari (mailto:[EMAIL PROTECTED]); new
 architecture-independent unix support as well as support for BUGS
 v. 0.5

   * ESS[SAS]:  convert .log to .sas with ess-sas-transcript; info
 documentation improved; Local Variable bug fixes; SAS/IML
 statements/functions now highlighted; files edited remotely by
 ange-ftp/EFS/tramp are recognized and pressing SUBMIT opens a
 buffer on the remote host via the local variable
 ess-sas-shell-buffer-remote-init which defaults to ssh; changed
 the definition of the variable ess-sas-edit-keys-toggle to boolean
 rather than 0/1; added the function ess-electric-run-semicolon
 which automatically reverse indents lines containing only run;;
 C-F1 creates MS RTF portrait from the current buffer; C-F2 creates
 MS RTF landscape from the current buffer; C-F9 opens a SAS DATASET
 with PROC INSIGHT rather than PROC FSVIEW; C-F10 kills all buffers
 associated with .sas program; inferior aliases for SAS batch:
 C-c C-r for submit region, C-c C-b for submit buffer, C-c C-x for
 goto .log; C-c C-y for goto .lst

   * ESS[S]: Pressing underscore (_) once inserts " <- " (as before);
 pressing underscore twice inserts a literal underscore.  To stop
 this smart behaviour, add (setq ess-smart-underscore nil) to your
 .emacs after ess-site has been loaded;
 ess-dump-filename-template-proto (new name!) now can be customized
 successfully (for S language dialects); Support for Imenu has been
 improved; set ess-imenu-use-S to non-nil to get an Imenu-S item
 on your menubar; ess-help: Now using nice underlines (instead of
 `nuke-* ^H_')

   * ESS[R]:  After (require 'essa-r), M-x ess-r-var allows to load
 numbers from any Emacs buffer into an existing *R* process; M-x
 ess-rdired gives a directory editor of R objects; fixed
 ess-retr-lastvalue-command, i.e. .Last.value bug (thanks to David
 Brahm)

   * ESS: Support for creating new window frames has been added to ESS.
 Inferior ESS processes can be created in dedicated frames by
 setting inferior-ess-own-frame to t.  ESS help buffers can also
 open in new frames; see the documentation for ess-help-own-frame
 for details.  (Thanks to Kevin Rodgers for contributing code.)



Re: [R] R hang-up using lm

2004-04-28 Thread Göran Broström
On Wed, Apr 28, 2004 at 11:18:38AM +0200, Göran Broström wrote:
 On Wed, Apr 28, 2004 at 11:46:51AM +0300, Renaud Lancelot wrote:
  Martin Maechler wrote:
  kjetil == kjetil  [EMAIL PROTECTED]
 on Tue, 27 Apr 2004 19:19:59 -0400 writes:
  
  
  kjetil On 27 Apr 2004 at 16:46, Raubertas, Richard wrote:
   Within the last few weeks, someone else reported a similar
   problem when using the results of tapply in a call to rlm().
   Note that the result of tapply is a 1D array, and it appears
   there is a general problem with using such a thing on the
   RHS in formula-based modeling functions:
   
   set.seed(3)
   yy - rnorm(20)
   gg - rep(1:10, 2)
   y - tapply(yy, gg, median)
   x - 1:10
   z - lm(y ~ x)  # OK
   z - lm(x ~ y)  # crashes R
   
   (R 1.8.1 on Windows XP Pro)
   
  
  kjetil What exactly do you mean by "crashes R"?
  
  kjetil Doing this in R 1.9.0, Windows XP Pro, there is no indication
  kjetil of problems.
  
  nor is there with 1.9.0 or R-patched on Linux,
  nor with R 1.8.1 on Linux.
  
  no warning, no error, no problem at all.
  Is it really the above (reproducible, thank you!) example
  that crashes your R 1.8.1 ?
  
  It does it for me: Windows XP Pro, R 1.9.0 (P IV, 2.4 GHz, 256 MB RAM).
  It freezes RGui, and a few seconds later a Windows message appears
  saying that the Rgui front-end has encountered a problem and must be closed.
 
 I had to try it too: No crashes on Win2000 pro (1.8.1) or Linux (1.9.0),
 but (in both cases):
 
 
   lm(y ~ x)
 
 Call:
 lm(formula = y ~ x)
 
 Coefficients:
 (Intercept)x  
 -0.8783   0.1293  
 
 
   lm(x ~ y)
 
 Call:
 lm(formula = x ~ y)
 
 Coefficients:
 (Intercept)  
 5.5  
 
 i.e., only an intercept estimate in the second case! Surely something is
 wrong!? 

Obviously, y, generated as above, has an attribute that confuses 'lm',
because

 lm(x ~ as.vector(y))

works as expected. To add further to confusion, R-1.8.1 (Windows):

 glm(x ~ y)

Error in model.matrix.default(mt, mf, contrasts) :
invalid type for dimname (must be a vector)

while 1.9.0 (Linux):


 glm(x ~ y)

Call:  glm(formula = x ~ y) 

Coefficients:
(Intercept)  
5.5  

Degrees of Freedom: 9 Total (i.e. Null);  9 Residual
Null Deviance:  82.5 
Residual Deviance: 82.5 AIC: 53.48 

 
 Göran
 
 
  
  Best,
  
  Renaud
  
  -- 
  Dr Renaud Lancelot
  veterinary epidemiologist
  Ambassade de France - SCAC
  BP 834 Antananarivo 101
  Madagascar
  
  tél. +261 (0)32 04 824 55 (cell)
   +261 (0)20 22 494 37 (home)
  
 
 

-- 
 Göran Broströmtel: +46 90 786 5223
 Department of Statistics  fax: +46 90 786 6614
 Umeå University   http://www.stat.umu.se/egna/gb/
 SE-90187 Umeå, Sweden e-mail: [EMAIL PROTECTED]



Re: [R] connection to libraries problem

2004-04-28 Thread Petr Pikal


On 28 Apr 2004 at 11:08, haleh.yasrebi wrote:

 Hello All,
 Although I have downloaded some libraries such as multivariate data
 analysis library (multiv) and ade4, their functions such as pca or

Hi

library(packagename)

after you start R session

see

?library
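For completeness, a minimal sketch of the full cycle (the package names are the ones mentioned above; install.packages assumes a CRAN mirror is reachable):

```r
install.packages("ade4")   # download and install once
library(ade4)              # attach in every new R session before use
library()                  # list all installed packages
```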

Cheers
Petr

 reconst are not recognised. Should I install any thing else or use any
 instruction so that R could find the location of libraries?
 
 Thanks for your quick response,
 
 Haleh Yasrebi
 

Petr Pikal
[EMAIL PROTECTED]



Re: [R] Solving linear equations

2004-04-28 Thread Tomek R
lm.fit(A, b)$coefficients seems to do what I want, but, as you have rightly 
pointed out, the deviations most probably will not have constant variance 
(but should be normally distributed).  Basically, I make measurements from 
which I obtain the vector b with variance in each of the values v.  How 
would I fit my data then? (A is known and fixed). What's the best book to 
look at for solving those problems?

Thanks,
Tomek

 How do you wish to decide how to resolve the likely inconsistencies
in the overdetermined system?  If you assume the vertical deviations from a
linear fit are normally distributed with constant variance, then
"lm" should do what you want.
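When each element of b has its own known variance v[i], weighted least squares is the usual route. A hedged sketch (A, b, and v are hypothetical stand-ins for the real measurements):

```r
## Hypothetical overdetermined system: 20 equations, 3 unknowns
set.seed(1)
A <- matrix(rnorm(60), 20, 3)
beta_true <- c(1, -2, 0.5)
v <- runif(20, 0.5, 2)                      # known variances of b
b <- A %*% beta_true + rnorm(20, sd = sqrt(v))

## Weight each observation by 1/variance; "- 1" drops the intercept,
## since the system is b = A beta with no constant term
fit <- lm(b ~ A - 1, weights = 1 / v)
coef(fit)
```

Weighted (generalized) least squares of this kind is covered in most regression texts.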
_
It's fast, it's easy and it's free. Get MSN Messenger today!


Re: [R] nnet question

2004-04-28 Thread Prof Brian Ripley
Please read the documentation: why is a logistic output equation
appropriate here, for example?  The iris3 example does not use the same
arguments as you have, so you are claiming to know what you are doing
here.

You *have* worked through the examples in the book this software supports, 
haven't you?
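The hint about the logistic output can be made concrete: for a regression target on an arbitrary scale, a linear output unit (linout = TRUE) and scaled inputs usually behave far better than the defaults. A hedged sketch with synthetic AR(1) data (the variable names are hypothetical):

```r
library(nnet)

## Synthetic AR(1) series standing in for the index data
set.seed(42)
n  <- 200
ew <- as.numeric(arima.sim(list(ar = 0.8), n + 1))
x  <- ew[1:n]
y  <- ew[2:(n + 1)]

## Scale the input, use a linear output unit, and allow more iterations
xs  <- scale(x)
fit <- nnet(xs, y, size = 2, linout = TRUE, maxit = 500)
```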

On Mon, 26 Apr 2004, Erik Johnson wrote:

 I am using R 1.8.0, and am attempting to fit a Neural Network model of a 
 time series (here called Metrics.data).  It consists of one time series 
 variable run on its lag (AR(1)).  Basically, in an OLS model it would 
 look like
 Metrics.data$ewindx ~ Metrics.data$ewindx.lag1
 However, I am trying to run this through a neural network estimation.  
 So far, I have been getting convergence very quickly, and do not believe 
 it to be true.
 Here is the code and output.  Please note that I am using all of the 
 values for training and testing in one matrix, as I do not care about 
 the testing results right now, I only want to capture weights.  Here is 
 the code and output
 
   nnet(metrics.data$ewindxlag1,metrics.data$ewindx,size=2, entropy=FALSE)
 # weights:  7
 initial  value 78858370643.085342
 final  value 78841786515.212158
 converged
 a 1-2-1 network with 7 weights
 options were -
 
 When I run the iris3 example, the convergence looks much nicer 
 (consisting of more than one iteration).  Am I missing some fundamental 
 understanding of this example?  Thanks for any input.
 
 
 

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595



Re: [R] connection to libraries problem

2004-04-28 Thread Martin Maechler
 haleh == haleh yasrebi [EMAIL PROTECTED]
 on Wed, 28 Apr 2004 11:08:00 +0100 writes:

haleh Hello All, Although I have downloaded some libraries
haleh such as multivariate data analysis library (multiv)
haleh and ade4, their functions such as pca or reconst are
haleh not recognised. Should I install any thing else or
haleh use any instruction so that R could find the location
haleh of libraries?

Petr has already helped you, so, ,

  === 
please, Please, PLEASE, PLEASE, P_L_E_A_S_E , P L E A S E 
  === 

   
   These are  NOT 'libraries'  but PACKAGES !!!
   

The packages are put into one or more libraries full of
packages,  which you see when typing  library().

The first argument of the function library() is called
  package 
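A small illustration of the distinction (the paths on your system will differ):

```r
.libPaths()        # the libraries: directories that hold packages
library()          # the packages installed in those libraries
library(ade4)      # attach one package from a library
```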

phew..., thanks!
Martin



[R] p-values

2004-04-28 Thread John Maindonald
The Bayesian framework is surely a good framework
for thinking about inference, and for exploring common
misinterpretations of p-values.  P-values are surely
unhelpful and to be avoided in cases where there is
`strong' prior evidence.  I will couch the discussion that
follows in terms of confidence intervals, which makes
the discussion simpler, rather than in terms of p-values.
The prior evidence is in my sense strong if it leads to a
Bayesian credible interval that is very substantially
different from the frequentist confidence interval
(though I prefer the term `coverage interval').
Typically the intervals will be similar if a diffuse prior
is used, i.e., all values over a wide enough range are,
on some suitable scale, a-priori equally likely.  This is,
in my view, the message that you should take from your
reading.
Examples of non-diffuse priors are what Berger focuses on.
Consider for example his discussion of one of Jeffreys'
analyses, where Jeffreys puts 50% of the probability
on a point value of a continuous parameter, i.e., there is
a large spike in the prior at that point.
Berger commonly has scant commentary on the specific
features of his priors that make the Bayesian results seem
very different (at least to the extent of having a different feel)
from the frequentist results. His paper in vol 18, no 1 of
Statistical Science (pp. 1-32; pp. 12-27 are comments from
others) seems more judicious in this respect than some of
his earlier papers.
It is interesting to speculate how R's model fitting routines
might be tuned to allow a Bayesian interpretation.  What
family or families of priors would be on offer, and/or used by
default? What default mechanisms would be suitable and
useful for indicating the sensitivity of results to the choice
of prior?
John Maindonald.

From: Greg Tarpinian [EMAIL PROTECTED]
Date: 28 April 2004 6:32:06 AM
To: [EMAIL PROTECTED]
Subject: [R] p-values
I apologize if this question is not completely
appropriate for this list.
I have been using SAS for a while and am now in the
process of learning some C and R as a part of my
graduate studies.  All of the statistical packages I
have used generally yield p-values as a default output
to standard procedures.
This week I have been reading "Testing Precise
Hypotheses" by J.O. Berger & Mohan Delampady,
Statistical Science, Vol. 2, No. 3, 317-355, and
"Bayesian Analysis: A Look at Today and Thoughts of
Tomorrow" by J.O. Berger, JASA, Vol. 95, No. 452, pp.
1269-1276, both as supplements to my Math Stat.
course.
It appears, based on these articles, that p-values are
more or less useless.  If this is indeed the case,
then why is a p-value typically given as a default
output?  For example, I know that PROC MIXED and
lme( ) both yield p-values for fixed effects terms.
The theory I am learning does not seem to match what
is commonly available in the software, and I am just
wondering why.
Thanks,
Greg
John Maindonald email: [EMAIL PROTECTED]
phone : +61 2 (6125)3473fax  : +61 2(6125)5549
Centre for Bioinformation Science, Room 1194,
John Dedman Mathematical Sciences Building (Building 27)
Australian National University, Canberra ACT 0200.


Re: [R] reading a sparse matrix into R

2004-04-28 Thread David Meyer
Have you considered the read.matrix.csr() function in pkg. e1071? It
uses another sparse input format, but perhaps you can easily transform
your data into the supported one. Also, in my experience, data frames are
not the best basis for a sparse format since they might turn out to be
very memory consuming and slow... The sparse formats provided by the
SparseM package are better suited for this.
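A hedged sketch of both routes (the file name and input-format details are hypothetical; check the packages' help pages for the exact formats they expect):

```r
library(e1071)
library(SparseM)

## Route 1: if the data can be written in the SVMlight-style sparse
## format ("label index:value ..."), read it directly:
## m <- read.matrix.csr("adjacency.dat")

## Route 2: build a small dense adjacency matrix and convert it
adj <- matrix(0, 5, 5)
adj[1, 2:5] <- adj[2:5, 1] <- 1     # node 1 connected to nodes 2..5
sm  <- as.matrix.csr(adj)           # compressed sparse row form
```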

-d

Date: Tue, 27 Apr 2004 17:10:09 -0400
From: Aaron J. Mackey [EMAIL PROTECTED]
Subject: [R] reading a sparse matrix into R
To: [EMAIL PROTECTED]
Message-ID: [EMAIL PROTECTED]
Content-Type: text/plain; charset=US-ASCII; format=flowed


I have a 47k x 47k adjacency matrix that is very sparse (at most 30 
entries per row); my textual representation therefore is simply an 
adjacency list of connections between nodes for each row, e.g.

node    connections
A   B   C   D   E
B   A   C   D
C   A   E
D   A
E   A   F
F   E
G
H

I'd like to import this into a dataframe of node/connection 
(character/vector-of-characters) pairs.  I've experimented with scan, 
but haven't been able to coax it to work.  I can also hack it with 
strsplit() myself, but I thought there might be a more elegant way.
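One straightforward route is readLines() plus strsplit(); a sketch (the inline vector stands in for readLines() on a hypothetical file holding the rows shown above, minus the header):

```r
## Read each row and split on whitespace: the first field is the node,
## the remaining fields are its connections
lines  <- c("A B C D E", "B A C D", "C A E", "G")  # stand-in for readLines("adj.txt")
fields <- strsplit(lines, "[[:space:]]+")
nodes  <- sapply(fields, `[`, 1)
conns  <- lapply(fields, `[`, -1)
names(conns) <- nodes

conns$A        # "B" "C" "D" "E"
conns$G        # character(0): a node with no connections
```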

Thanks,

-Aaron



[R] (no subject)

2004-04-28 Thread sofie
From [EMAIL PROTECTED]

Hi there. Is it possible to send word-processor text to R and then code it yourself?
Can I, by coding, see whether some results are correlated with others? E.g., if there is a low
score in one column and a high score in another, is that correlated?

Best regards, Lene
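The kind of check being asked about is direct once the data are in a data frame; a sketch with hypothetical columns:

```r
## Two hypothetical score columns
d <- data.frame(score1 = c(1, 2, 3, 4, 5),
                score2 = c(9, 7, 6, 4, 2))

cor(d$score1, d$score2)       # Pearson correlation; negative here,
                              # since score2 falls as score1 rises
cor.test(d$score1, d$score2)  # the same, with a significance test
```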
[[alternative HTML version deleted]]



Re: [R] helps on levelplot

2004-04-28 Thread Deepayan Sarkar
On Wednesday 28 April 2004 01:35, Yan Wang wrote:
 Thank you for the hints! I have some followup questions.

 About the panel function, here is the code I copied from the example:
   xyplot(NOx ~ C | EE, data = ethanol,
  prepanel = function(x, y) prepanel.loess(x, y, span = 1),
  xlab = "Compression Ratio", ylab = "NOx (micrograms/J)",
  panel = function(x, y) {
  panel.grid(h = -1, v = 2)
  panel.xyplot(x, y)
  panel.loess(x, y, span = 1)
  },
  aspect = "xy")

 My question is what is the relation between the xyplot function and
 panel.xyplot(x,y). In my case, I found that in the panel function, if
 I only have panel.abline(), the levelplot wouldn't be drawn at all.
 But I don't know how to specify the arguments in panel.levelplot(),
 which is exactly the part that you used ... in your last reply. For
 all the ways I tried, I got various errors.

I believe this is described in some detail in the documentation. Have 
you read the entry for 'panel' in help(xyplot) ?

Generally speaking, each lattice function has its own panel function -- 
e.g. panel.xyplot, panel.levelplot, etc. The arguments they accept may 
be different, and are described in their respective help pages. 
Naturally, any high level function (like xyplot and levelplot) will 
pass to its panel function enough information for the default panel 
functions to work. Another way to find out what arguments are passed 
would be to use 

panel = function(...) print(names(list(...)))

BTW, I would recommend always using a ... argument in your panel 
function definition.

I don't know if this answers your questions. If it doesn't please be 
more specific in describing where your confusion lies.
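As a concrete sketch of the points above (the custom panel function and the `...` recommendation; the `ethanol` data ships with lattice):

```r
library(lattice)

## custom panel: grid + points + loess fit; `...` forwards everything
## else (subscripts, graphical settings) to the default panel functions
my.panel <- function(x, y, ...) {
    panel.grid(h = -1, v = 2)
    panel.xyplot(x, y, ...)
    panel.loess(x, y, span = 1)
}

xyplot(NOx ~ C | E, data = ethanol, panel = my.panel)
```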

 About the label size for the colorkey, when I coded as you suggested:
 levelplot(z ~ x * y, grid, at = seq(0, 1, by = 0.1), scales = list(cex = 2),
           colorkey = list(labels = list(cex = 2, at = seq(0, 1, by = 0.1))))

 I got error Error in draw.colorkey(x$colorkey) : invalid type/length
 (3/1) in vector allocation. In fact, for whatever at I specified
 in labels, I got the same error message.

Works for me as expected. Perhaps you are not using the latest R 
(1.9.0).

Deepayan

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] loading into a list

2004-04-28 Thread Roger D. Peng
Delayed response (!)
But perhaps more transparent than
mget(ls(e), envir = e)
is simply
as.list(e)
-roger
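Putting the two versions together, the whole helper might look like this (a sketch; untested against a real .Rdata file):

```r
## load everything saved in `filename` into a fresh environment,
## then return the objects as a named list
loadIntoList <- function(filename) {
    e <- new.env()
    load(filename, envir = e)
    as.list(e)
}

## usage sketch, assuming solution001.Rdata contains A, B and C:
## sol001 <- loadIntoList("solution001.Rdata")
## sol001$A
```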
Roger D. Peng wrote:
Or maybe
loadIntoList <- function(filename) {
    e <- new.env()
    load(filename, envir = e)
    mget(ls(e), envir = e)
}
-roger
Achim Zeileis wrote:
On Tue, 20 Apr 2004 15:39:07 +0200 Tamas Papp wrote:

I have the following problem.
Use the same algorithm (with different parameters) to generate
simulation results.  I store these in variables, eg A, B, C, which I
save into a file with
save(A, B, C, file = "solution001.Rdata")
I do this many times.  Then I would like to load the above, but in
such a manner that A, B, C woule be part of a list, eg
sol001 <- loadIntoList("solution001.Rdata")
so that sol001 is a list with elements A, B, C.
I am looking for a way to implement the above function. 

If your objects are always called A, B, and C, the following should
work:
loadIntoList <- function(file) {
  load(file)
  return(list(A = A, B = B, C = C))
}
hth,
Z

The variables
are very large and I need a lot of time to compute them, so saving and 
loading the results is the only viable alternative.
Thanks,
Tamas
--
Tamás K. Papp
E-mail: [EMAIL PROTECTED]
Please try to send only (latin-2) plain text, not HTML or other
garbage.
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! 
http://www.R-project.org/posting-guide.html

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! 
http://www.R-project.org/posting-guide.html

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Rtemp directories accumulating over time

2004-04-28 Thread Prof Brian Ripley
On Tue, 27 Apr 2004 [EMAIL PROTECTED] wrote:

 There is a nuisance that the number of directories with name starting 
 Rtmp (and always empty) in my temp directory is increasing over time. 
 
 I put the following in my  .Rprofile:
 
 tmp0001 <- tempdir()
 setHook(packageEvent("base", "onUnload"),
         function(...) unlink(tmp0001, recursive = TRUE))
 
 
 which solves part of the problem, but not all. 

I don't see why: package base is never unloaded so that hook function is 
never run.  (Indeed, no package/namespace is unloaded except by 
explicit user action, in particular not when R is terminated.)

 So there are also other tmpdirs made by R. Why, where, and why are they
 not removed at exit (when their content are removed)?

They are removed by R.  This is a Windows-only bug, as Windows sometimes
does not act on commands to remove empty directories (but only sometimes).
Session temporary directories should only be left around when a session 
crashes.
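If stale `Rtmp*` directories from crashed sessions do pile up, a startup sweep along these lines can remove them (a sketch -- check what the pattern matches on your system before deleting anything):

```r
## delete leftover per-session temp directories, sparing the current one
tmproot <- dirname(tempdir())
stale <- list.files(tmproot, pattern = "^Rtmp", full.names = TRUE)
stale <- setdiff(stale, tempdir())
unlink(stale, recursive = TRUE)
```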

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Problem with W2000

2004-04-28 Thread Cristian Pattaro
Have you ever had problems with Windows 2000 supporting R 1.9.0?

After installing R 1.9.0, when we ran it for the first time, 
the computer stopped responding and we had to reset the machine.

Do you have any suggestions?

Thanks
Cristian

=
Cristian Pattaro
=
Unit of Epidemiology  Medical Statistics
Department of Medicines and Public Health
University of Verona

[EMAIL PROTECTED]
=


[[alternative HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Fedora 1 RPM Packages

2004-04-28 Thread Prof Brian Ripley
I think this is appropriate for most users: if you want to use
--enable-R-shlib you can and probably should compile R yourself.  (After
all, you are going to need the development tools for almost all uses of
libR.so.)  It is not a zero-cost option: it does for example cause shared
objects in packages to be linked against libR.so not R.bin and so
increases considerably the run-time footprint.

On Tue, 27 Apr 2004, Ian Wallace wrote:

 Don't know if this is the correct place to post this question however I
 thought I would start here.  The Fedora 1 packages are built without the
 option '--enable-R-shlib' turned on in the R.spec file.  Other software,
 like, plr (the postgresql library that calls R) needs the shared lib. 
 Any chance that we can change the R.spec file to include:
 
 [EMAIL PROTECTED] SPECS]$ diff -bu R.spec R.spec.orig
 --- R.spec  2004-04-27 15:31:51.0 -0600
 +++ R.spec.orig 2004-04-27 15:31:39.0 -0600
 @@ -36,7 +36,6 @@
  %build
  export R_BROWSER=/usr/bin/mozilla
  ( %configure \
 ---enable-R-shlib \
  --with-tcl-config=%{_libdir}/tclConfig.sh \
  --with-tk-config=%{_libdir}/tkConfig.sh ) \
  | egrep '^R is now|^ |^$' - > CAPABILITIES
 
 I'm not sure if there are other issues with using the shared library,
 I'm very new to R and just joined the mailing list.
 
 Thanks!
 ian
 
 

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] numericDeriv

2004-04-28 Thread Jean Eid
Dear All,
I am trying to solve a Generalized Method of Moments problem which
necessitate the gradient of moments computation to get the
standard  errors of estimates.
I know optim does not output the gradient, but I can use numericDeriv to
get that. My question is: is this the best function to do this?

Thank you
Jean,

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] reading a sparse matrix into R

2004-04-28 Thread Douglas Bates
David Meyer [EMAIL PROTECTED] writes:

 Have you considered the read.matrix.csr() function in pkg. e1071? It
 uses another sparse input format, but perhaps you can easily transform
 your data in the supported one. Also, in my experience, data frames are
 not the best basis for a sparse format since they might turn out to be
 very memory consuming and slow... The sparse formats provided by the
 SparseM package are better suited for this.

The Matrix package (versions 0.8-1 and later) has a C function that does
the opposite operation, converting a symmetric sparse matrix to a
graph.

I would look at the way that the graph is stored there (the
formulation is from the Metis package of C code) and try to convert
your adjacency graph to that formulation first.

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


model.matrix (Re: [R] R hang-up using lm)

2004-04-28 Thread Göran Broström
On Wed, Apr 28, 2004 at 12:59:54PM +0200, Göran Broström wrote:
 On Wed, Apr 28, 2004 at 11:18:38AM +0200, Göran Broström wrote:
  On Wed, Apr 28, 2004 at 11:46:51AM +0300, Renaud Lancelot wrote:
   Martin Maechler a écrit :
   kjetil == kjetil  [EMAIL PROTECTED]
  on Tue, 27 Apr 2004 19:19:59 -0400 writes:
   
   
   kjetil On 27 Apr 2004 at 16:46, Raubertas, Richard wrote:
Within the last few weeks, someone else reported a similar
problem when using the results of tapply in a call to rlm().
Note that the result of tapply is a 1D array, and it appears
there is a general problem with using such a thing on the
RHS in formula-based modeling functions:

set.seed(3)
yy <- rnorm(20)
gg <- rep(1:10, 2)
y <- tapply(yy, gg, median)
x <- 1:10
z <- lm(y ~ x)  # OK
z <- lm(x ~ y)  # crashes R

(R 1.8.1 on Windows XP Pro)

   
   kjetil What exactly do you mean by crashes R
   
   kjetil Doing this in R1.9.0, windows XP pro, there is no indication 
   of kjetil problems.
   
   nor is there with 1.9.0 or R-patched on Linux,
   nor with R 1.8.1 on Linux.
   
   no warning, no error, no problem at all.
   Is it really the above (reproducible, thank you!) example
   that crashes your R 1.8.1 ?
   
It does it for me: Windows XP Pro, R 1.9.0 (P IV, 2.4 GHz, 256 MB RAM). 
   It freezes RGui and a few seconds later, a Windows message appears 
   saying that Rgui front-end met a problem and must be closed.
  
  I had to try it too: No crashes on Win2000 pro (1.8.1) or Linux (1.9.0),
  but (in both cases):
  
  
lm(y ~ x)
  
  Call:
  lm(formula = y ~ x)
  
  Coefficients:
  (Intercept)            x  
      -0.8783       0.1293  
  
  
lm(x ~ y)
  
  Call:
  lm(formula = x ~ y)
  
  Coefficients:
  (Intercept)  
  5.5  
  
  i.e., only an intercept estimate in the second case! Surely something is
  wrong!? 
 
 Obviously, y, generated as above, has an attribute that confuses 'lm',
 because 
 
  lm(x ~ as.vector(y))
 
 works as expected. To add further to confusion, R-1.8.1 (Windows):
 
  glm(x ~ y)
 
 Error in model.matrix.default(mt, mf, contrasts) :
 invalid type for dimname (must be a vector)

so, 

 model.matrix(x ~ y)
   (Intercept)
1            1
2            1
3            1
4            1
5            1
6            1
7            1
8            1
9            1
10           1
attr(,"assign")
[1] 0

but


 model.matrix(x ~ as.vector(y))
   (Intercept) as.vector(y)
1            1 -0.853357506
2            1 -0.711872147
3            1 -0.228785137
4            1 -0.449739758
5            1  0.173914266
6            1 -0.138766243
7            1 -0.433799800
8            1  0.234183701
9            1  0.002728104
10           1  0.733590165
attr(,"assign")
[1] 0 1

AND

 rr <- model.matrix.default(x ~ y)
Segmenteringsfel
[EMAIL PROTECTED]:~$ 

After a restart and repeating 

 model.matrix.default(x ~ y)

several times, I FINALLY got

  model.matrix.default(x ~ y)
   (Intercept)
1            1
2            1
3            1
4            1
5            1
6            1
7            1
8            1
9            1
10           1
attr(,"assign")
[1] 0
  model.matrix.default(x ~ y)
   (Intercept)            y
1            1 -0.853357506
2            1 -0.711872147
3            1 -0.228785137
4            1 -0.449739758
5            1  0.173914266
6            1 -0.138766243
7            1 -0.433799800
8            1  0.234183701
9            1  0.002728104
10           1  0.733590165
attr(,"assign")
[1] 0 1

  model.matrix.default(x ~ y)
Segmenteringsfel
[EMAIL PROTECTED]:~$ 

(Don't ask me what's going on: but 'Segmenteringsfel' means 'Segmentation
fault':) 

BTW, this was on Debian testing/unstable; R-1.9.0
-- 
 Göran Broströmtel: +46 90 786 5223
 Department of Statistics  fax: +46 90 786 6614
 Umeå University   http://www.stat.umu.se/egna/gb/
 SE-90187 Umeå, Sweden e-mail: [EMAIL PROTECTED]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


RE: [R] reading a sparse matrix into R

2004-04-28 Thread Huntsinger, Reid
I've used lists (generic vectors) for this, with integer storage mode. Then
I can easily manipulate them in R, they don't take too much room, and C code
to traverse them (e.g. find connected components) is fast.

Did you have a need for a data frame? That seems like it might be painful to
manipulate.

I read the text file with readLines.

Reid Huntsinger
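For the adjacency-list format quoted below, the readLines/strsplit route might look like this (a sketch; the file name is hypothetical and the header line is assumed to be the first line):

```r
## each line: a node name followed by its neighbours, whitespace-separated
lines <- readLines("adjacency.txt")[-1]      # drop the header line
fields <- strsplit(lines, "[ \t]+")
adj <- lapply(fields, function(f) f[-1])     # neighbour vectors (may be empty)
names(adj) <- vapply(fields, `[`, "", 1)     # index the list by node name
```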

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Aaron J. Mackey
Sent: Tuesday, April 27, 2004 5:10 PM
To: [EMAIL PROTECTED]
Subject: [R] reading a sparse matrix into R



I have a 47k x 47k adjacency matrix that is very sparse (at most 30 
entries per row); my textual representation therefore is simply an 
adjacency list of connections between nodes for each row, e.g.

node    connections
A   B   C   D   E
B   A   C   D
C   A   E
D   A
E   A   F
F   E
G
H

I'd like to import this into a dataframe of node/connection 
(character/vector-of-characters) pairs.  I've experimented with scan, 
but haven't been able to coax it to work.  I can also hack it with 
strsplit() myself, but I thought there might be a more elegant way.

Thanks,

-Aaron

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: model.matrix (Re: [R] R hang-up using lm)

2004-04-28 Thread Martin Maechler
 GB == Göran Broström [EMAIL PROTECTED]
 on Wed, 28 Apr 2004 16:00:17 +0200 writes:


  .

GB so,

 model.matrix(x ~ y)
GB    (Intercept)
GB 1            1
GB 2            1
GB 3            1
GB 4            1
GB 5            1
GB 6            1
GB 7            1
GB 8            1
GB 9            1
GB 10           1
GB attr(,"assign")
GB [1] 0

GB but

 model.matrix(x ~ as.vector(y))
GB    (Intercept) as.vector(y)
GB 1            1 -0.853357506
GB 2            1 -0.711872147
GB 3            1 -0.228785137
GB 4            1 -0.449739758
GB 5            1  0.173914266
GB 6            1 -0.138766243
GB 7            1 -0.433799800
GB 8            1  0.234183701
GB 9            1  0.002728104
GB 10           1  0.733590165
GB attr(,"assign")
GB [1] 0 1

GB AND

 rr <- model.matrix.default(x ~ y)
GB Segmenteringsfel [EMAIL PROTECTED]:~$

 ..

GB (Don't ask me what's going on: but 'Segmenteringsfel'
GB means 'Segmentation fault':)

Thank you, Göran.

Yes, the bug is in model.matrix.default();  
one part of the problems happens in the line
ans <- .Internal(model.matrix(t, data))
(which only returns the intercept part).
How to fix this is not yet clear to me, since we have to decide
if the internal C code should do more checking
or the R code in model.matrix.default.

BTW, a very simple reproducible example is

x <- 1:7; y. <- x ; y <- array(y., 7)

 model.matrix(x ~ y) # which behaves badly when called repeatedly;
 # which for me means memory allocation problems
## as opposed to 

 model.matrix(x ~ y.) # which is all fine



I'll also file this as bug report.
Martin
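For readers following the thread: the root of the confusion is that tapply() returns a one-dimensional array, i.e. a vector carrying a dim attribute. A quick check:

```r
yy <- rnorm(20)
gg <- rep(1:10, 2)
y  <- tapply(yy, gg, median)

dim(y)        # 10 -- a 1D array, not a plain vector
is.vector(y)  # FALSE, because of the dim/dimnames attributes

## stripping the attributes makes it safe on the RHS of a model formula
y2 <- as.vector(y)
```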

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Problem in Installing Package from CRAN...

2004-04-28 Thread Yao, Minghua
Hi,
 
I have installed R 1.9.0 under Windows XP. When I used 
 
Packages-Install Package(s) from CRAN... to install the package 'gregmisc', I got 
this message:
 
 local({a <- CRAN.packages()
+ install.packages(select.list(a[,1],,TRUE), .libPaths()[1], available=a)})
trying URL `http://cran.r-project.org/bin/windows/contrib/1.9/PACKAGES'
Content type `text/plain; charset=iso-8859-1' length 17113 bytes
opened URL
downloaded 16Kb
trying URL `http://cran.r-project.org/bin/windows/contrib/1.9/gregmisc_0.10.2.zip'
Content type `application/zip' length 594089 bytes
opened URL
downloaded 580Kb
Error in unpackPkg(foundpkgs[okp, 2], pkgnames[okp], lib, installWithVers) : 
Unable to create temp directory C:/PROGRA~1/R/rw1090/library\file2869
 
 
Thanks in advance for any help.
 
Minghua

[[alternative HTML version deleted]]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Problem in Installing Package from CRAN...

2004-04-28 Thread asemeria




Your user is not in the Administrators group.
Try again with an administrator account.

A.S.



Alessandro Semeria
Models and Simulations Laboratory
Montecatini Environmental Research Center (Edison Group),
Via Ciro Menotti 48,
48023 Marina di Ravenna (RA), Italy
Tel. +39 544 536811
Fax. +39 544 538663
E-mail: [EMAIL PROTECTED]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Problem in Installing Package from CRAN...

2004-04-28 Thread Prof Brian Ripley
Do you have write permission in C:/PROGRA~1/R/rw1090/library?  If not (and 
it seems not), then you need to install elsewhere.

This *is* covered in the rw-FAQ: please consult the posting guide.
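A common workaround when the main library is not writable is a personal library (a sketch; the path is only an example):

```r
## create a writable library directory and use it for this install
dir.create("C:/myRlibs", showWarnings = FALSE)
.libPaths("C:/myRlibs")                     # prepend to the library search path
install.packages("gregmisc", lib = "C:/myRlibs")
library(gregmisc, lib.loc = "C:/myRlibs")
```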


On Wed, 28 Apr 2004, Yao, Minghua wrote:

 Hi,
  
 I have installed R 1.9.0 under Windows XP. When I used 
  
 Packages-Install Package(s) from CRAN... to install the package 'gregmisc', I got 
 this message:
  
  local({a <- CRAN.packages()
 + install.packages(select.list(a[,1],,TRUE), .libPaths()[1], available=a)})
 trying URL `http://cran.r-project.org/bin/windows/contrib/1.9/PACKAGES'
 Content type `text/plain; charset=iso-8859-1' length 17113 bytes
 opened URL
 downloaded 16Kb
 trying URL `http://cran.r-project.org/bin/windows/contrib/1.9/gregmisc_0.10.2.zip'
 Content type `application/zip' length 594089 bytes
 opened URL
 downloaded 580Kb
 Error in unpackPkg(foundpkgs[okp, 2], pkgnames[okp], lib, installWithVers) : 
 Unable to create temp directory C:/PROGRA~1/R/rw1090/library\file2869
  
  
 Thanks in advance for any help.
  
 Minghua
 
   [[alternative HTML version deleted]]
 
 __
 [EMAIL PROTECTED] mailing list
 https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
 
 

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] p-values

2004-04-28 Thread Frank E Harrell Jr
On Tue, 27 Apr 2004 22:25:22 +0100 (BST)
(Ted Harding) [EMAIL PROTECTED] wrote:

 On 27-Apr-04 Greg Tarpinian wrote:
  I apologize if this question is not completely 
  appropriate for this list.
 
 Never mind! (I'm only hoping that my response is ... )
 
  [...]
  This week I have been reading Testing Precise
  Hypotheses by J.O. Berger  Mohan Delampady,
  Statistical Science, Vol. 2, No. 3, 317-355 and
  Bayesian Analysis: A Look at Today and Thoughts of
  Tomorrow by J.O. Berger, JASA, Vol. 95, No. 452, p.
  1269 - 1276, both as supplements to my Math Stat.
  course.
  
  It appears, based on these articles, that p-values are
  more or less useless.
 
. . .
 I don't have these articles available, but I'm guessing
 that they stress the Bayesian approach to inference.
 Saying p-values are more or less useless is controversial.
 Bayesians consider p-values to be approximately irrelevant
 to the real question, which is what you can say about
 the probability that a hypothesis is true/false, or
 what is the probability that a parameter lies in a
 particular range (sometimes the same question); and the
 probability they refer to is a posterior probability
 distribution on hypotheses, or over parameter values.
 The P-value which is emitted at the end of standard
 analysis is not such a probability, but instead is that part
 of a distribution over the sample space which is defined
 by a cut-off value of a test statistic calculated from the
 data. So they are different entities. Numerically they may
 coincide; indeed, for statistical problems with a certain
 structure the P-value is equal to the Bayesian posterior
 probability when a particular prior distribution is
 adopted.
 
  If this is indeed the case,
  then why is a p-value typically given as a default
  output?  For example, I know that PROC MIXED and 
  lme( ) both yield p-values for fixed effects terms.
 
 P-values are not as useless as sometimes claimed. They
 at least offer a measure of discrepancy between data and
 hypothesis (the smaller the P-value, the more discrepant
 the data), and they offer this measure on a standard scale,
 the probability scale -- the chance of getting something
 at least as discrepant, if the hypothesis being tested is
 true. What discrepant objectively means is defined by
 the test statistic used in calculating the P-value: larger
 values of the test statistic correspond to more discrepant
 data.

Ted, this opens up a can of worms, depending on what you mean by
discrepant and even data (something conditioned upon or a stochastic
quantity that we happen to only be looking at one copy of?).  I think your
statement plays into some of the severe difficulties with P-values,
especially large P-values.
  
 
 Confidence intervals are essentially aggregates of hypotheses
 which have not been rejected at a significance level equal
 to 1 minus the P-value.
 
 The P-value/confidence-interval approach (often called the
 frequentist approach) gives results which do not depend
 on assuming any prior distribution on the parameters/hypotheses,
 and therefore could be called objective in that they
 avoid being accused of importing subjective information
 into the inference in the form of a Bayesian prior distribution.

They are objective only in the sense that subjectivity is deferred in a
difficult to document way when P-values are translated into decisions.

 This can have the consequence that your confidence interval
 may include values in a range which, a priori, you do not
 accept as plausible; or exclude a range of values in which
 you are a priori confident that the real value lies.
 The Bayesian comment on this situation is that the frequentist
 approach is incoherent, to which the frequentist might
 respond well, I just got an unlucky experiment this time
 (which is bound to occur with due frequency).
 
  The theory I am learning does not seem to match what
  is commonly available in the software, and I am just
  wondering why.
 
 The standard ritual for evaluating statistical estimates
 and hypothesis tests is frequentist (as above). Rightly
 interpreted, it is by no means useless. For complex
 historical reasons, it has become the norm in research
 methodology, and this is essentially why it is provided
 by the standard software packages (otherwise pharmaceutical
 companies would never buy the software, since they need
 this in order to get past the FDA or other regulatory
 authority). However, because this is the norm, such
 results often have more meaning attributed to them than
 they can support, by people disinclined to delve into
 what rightly interpreted might mean.

The statement that frequentist methods are the norm, which I'm afraid is
usually true, is a sad comment on the state of much of scientific
inquiry.  IMHO P-values are so defective that the imperfect Bayesian
approach should be seriously entertained.

 
 This is not a really clean answer to your question; but
 then your question touches on complex and 

Re: [R] coding of categories in rpart

2004-04-28 Thread Prof Brian Ripley
On Tue, 27 Apr 2004, Prabhakar Krishnamurthy wrote:

 I am using rpart to derive classification rules for customer segments.
 I have a few categorical variables in the set of independent variables.
 For instance,
 
 Account Size can be (Very-Small, Small, Medium, Large, V-Large)
 
 Rpart seems to encode these categories into: a,b,c,d,e

It doesn't.  That is one output representation (of several), of the factor
levels.

 The results are expressed in terms of the encoded values.
 
 How do I find out what encoding was used by rpart.  i.e.
 what categories in my input set do a, b, c,... correspond to?

By reading the documentation!  E.g. ?text.rpart.

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Possible bug in foreign library import of Stata datasets

2004-04-28 Thread Thomas Lumley
On Wed, 28 Apr 2004, Peter Dalgaard wrote:

 Looks like a classic signed/unsigned confusion. Negative numbers
 stored in ones-complement format in single bytes, but getting
 interpreted as unsigned. A bug report could be a good idea if the
 resident Stata expert (Thomas, I believe) is unavailable just now.


Yes, it's a simple signed/unsigned bug. We carefully read in the 2-byte
and 4-byte types as unsigned int so we can use << and >> to do byte
swapping, but of course this doesn't work for the 1-byte type.  There's
the additional problem that sometimes we want to read an unsigned byte of
meta-data.

-thomas

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Possible bug in foreign library import of Stata datasets

2004-04-28 Thread Thomas Lumley
On Wed, 28 Apr 2004, Paul Johnson wrote:

 The read.dta has translated the negative values as (256-deml).

 Is this the kind of thing that is a bug, or have I missed something in
 the documentation about the handling of negative numbers?  Should a
 formal bug report be filed?

A fixed version of the foreign package has been sent to CRAN.

-thomas
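Until the fix propagates, data already imported with the buggy version can be repaired after the fact; a sketch (the misread column deml is taken from the report above):

```r
## map byte values misread as unsigned (0..255) back to signed (-128..127)
fix_signed_byte <- function(x) ifelse(x > 127, x - 256, x)

## e.g. a true value of -5 was imported as 256 - 5 = 251
fix_signed_byte(c(0, 5, 251, 255))   # 0 5 -5 -1
```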

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Fedora 1 RPM Packages

2004-04-28 Thread Ian Wallace
On Wed, 2004-04-28 at 07:12, Prof Brian Ripley wrote:
 I think this is appropriate for most users: if you want to use
 --enable-R-shlib you can and probably should compile R yourself.  (After
 all, you are going to need the development tools for almost all uses of
 libR.so.)  It is not a zero-cost option: it does for example cause shared
 objects in packages to be linked against libR.so not R.bin and so
 increases considerably the run-time footprint.
 


From the response that I have had from the list it appears that not many
people use the shared lib from R, and that it wouldn't make sense to
build it that way by default.

I'll drop trying to get this added.  If it's only a handful of us who
are using PL/R (PostgresQL procedural language) we'll have to just
recompile the source RPM.  Which is no big deal.

Thanks for the responses.  Now off to the real work of getting some data
in PostgresQL and taking a look at it!

cheers
ian



 On Tue, 27 Apr 2004, Ian Wallace wrote:
 
  Don't know if this is the correct place to post this question however I
  thought I would start here.  The Fedora 1 packages are built without the
  option '--enable-R-shlib' turned on in the R.spec file.  Other software,
  like, plr (the postgresql library that calls R) needs the shared lib. 
  Any chance that we can change the R.spec file to include:
  
  [EMAIL PROTECTED] SPECS]$ diff -bu R.spec R.spec.orig
  --- R.spec  2004-04-27 15:31:51.0 -0600
  +++ R.spec.orig 2004-04-27 15:31:39.0 -0600
  @@ -36,7 +36,6 @@
   %build
   export R_BROWSER=/usr/bin/mozilla
   ( %configure \
  ---enable-R-shlib \
   --with-tcl-config=%{_libdir}/tclConfig.sh \
   --with-tk-config=%{_libdir}/tkConfig.sh ) \
    | egrep '^R is now|^ |^$' - > CAPABILITIES
  
  I'm not sure if there are other issues with using the shared library,
  I'm very new to R and just joined the mailing list.
  
  Thanks!
  ian
  
  
-- 
Ian Wallace [EMAIL PROTECTED]

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] installing R on Fedora Core 2 test 2

2004-04-28 Thread Martyn Plummer
On Tue, 2004-04-27 at 17:21, Peter Dalgaard wrote:
 Jonathan Baron [EMAIL PROTECTED] writes:
 
  The one thing I cannot figure out is that readline does not
  work.  It was installed, but apparently not detected.  Grepping
  config.site for readline gets stuff like this:
  
  configure:21256: checking for rl_callback_read_char in -lreadline
  configure:21286: gcc -o conftest -g -O2 -I/usr/local/include
  -L/usr/local/lib conftest.c -lreadline  -ldl -lncurses -lm >&5
  /usr/bin/ld: cannot find -lreadline
 
 Hm? The usual problem is that people forget to install readline-devel,
 but is that the expected error message then? Could you do an rpm -ql
 readline and see where the libreadline stuff went to? If you did
 forget readline-devel, try that first of course.
 
 BTW, it might be a good idea to see if you can rpm --rebuild from
 Martyn's src.rpm. That gives you your very own .rpm, which you can
 install and eventually upgrade (which is the point) using the standard
 tools.

The RPM also has a patched shell script that sets LANG=C so you don't
get the warning about utf-8 locales not being supported (This may not
worry you, but try checking an R package and you will see it can be a
problem).

You can check the CAPABILITIES file against my rpm for Fedora 1 (which
you can download from CRAN) to make sure you got everything.

I'll be moving to Fedora 2 when it is released.  But the rapid release
schedule of Fedora, combined with lack of binary compatibility between
versions, does pose problems for me as a binary package maintainer. I
think I will only maintain the current and previous releases at any
given time. 

Martyn

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] sas vs r

2004-04-28 Thread Henric Nilsson
At 11:39 2004-04-28, Göran Broström wrote:
 If I recall correctly, neither glmmML nor glmmPQL (from MASS) handles
 offset terms. But GLMM in the lme4 package does.
glmmML handles offset terms.
You're right. I confused it with glmmML's not being able to fit models with 
family = gaussian. Sorry.

 I am pretty sure that glmmPQL does too.
glmmPQL doesn't complain about the offset term but ignores it. I guess that 
glmmPQL ignores offsets since lme, upon which glmmPQL depends, ignores offsets.

//Henric
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Fedora 1 RPM Packages

2004-04-28 Thread Martyn Plummer
 From the response that I have had from the list it appears that not many
 people use the shared lib from R, and that it wouldn't make sense to
 build it that way by default.

The generic answer to this kind of request is that if a feature isn't
enabled by default, then it doesn't go in the RPM.

Martyn

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Slicing an area....

2004-04-28 Thread v . demartino2
I have interpolated a large number of data and obtained a regular x-y curve
(and its interpolated function).
My problem is that I need to slice the entire area delimited by the curve,
the x-axis and the two finite extremes of the curve  (starting from the
very bottom of the entire area) into smaller stripes of given (and different)
areas.

How can I do it?

Vittorio
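One way to attack this (a sketch, assuming the interpolant is a nonnegative function f over [a, b] and the stripe areas sum to at most the total area): treat the area below a horizontal line at height h as a function of h, then solve for each cut level with uniroot().

```r
## area enclosed below height h, between the curve f, the x-axis and [a, b]
area_below <- function(f, a, b, h)
    integrate(function(x) pmin(f(x), h), a, b)$value

## heights that split the region bottom-up into stripes of the given areas
slice_heights <- function(f, a, b, areas) {
    targets <- cumsum(areas)
    hmax <- max(f(seq(a, b, length.out = 512)))  # rough upper bound on f
    sapply(targets, function(A)
        uniroot(function(h) area_below(f, a, b, h) - A,
                lower = 0, upper = hmax)$root)
}

## example: f(x) = 2x on [0, 1] (total area 1), three stripes of area 0.25
f <- function(x) 2 * x
slice_heights(f, 0, 1, areas = c(0.25, 0.25, 0.25))
```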


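One hedged sketch of the kind of computation asked for above (the toy curve and all names here are invented, not from the thread): treat the area of the region under the curve that lies below height h as a function of h, then solve for the heights whose cumulative areas match the requested strip areas.

```r
## Toy interpolated curve; in practice f would come from the poster's own
## interpolation (e.g. splinefun on the original data).
f <- splinefun(c(0, 1, 2, 3, 4), c(0, 2, 3, 2, 0))
a <- 0; b <- 4                      # the two finite extremes of the curve
## Area of the part of the region under the curve lying below height h
below <- function(h) integrate(function(x) pmin(f(x), h), a, b)$value
areas <- c(1, 2, 2)                 # desired strip areas, from the bottom up
## Height of each cut: solve below(h) = cumulative target area
top <- max(f(seq(a, b, length = 200)))
heights <- sapply(cumsum(areas),
                  function(A) uniroot(function(h) below(h) - A, c(0, top))$value)
```

The same idea works for vertical slices by solving for x-cuts of the partial integral instead of heights.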

[R] label separators in boxplots with conditioning

2004-04-28 Thread Webb Sprague
Hi R-helpers,
I have a data.frame with three columns (lots more reps though in each), 
like so:

'FOO'  'BAR'  'RESULT'
1      .01    75
1      .05    12
1.1    .01    100
1.1    .05    50
1.2    .01    75
1.2    .05    12
I am calling boxplot(RESULT ~ FOO:BAR, ...).  This gives me the box plots 
I want, but on the x-axis my labels are "1.01", "1.05", "1.1.01", 
"1.1.05", "1.2.01", "1.2.05".  I would like to separate the factors by 
something other than a dot, for obvious reasons.  I would also like to 
*avoid* using the 'names' parameter to boxplot (because I am lazy and 
want a general solution).

Please cc me directly as I read the list on digest
Thanks again to such a helpful list!
W

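One general way to get a different separator without touching 'names' (a sketch, not an answer from the thread): build the grouping factor yourself with interaction(), whose 'sep' argument controls the separator used in the combined labels.

```r
## Reconstructed toy data matching the post
d <- data.frame(FOO = rep(c(1, 1.1, 1.2), each = 2),
                BAR = rep(c(.01, .05), 3),
                RESULT = c(75, 12, 100, 50, 75, 12))
## interaction(..., sep = "-") yields labels like "1-0.01" instead of "1.0.01"
boxplot(RESULT ~ interaction(FOO, BAR, sep = "-"), data = d)
```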

Re: [R] numericDeriv

2004-04-28 Thread Spencer Graves
 optim(..., hessian=TRUE, ...) outputs a list with a component 
hessian, which is the second derivative of the log(likelihood) at the 
minimum.  If your objective function is (-log(likelihood)), then 
optim(..., hessian=TRUE)$hessian is the observed information matrix.  If 
eigen(...$hessian)$values are all positive with at most a few orders of 
magnitude between the largest and smallest, then it is invertible, and 
the square roots of the diagonal elements of the inverse give standard 
errors for the normal approximation to the distribution of parameter 
estimates.  With objective functions that may not always be well 
behaved, I find that optim sometimes stops short of the optimum.  I run 
it with method = "Nelder-Mead", "BFGS", and "CG", then restart the 
algorithm, giving the best answer to one of the other algorithms.  Doug 
Bates and Brian Ripley could probably suggest something better, but this 
has produced acceptable answers for me in several cases, and I did not 
push it beyond that. 

 hope this helps. 

Jean Eid wrote:
Dear All,
I am trying to solve a Generalized Method of Moments problem, which
necessitates computing the gradient of the moments to get the
standard errors of the estimates.
I know optim does not output the gradient, but I can use numericDeriv to
get that. My question is: is this the best function to do this?
Thank you
Jean,
 


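The procedure described above can be sketched as follows (a made-up normal-likelihood example, not the poster's GMM problem):

```r
set.seed(1)
x <- rnorm(100, mean = 5, sd = 2)
## Negative log-likelihood; sd parameterised on the log scale so it stays positive
nll <- function(p) -sum(dnorm(x, mean = p[1], sd = exp(p[2]), log = TRUE))
fit <- optim(c(0, 0), nll, method = "BFGS", hessian = TRUE)
## fit$hessian is the observed information (Hessian of -log L); check that
## eigen(fit$hessian)$values are all positive before inverting
se <- sqrt(diag(solve(fit$hessian)))
```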

Re: [R] numericDeriv

2004-04-28 Thread Jean Eid
True, true. However, I am not estimating via MLE. The objective function is a
bunch of moment conditions weighted according to the uncertainty of the
moments (i.e. an estimate of the asymptotic var-cov matrix of the moments,
not of the estimates). Technically it looks more like a weighted nonlinear
least squares problem. I have a bunch of moments that look like this
 E(e_{ik} z_i) = 0
where e_{ik} is the error term and is a nonlinear function
of the parameters at observation i. z_i is an instrument (the model
has endogenous covariates). k above indicates that there is more than one
functional form for the residuals (a simultaneous equation system that is
nonlinear). One of them looks like
 e_{ik} = \ln(p - {1\over\alpha}\Delta^{-1}) - W\theta
There are two more.
I am interested in estimating \alpha and \theta (\theta \in R^{k}), in
addition to other parameters in the other equations.
I only want to use these moment conditions rather than assuming knowledge
of the distribution of the error term.

At the end of the day, I need to use the delta method to get at an
estimate of the standard errors.

Hope this clarifies things a bit more


On Wed, 28 Apr 2004, Spencer Graves wrote:

   optim(..., hessian=TRUE, ...) outputs a list with a component
 hessian, which is the second derivative of the log(likelihood) at the
 minimum.  If your objective function is (-log(likelihood)), then
 optim(..., hessian=TRUE)$hessian is the observed information matrix.  If
 eigen(...$hessian)$values are all positive with at most a few orders of
 magnitude between the largest and smallest, then it is invertable, and
 the square roots of the diagonal elements of the inverse give standard
 errors for the normal approximation to the distribution of parameter
 estimates.  With objective functions that may not always be well
 behaved, I find that optim sometimes stops short of the optimum.  I run
 it with method = Nelder-Mead, BFGS, and CG, then restart the
 algorithm giving the best answer to one of the other algorithms.  Doug
 Bates and Brian Ripley could probably suggest something better, but this
 has produced acceptable answers for me in several cases, and I did not
 push it beyond that.

   hope this helps.

 Jean Eid wrote:

 Dear All,
 I am trying to solve a Generalized Method of Moments problem which
 necessitate the gradient of moments computation to get the
 standard  errors of estimates.
 I know optim does not output the gradient, but I can use numericDeriv to
 get that. My question is: is this the best function to do this?
 
 Thank you
 Jean,
 
 
 





[R] [R-pkgs] Release candidate 1 of lme4_0.6-1

2004-04-28 Thread Douglas Bates
Deepayan Sarkar and I have a source package of release candidate 1 of
the 0.6 series of the lme4 package available at
 http://www.stat.wisc.edu/~bates/lme4_0.6-0-1.tar.gz
This package requires Matrix_0.8-6 which has been uploaded to CRAN and
should be available in a few days.  A copy of the source package is
available as
 http://www.stat.wisc.edu/~bates/Matrix_0.8-6.tar.gz

Although this version of lme4 passes R CMD check on our GNU/Linux
systems we have not uploaded it to CRAN because it still lacks
capabilities that are available in lme4_0.5-2, which is currently on
CRAN.  As soon as we have all the capabilities of the 0.5 series
available in the 0.6 series we will release lme4_0.6-1.tar.gz to CRAN.

This version of lme4 is a complete rewrite of the data structures and
algorithms for fitting linear mixed models.  An incomplete draft
version of a paper describing the methods is available as a vignette.
Subsequent releases will contain a more polished version of this
paper.

The big change relative to earlier versions is that you can fit models
with crossed random effects quickly and easily.  For example, using
the data on Scottish secondary school students achievement scores
(from http://multilevel.ioe.ac.uk/softrev/) we can fit a model with
random effects for both the secondary and the primary school attended
as

 library(lme4)
 This package is in development.  For production work use
 lme from package nlme or glmmPQL from package MASS.
 data(ScotsSec)
 fm1 = lme(attain ~ verbal*sex, ScotsSec, random=list(primary=~1,second=~1))
 gc();system.time(lme(attain ~ verbal*sex, ScotsSec, 
 random=list(primary=~1,second=~1)))
         used (Mb) gc trigger (Mb)
Ncells 701438 18.8    1166886 31.2
Vcells 267929  2.1     786432  6.0
[1] 0.1 0.0 0.1 0.0 0.0
 summary(fm1)
Linear mixed-effects model fit by REML
Fixed: formula 
  AIC  BIClogLik
 14882.32 14925.32 -7434.162

Random effects:
 Groups   NameVariance Std.Dev.
 primary  (Intercept) 0.275458 0.52484 
 second   (Intercept) 0.014748 0.12144 
 Residual 4.2531   2.0623  

Fixed effects:
              Estimate Std. Error   DF t value Pr(>|t|)
(Intercept) 5.9147e+00 7.6795e-02 3431 77.0197  < 2e-16 ***
verbal      1.5836e-01 3.7872e-03 3431 41.8136  < 2e-16 ***
sexF        1.2155e-01 7.2413e-02 3431  1.6786  0.09332 .  
verbal:sexF 2.5929e-03 5.3885e-03 3431  0.4812  0.63041
---
Signif. codes:  0 `***' 0.001 `**' 0.01 `*' 0.05 `.' 0.1 ` ' 1 

Correlation of Fixed Effects:
(Intr) verbal sexF  
verbal   0.177  
sexF-0.482 -0.178   
verbal:sexF -0.122 -0.680  0.161

Number of Observations: 3435
Number of Groups: 
primary  second 
148  19 

There are other examples in the tests subdirectory.  

The lme function behaves as previously *with one exception*.  In the
model specification there is no longer any distinction between crossed
or nested or partially crossed random effects.  This means that for
nested random effects you must ensure that every inner grouping
corresponds to a unique level of the inner grouping factor.  For
example, in the Pixel data there are two grouping factors, Dog and
Side with Side nested within Dog.  You must create a new grouping
factor, say DS, with unique levels for each Dog/Side combination to be
able to specify a model of Side within Dog.
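A sketch of the grouping-factor construction just described, using the Pixel data from the nlme package (variable names assumed from there):

```r
library(nlme)
data(Pixel)
## One level per Dog/Side combination, so that Side-within-Dog can be specified
Pixel$DS <- with(Pixel, interaction(Dog, Side, drop = TRUE))
```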

___
R-packages mailing list
[EMAIL PROTECTED]
https://www.stat.math.ethz.ch/mailman/listinfo/r-packages



[R] Thank you for your message.

2004-04-28 Thread Governor [GO]
Thank you for your e-mail message to my office. Your views are very important to me 
and I appreciate you taking the time to share your comments and concerns.

I have asked my staff to respond to only mail that is properly identified. If you live 
in Kansas and have included your mailing address, you may expect a written reply from 
my office. Unfortunately, because of a variety of constraints, I cannot respond to you 
by e-mail at this time. My staff will make every effort to respond as quickly as 
possible. I do have a website at www.ksgovernor.org which links you to a wealth of 
information about services available in Kansas. I have posted new releases, position 
statements, and other information that will keep you advised about my administration. 
Students wanting to know about Kansas will find helpful facts on that page, too.

To schedule an event/appointment, please submit your request by U.S. mail to: 
Governor's Office, The Statehouse, 300 SW 10th, Topeka, KS 66612.

If you have a matter that needs immediate attention, I would encourage you to call my 
office at 1-800-748-4408. Thank you for your interest in my administration.

Sincerely,
Kathleen Sebelius
Governor of the State of Kansas



[R] simple repeated measures model: dumb user baffled!

2004-04-28 Thread Chris Evans
I am in the process of transferring from an old version of S+ to using
R having used a variety of other packages in the past.  I'm hugely
impressed with R but it has an excellent but depressing habit of exposing
that I'm not a professional statistician and has done so again.

Someone has run a nice little repeated measures design on my advice,
students randomised to four orders of a condition that can be N or E:
four orders used: NEEN, NENE, ENEN, ENNE, ten students in each block.  I've inherited
the data (in SPSS but I can deal with that!) with variables like:
ID GENDER ORDER C1 C2 C3 C4 RESP1 RESP2 RESP3 RESP4 ...

ORDER is the order as a/b/c/d; C1:C4 has the N or E for each occasion, and
RESP1:RESP4 the response variables.  (There are a number of these but
looking at each separately is justifiable theoretically).

I've had a look around the R help and some S+ books I've got and I
realise I'm seriously out of my depth and my repeated measures ANOVA
knowledge is rusty and very different from the way that more modern
statistics handles such designs.

Can anyone point me to an idiot's guide to the syntax that would help
me test:
a) that there is a change (probably a fall in RESPn) over the four
repeats (probable through a practice effect)
b) whether that shows any sign of higher than linear change
c) whether on top of that, there are N/E differences.

I realise that this is probably trivially easy but I'm staring at all
sorts of wonderful things in Venables & Ripley (S+ 2nd ed.) and in Chambers & Hastie
(S, 1st ed.) but nothing is quite near enough to what I need to help
me overcome my limitations!

TIA,

Chris


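Not an authoritative answer to the question above, but one possible starting point after reshaping to long format (one row per student per occasion; all names and the simulated data are invented stand-ins):

```r
library(nlme)
set.seed(42)
## Simulated stand-in for the real data: 40 students x 4 occasions
long <- data.frame(ID   = factor(rep(1:40, each = 4)),
                   occ  = rep(1:4, 40),
                   cond = factor(sample(c("N", "E"), 160, replace = TRUE)))
long$resp <- 10 - 0.5 * long$occ + rnorm(160)
## (a)+(b): linear plus quadratic practice effect over occasions;
## (c): N/E difference; a random intercept per student handles the
## repeated-measures dependence
fm <- lme(resp ~ occ + I(occ^2) + cond, random = ~ 1 | ID, data = long)
summary(fm)
```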

Re: [R] se.fit in predict.glm

2004-04-28 Thread Ted Harding
On 27-Apr-04 Peter Dalgaard wrote:
 (Ted Harding) [EMAIL PROTECTED] writes:
 The documentation does not say definitely what p$se.fit is,
 only calling it Estimated standard errors. I *believe*
 this means, at each value of X, the SE in the estimation
 of P[y=1] taking account of the joint uncertainty in the
 estimation of 'a' and 'b' in the relation
 
   probit(P) = a + b*X
 
 Can someone confirm that this really is so?
 
 Pretty accurate, I'd say. 
 
 Basically, the fitted value is a function of the estimated parameters.
 Asymptotically, the latter are approximately normally distributed with
 a small dispersion so that the function is effectively linear and you
 can approximate the distribution of the fitted value with a normal
 distribution.

Hmm!!
This is a bit of a minefield! I've been fitting a binomial regression
  g <- glm(y~x, family=binomial(link=probit))
followed by a prediction
  p <- predict.glm(g, newx, type="response", se.fit=TRUE)

Plotting the fit p$fit and the implied confidence bands
  p$fit +/- 2*p$se.fit
gave rather narrow (I thought) bands for the prediction, so I did
a simulation of 20 mvrnorm draws from the joint distribution of
a and b (using their SDs and correlation from summary.glm).

I then plotted the corresponding curves pnorm(a + b*x) and got
a set of 20 curves, about half of which lay well outside the
above confidence band, some quite a long way off. (This, I must
say, is what I had intuitively been expecting to find in the first
place!)

I then turned to the VR book, and happened on the section Problems
with binomial GLMs in 7.2; and discovered profile.

So this led me to confint in MASS via ?profile and ?profile.glm.

The results of
  confint(g)
gave me a and b for the lower (2.5%) and upper (97.5%) curve.
When plotted, these curves lay well outside the confidence
bands obtained from predict.glm and were much more realistically
related to my simulations (19 of my simulated curves nicely packed
the space between the two confint curves, and one lay just
outside -- couldn't have hoped for a result closer to expectation!).

Nevertheless, I don't think my data were all that few or nasty:

23 x-values, roughly equally spaced, with about 12 0/1 results
at each, and numbers of responses going up as
  0 0 0 0 0 0 0 0 0 0 2 0 0 1 0 2 4 5 8 8 9 4 10

So I tend to conclude that the predict.glm(...,se.fit=TRUE)
method should perhaps be avoided in favour of using confint,
though I see no indication that confint respects the covariance
of the parameter estimates (intercept and slope) whereas the
predict method in theory does.

Maybe I'll have another go, after centering the x-values at their
mean ...

Anyway, comments would be appreciated!

Best wishe to all,
Ted.



E-Mail: (Ted Harding) [EMAIL PROTECTED]
Fax-to-email: +44 (0)870 167 1972
Date: 28-Apr-04   Time: 21:32:11
-- XFMail --



[R] Matrix efficiency in 1.9.0 vs 1.8.1

2004-04-28 Thread Stephen Ellner
I'm seeking some advice on effectively using the new Matrix
library in R1.9.0 for operations with large dense matrices. I'm working on
integral operator models (implemented numerically via matrix operations)
and except for the way entries are generated, the examples below really are 
representative of my problem sizes. 

My main concern is speed of large dense matrix multiplication.
In R 1.8.1 (Windows2000 Professional, dual AthlonMP 2800)  
 a=matrix(rnorm(2500*2500),2500,2500); v=rnorm(2500); 
 system.time(a%*%v);
[1] 0.11 0.00 0.12   NA   NA

In R 1.9.0, same platform:
 a=matrix(rnorm(2500*2500),2500,2500); v=rnorm(2500);
 system.time(a%*%v);
[1] 0.24 0.00 0.25   NA   NA

These differences are consistent. But using the Matrix library 
in 1.9.0, the discrepancy disappears  
 library(Matrix);
 a=Matrix(rnorm(2500*2500),2500,2500); v=Matrix(rnorm(2500),2500,1);
 system.time(a%*%v);
[1] 0.11 0.00 0.11   NA   NA

The problem is 
 b=a/3
Error in a/3 : non-numeric argument to binary operator

which seems to mean that I can't just rewrite code to use Matrix 
instead of matrix objects -- I would have to do lots and lots of
conversions between Matrix and matrix. Am I missing a trick
here somewhere, that would let me use only Matrix objects and do
with them the things one can do with matrix objects? Or some other
way to avoid the twofold speed hit in moving to 1.9? 

I've tried using the Rblas.dll for AthlonXP on CRAN, and 
it doesn't help. 

Thanks in advance, 

Steve 


Stephen P. Ellner ([EMAIL PROTECTED])
Department of Ecology and Evolutionary Biology
Corson Hall, Cornell University, Ithaca NY 14853-2701
Phone (607) 254-4221FAX (607) 255-8088



Re: [R] se.fit in predict.glm

2004-04-28 Thread Peter Dalgaard
(Ted Harding) [EMAIL PROTECTED] writes:

 The results of
   confint(g)
 gave me a and b for the lower (2.5%) and upper (97.5%) curve.
 When plotted, these curves lay well outside the confidence
 bands obtained from predict.glm and were much more realistically
 related to my simulations (19 of my simulated curves nicely packed
 the space between tthe two confint curves, and one lay just
 outside -- couldn't have hoped for a result more close to expectation!).

Hmm, that doesn't actually hold up mathematically... You cannot just
take upper/lower limits of both parameters and combine them.

 
 Nevertheless, I don't think my data were all that few or nasty:
 
 23 x-values, roughly equally spaced, with about 12 0/1 results
 at each, and numbers of responses going up as
   0 0 0 0 0 0 0 0 0 0 2 0 0 1 0 2 4 5 8 8 9 4 10
 
 So I tend to conclude that the predict.glm(...,se.fit=TRUE)
 method should perhaps be avoided in favour of using confint,
 though I see no indication that confint respects the covariance
 of the parameter estimates (intercept and slope) whereas the
 predict method in theory does.
 
 Maybe I'll have another go, after centering the x-values at their
 mean ...

Shouldn't change anything (except maybe demonstrate the fallacy of
your calculation above -- lower b's give higher p's when x is negative).

 Anyway, comments would be appreciated!

I don't seem to get anything that drastic. Things look somewhat better
if you use the link-scale estimates, but the response-scale curves are
not hopeless. You do have to notice that these are pointwise CIs so
having multiple curves straying outside is not necessarily a problem.

Just as a sanity check, did your plots look anything like the below:

y <- c(0,0,0,0,0,0,0,0,0,0,2,0,0,1,0,2,4,5,8,8,9,4,10)
x <- 1:23
d <- data.frame(x,y,n=12)
m1 <- glm(cbind(y,n-y) ~ x, data=d, family=binomial(probit))
confint(m1) # +-2SE approximation
library(MASS)
confint(m1) # profiling
x1 <- with(predict(m1,se.fit=TRUE,type="response"),
  fit+outer(se.fit,c(l=-2,u=2)))
x2 <- with(predict(m1,se.fit=TRUE,type="link"),
  fit+outer(se.fit,c(l=-2,u=2)))
x2 <- pnorm(x2)

ab <- mvrnorm(20,coef(m1),vcov(m1))
matplot(x2,type="l",col="black",lwd=3,lty=1)
matlines(x1,type="l",col="red",lwd=3,lty=1)
matlines(pnorm(t(ab%*%rbind(1,x))))



-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907



Re: [R] Matrix efficiency in 1.9.0 vs 1.8.1

2004-04-28 Thread Douglas Bates
Stephen Ellner [EMAIL PROTECTED] writes:

 I'm seeking some advice on effectively using the new Matrix
 library in R1.9.0 for operations with large dense matrices. I'm working on
 integral operator models (implemented numerically via matrix operations)
 and except for the way entries are generated, the examples below really are 
 representative of my problem sizes. 
 
 My main concern is speed of large dense matrix multiplication.
 In R 1.8.1 (Windows2000 Professional, dual AthlonMP 2800)  
  a=matrix(rnorm(2500*2500),2500,2500); v=rnorm(2500); 
  system.time(a%*%v);
 [1] 0.11 0.00 0.12   NA   NA
 
 In R 1.9.0, same platform:
  a=matrix(rnorm(2500*2500),2500,2500); v=rnorm(2500);
  system.time(a%*%v);
 [1] 0.24 0.00 0.25   NA   NA
 
 These differences are consistent. But using the Matrix library 
 in 1.9.0, the discrepancy disappears  
  library(Matrix);
  a=Matrix(rnorm(2500*2500),2500,2500); v=Matrix(rnorm(2500),2500,1);
  system.time(a%*%v);
 [1] 0.11 0.00 0.11   NA   NA
 
 The problem is 
  b=a/3
 Error in a/3 : non-numeric argument to binary operator
 
 which seems to mean that I can't just rewrite code to use Matrix 
 instead of matrix objects -- I would have to do lots and lots of
 conversions between Matrix and matrix. Am I missing a trick
 here somewhere, that would let me use only Matrix objects and do
 with them the things one can do with matrix objects? Or some other
 way to avoid the twofold speed hit in moving to 1.9? 

The trick is waiting for the author of the Matrix package to write the
methods for arithmetic operations or contributing said methods
yourself. :-)

Actually I want to at least e-discuss the implementation with John
Chambers and other members of the R Development Core Team before doing
much more implementation.  There are some subtle issues about how to
arrange the classes and this is usually the point where John can
provide invaluable guidance.



Re: [R] Fedora 1 RPM Packages

2004-04-28 Thread Ian Wallace
That makes sense, since most people will build with the defaults, and
that's what we'd like the mainstream packages to target.

I'm already working around it ... but thought I would mention that some
of us do like the shared library.  I hadn't even given any thought
though to the memory footprint and assumed it would be roughly the
same, I never checked though.

cheers
ian


On Wed, 2004-04-28 at 10:11, Martyn Plummer wrote:
 From the response that I have had from the list it appears that not
 many
  people use the shared lib from R, and that it wouldn't make sense to
  build it that way by default.
 
 The generic answer to this kind of request is that if a feature isn't
 enabled by default, then it doesn't go in the RPM.
 
 Martyn
-- 
Ian Wallace [EMAIL PROTECTED]



Re: [R] plot.ts

2004-04-28 Thread kjetil
On 28 Apr 2004 at 0:46, Gabor Grothendieck wrote:

  kjetil at acelerate.com writes:
  I have problems getting sensible series name plotted
  with the ts.plot function. It doesn't seem to 
  use either ylab= or xy.labels= arguments. 
  I ended up using
  
  plot({arg <- ts.union(gasolina.eq, PIBmensPred, PIBgrowthmens);
     colnames(arg) <- c("Gaso", "PIB", "PIBgrowth"); arg},
     main="Gasolina eq. con crecimiento Economico", 
     xlab="Tiempo")
 
 plot( 
   ts.union(Gaso = gasolina.eq, PIB = PIBmensPred, PIBgrowth =
   PIBgrowthmens), main = "Gasolina eq. con crecimiento Economico",
   xlab = "Tiempo" )
 

Thanks! Then I can avoid the variable labels altogether, if I want, by 
using backticks:

 plot( 
   ts.union(` ` = gasolina.eq, `  ` = PIBmensPred, `   ` =
   PIBgrowthmens), main = "Gasolina eq. con crecimiento Economico",
   xlab = "Tiempo" )


However, with plot.type="s", whatever I do, I get the complete x 
argument as y-label, no way to avoid it. Any solution (apart from 
hacking the code, which I am about to do)?

Kjetil Halvorsen






Re: [R] Rtemp directories accumulating over time

2004-04-28 Thread kjetil
On 28 Apr 2004 at 13:53, Prof Brian Ripley wrote:

 On Tue, 27 Apr 2004 [EMAIL PROTECTED] wrote:
 

.
.
 I don't see why: package base is never unloaded so that hook function
 is never run.  (Indeed, no package/namespace is unloaded except by
 explicit user action, in particular not when R is terminated.)
 
  So there are also other tmpdirs made by R. Why, where, and why are
  they not removed at exit (when their content are removed)?
 
 They are removed by R.  This is a Windows-only bug, as Windows
 sometimes does not act on commands to remove empty directories (but
 only sometimes).

So is there anything I can do to remedy this nuisance (apart from 
reinstalling a newer XP (other messages indicate that can give 
surprises!) or changing OS)?

Kjetil Halvorsen

 Session temporary directories should only be left
 around when a session crashes.
 
 -- 
 Brian D. Ripley,  [EMAIL PROTECTED]
 Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
 University of Oxford, Tel:  +44 1865 272861 (self) 1 South
 Parks Road, +44 1865 272866 (PA) Oxford OX1 3TG,
 UKFax:  +44 1865 272595
 



[R] [R ESS]

2004-04-28 Thread Erin Hodgess
Dear ESS/R People:

I installed ESS as per the directions on the ESS page from
the R-Gui Page.

When I start Xemacs/ESS, the scratch window comes up but
no R.  Also, the special function buttons do not come up.

Any clues as to what I'm doing wrong, please?
R for Windows


Thanks in advance,
Sincerely,
Erin Hodgess
Associate Professor
Department of Computer and Mathematical Sciences
University of Houston - Downtown
mailto: [EMAIL PROTECTED]



Re: [R] se.fit in predict.glm

2004-04-28 Thread Ted Harding
On 28-Apr-04 Peter Dalgaard wrote:
 Hmm, that doesn't actually hold up mathematically... You cannot just
 take upper/lower limits of both parameters and combine them.

Yes, excuse me. I had misunderstood what 'confint' does -- I misled
myself into thinking that it gave me the 'a' and 'b' for the
lower and the upper curve!

 I don't seem to get anything that drastic. Things look somewhat better
 if you use the link-scale estimates, but the response-scale curves are
 not hopeless. You do have to notice that these are pointwise CIs so
 having multiple curves straying outside is not necessarily a problem.
 
 Just as a sanity check, did your plots look anything like the below:

Yes, that's useful -- and it also brought to the surface the other
error I'd made whereby a slip of eyesight caused me to transcribe
a correlation of -0.91 as 0.91, so I was simulating from the
wrong distribution ...

 y <- c(0,0,0,0,0,0,0,0,0,0,2,0,0,1,0,2,4,5,8,8,9,4,10)
 x <- 1:23
 d <- data.frame(x,y,n=12)
 m1 <- glm(cbind(y,n-y) ~ x, data=d, family=binomial(probit))
 confint(m1) # +-2SE approximation
 library(MASS)
 confint(m1) # profiling
 x1 <- with(predict(m1,se.fit=TRUE,type="response"),
   fit+outer(se.fit,c(l=-2,u=2)))
 x2 <- with(predict(m1,se.fit=TRUE,type="link"),
   fit+outer(se.fit,c(l=-2,u=2)))
 x2 <- pnorm(x2)
 
 ab <- mvrnorm(20,coef(m1),vcov(m1))
 matplot(x2,type="l",col="black",lwd=3,lty=1)
 matlines(x1,type="l",col="red",lwd=3,lty=1)
 matlines(pnorm(t(ab%*%rbind(1,x))))

Yes, fairly similar when I do it right. Thanks for the above
code. I'd also overlooked 'vcov' etc and was copying SEs and
correlations from the output of 'summary.glm'.

The main lesson: one shouldn't stay up too late at these things.
Cheers,
Ted.



E-Mail: (Ted Harding) [EMAIL PROTECTED]
Fax-to-email: +44 (0)870 167 1972
Date: 28-Apr-04   Time: 23:54:20
-- XFMail --



[R] [R ESS] please disregard prev. message

2004-04-28 Thread Erin Hodgess
Wrong name in init.el 

Sorry,
Erin



Re: [R] plot.ts

2004-04-28 Thread Gabor Grothendieck
 kjetil at acelerate.com writes:
:  plot( 
:    ts.union(` ` = gasolina.eq, `  ` = PIBmensPred, `   ` =
:    PIBgrowthmens), main = "Gasolina eq. con crecimiento Economico",
:    xlab = "Tiempo" )
: 
: However, with plot.type="s", whatever I do, I get the complete x 
: argument as y-label, no way to avoid it. Any solution (apart from 
: hacking the code, which I am about to do)?

par(ann=F)


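Spelling that suggestion out a little (a sketch with placeholder series; the title and xlab are taken from the thread): par(ann = FALSE) suppresses all automatic annotation, after which title() supplies just the pieces you want.

```r
par(ann = FALSE)                      # no automatic main/xlab/ylab
plot(ts.union(ts(rnorm(50)), ts(rnorm(50))), plot.type = "single")
title(main = "Gasolina eq. con crecimiento Economico", xlab = "Tiempo")
```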

Re: [R] [R ESS]

2004-04-28 Thread Jason Turner
Erin Hodgess wrote:
Dear ESS/R People:
I installed ESS as per the directions on the ESS page from
the R-Gui Page.
When I start Xemacs/ESS, the scratch window comes up but
no R.  Also, the special function button do not come up.
If the R bin folder is on your path (something like c:\Program 
Files\R\rw1091\bin), and ess is loaded, then it should be as simple as 
M-x R (I typically type ESC x R enter, because Alt is in an awkward 
place on my laptop).  If that doesn't work, then ess probably isn't 
being loaded.  Check the docs for recommended settings on the 
.xemacs\init.el file.

Jason


[R] [R ESS] one more thing.....

2004-04-28 Thread Erin Hodgess
Ok.
R loads up.
But when I try to use 
Ctrl-x-o or any control/escape key
in the minibuffer, 
I get
XEmacs does not own the primary selection

Any ideas, please?
Thanks yet again,
Erin
mailto: [EMAIL PROTECTED]



[R] (no subject)

2004-04-28 Thread klea lambrou



[R] R-crash using read.shape (maptools)

2004-04-28 Thread Alexander.Herr
Hi List,

I am trying to read a large shapefile (~37,000 polys) using read.shape (winxp, 1 gig 
ram, dell box). I receive the following error:

AppName: rgui.exeAppVer: 1.90.30412.0ModName: maptools.dll
ModVer: 1.90.30412.0 Offset: 309d

The getinfo.shape returns info, and the shapefile is readable in arcmap. 

Any ideas on how to overcome this?

Thanks Herry

---
Alexander Herr - Herry

 



Re: [R] R-crash using read.shape (maptools)

2004-04-28 Thread Danny Heuman
Hi Herry,

On Thu, 29 Apr 2004 12:20:44 +1000, you wrote:

[original message quoted above snipped]


I've had difficulty if there is too much detail in the polygon
definition (i.e. too many nodes).  Try thinning the polygons and try
again.

Danny



[R] outer

2004-04-28 Thread K Fung
Hello,

Can anyone help explain why the following are not
equivalent?

I have written a function called cord.mag(x,y) which
takes two numbers and outputs a number.

Further, I defined m = 1:5 and n = 1:26.

  for(i in m) { for (j in n) print(cord.mag(i,j))}

this properly prints all length(m) * length(n) values, one per line

 outer(m,n,cord.mag)

this gives me a matrix of zeroes

 outer(1,2,cord.mag)

this, on the other hand, gives the right value

Thanks
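
A likely explanation (the usual cause of this symptom): outer() does not
loop over element pairs -- it calls FUN once, passing two full-length
vectors, so a function written in scalar terms (e.g. one using if(), which
only looks at the first element of a vector) returns values of the wrong
shape or simply wrong values.  Wrapping the scalar function in mapply()
makes the call elementwise.  A minimal sketch, with a hypothetical
cord.mag standing in for the poster's (which was not shown):

```r
## Hypothetical scalar function: works on single numbers, not on vectors.
cord.mag <- function(x, y) if (x > y) x - y else x + y

m <- 1:5
n <- 1:26

## outer(m, n, cord.mag) would misbehave here, because if() ignores all
## but the first element of a vector.  An elementwise wrapper fixes it:
res <- outer(m, n, function(x, y) mapply(cord.mag, x, y))

dim(res)                      # 5 x 26, one value per (i, j) pair
res[2, 3] == cord.mag(2, 3)   # TRUE
```

(Later versions of R also offer Vectorize(cord.mag) as an equivalent
wrapper.)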



Re:[R] p-values

2004-04-28 Thread John Maindonald
This is, of course, not strictly about R.  But if there should be
a decision to pursue such matters on this list, then we'd need
another list to which such discussion might be diverted.

I've pulled Frank's Regression Modeling Strategies down
from my shelf and looked to see what he says about
inferential issues.  There is a suggestion, in the introduction,
that modeling provides the groundwork that can be used as a
point of departure for a variety of inferential interpretations.
As far as I can see, Bayesian interpretations are never
really explicitly discussed, though the word Bayesian does
appear in a couple of places in the text.  Frank, do you now
have ideas on how you would (perhaps, in a future edition,
will) push the discussion in a more overtly Bayesian direction?
What might be the style of a modeling book, aimed at practical
data analysts who of necessity must (mostly, at least) use
off-the-shelf software, that seriously entertains the Bayesian
approach?

R provides a lot of help for those who want a frequentist
interpretation, even including by default the *, **, ***
labeling that some of us deplore.  There is no similar help
for those who want at least the opportunity to place the
output from a modeling exercise in a Bayesian context of
some description.  There is surely a strong argument for
a more neutral form of default output, even to the point of
excluding p-values, on the argument that they too
push too strongly in the direction of a frequentist
interpretative framework.

There seems, unfortunately, to be a dearth of good ideas
on how to assist the placing of output from modeling
functions such as R provides in an explicitly Bayesian
framework.  Or is it, at least in part, that I am unaware of
what is out there?  That, I guess, is the point of my
question to Frank.  Is it just too technically demanding
to go much beyond trying to get users to understand
that a Bayesian credible interval can, if there is an
informative prior, be very different from a frequentist CI,
and that they really do need to pause if there is an
informative prior lurking somewhere in the undergrowth?

John Maindonald.
Frank Harrell wrote:
They [p-values] are objective only in the sense that
subjectivity is deferred in a difficult to document way
when P-values are translated into decisions.

The statement that frequentist methods are the norm, which I'm
afraid is usually true, is a sad comment on the state of much
of scientific inquiry.  IMHO P-values are so defective that
the imperfect Bayesian approach should be seriously entertained.
John Maindonald email: [EMAIL PROTECTED]
phone: +61 2 (6125) 3473    fax: +61 2 (6125) 5549
Centre for Bioinformation Science, Room 1194,
John Dedman Mathematical Sciences Building (Building 27)
Australian National University, Canberra ACT 0200.