Re: [R] 'singular gradient matrix? when using nls() and how to make the program skip nls( ) and run on

2007-09-07 Thread Joerg van den Hoff
On Wed, Sep 05, 2007 at 04:43:19PM -0700, Yuchen Luo wrote:
 Dear friends.
 
 I use nls() and encounter the following puzzling problem:
 
 
 
 I have a function f(a,b,c,x), I have a data vector of x and a vector y of
 realized values of f.
 
 
 
 Case1
 
 I tried to estimate  c with (a=0.3, b=0.5) fixed:
 
 nls(y ~ f(a,b,c,x), control = list(maxiter = 10, minFactor = 0.5^2048),
     start = list(c = 0.5))
 
 The error message is: number of iterations exceeded maximum of 10
 
 
 
 Case2
 
 I then think maybe the values of a and b are not reasonable. So, I let nls()
 estimate (a,b,c) altogether:
 
 nls(y ~ f(a,b,c,x), control = list(maxiter = 10, minFactor = 0.5^2048),
     start = list(a = 0.3, b = 0.5, c = 0.5))
 
 The error message is:
 
 singular gradient matrix at initial parameter estimates.
 
 
 
 This is what puzzles me: if the initial parameters (a=0.3, b=0.5, c=0.5) can
 create a 'singular gradient matrix', then why doesn't this 'singular gradient
 matrix' appear in Case 1?
 
 
 
 I have tried to change the initial values of (a,b,c) around but the problem
 persists. I am wondering if there is a way out.
 
 
 
 Another question is: I need to run 220 nls() fits in my program with
 different y and x. When one of the nls() calls encounters a problem, the whole
 program stops.  In my case, the 3rd nls() runs into a problem.  I would
 still need the program to run the remaining 217 nls() calls! Is there a way to
 make the program skip the problematic nls() and complete the remaining
 nls() calls?

?try
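a hedged sketch of how `try' can be used for this; the exponential model, the simulated data sets, and the deliberately broken second data set below are made up for illustration (your f(a,b,c,x) and data go in their place):

```r
set.seed(1)
# three toy data sets; no. 2 is deliberately broken (all-NA response),
# so its nls() call will fail -- a stand-in for the problematic 3rd fit
make_data <- function(bad = FALSE) {
  x <- 1:20
  y <- if (bad) rep(NA_real_, 20) else 2 * exp(-0.3 * x) + rnorm(20, sd = 0.01)
  list(x = x, y = y)
}
datasets <- list(make_data(), make_data(bad = TRUE), make_data())

fits <- vector("list", length(datasets))
for (i in seq_along(datasets)) {
  # try() returns a "try-error" object instead of stopping the loop
  res <- try(nls(y ~ A * exp(-k * x), data = datasets[[i]],
                 start = list(A = 1, k = 0.1)),
             silent = TRUE)
  if (!inherits(res, "try-error")) fits[[i]] <- res
}
sapply(fits, is.null)   # TRUE marks the skipped (failed) fits
```

this way the loop runs to completion, and the failed fits remain NULL and can be inspected or refitted afterwards.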

 
 
 
 Your help will be highly appreciated!
 
 Yuchen Luo
 
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.



[R] non-linear fitting (nls) and confidence limits

2007-09-01 Thread Joerg van den Hoff
dear list members,

I apologize in advance for posting a second time, but after one
week chances are the first try went down the sink...

my question concerns the computation of confidence intervals in nonlinear fits
with `nls' when weighting the fit. the seemingly correct procedure does not
work as expected, as detailed in my original post below.

any remarks appreciated.

greetings

joerg


original post:
--

for unweighted fits using `nls' I compute confidence intervals for the
fitted model function by using:

#---
se.fit <- sqrt(apply(rr$m$gradient(), 1, function(x) sum(vcov(rr) * outer(x, x))))
luconf <- yfit + outer(se.fit, qnorm(c(probex, 1 - probex)))
#---

where `rr' contains an `nls' object, `x' is the independent variable vector,
`yfit' the corresponding model prediction (`fitted(rr)'), `se.fit' the
corresponding standard error and `luconf' the lower and upper confidence
limits at some level specified by `probex'.

when using the same approach with weighted fits (even if all weights are equal
(but not equal to one)), I noted that the confidence limits depend on the
weights in a counter-intuitive, probably erroneous way:

consider the following simple example (yes, I know that this actually is a linear
model :-)):

#---
probex <- 0.05
x <- 1:10
y <- rnorm(x, x, .8)
w1 <- rep(1, 10)
w2 <- w1; w2[7:10] <- 0.01 * w2[7:10]

rr1 <- nls(y ~ a*x + b, data = list(x = x, y = y), start = list(a = 1, b = 0),
           weights = w1)
rr2 <- nls(y ~ a*x + b, data = list(x = x, y = y), start = list(a = 1, b = 0),
           weights = w2)
yfit1 <- fitted(rr1)
yfit2 <- fitted(rr2)
se.fit1 <- sqrt(apply(rr1$m$gradient(), 1, function(x) sum(vcov(rr1) * outer(x, x))))
luconf1 <- yfit1 + outer(se.fit1, qnorm(c(probex, 1 - probex)))

se.fit2 <- sqrt(apply(rr2$m$gradient(), 1, function(x) sum(vcov(rr2) * outer(x, x))))
luconf2 <- yfit2 + outer(se.fit2, qnorm(c(probex, 1 - probex)))

op <- par(mfrow = c(2,1))
matplot(x, cbind(y, yfit1, luconf1), type = 'blll', pch = 1, col = c(1,2,4,4),
        lty = 1)
matplot(x, cbind(y, yfit2, luconf2), type = 'blll', pch = 1, col = c(1,2,4,4),
        lty = 1)
par(op)
#---

the second fit uses unequal weights where the last 4 points are next to
unimportant for the fit. the confidence intervals in this case are fine up to
point no. 6, but nonsense afterwards I believe.

I have tracked this down to the fact that `m$gradient' does _not_ contain the 
actual gradients w.r.t. the parameters but rather the gradients multiplied by
the sqrt of the weights. without trying to understand the inner workings of the
`nls*' functions in detail my questions are:

1.
is the fact that `m$gradient' in an `nls' object contains the scaled
gradients documented somewhere (I didn't find it)? if yes, where is it; if no,
can someone please give me a hint what the rationale behind this approach is?

2.
am I right to understand that the above approach to computing `se.fit'
(essentially in this compact form proposed by P. Dalgaard on r-help some time ago)
is erroneous (for weighted nls) in computing the confidence limits and that
I have to undo the multiplication by sqrt(weights) which is included
in `m$gradient' (or compute the gradients of my model function myself,
e.g. with `deriv')?
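a hedged sketch of the unscaling asked about in question 2, assuming (as described above) that `m$gradient()' returns the gradients premultiplied by sqrt(weights); the data are simulated along the lines of the example:

```r
set.seed(7)
probex <- 0.05
x <- 1:10
y <- rnorm(x, x, 0.8)
w <- rep(1, 10); w[7:10] <- 0.01 * w[7:10]

rr <- nls(y ~ a * x + b, data = list(x = x, y = y),
          start = list(a = 1, b = 0), weights = w)

# undo the sqrt(weights) scaling row-wise before forming the standard errors
grad <- rr$m$gradient() / sqrt(w)
se.fit <- sqrt(apply(grad, 1, function(g) sum(vcov(rr) * outer(g, g))))
luconf <- fitted(rr) + outer(se.fit, qnorm(c(probex, 1 - probex)))
```

with this, the down-weighted points no longer get artificially shrunken standard errors; whether this is the "canonical" fix is exactly the open question above.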

thanks,

joerg



[R] weighted nls and confidence intervals

2007-08-23 Thread Joerg van den Hoff

for unweighted fits using `nls' I compute confidence intervals for the
fitted model function by using:

#---
se.fit <- sqrt(apply(rr$m$gradient(), 1, function(x) sum(vcov(rr) * outer(x, x))))
luconf <- yfit + outer(se.fit, qnorm(c(probex, 1 - probex)))
#---

where `rr' contains an `nls' object, `x' is the independent variable vector,
`yfit' the corresponding model prediction (`fitted(rr)'), `se.fit' the
corresponding standard error and `luconf' the lower and upper confidence
limits at some level specified by `probex'.

when using the same approach with weighted fits (even if all weights are equal
(but not equal to one)), I noted that the confidence limits depend on the
weights in a counter-intuitive, probably erroneous way:

consider the following simple example (yes, I know that this actually is a linear
model :-)):

#---
probex <- 0.05
x <- 1:10
y <- rnorm(x, x, .8)
w1 <- rep(1, 10)
w2 <- w1; w2[7:10] <- 0.01 * w2[7:10]

rr1 <- nls(y ~ a*x + b, data = list(x = x, y = y), start = list(a = 1, b = 0),
           weights = w1)
rr2 <- nls(y ~ a*x + b, data = list(x = x, y = y), start = list(a = 1, b = 0),
           weights = w2)
yfit1 <- fitted(rr1)
yfit2 <- fitted(rr2)
se.fit1 <- sqrt(apply(rr1$m$gradient(), 1, function(x) sum(vcov(rr1) * outer(x, x))))
luconf1 <- yfit1 + outer(se.fit1, qnorm(c(probex, 1 - probex)))

se.fit2 <- sqrt(apply(rr2$m$gradient(), 1, function(x) sum(vcov(rr2) * outer(x, x))))
luconf2 <- yfit2 + outer(se.fit2, qnorm(c(probex, 1 - probex)))

op <- par(mfrow = c(2,1))
matplot(x, cbind(y, yfit1, luconf1), type = 'blll', pch = 1, col = c(1,2,4,4),
        lty = 1)
matplot(x, cbind(y, yfit2, luconf2), type = 'blll', pch = 1, col = c(1,2,4,4),
        lty = 1)
par(op)
#---

the second fit uses unequal weights where the last 4 points are next to
unimportant for the fit. the confidence intervals in this case are fine up to
point no. 6, but nonsense afterwards I believe.

I have tracked this down to the fact that `m$gradient' does _not_ contain the 
actual gradients w.r.t. the parameters but rather the gradients multiplied by
the sqrt of the weights. without trying to understand the inner workings of the
`nls*' functions in detail my questions are:

1.
is the fact that `m$gradient' in an `nls' object contains the scaled
gradients documented somewhere (I didn't find it)? if yes, where is it; if no,
can someone please give me a hint what the rationale behind this approach is?

2.
am I right to understand that the above approach to computing `se.fit'
(essentially in this compact form proposed by P. Dalgaard on r-help some time ago)
is erroneous (for weighted nls) in computing the confidence limits and that
I have to undo the multiplication by sqrt(weights) which is included
in `m$gradient' (or compute the gradients of my model function myself,
e.g. with `deriv')?

thanks,

joerg



Re: [R] How to store the parameter estimated by nls( ) to a variable?

2007-08-13 Thread Joerg van den Hoff
On Sun, Aug 12, 2007 at 10:50:59AM -0700, Yuchen Luo wrote:
 Dear Professor Murdoch.
 Thank you so much
 
 Best Wishes
 Yuchen Luo
 
 On 8/12/07, Duncan Murdoch [EMAIL PROTECTED] wrote:
 
  Yuchen Luo wrote:
   Dear Colleagues.
  
   I believe this should be a problem encountered by many:
  
    nls( ) is a very useful and efficient function to use if we just want to
    display the estimated value on screen. What if we need R to store the
    estimated parameter in a variable?
  
   For example:
  
   x=rnorm(10, mean=1000, sd=10)
  
   y=x^2+100+rnorm(10)
  
    a=nls(y~(x^2+para), control=list(maxiter = 1000, minFactor=0.5^1024),
          start=list(para=0.0))
  
   How to store the estimated value of para in this case, in a variable,
  say,
   b?
  
    It is easy to display a and find all the information. However, I need to
    fit a different set of x and y in every loop of my code and I need to store
    the estimated values for further use. I have checked both the online manual
    and several S-plus books but no example as such showed up.
  
   Your help will be highly appreciated!
 
  coef(a) will get what you want.  coef() works for most modelling
  functions where it makes sense.
 
  Duncan Murdoch
 
 

further information might be extracted from the `summary' return value.
e.g., if you need not only the parameters but also their error
estimates, you could get them via

summary(a)$parameters[, "Std. Error"]

(question: is this the canonical way to do it or is there a better one?)
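to illustrate this in the looping context of the original question, a hedged sketch with simulated data (the model is the one from the original post; `coef(summary(a))' is used here, which is the same coefficient table as `summary(a)$parameters'):

```r
set.seed(42)
# repeat the fit 5 times on fresh simulated data, collecting for each fit
# the estimate of `para' and its standard error
results <- t(sapply(1:5, function(i) {
  x <- rnorm(10, mean = 1000, sd = 10)
  y <- x^2 + 100 + rnorm(10)
  a <- nls(y ~ x^2 + para, start = list(para = 0))
  c(estimate  = coef(a)[["para"]],
    std.error = coef(summary(a))["para", "Std. Error"])
}))
results   # one row of (estimate, std.error) per fit
```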



Re: [R] Reducing the size of pdf graphics files produced with R

2007-05-24 Thread Joerg van den Hoff
On Wed, May 23, 2007 at 10:13:22AM -0700, Waichler, Scott R wrote:
 
 
 Scott
 
 Scott Waichler, Senior Research Scientist
 Pacific Northwest National Laboratory
 MSIN K9-36
 P.O. Box 999
 Richland, WA   99352USA
 509-372-4423 (voice)
 509-372-6089 (fax)
 [EMAIL PROTECTED]
 http://hydrology.pnl.gov
 
 ---
  
 
  -Original Message-
  From: Joerg van den Hoff [mailto:[EMAIL PROTECTED] 
  Sent: Wednesday, May 23, 2007 9:25 AM
  To: Waichler, Scott R
  Cc: r-help@stat.math.ethz.ch; [EMAIL PROTECTED]
  Subject: Re: [R] Reducing the size of pdf graphics files 
  produced with R
  
  On Wed, May 23, 2007 at 07:24:04AM -0700, Waichler, Scott R wrote:
     as you are using MacOS X, you'll have ghostscript installed anyway.
     so try in R `dev2bitmap' with `type = "pdfwrite"'. I believe `gs'
     _does_ include compression. a quick test showed at least a reduction
     by about a factor of 2 relative to `pdf()'. probably one can fiddle
     with the ghostscript settings (cf. e.g. `Ps2pdf.htm' in the
     ghostscript docs: you can adjust the resolution for images in the
     pdf file) to improve this, so as a last resort you could indeed
     export the graphics as postscript and do the conversion to `pdf'
     by adjusting the `ps2pdf' switches. but even with the default
     settings the pdf produced via dev2bitmap/ghostscript is the better
     solution. apart from file size I now and then ran into problems
     when converting `pdf()' output to postscript later on, for instance.
   
    Can you give an example of dev2bitmap usage?  I tried using it in place
    of a pdf() statement.
   `dev2bitmap(file = "rf.pdf", type = "pdfwrite")' copies the
   current device to the pdf-file `rf.pdf', i.e. you should have
   plotted something on the screen prior to using this (the
   manpage tells you so much...). no `dev.off' is necessary in this case.
  
   in order to plot directly into the pdf file, you can use
   `bitmap' instead of `dev2bitmap', i.e.

   use either:

   plot(1:10)
   dev2bitmap(file = "rf1.pdf", type = "pdfwrite")

   or:

   bitmap(file = "rf2.pdf", type = "pdfwrite")
   plot(1:10)
   dev.off()

   both should produce the desired output file (at least after
   including the correct `width' and `height' settings).
 
 I tried the second method, using width=8.5, height=11, but the margins
 in the output were bad, as if it assumed a4 paper or something.  I tried
 inserting paper="special" and paper="letter" in the bitmap call, but
 both caused errors.  Also, the plot was in grayscale instead of color.
 
 Scott Waichler

if you look at the source code of `bitmap' you'll see that it essentially
constructs a `gs' command (the string `cmd') which is fed by the output of the
R `postscript' device via a pipe (the manpage of `bitmap' says the same).
so I presume that if you don't get colour or seem to have the wrong page size,
this has nothing to do with R; rather, you need to check your ghostscript
installation. only guessing, though.

joerg



Re: [R] Reducing the size of pdf graphics files produced with R

2007-05-23 Thread Joerg van den Hoff
On Wed, May 23, 2007 at 07:24:04AM -0700, Waichler, Scott R wrote:
  as you are using MacOS X, you'll have ghostscript installed anyway. so
  try in R `dev2bitmap' with `type = "pdfwrite"'. I believe `gs' _does_
  include compression. a quick test showed at least a reduction by about
  a factor of 2 relative to `pdf()'. probably one can fiddle with the
  ghostscript settings (cf. e.g. `Ps2pdf.htm' in the ghostscript docs: you
  can adjust the resolution for images in the pdf file) to
  improve this, so as a last resort you could indeed export the graphics
  as postscript and do the conversion to `pdf' by adjusting the `ps2pdf'
  switches. but even with the default settings the pdf produced via
  dev2bitmap/ghostscript is the better solution. apart from file size I
  now and then ran into problems when converting `pdf()' output to
  postscript later on, for instance.
 
 Can you give an example of dev2bitmap usage?  I tried using it in place
 of a pdf() statement.  An X11 window opened and my figure flew by, but I
 didn't get the file output.  I also used dev2bitmap after opening a
 pdf() and just before the dev.off() statement, since the help says it
 works on the current device, but again no written output.  What am I
 doing wrong?
 
 I tried:
   dev2bitmap(file = plotfile2, type = "pdfwrite", width = 8.5, height = 11,
              pointsize = 12)
   print(myplot())
   dev.off()

 and

   pdf(file = plotfile, paper = "letter", width = 8.5, height = 11,
       pointsize = 12)
   print(myplot())
   dev2bitmap(file = plotfile2, type = "pdfwrite", width = 8.5, height = 11,
              pointsize = 12)
   dev.off()
 
 Thanks,
 Scott Waichler
 scott.waichler _at_ pnl.gov

`dev2bitmap(file = "rf.pdf", type = "pdfwrite")' copies the current device to the
pdf-file `rf.pdf', i.e. you should have plotted something on the screen prior to
using this (the manpage tells you so much...). no `dev.off' is necessary in
this case.

in order to plot directly into the pdf file, you can use `bitmap' instead of
`dev2bitmap', i.e.

use either:

plot(1:10)
dev2bitmap(file = "rf1.pdf", type = "pdfwrite")

or:

bitmap(file = "rf2.pdf", type = "pdfwrite")
plot(1:10)
dev.off()


both should produce the desired output file (at least after including
the correct `width' and `height' settings).


joerg



Re: [R] Reducing the size of pdf graphics files produced with R

2007-05-22 Thread Joerg van den Hoff
On Tue, May 22, 2007 at 01:06:06PM -0400, Chabot Denis wrote:
 Thank you Prof. Ripley.
 
 Believe me, I do not have the skills to contribute such a thing as a  
 stream compressor and I DO appreciate the work and usefulness of the  
 pdf device as it is. I do most of my plots with pdf device, the rest  
 with quartz (especially when I'm not sure I'll want to save a plot)  
 and (rarely) png when the pdf output is too large or for  
 compatibility with microsoft applications.
 
 I find the statement you took from the help page promising: I often  
 include these large plots into LaTeX, so I'll investigate what form  
 of compression pdftex can do.
 
 Sincerely,
 
 Denis
 On 2007-05-22, at 12:47, Prof Brian Ripley wrote:
 
  From the help page
 
   'pdf' writes uncompressed PDF.  It is primarily intended for
   producing PDF graphics for inclusion in other documents, and
   PDF-includers such as 'pdftex' are usually able to handle
   compression.
 
  If you are able to contribute a stream compressor, R will produce
  smaller plots.  Otherwise it is unlikely to happen (and in any case
  it would be a smaller contribution than that of the author of pdf(),
  who is quite happy with external compressors).
 
  Acrobat does other things (not all of which it tells you about),  
  but compression is the main advantage.
 
  On Tue, 22 May 2007, Chabot Denis wrote:
 
  Hi,
 
   Without trying to print 100 points (see
   http://finzi.psych.upenn.edu/R/Rhelp02a/archive/42105.html), I often print
   maps for which I do not want to lose too much coastline detail,
   and/or plots with 1000-5000 points (yes, some are on top of each
   other, but using transparency (i.e. rgb colors with alpha
   information) this actually comes through as useful information).
 
  But the files are large (not as large as in the thread above of
  course, 800 KB to about 2 MB), especially when included in a LaTeX
  document by the dozen.
 
   Acrobat (not the reader, the full program) has an option "reduce file
   size". I don't know what it does, but it shrinks most of my plots to
   about 30% of original size, and I cannot detect any loss of detail
   even when zooming several times. But it is a pain to do this with
   Acrobat when you generate many plots... And you need to buy Acrobat.
 
   Is this something the pdf device could do in a future version? I
   tried the million points example from the thread above and the 55
   MB file was reduced to 6.9 MB, an even better shrinking than I see on my
   usual plots.
 
 
  Denis Chabot
 
 
 
  -- 
  Brian D. Ripley,  [EMAIL PROTECTED]
  Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
  University of Oxford, Tel:  +44 1865 272861 (self)
  1 South Parks Road, +44 1865 272866 (PA)
  Oxford OX1 3TG, UKFax:  +44 1865 272595
 



as an attempt to suggest something more helpful than "do write the
compressor yourself" if you have a problem with pdf():

as you are using MacOS X, you'll have ghostscript installed anyway. so
try in R `dev2bitmap' with `type = "pdfwrite"'. I believe `gs' _does_
include compression. a quick test showed at least a reduction by about
a factor of 2 relative to `pdf()'. probably one can fiddle with the
ghostscript settings (cf. e.g. `Ps2pdf.htm' in the ghostscript docs: you
can adjust the resolution for images in the pdf file) to
improve this, so as a last resort you could indeed export the graphics
as postscript and do the conversion to `pdf' by adjusting the `ps2pdf'
switches. but even with the default settings the pdf produced via
dev2bitmap/ghostscript is the better solution. apart from file size I
now and then ran into problems when converting `pdf()' output to
postscript later on, for instance.

hth,
joerg



Re: [R] save intermediate result

2007-05-05 Thread Joerg van den Hoff
On Fri, May 04, 2007 at 04:55:36PM -0400, Weiwei Shi wrote:
 sorry for my English, staying in US over 7 years makes me think I were
 ok, now :(
 
 anyway, here is an example and why I need that:
 
 suppose I have a function like the following:
 
 f1 <- function(){
 line1 <- f2() # assume this line takes very long time, like more than 30 min
 # then I need to save line1 as an object into the workspace b/c
 # the following lines after this might have some bugs
 # currently I use
 save(line1, file = "line1.robj")
 # but I am thinking if there is another way, like an environment, instead
 # of saving it into a file
 
 # the code that follows might have some bugs
 # blabla...
 }

yes, that's what I thought you meant. so use

line1 <<- f2()

i.e. the super-assignment operator `<<-' instead of `<-'. after f2() is
finished, the object `line1' is available from the R prompt, i.e.
visible in the workspace (for details, cf. `help("<<-")').

 
 if some of the code that follows has bugs, my f1 function cannot return
 anything, which means I have to re-run the time-consuming line again.

you probably should debug it using some dummy data instead of the f2()
call...

 
 thanks,
 
 Weiwei
 
 On 5/4/07, Joerg van den Hoff [EMAIL PROTECTED] wrote:
 On Fri, May 04, 2007 at 03:45:10PM -0400, Weiwei Shi wrote:
  hi,
 
  is there a way to save a R object into workspace instead of into a
  file during a running of function?
 
 
 if I understand the question correctly you want the 'super-assignment'
 operator `<<-' as in
 
 ----cut----
 R> f <- function(x) y <<- 2*x
 R> f(1)
 R> y
 [1] 2
 R> f(10)
 R> y
 [1] 20
 ----cut----
 
 i.e. y is assigned a value in the 'workspace'. be careful, though, with
 this kind of assignment. why not return the object you are interested
 in as a list component together with other results when the function
 call finishes?
 
 
 
 
 -- 
 Weiwei Shi, Ph.D
 Research Scientist
 GeneGO, Inc.
 
 Did you always know?
 No, I did not. But I believed...
 ---Matrix III



[R] code/documentation mismatch with backslashes in argument list

2007-05-04 Thread Joerg van den Hoff
I have a function definition such as

f <- function (pattern = ".*\\.txt") {}

in the manpage this has to be documented as

f <- function (pattern = ".*\\\\.txt")

in order to get the correct display (with double backslash) in the R console
when issuing `?f', but this causes complaints from `R CMD CHECK' (code/
documentation mismatch), which is not nice.

question: is there (or will there be with future releases) a way to avoid these
warnings without losing the correct display of the manpage?

regards,
joerg



Re: [R] save intermediate result

2007-05-04 Thread Joerg van den Hoff
On Fri, May 04, 2007 at 03:45:10PM -0400, Weiwei Shi wrote:
 hi,
 
 is there a way to save a R object into workspace instead of into a
 file during a running of function?
 

if I understand the question correctly you want the 'super-assignment'
operator `<<-' as in

----cut----
R> f <- function(x) y <<- 2*x
R> f(1)
R> y
[1] 2
R> f(10)
R> y
[1] 20
----cut----

i.e. y is assigned a value in the 'workspace'. be careful, though, with
this kind of assignment. why not return the object you are interested
in as a list component together with other results when the function
call finishes?
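a minimal sketch of that list-returning alternative; `slow_step' and `risky_step' are hypothetical stand-ins for the expensive computation and the possibly buggy follow-up code:

```r
slow_step  <- function() 1:10                 # stand-in for the 30-minute f2()
risky_step <- function(x) stop("bug here")    # stand-in for the buggy rest

f1 <- function() {
  line1 <- slow_step()
  # guard the fragile part so a bug does not throw away line1
  rest <- tryCatch(risky_step(line1), error = function(e) NULL)
  # even if risky_step() fails, line1 is preserved in the return value
  list(line1 = line1, result = rest)
}
out <- f1()
str(out)
```

this avoids `<<-' entirely: the expensive intermediate comes back as a list component, so nothing is lost when the later code fails.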



Re: [R] nls.control( ) has no influence on nls( ) !

2007-04-16 Thread Joerg van den Hoff
On Mon, Apr 16, 2007 at 09:03:27AM +0200, Martin Maechler wrote:
  Yuchen == Yuchen Luo [EMAIL PROTECTED]
  on Sun, 15 Apr 2007 12:18:23 -0700 writes:
 
 Yuchen Dear Friends.
 Yuchen I tried to use nls.control() to change the 'minFactor' in nls( ), 
 but it
 Yuchen does not seem to work.
 
 yes, you do not seem to use it correctly.
 No reason to jump to the conclusion that you give in the subject
 of this posting ... which is hilariously wrong!
 You don't really think that a quality software like R has had nls()
 and nls.control(), and nls.control() would never have worked for all
 those years and tens of thousands of users ???
 Please get to your senses, and first read the posting guide (*)
 - and then try again, so we can help you further!
 
 Regards,
 Martin Maechler, ETH Zurich
 
 Yuchen I used the nls( ) function and encountered the error message "step factor
 Yuchen 0.000488281 reduced below 'minFactor' of 0.000976563". I then tried the
 Yuchen following:
 
 Yuchen 1) Put nls.control(minFactor = 1/(4096*128)) inside the parentheses of
 Yuchen nls( ), but the same error message shows up.
 
 Yuchen 2) Put nls.control(minFactor = 1/(4096*128)) as a separate command
 Yuchen before the command that uses the nls( ) function; again, the same thing
 Yuchen happens, although R responds to the nls.control( ) function immediately:

you need to provide a list for the `control' argument of `nls', i.e. you need
to use something like:

nls(other_arguments, control = nls.control(minFactor = 1/4096))

nls.control() only helps insofar as it always returns the complete list
of three components needed for the `control' argument of `nls'. you can equally
well use an explicit list instead.


 Yuchen -
 Yuchen $maxiter
 
 Yuchen [1] 50
 
 
 
 Yuchen $tol
 
 Yuchen [1] 1e-05
 
 
 
 Yuchen $minFactor
 
 Yuchen [1] 1.907349e-06
 
 Yuchen --
 
 
 Yuchen I am wondering how I may change the minFactor to a smaller value?
 Yuchen The manual that comes with the R software about nls( ) is very
 Yuchen sketchy --- the only relevant example I see is a separate command
 Yuchen like 2).
 
 Yuchen A more relevant question might be: is lowering the 'minFactor' the
 Yuchen only solution to the problem? What are the other options?

check your model?
check your data?
change start values (wrong minimum)?
use another algorithm (cf. nls manpage)?

 
 Yuchen Best Wishes
 Yuchen Yuchen Luo

a final remark (off-topic?), concerning the response of M. Maechler (but I
notice this attitude continuously on the list): this is supposed to be a help
list, neither a boot camp nor a thought-police school, right?
concerning the question at hand, the one-word response

`?nls'

would have been justifiable (it usually _is_ in the manpage), if not overly
helpful, probably. but the actual response is worded such that --
were it directed at me -- I would judge it simply offensive: "get to your
senses" is not what one should be told, simply because one did not grasp how to
use the `control' argument and/or because in the subject (contrary to the body)
of the mail 'has no influence' is used instead of 'seems to have no influence'.

I really do think that if someone deems a certain question really stupid, it
would be wise simply to keep 'radio silence' (i.e. ignore it), which probably
is the best corrective anyway. the 'how dare you' responses are annoying, to
say the least.

for heaven's sake...



Re: [R] replacing all NA's in a dataframe with zeros...

2007-03-15 Thread Joerg van den Hoff
On Thu, Mar 15, 2007 at 10:21:22AM +0100, Peter Dalgaard wrote:
 Gavin Simpson wrote:
  On Wed, 2007-03-14 at 20:16 -0700, Steven McKinney wrote:

  Since you can index a matrix or dataframe with
  a matrix of logicals, you can use is.na()
  to index all the NA locations and replace them
  all with 0 in one command.
 
  
 
  A quicker solution that, IIRC, was posted to the list by Peter
  Dalgaard several years ago is:
 
  sapply(mydata.df, function(x) {x[is.na(x)] <- 0; x})

 I hope your memory fails you, because it doesn't actually work.
 
 > sapply(test.df, function(x) {x[is.na(x)] <- 0; x})
  x1 x2 x3
 [1,]  0  1  1
 [2,]  2  2  0
 [3,]  3  3  0
 [4,]  0  4  4
 
 is a matrix, not a data frame.
 
 Instead:
 
 > test.df[] <- lapply(test.df, function(x) {x[is.na(x)] <- 0; x})
 > test.df
   x1 x2 x3
 1  0  1  1
 2  2  2  0
 3  3  3  0
 4  0  4  4
 
 Speedwise, sapply() is doing lapply() internally, and the assignment
 overhead should be small, so I'd expect similar timings.

just an idea:
given the order of magnitude difference (factor 17 or so) in runtime
between the obvious solution and the fast one: wouldn't it be possible/sensible
to modify the corresponding subsetting method ([.data.frame) such that it
recognizes the case when it is called with an arbitrary index matrix (the
problem is not restricted to indexing with a logical matrix, I presume?) and
switches internally to the fast solution given above?

in my (admittedly limited) experience it seems that one of the not so nice
properties of R is that one encounters in quite a few situations exactly the
above situation: unexpected massive differences in run time between different
solutions (I'm not talking about explicit loop penalty). what concerns me most
are the very basic scenarios (not complex algorithms): data frames vs. matrices,
naming vector components or not, subsetting, read.table vs. scan, etc. if there
were a concise HOWTO list for the cases when speed matters, that would be
helpful, too.

I understand that part of the uneven performance is unavoidable and one must
expect the user to go to the trouble to understand the reasons, e.g. for
differences between handling purely numerical data in either matrices or data
frames. but a factor of 17 between the obvious approach and the wise one seems
a trap into which 99% of the people will step (probably never thinking that
there might be a faster approach).

joerg



Re: [R] Deconvolution of a spectrum

2007-03-09 Thread Joerg van den Hoff
On Fri, Mar 09, 2007 at 01:25:24PM +0100, Lukasz Komsta wrote:
 
 Dear useRs,
 
 I have a curve which is a mixture of Gaussian curves (for example UV
 emission or absorption spectrum). Do you have any suggestions how to
 implement searching for optimal set of Gaussian peaks to fit the curve?
 I know that it is a very complex problem, but maybe there is a possibility
 to do it? A first supposition is to use nls() with very large functions,
 and compare AIC values, but it is very difficult to suggest any starting
 points for the algorithm.
 
 Searching google I have found only a description of commercial software
 for doing such deconvolution (Origin, PeakFit) without any information
 about used algorithms. No ready-to-use function in any language.
 
 I have tried to use a Mclust workaround for this problem, by generating a
 large dataset for which the spectrum is a histogram and feeding it into
 Mclust. The results seem to be serious, but this is a very ugly and
 imprecise method.
 
 Thanks for any help,
 
 Luke
 
I would try `nls'. we have used `nls' for fitting magnetic resonance spectra
consisting of ~10 gaussian peaks. this works OK if the input data are
reasonable (not too noisy, peak amplitudes above noise level, peak distances
not unreasonably smaller than peak widths, i.e. peak overlap such that peaks
are still more or less identifiable visually).

of course you must invest effort in getting the start values (automatically or
manually) right. if your data are good, you might get good start values for the
positions (the means of the gaussians) with an approach that was floating around
the r-help list in 11/2005, which I adapted as follows:


peaks <- function (series, span = 3, what = c("max", "min"), do.pad = TRUE,
                   add.to.plot = FALSE, ...)
{
    if ((span <- as.integer(span)) %% 2 != 1)
        stop("'span' must be odd")
    if (!is.numeric(series))
        stop("'peaks' needs numeric input")
    what <- match.arg(what)
    if (is.null(dim(series)) || min(dim(series)) == 1) {
        series <- as.numeric(series)
        x <- seq(along = series)
        y <- series
    }
    else if (nrow(series) == 2) {
        x <- series[1, ]
        y <- series[2, ]
    }
    else if (ncol(series) == 2) {
        x <- series[, 1]
        y <- series[, 2]
    }
    if (span == 1)
        return(list(x = x, y = y, pos = rep(TRUE, length(y)),
                    span = span, what = what, do.pad = do.pad))
    if (what == "min")
        z <- embed(-y, span)
    else z <- embed(y, span)
    s <- span %/% 2
    s1 <- s + 1
    v <- max.col(z, "first") == s1
    if (do.pad) {
        pad <- rep(FALSE, s)
        v <- c(pad, v, pad)
        idx <- v
    }
    else idx <- c(rep(FALSE, s), v)
    val <- list(x = x[idx], y = y[idx], pos = v, span = span,
                what = what, do.pad = do.pad)
    if (add.to.plot == TRUE)
        points(val, ...)
    val
}

this looks for local maxima in the vector (y-values) or 2-dim array
(x/y matrix) `series' in a neighborhood of each point defined by `span'.
if you first plot your data and then call the above on the data with
`add.to.plot = TRUE', the results of the peak search are added to your plot (and
you can modify this plotting via the `...' argument).

maybe this works for your data to get the peak position estimates (and the
amplitudes in the next step) right. frequently the standard deviation
estimates can be set to some fixed value for any given experiment.

and of course distant parts of your spectrum won't have anything to do with
each other, so you can split up the fitting to help `nls' along a bit.
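to make the suggestion concrete, here is a minimal sketch (synthetic data and made-up start values, not from our MR analyses) of fitting a two-gaussian mixture with `nls', with the peak positions from a visual or automatic peak search used as start values for the means:

```r
## hedged sketch: two overlapping gaussians fitted with nls();
## start values for m1/m2 would come from a peak search as described above
set.seed(1)
x <- seq(0, 10, length.out = 201)
y <- 2 * exp(-(x - 3)^2 / (2 * 0.5^2)) +
     1 * exp(-(x - 6)^2 / (2 * 0.7^2)) + rnorm(length(x), sd = 0.02)

fit <- nls(y ~ a1 * exp(-(x - m1)^2 / (2 * s1^2)) +
               a2 * exp(-(x - m2)^2 / (2 * s2^2)),
           start = list(a1 = 1.5, m1 = 2.8, s1 = 0.4,
                        a2 = 0.8, m2 = 6.2, s2 = 0.6))
coef(fit)
```

with reasonable start values for the means, the fit converges quickly; with all six parameters started far off, it may not.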

joerg



Re: [R] Some problems with X11

2007-03-08 Thread Joerg van den Hoff
On Wed, Mar 07, 2007 at 05:21:52PM -0700, Rafael Rosolem wrote:
 Hi,
 
 I am really new with R, so I don't know anything about it. I have 
 written a script (attached) which tries to do really basic stuff (such 
 as computing basic statistics and basic plots). When I try to plot a 
 histogram and pairs, for example, I get the following message:
 
  source("project.R")
 Loading required package: sp
 
 -
 Analysis of geostatistical data
 For an Introduction to geoR go to http://www.est.ufpr.br/geoR
 geoR version 1.6-13 (built on 2006/12/26) is now loaded
 -
 
 Error in title(main = main, sub = sub, xlab = xlab, ylab = ylab, ...) :
 X11 font at size 8 could not be loaded
 
 
 I have seen some threads about this problem, though none of them really 
 states what should be done in order to solve that problem (At least it 
 was not clear for me!).
well, some X11 font is not installed/not found.

in `R', getOption("X11fonts") tells you which font family `X11()' is using by
default (but you can alter this in the `X11' call if need be); the size 8
subtype seems to be missing.

you may compare the above with the output from `xlsfonts' in the unix shell,
or use `xfontsel' to browse interactively through the installed fonts,
to verify whether `R's complaining is justified.

probable solution: use some existing, sufficiently complete (different sizes)
font family in the `X11()' call, or copy over the default fonts from the laptop
where it works.

hth,
joerg
 
 I am also running R on a Ubuntu Linux Edgy distro. The interesting thing 
 is that I have this problem on my desktop only. I also have Ubuntu Edgy 
 installed on my laptop and it is working just fine!
 
 Thank you



Re: [R] reading a text file with a stray carriage return

2007-03-08 Thread Joerg van den Hoff
On Thu, Mar 08, 2007 at 10:11:06AM -, Ted Harding wrote:
 On 08-Mar-07 jim holtman wrote:
  How do you define a carriage return in the middle of a line
  if a carriage return is also used to delimit a line?  One of the
  things you can do is to use 'count.fields' to determine the
  number of fields in each line. For those lines that are not the
  right length, you could combine them together with a 'paste'
  command when you write them out.
 
... 
 cat funnyfile.csv | awk 'BEGIN{FS=","}
   {nline = NR; nfields = NF}
   {if(nfields != 3){print nline " " nfields}}'

I think this might be more 'awk like':

cat funnyfile.csv | awk 'BEGIN{FS=","} NF != 3 {print NR, NF}'

(separating the pattern from the action, I mean. it's slightly
faster too -- but not much). and since everyone has the  same problems,
here's my fieldcounter/reporter (the actual awk program (apart from reporting
the results) is a single line, cute..).
#!/bin/sh
#scan input files and output statistics of number of fields.
#suitable mainly to check 'rectangular' data files for deviations
#from the correct number of columns.

FS='[ \t]+'  #default FS
COMMENT='#'

v1=F
v2=c
valop="$v1:$v2:"
while getopts "$valop" option; do
   if   [ "$option" = "$v1" ]; then
      FS="$OPTARG"
   elif [ "$option" = "$v2" ]; then
      COMMENT="$OPTARG"
   else
      echo ""
      echo "usage: fieldcount [-$v1 field_separator] [-$v2 comment_char] file ..."
      echo "default FS is white space"
      echo "default comment_char is #"
      echo ""
      exit
   fi
done
shift `expr $OPTIND - 1`

for i do
   echo ""
   echo "file $i (using FS = '$FS'):"
   awk '
      BEGIN {
         FS = "'"$FS"'"
      }
      !/^'"$COMMENT"'/ {fc[NF]++}
      END {
         for (nf in fc) {
            if (fc[nf] > 1) s1 = "s"; else s1 = ""
            if (nf > 1) s2 = "s"; else s2 = ""
            print fc[nf], "record" s1, "with", nf, "field" s2
         }
      }
   ' "$i" | sort -rn
   echo ""
done
here, you could use this a la (it uses white space as separator per default):

fieldcount -F, funnyfile1.csv funnyfile2.csv ...

it's handy for huge files, since it outputs cumulative results reverse-sorted
by frequency of occurrence (the bad guys should be infrequent and
at the bottom, contrary to real world experience...).

one could of course tweak this to report only records deviating from some
expected field count (say 3) by adding a further command line switch and using
this in the awk script as is done with the other shell variables ($FS and
$COMMENT).
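as an aside on the original question of getting rid of the stray carriage returns themselves (this is my addition, not part of the field-counting reply above; the demo file is made up): `tr' can delete every CR in one pass before the file ever reaches `read.table':

```shell
# hedged sketch: build a small demo file containing a stray carriage
# return mid-record, then strip all CR characters (\r) with tr
printf 'id,name,city\n1,walter\r,princeton\n' > funnyfile.csv
tr -d '\r' < funnyfile.csv > cleanfile.csv
```

note this removes all CRs, including legitimate CRLF line endings, which is usually exactly what you want for a file destined for read.table on unix.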

joerg

 
 
  On 3/7/07, Walter R. Paczkowski [EMAIL PROTECTED] wrote:
 
 
Hi,
I'm  hoping someone has a suggestion for handling a simple problem. 
A client  gave  me a comma separated value file (call it x.csv)
that has an id and name and address for about 25,000 people
(25,000 records).
I used read.table to read it, but then discovered that there are
stray carriage returns on several records. This plays havoc with
read.table since it starts a new input line when it sees the
carriage return. 
In short, the read is all wrong.
I thought I could write a simple function to parse a line and
write it back  out,  character by character. If a carriage
return is found, it  would  simply  be  ignored on the writing
back out part. But how do I identify a carriage return? What is
the code or symbol? Is there any easier way to rid the file
of carriage returns in the middle of the input lines?
Any help is appreciated.
Walt Paczkowski




[R] R CMD CHECK question

2007-03-05 Thread Joerg van den Hoff
hi,
second try...

I ran into problems when checking one of my packages with R CMD CHECK:


I have two packages, the first (named `pkc') depending on the second one (named
`roiutils'). The source code and DESCRIPTION files describes the dependency as
it should be, I think ('Imports', `require'). but if I run

R CMD CHECK pkc 

I get significant warnings related to missing links (referring to functions
from the second package) in the manpages of the first package, as can be
seen below. despite the warnings, after installing the two packages the help
system works just fine, including the cross-references.

my question:
why is it that R CMD CHECK is complaining? can one selectively switch off this
warning? or how do I have to specify the links in the manpages to tell CHECK
that everything is basically OK?

CUT
* checking for working latex ... OK
* using log directory '/Users/vdh/rfiles/Rlibrary/.check/pkc.Rcheck'
* using R version 2.4.0 (2006-10-03)
* checking for file 'pkc/DESCRIPTION' ... OK
* this is package 'pkc' version '1.1'
* checking package dependencies ... OK
* checking if this is a source package ... OK
* checking whether package 'pkc' can be installed ... WARNING
Found the following significant warnings:
   missing link(s):  readroi readroi readroi figure readroi conv3exmodel 
readroi
   missing link(s):  figure readroi
* checking package directory ... OK
* checking for portable file names ... OK
* checking for sufficient/correct file permissions ... OK
* checking DESCRIPTION meta-information ... OK
* checking top-level files ... OK
* checking index information ... OK
* checking package subdirectories ... OK
* checking R files for syntax errors ... OK
* checking R files for non-ASCII characters ... OK
* checking whether the package can be loaded ... OK
* checking whether the package can be loaded with stated dependencies ... OK
* checking whether the name space can be loaded with stated dependencies ... OK
* checking S3 generic/method consistency ... OK
* checking replacement functions ... OK
* checking foreign function calls ... OK
* checking R code for possible problems ... OK
* checking Rd files ... WARNING
Rd files with unknown sections:
  /Users/vdh/rfiles/Rlibrary/pkc/man/fitdemo.Rd: example

See the chapter 'Writing R documentation files' in manual 'Writing R
Extensions'.
* checking Rd cross-references ... WARNING
Missing link(s) in documentation object 'compfit.Rd':
  readroi readroi readroi figure readroi conv3exmodel readroi

Missing link(s) in documentation object 'exp3fit.Rd':
  figure readroi
CUT
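for what it's worth, cross-package links in Rd markup are normally written with the target package name in brackets -- a sketch using the function names from above (whether this silences CHECK depends on the linked packages being installed and visible at check time):

```
\seealso{
  \code{\link[roiutils]{readroi}}, \code{\link[roiutils]{figure}}
}
```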


any hints appreciated,

joerg



Re: [R] Double-banger function names: preferences and suggestions

2007-03-02 Thread Joerg van den Hoff
On Thu, Mar 01, 2007 at 10:31:08AM -0700, Jason Barnhart wrote:
 Definitely not #2.   Prefer #1 but #3 is ok as well.

  Definitely not #1.   Prefer #2 but #3 is ok as well. 
  
  there is nothing like unanimous agreement :-)

  but in earnest: I don't think #1 is good, on the already mentioned grounds
  that it masks the difference between a method call and a simple function name.
  I don't think a function name such as

  plot.mydata

  would be reasonable: is this a method call for plotting objects of class
  `mydata', or is it some other function call (probably plotting my data)?

  in browsing through source code this is at least an unnecessary nuisance,
  having to check this point.

 
 Thanks for contributing and inquiring.
 
 
 - Original Message - 
 From: hadley wickham [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Sunday, February 25, 2007 7:44 AM
 Subject: [R] Double-banger function names: preferences and suggestions
 
 
  What do you prefer/recommend for double-banger function names:
 
  1 scale.colour
  2 scale_colour
  3 scaleColour
 
  1 is more R-like, but conflicts with S3.  2 is a modern version of
  number 1, but not many packages use it.  Number 3 is more java-like.
  (I like number 2 best)
 
  Any suggestions?
 
  Thanks,
 
  Hadley
 



Re: [R] from function to its name?

2007-03-02 Thread Joerg van den Hoff
On Fri, Mar 02, 2007 at 03:42:46PM +0100, Ido M. Tamir wrote:
 Hi,
 
 I can get from a string to a function with this name:
 
 f1 <- function(x){ mean(x) }
 
 do.call("f1", list(1:4))
 get("f1")
 etc...
 
 But how do I get from a function to its name?
 
 funcVec <- c(f1, median)
 
 funcVec
 [[1]]
 function(x){ mean(x) }
  str(funcVec)
 List of 2
  $ :function (x)
   ..- attr(*, "source")= chr "function(x){ mean(x) }"
  $ :function (x, ...)
  deparse(funcVec[1])
 [1] "list(function (x) " "{"                  " mean(x)"
 [4] "})"
 


for any symbol/name

deparse(substitute(f1))

yields the string representation, but this won't give you "f1" for

deparse(substitute(funcVec[1]))

but rather the string "funcVec[1]".

if you actually want to access funcVec components via strings denoting the
components, maybe you could simply use

funcVec <- list(f1 = f1, median = median)

and access these via

funcVec[["f1"]]

which would enable using a string variable holding the name:

dum <- "f1"

funcVec[[dum]]
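a tiny self-contained illustration of the first point (the helper name `name_of' is made up here):

```r
## deparse(substitute(...)) recovers the *expression* passed in, which
## is the function's name only when a bare name was passed
f1 <- function(x) mean(x)
name_of <- function(f) deparse(substitute(f))

name_of(f1)           # "f1"
funcVec <- c(f1, median)
name_of(funcVec[1])   # "funcVec[1]", not "f1"
```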

hth
joerg



Re: [R] str() to extract components

2007-02-27 Thread Joerg van den Hoff
On Tue, Feb 27, 2007 at 03:52:54PM -, Simon Pickett wrote:
 Hi,
 
 I have been dabbling with str() to extract values from outputs such as
 lmer etc and have found it very helpful sometimes.
 
 but only seem to manage to extract the values when the output is one
 simple table, any more complicated and I'm stumped :-(
 
 take this example of the extracted coeficients from a lmer analysis...
 
 using str(coef(lmer(resp3~b$age+b$size+b$pcfat+(1|sex), data=b))) yields
 
 Formal class 'lmer.coef' [package Matrix] with 3 slots
   ..@ .Data :List of 1
   .. ..$ :`data.frame': 2 obs. of  4 variables:
   .. .. ..$ (Intercept): num [1:2] 1.07 1.13
   .. .. ..$ b$age  : num [1:2] 0.00702 0.00702
   .. .. ..$ b$size : num [1:2] 0.0343 0.0343
   .. .. ..$ b$pcfat: num [1:2] 0.0451 0.0451
   ..@ varFac: list()
   ..@ stdErr: num(0)
 
 how do I get inside the first table to get the value 1.07 for instance?
 
 Any help much appreciated.
 
maybe `unlist' would be enough?
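for instance (a mock structure standing in for the lmer coefficient slot shown above, since I don't have your fitted object):

```r
## mock data.frame similar to the .Data slot contents in the str() output
cf <- list(data.frame("(Intercept)" = c(1.07, 1.13),
                      age = c(0.00702, 0.00702), check.names = FALSE))
v <- unlist(cf)          # flattens to a named numeric vector
v[["(Intercept)1"]]      # 1.07
```

direct indexing also works once you know the nesting: cf[[1]][1, "(Intercept)"] gives the same value.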



Re: [R] pdf with an exact size

2007-02-23 Thread Joerg van den Hoff
On Fri, Feb 23, 2007 at 04:24:54PM -0200, Alberto Monteiro wrote:
 Is it possible to create a pdf output file with an (as nearly as
 possible) exact size?
 
 For example, if I want to draw in an A4 paper (210 x 297 mm) a
 square of 100 x 100 mm, how can I do it?
 
 FWIW, about 6 months ago I learned here how to create an exact
 png image. For example, if I want a 500 x 500 black square in 
 a 1000 x 1000 white png, occupying the center of the png, the 
 procedure is this:
 
   png("image.png", width=1000, height=1000, bg="white")
   par(mar=c(0,0,0,0)) # reset margins
   plot(0, xlim=c(0, 999), ylim=c(0, 999), col="white")
   par(usr=c(0, 999, 0, 999))
   points(c(250, 250, 749, 749, 250), c(250, 749, 749, 250, 250), 
 type="l", col="black")
   dev.off()
 
 However, I don't know how do this with a pdf monstr... oops... file.
 
 Alberto Monteiro
 

going via `bitmap' and using the `pdfwrite' type is probably the better
idea, since in this case `ghostscript' is used for pdf generation, which seems
overall to generate cleaner/better pdf output in comparison
to the R-internal pdf device (I believe there was recently some
discussion of this on the list?).
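alternatively, sticking with the R-internal `pdf' device, the geometry itself can be made exact by working in millimeter user coordinates (a hedged sketch of my own, not from the thread; the file name is made up):

```r
## hedged sketch: a 100 x 100 mm square centered on an A4-sized pdf page;
## with zero margins and "i" axis styles, user coordinates map 1:1 to mm
mm <- 1 / 25.4                                   # mm -> inch
pdf("square.pdf", width = 210 * mm, height = 297 * mm)
par(mar = c(0, 0, 0, 0), xaxs = "i", yaxs = "i")
plot.new()
plot.window(xlim = c(0, 210), ylim = c(0, 297))  # user coords in mm
rect(55, 98.5, 155, 198.5)                       # 100 mm square
dev.off()
```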

joerg



[R] R CMD CHECK question

2007-02-22 Thread Joerg van den Hoff
hi,

I have two private packages, the first (`pkc') depending on the second one
(`roiutils'). The source code and DESCRIPTION files describes the dependency
as it should be ('Imports', `require'), at least I think so.

now, running

R CMD CHECK pkc 

yields the following output in which I have inserted my
questions (lines starting with --):

* checking for working latex ... OK
* using log directory '/Users/vdh/rfiles/Rlibrary/.check/pkc.Rcheck'
* using R version 2.4.0 (2006-10-03)
* checking for file 'pkc/DESCRIPTION' ... OK
* this is package 'pkc' version '1.1'
* checking package dependencies ... OK
* checking if this is a source package ... OK
* checking whether package 'pkc' can be installed ... WARNING
Found the following significant warnings:
   missing link(s):  readroi readroi readroi figure readroi conv3exmodel 
readroi
   missing link(s):  figure readroi

-- there _are_ links to the mentioned functions (from `roiutils') in the
-- manpages of `pkc'. after installing the libs, the help system works just
-- fine. why is it that CHECK is complaining? can one selectively switch off
-- this warning? or how do I have to specify the links to tell CHECK that
-- everything is OK?

* checking package directory ... OK
* checking for portable file names ... OK
* checking for sufficient/correct file permissions ... OK
* checking DESCRIPTION meta-information ... OK
* checking top-level files ... OK
* checking index information ... OK
* checking package subdirectories ... OK
* checking R files for syntax errors ... OK
* checking R files for non-ASCII characters ... OK
* checking whether the package can be loaded ... OK
* checking whether the package can be loaded with stated dependencies ... OK
* checking whether the name space can be loaded with stated dependencies ... OK
* checking S3 generic/method consistency ... OK
* checking replacement functions ... OK
* checking foreign function calls ... OK
* checking R code for possible problems ... OK
* checking Rd files ... WARNING
Rd files with unknown sections:
  /Users/vdh/rfiles/Rlibrary/pkc/man/fitdemo.Rd: example

See the chapter 'Writing R documentation files' in manual 'Writing R
Extensions'.
* checking Rd cross-references ... WARNING
Missing link(s) in documentation object 'compfit.Rd':
  readroi readroi readroi figure readroi conv3exmodel readroi

Missing link(s) in documentation object 'exp3fit.Rd':
  figure readroi

-- this seems the same problem as above, right?





any hints appreciated,

joerg



Re: [R] Confidence interval for Levenberg-Marquardt fit

2007-02-21 Thread joerg van den hoff
On Wed, Feb 21, 2007 at 11:09:52AM +, Prof Brian Ripley wrote:
 Well, the algorithm used does not affect the confidence interval (provided 
 it works correctly), but what is nls.ml (presumably in some package you 
 have not mentioned) and why would I want to use an old-fashioned 
 algorithm?
isn't this a bit strong? in what respect do you consider levenberg-marquardt
(going back to the 1940s, I think) as old-fashioned (especially
in comparison to the `nls' standard gauss-newton approach; both gentlemen
seem to have done their major work a bit longer ago :-))?
AFAICS levenberg-marquardt is generally appreciated for its rapid
convergence, achieved by a smooth transition from an
inverse-hessian approach to steepest descent. my own experience
with non-linear least squares minimization using this algorithm
is positive as well, but
I have not tried out the levenberg-marquardt
implementation in package `minpack.lm' (which originates from netlib.org)
and don't know if it's good. but in any case there sure are implementations
around (e.g. in the CERN MINUIT library) which have proven to be
of high quality.

`nls' sure is a _very_ valuable function, but not necessarily the
last word with regards to the chosen algorithm(s).


 
 You could start nls at the solution you got from nls.ml and use confint() 
 on that.
maybe one should look at profile.nls and confint.nls and see what information
of the usual `nls' object is actually used for the confidence interval
computation, and mimic this for the `nls.lm' output? at a (admittedly)
quick glance it seems that only the parameters, std. errors and the
fitted/residual values are needed, which should all be provided by nls.lm as
well. maybe one could even try to map the nls.lm results into a structure of
class `nls' (although this would not be a clean solution, probably) in order
to use `confint.nls'?
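along these lines, a crude first approximation (my own sketch, decidedly not equivalent to the profile-likelihood intervals of `confint.nls'): Wald-type intervals built from whatever estimates and standard errors the nls.lm fit reports. the helper name and example numbers are made up:

```r
## hedged sketch: Wald-type confidence intervals from parameter estimates
## and standard errors; df = residual degrees of freedom of the fit
wald_ci <- function(est, se, df, level = 0.95) {
  q <- qt(1 - (1 - level) / 2, df)
  cbind(lower = est - q * se, upper = est + q * se)
}

wald_ci(est = c(a = 1.20, b = 0.30), se = c(0.10, 0.05), df = 20)
```

these intervals are symmetric by construction and can be quite wrong when the sum-of-squares surface is strongly non-quadratic near the minimum, which is exactly where profiling pays off.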

 
 On Wed, 21 Feb 2007, Michael Dondrup wrote:
 
  Dear all,
  I would like to use the Levenberg-Marquardt algorithm for non-linear
  least-squares regression using function nls.lm. Can anybody help  me to
find a a way to compute confidence intervals on the  fitted
  parameters as it is possible for nls (using confint.nls, which does not
  work for nls.lm)?
 
  Thank you for your help
  Michael
 




Re: [R] Confidence interval for Levenberg-Marquardt fit

2007-02-21 Thread Joerg van den Hoff
On Wed, Feb 21, 2007 at 09:41:29AM -0600, Douglas Bates wrote:
 On 2/21/07, joerg van den hoff [EMAIL PROTECTED] wrote:
 On Wed, Feb 21, 2007 at 11:09:52AM +, Prof Brian Ripley wrote:
  Well, the algorithm used does not affect the confidence interval 
 (provided
  it works correctly), but what is nls.ml (presumably in some package you
  have not mentioned) and why would I want to use an old-fashioned
  algorithm?
 
 is'nt this a bit strong? in what respect do you consider 
 levenberg-marquardt
 (going back to the 19-forties, I think) as old-fashioned (especially
 in comparsion to the `nls' standard gauss-newton approach (both gentlemen
 seem to have done their major work a bit longer ago :-))?
 AFAICS levenberg-marquardt is generally appreciated for it's rapid
 convergence achieved by a smooth transition from an
 inverse-hessian approach to steepest descent. my own experience
 with non-linear least squares minimization using this algorithm
 are positive as well, but
 I have not tried out the levenberg-marquardt
 implementation in package `minpack.lm' (which originates from netlib.org)
 and don't know if it's good. but in any case there sure are implementations
 around (e.g. in the CERN MINUIT library) which have proven to be
 of high quality.
 
 The nls function provides implementations of the Gauss-Newton
 algorithm and the Golub-Pereyra algorithm and the NL2SOL algorithm.
 Check the possible values of the optional argument algorithm to
 nls().
I'm using `nls' rather frequently and I'm aware of this. maybe the formulation
"standard gauss-newton approach" was misleading (please attribute this to the
fact that I'm not a native speaker: I actually meant the default algorithm used
and was not implying that `nls' can only use some odd "standard"
procedure)
 
 If you will allow me to wax philosophical for a moment, I submit that
 there are three stages to any iterative optimization:
  - formulating initial values of the parameters
  - determining what step to take from iteration i to iteration i+1
  - determining when to declare convergence
 
 For the most part optimization research focuses on the second step.
 The first and third steps are also important.  One of the advantages
 of the implementation of the Gauss-Newton and Golub-Pereyra algorithms
 in nls is that they use a convergence criterion (based only on the
 current parameter values) not a termination criterion (based on the
 changes in the parameter values or the changes in the objective
 function or both).  See chapter 2 of Bates and Watts, Nonlinear
 Regression Analysis and Its Applications (Wiley, 1988) for a
 discussion of the particular convergence criterion used.
I did this when starting to use `nls' seriously. being a physicist,
not a mathematician, my attitude is nevertheless a bit more pragmatic
(or simple-minded, if you like): I fully appreciate that these
things need a thorough and rigorous mathematical foundation, but any
convergence criterion suits me just fine as long as (1) convergence
is actually achieved (i.e. the algorithm reports some results...),
(2) the final least squares sum is sufficiently near the
achievable minimum value for that model ("sufficiently near" meaning
that the parameter estimation errors are much larger than any further
changes in the parameters when wandering around any longer near the
current point in parameter space -- possibly finding a slightly
better minimum), and (3) the results are obtained in sufficiently
short time.  it's not obvious to me (I didn't encounter an
example case up to now), for instance, that a good termination
criterion is either more unreliable (yielding wrong answers way off
the true minimum) or less efficient (e.g. driving
the iteration count up) than the convergence criterion used in `nls'
(against which I do not have any objections -- apart from the minor
handicap that it can't converge on 'ideal' data, which now and then
comes in the way, as the help list shows)

an impression which I cannot prove with hard data (so it may be wrong) is
that the `nls' gauss-newton approach tends to be more sensitive to
suboptimal start values (sending it towards outer space in the worst
case...) than levenberg-marquardt driven approaches, and definitely
has on average a higher iteration count.

 
 Another point that is often lost in a discussion of various
 optimization algorithms is that one does not compare algorithms - one
 compares implementations of algorithms which, at the very least, must
 involve all of the above steps.
sure
 
 For many people in the nonlinear least squares field the NL2SOL
 algorithm by Dennis, Gay and Welch was considered the gold standard
 and David Gay's implementation is now available in nls.  This allows
 those who are interested to compare implementations of these
 algorithms.  Also, with R being an Open Source system anyone can add
 their own implementation or modify an existing implementation to try
 things out.
for one, I already stressed that I appreciate `nls

Re: [R] [[ gotcha

2007-01-16 Thread Joerg van den Hoff
Robin Hankin wrote:
 The following gotcha caught me off-guard just now.
 
 I have two matrices, a and b:
 
 
  a <- matrix(1,3,3)
  b <- matrix(1,1,1)
 
 (note that both a and b are matrices).
 
 I want them in a list:
 
   B <- NULL
   B[[1]] <- a
   B[[2]] <- b
   B
 [[1]]
      [,1] [,2] [,3]
 [1,]    1    1    1
 [2,]    1    1    1
 [3,]    1    1    1
 
 [[2]]
      [,1]
 [1,]    1
 
  
 
 This is fine.
 
 But swapping a and b over does not behave as desired:
 
 
   B <- NULL
   B[[1]] <- b
   B[[2]] <- a
 Error in B[[2]] <- a : more elements supplied than there are to replace
  
 
 
 
  The error is given because after B[[1]] <- b, the variable B is
  just a scalar and not a matrix (why is this?)
 
 What's the bulletproof method for assigning matrices to a list (whose  
 length is
 not known at runtime)?
 
 
not sure about bulletproof, but:

you should tell R that B is really intended to be a list in the first place:

B <- list()

the rest then works as you intended. whether the 'simplification' of your 1x1
matrix to a scalar in your example is canonical (and desirable) behaviour seems
a question for some of the experts (it's a bit reminiscent of the
`drop = TRUE' vs. `drop = FALSE' problem)
joerg
 
 
 
 
 
 
 --
 Robin Hankin
 Uncertainty Analyst
 National Oceanography Centre, Southampton
 European Way, Southampton SO14 3ZH, UK
   tel  023-8059-7743
 



Re: [R] sapply problem

2006-12-15 Thread Joerg van den Hoff
Sundar Dorai-Raj wrote:
 
 
 Joerg van den Hoff said the following on 12/14/2006 7:30 AM:
 I have encountered the following problem: I need to extract from
 a list of lists equally named compenents who happen to be 'one row'
 data frames. a trivial example would be:

  a <- list(list(df = data.frame(A = 1, B = 2, C = 3)),
            list(df = data.frame(A = 4, B = 5, C = 6)))

 I want the extracted compenents to fill up a matrix or data frame row 
 by row.
 the obvious thing to do seems:

  b <- sapply(a, "[[", "df")
  b <- t(b)

 now `b' looks all right:

 b
 class(b)

 but it turns out that all elements in this matrix are one element lists:

 class(b[1,1])

 which prevents any further standard processing of `b' (like 
 `colMeans', e.g.)

  question 1: is there a straightforward way to enforce that `b' contains
  simple numbers as elements right from the start (instead of something 
  like
  apply(b, 1:2, "class<-", "numeric") afterwards)?

 
 Try this:
 
  a <- list(list(df = data.frame(A = 1, B = 2, C = 3)),
            list(df = data.frame(A = 4, B = 5, C = 6)))
  b <- do.call("rbind", sapply(a, "[", "df"))
  b
 
 
 question 2: should not sapply do this further 'simplification' anyway 
 in a situation
 like this (matrix elements turn out to be one-element lists)?

 
  I think it does as much as it knows how. I think you might believe 
  that matrix elements can only contain numeric values. This is not a 
  valid assumption. Take this example:
  
    a <- list(1)
    b <- list(2)
    (m <- matrix(c(a, b), 2, 1))
         [,1]
    [1,] 1
    [2,] 2
    class(m[1, 1])
  [1] "list"
    class(m[2, 1])
  [1] "list"
 
 HTH,
 
 --sundar
 
 regards

 joerg


thanks for the help to everybody. the proposal of sundar seems the most
concise one (whether it is also efficient I'll see with larger data sets ...).

with regard to my question no. 2: sure, I realize that I _can_ have a
sensible matrix consisting of elements that are lists. but with regard to
`sapply' and its intention (as I understand it) to simplify as much as is
sensibly possible (`sapply' usually unlists everything it can), I think it
would be desirable if `sapply' detected this admittedly very special case:
a matrix of one-element lists whose sole list elements are atomic of
length 1 (which was my case and is exactly your example above).

of course, I don't know whether such a special treatment (probably meaning
further recursive tests in `sapply') would slow down `sapply' significantly
in general (which would be an argument against such a change) or maybe it
is against the actual purpose of sapply -- I'm not sure.
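for completeness, the two routes discussed in this thread can be put side by side (a small sketch; `m2' and the other variable names are mine):

```r
a <- list(list(df = data.frame(A = 1, B = 2, C = 3)),
          list(df = data.frame(A = 4, B = 5, C = 6)))

# route 1 (sundar): rbind the one-row data frames directly;
# the columns are then plain numeric vectors
b <- do.call(rbind, lapply(a, `[[`, "df"))
colMeans(b)

# route 2: keep the sapply/t() approach and force the last
# simplification step by hand (matrix of one-element lists -> numeric)
m <- t(sapply(a, `[[`, "df"))
m2 <- apply(m, 2, unlist)
```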

anyway, thanks again,

joerg

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] sapply problem

2006-12-14 Thread Joerg van den Hoff
I have encountered the following problem: I need to extract from
a list of lists equally named components which happen to be 'one row'
data frames. a trivial example would be:

a <- list(list(df = data.frame(A = 1, B = 2, C = 3)),
          list(df = data.frame(A = 4, B = 5, C = 6)))

I want the extracted components to fill up a matrix or data frame row by row.
the obvious thing to do seems:

b <- sapply(a, `[[`, "df")
b <- t(b)

now `b' looks all right:

b
class(b)

but it turns out that all elements in this matrix are one-element lists:

class(b[1,1])

which prevents any further standard processing of `b' (like `colMeans', e.g.)

question 1: is there a straightforward way to enforce that `b' contains
simple numbers as elements right from the start (instead of something like
apply(b, 1:2, "class<-", "numeric") afterwards)?

question 2: should not sapply do this further 'simplification' anyway in a
situation like this (matrix elements turn out to be one-element lists)?

regards

joerg

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Delete all dimnames

2006-12-14 Thread Joerg van den Hoff
Serguei Kaniovski wrote:
 Hello, how can I get rid of all dimnames so that:
 $amat
 Var3 Var2 Var1 
 8 1111 1 1 1 1 0 0 0 0 0 0 0
 7 1110 1 0 0 0 1 0 0 0 0 0 0
 6 1101 0 1 0 0 0 1 0 0 0 0 0
 5 1100 0 0 0 0 0 0 1 0 0 0 0
 4 1011 0 0 1 0 0 0 0 1 0 0 0
 3 1010 0 0 0 0 0 0 0 0 1 0 0
 2 1001 0 0 0 0 0 0 0 0 0 1 0
 1 1000 0 0 0 0 0 0 0 0 0 0 1
 
 is displayed with [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] in rows, and the 
 same in columns. The matrix was generated using apply
 



dimnames(mat) <- NULL

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Better way to change the name of a column in a dataframe?

2006-12-14 Thread Joerg van den Hoff
Ben Fairbank wrote:
 Hello R users --
 
  
 
 If I have a dataframe such as the following, named frame with the
 columns intended to be named col1 through col6,
 
  
 
 frame
 
       col1 col2 cmlo3 col4 col5 col6
  [1,]    3   10     2    6    5    7
  [2,]    6    8     4   10    7    1
  [3,]    7    5     1    3    1    8
  [4,]   10    6     5    4    9    2
 
  
 
 and I want to correct or otherwise change the name of one of the
 columns, I can do so with 
 
  
 
 dimnames(frame)[[2]][which(dimnames(frame)[[2]] == "cmlo3")] <- "col3"
 
  
 
 which renames the offending column:
 
  
 
 frame
 
       col1 col2 col3 col4 col5 col6
  [1,]    3   10    2    6    5    7
  [2,]    6    8    4   10    7    1
  [3,]    7    5    1    3    1    8
  [4,]   10    6    5    4    9    2
 
  
 
 This seems cumbersome and not very intuitive.  How can one accomplish
 this more simply?
 
  
well I would simply use

names(frame)[3] <- 'col3'

(supposing you know the column number of your offending column anyway).
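if only the (old) name is known, the same idea works with a logical index instead of a fixed position (a sketch with made-up data):

```r
frame <- data.frame(col1 = 1:4, col2 = 5:8, cmlo3 = 9:12)   # toy stand-in

# rename by matching the offending name, so its position need not be known
names(frame)[names(frame) == "cmlo3"] <- "col3"
names(frame)
```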

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to generate the random numbers uniformly distributed on the unit disc?

2006-10-09 Thread Joerg van den Hoff
S.Q. WEN wrote:
 Hi,
 I want to get  random number which is uniformly distributed  on the unit
 disc.
 How can I do that with R?
 
 
 Best wishes,
 WAN WAN
 
   [[alternative HTML version deleted]]
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

the following function should put N uniformly distributed points into 
a disc of radius R (and plot the points in the (phi,rho) and (xx, yy) 
coordinate systems).

disc <- function (N = 10^4, R = 1) {
    phi <- runif(N) * 2 * pi
    rho <- R * sqrt(runif(N))
    xx <- rho * cos(phi)
    yy <- rho * sin(phi)
    layout(1:2)
    plot(phi, rho, pch = ".")
    plot(xx, yy, pch = ".", asp = 1)
    layout(1)
    invisible(list(phi = phi, rho = rho))
}

the trick is to transform the uniform distribution along `rho' so that the
points get denser towards the rim, in such a way that the 'smearing out' of
the points over the growing circumference is compensated.
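this can be checked numerically: the disc of half the radius covers a quarter of the area, so it should receive about a quarter of the points (a quick sketch along the lines of the function above):

```r
set.seed(1)                      # fixed seed, for reproducibility
N <- 1e5; R <- 1
rho <- R * sqrt(runif(N))        # radial coordinate as in disc()

# fraction of points inside radius R/2; the area ratio is (1/2)^2 = 1/4
inner <- mean(rho <= R / 2)
inner
```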

regards

joerg

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] extremely slow recursion in R?

2006-08-25 Thread Joerg van den Hoff
Thomas Lumley wrote:
 On Thu, 24 Aug 2006, Jason Liao wrote:
 
 I recently coded a recursion algorithm in R and it ran a few days
 without returning any result. So I decided to try a simple case of
 computing binomial coefficients using the recursive relationship

 choose(n,k) = choose(n-1, k) + choose(n-1, k-1)

 I implemented in R and Fortran 90 the same algorithm (code follows).
 The R code finishes in 31 minutes and the Fortran 90 program finishes in 6
 seconds. So the Fortran program is 310 times faster. I thus wondered if
 there is room for speeding up recursion in R. Thanks.

 
 Recursive code that computes the same case many times can often be sped up 
 by memoization, eg
 
 memo <- new.env(hash = TRUE)
 chewse <- function(n, k) {
     if (n == k) return(1)
     if (k == 1) return(n)

     if (exists(paste(n, k), memo, inherits = FALSE))
         return(get(paste(n, k), memo))
     rval <- chewse(n - 1, k) + chewse(n - 1, k - 1)
     assign(paste(n, k), rval, envir = memo)
     return(rval)
 }
 
 This idea was discussed in an early Programmers' Niche article by Bill 
 Venables in R News.
 
 However, I'm surprised that you're surprised that compiled Fortran 90 is 
 310 times faster than interpreted R.  That would be about what I would 
 expect for code that isn't making use of vectorized functions in R.
 
 
   -thomas


maybe someone's interested:
I made the same observation of seemingly very slow recursion recently: 
just for fun I used the (in)famously inefficient

fib <- function(n = 1) {
    if (n < 2)
        fn <- 1
    else
        fn <- fib(n - 1) + fib(n - 2)
    fn
}

for calculating the fibonacci numbers and compared `fib(30)' (about 
1.3e6 recursive function calls ...) to some other languages (times in sec):

language   time
========   =====
C          0.034  (compiled, using integers)
Ocaml      0.041  (compiled, using integers)
Ocaml      0.048  (interpreted, using integers)
C          0.059  (compiled, using floats)
Lua        1.1
ruby       3.4
R         21
octave   120

apart from octave (which seems to have a _real_ issue with recursive 
function calls), R is by far the slowest in this list and still a factor 
7-20 slower than the interpreter based Lua and ruby. the speed loss 
compared to C or Ocaml is about a factor of 350-600 here (Ocaml keeps
its speed more or less in this simple example even in 'script mode',
which is remarkable, I think; it usually loses only a factor of
about 7 or so in script mode compared to the compiled variant).

for the specialists the bad performance of R in this situation might not 
be surprising, but I was not aware that recursive function calls are 
seemingly as expensive as explicit loops (where the execution time ratio 
of R to C again is of the same order, i.e. =~ 400).
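as an aside, thomas' memoization idea applies directly to this example; a minimal sketch (the function name `fib_memo' is mine):

```r
memo <- new.env(hash = TRUE)

# cached variant: every fib(n) is computed only once, so the number of
# recursive calls grows linearly in n instead of exponentially
fib_memo <- function(n) {
    key <- as.character(n)
    if (exists(key, envir = memo, inherits = FALSE))
        return(get(key, envir = memo))
    val <- if (n < 2) 1 else fib_memo(n - 1) + fib_memo(n - 2)
    assign(key, val, envir = memo)
    val
}

fib_memo(30)
```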

of course, such comparisons don't make too much sense: the reason to use 
R will definitely not be its speed (which, luckily, often does not 
matter), but the combination of flexible language, the number of 
available libraries and the good 2D graphics.



joerg

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Intro to Programming R Book

2006-08-24 Thread Joerg van den Hoff
Raphael Fraser wrote:
 I am new to R and am looking for a book that can help in learning to
 program in R. I have looked at the R website suggested books but I am
 still not sure which book best suite my needs. I am interesting in
 programming, data manipulation not statistics. Any suggestions?
 
 Raphael
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


S Programming by Venables and Ripley (Springer) seems the only(?) one 
around targeting the language, not its applications. luckily, it's very 
good. for the rest (things specific to R, e.g. package development, 
namespaces etc.) I think one can only resort to the R manuals.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] nls convergence problem

2006-08-16 Thread Joerg van den Hoff
Earl F. Glynn wrote:
 Berton Gunter [EMAIL PROTECTED] wrote in message 
 news:[EMAIL PROTECTED]
   Or, maybe there's something I don't understand about the
 algorithm being used.
 Indeed! So before making such comments, why don't you try to learn about 
 it?
 Doug Bates is a pretty smart guy,  and I think you do him a disservice 
 when
 you assume that he somehow overlooked something that he explicitly warned
 you about. I am fairly confident that if he could have made the problem go
 away, he would have. So I think your vent was a bit inconsiderate and
 perhaps even intemperate. The R Core folks have produced a minor miracle
 IMO, and we should all be careful before assuming that they have 
 overlooked
 easily fixable problems. They're certainly not infallible -- but they're a
 lot less fallible than most of the rest of us when it comes to R.
 
 I meant no disrespect to Doug Bates or any of the R Core folks. I thought 
 what I wrote had a neutral tone and was respectful.  I am sorry if anyone 
 was offended by my comments and suggestions.  I am certainly thankful for 
 all the hard work that has gone into developing R.
 
 efg
 

well, just a feedback to that: of course the tone of your mail was by no 
means inadequate (at least douglas bates did not object...). the 
tendency on this list to rather harshly rebuke people for some kind of 
(real or imagined) misconduct and to 'defend' R against 'attacks' is 
counterproductive and unnecessary. it goes without saying that the 
people 'behind' R can not and will not (and are not expected to) change 
the code after each mail on the help list which raises some question.

and concerning your `nls' question: sure, the noise requirement is a 
pitfall in the beginning, but afterwards it's irrelevant: you don't fit 
noise free data in real life (in the sense that real data never follow 
you model exactly). and, sure, the convergence decision could be altered 
(given enough time and knowledge). whether convergence failure on exact 
data is a bug or a feature is a matter of taste, probably.

getting better access to the trace output and especially access to 
intermediate pre-convergence values of the model parameters (this would 
'solve' your problem, too) would really be an improvement, in my mind 
(I think this is recognized by d. bates, but simply way down his 'to do' 
list :-().


joerg

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] colors on graph

2006-07-14 Thread Joerg van den Hoff
Jim Lemon wrote:
 COMTE Guillaume wrote:
 Hy all,

  

 I need to draw something in 2 dimension that has 3 dimension, the choice
 has been made to use colors to display the third dimension into the
 graph.

  

 Has someone done something like that, i can't figure out how to
 parametize the colors.

 Hi Guilllaume,
 
 Have a look at color.scale in the plotrix package.
 
 Jim
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


?image
?filled.contour

and the See Alsos therein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] least square fit with non-negativity constraints for absorption spectra fitting

2006-07-14 Thread Joerg van den Hoff
Xu, Xiuli (NIH/NHLBI) [E] wrote:
 I would really appreciate it if someone can give suggestions on how to
 do spectra fitting in R using ordinary least square fitting and
 non-negativity constraints. The lm() function works well for ordinary
 least square fitting, but how to specify non-negativity constraints? It
 wouldn't make sense if the fitting coefficients coming out as negative
 in absorption spectra deconvolution.
 
 Thanks. 
 
 Xiuli
 

I'm not sure, but I would presume that constraints cannot be imposed on 
a linear least squares fit. maybe someone can correct me.

if you move to `nls', i.e. non-linear least squares fitting, you should 
be able to transform your model function. say you want some parameter 
`a' to stay positive. then you could e.g. substitute

`a = exp(b)' in the model function, fit `b' without constraints in 
the new model, and calculate `a' afterwards (which obviously is 
guaranteed now to be positive). note that error estimates would then 
have to be computed by gaussian error propagation from `b' to `a'.
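a minimal sketch of this substitution with `nls' (the data and the model are invented for illustration):

```r
set.seed(42)
x <- seq(0, 5, length.out = 50)
y <- 2 * exp(-0.7 * x) + rnorm(50, sd = 0.02)   # true amplitude a = 2

# fit b = log(a) without constraints; a = exp(b) is positive by construction
fit <- nls(y ~ exp(b) * exp(-k * x), start = list(b = 0, k = 1))
a.hat <- exp(coef(fit)[["b"]])                  # back-transformed estimate
a.hat
# gaussian error propagation: se(a) is approximately exp(b.hat) * se(b.hat)
```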


joerg

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] simple question about variables....

2006-07-13 Thread Joerg van den Hoff
Stéphane Cruveiller wrote:
 Dear R users,
 
 I have a simple question on variable manipulation.
 Imagine I have an object OBJ that has toto as one of its variables.
 I would like to understand why if I do
 
   varname - toto
 
  OBJ$varname returns no results
 
 whereas
 
   OBJ[varname]returns the column entitled 
 toto
 
 
 Thanks for your help.
 
 Stéphane.
 

because `varname' is evaluated (substituted by its value) only in the 
second expression: OBJ[varname] becomes

OBJ["toto"]

which is valid and returns the sublist. OBJ$varname, on the other hand, 
looks literally for a component named varname -- '$' does not allow 
computed indices, so you'd need the literal `OBJ$toto' there. read ` ?"$" ':

...Both '[[' and '$' select a single element of the list.  The main
   difference is that '$' does not allow computed indices, whereas
   '[[' does.  'x$name' is equivalent to 'x[["name"]]'...


note, too, the difference between `[' (sublist) and `[[' (single element 
extraction)
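a small sketch of the difference (the object and its contents are made up):

```r
OBJ <- list(toto = 1:3, other = "x")   # toy object
varname <- "toto"

OBJ[[varname]]   # computed index via `[[`: returns the vector 1 2 3
OBJ[varname]     # sublist of length one -- still a list
OBJ$varname      # looks literally for a component named "varname": NULL
```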

joerg

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Optional variables in function?

2006-07-03 Thread Joerg van den Hoff
jim holtman wrote:
 ?missing
 
 On 7/2/06, Jonathan Greenberg [EMAIL PROTECTED] wrote:
 I'm a bit new to writing R functions and I was wondering what the best
 practice for having optional variables in a function is, and how to test
 for optional and non-optional variables?  e.g. if I have the following
 function:

 helpme - function(a,b,c) {


 }

 In this example, I want c to be an optional variable, but a and b to be
 required.  How do I:
 1) test to see if the user has inputted c
 2) break out of the function of the user has NOT inputted a or b.

if (missing(c))
    stop("need 'c'")

or

if (missing(c))
    return()

depending on your problem it might be better to provide sensible default 
values for a, b, c in the function definition

helpme <- function(a = A, b = B, c = C) {...}

instead of enforcing that every argument is specified.
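putting both pieces together (a sketch; the fallback value for `c' is arbitrary):

```r
helpme <- function(a, b, c) {
    if (missing(a) || missing(b))      # break out: 'a' and 'b' are required
        stop("both 'a' and 'b' must be given")
    if (missing(c))                    # 'c' is optional
        c <- 0
    a + b + c
}

helpme(1, 2)       # c falls back to 0
helpme(1, 2, 3)
```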

joerg



 Thanks!

 --j

 --

 Jonathan A. Greenberg, PhD
 NRC Research Associate
 NASA Ames Research Center
 MS 242-4
 Moffett Field, CA 94035-1000
 Phone: 415-794-5043
 AIM: jgrn3007
 MSN: [EMAIL PROTECTED]

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide!
 http://www.R-project.org/posting-guide.html

 
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Help needed understanding eval,quote,expression

2006-06-29 Thread Joerg van den Hoff
Prof Brian Ripley wrote:
 You are missing eval(parse(text=)). E.g.
 
 x <- list(y = list(y1 = "hello", y2 = "world"), z = list(z1 = "foo", z2 = "bar"))
 (what do you mean by the $ at the start of these lines?)
 eval(parse(text = "x$y$y1"))
 [1] "hello"
 
 However, bear in mind
 
 fortune("parse")
 
 If the answer is parse() you should usually rethink the question.
 -- Thomas Lumley
    R-help (February 2005)
 
 In your indicated example you could probably use substitute() as 
 effectively.
 
 
 On Wed, 28 Jun 2006, [EMAIL PROTECTED] wrote:
 
 I am trying to build up a quoted or character expression representing a
 component in a list  in order to reference it indirectly.
 For instance, I have a list that has data I want to pull, and another list
 that has character vectors and/or lists of characters containing the names
 of the components in the first list.


 It seems that the way to do this is as evaluating expressions, but I seem
 to be missing something.  The concept should be similar to the snippet
 below:


 For instance:

 $x = list(y=list(y1="hello",y2="world"),z=list(z1="foo",z2="bar"))
 $y = quote(x$y$y1)
 $eval(y)
 [1] "hello"


 but, I'm trying to accomplish this by building up y as a character and
 then evaluating it, and having no success.

 $y1=paste("x$y$","y1",sep="")
 $y1
 [1] "x$y$y1"


 How can I evaluate y1 as I did with y previously?  or can I?


 Much Thanks !



if I understand you correctly you can achieve your goal much more easily 
than with eval, parse, substitute and the like:

x <- list(y = list(y1 = "hello", y2 = "world"), z = list(z1 = "foo", z2 = "bar"))

s1 <- 'y'
s2 <- 'y1'

x[[s1]][[s2]]

i.e. using `[[' instead of `$' for list component extraction allows 
characters to be used for indexing (in other words: x$y == x[['y']])

joerg

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R on MAC OS X

2006-06-27 Thread Joerg van den Hoff
Sara Mouro wrote:
 
 Dear all,

 I have been usig R for some time, but now I have a MAC instead of a  
 PC, am I am having problems in reading files...


 I have tried:
 Data <- read.table("Users/SaraMM/PhD/Analises-LitterBags/Dados-Litter.txt",
                    head = T)

 but it said:
 Error in file(file, "r") : unable to open connection
 In addition: Warning message:
 cannot open file 'Users/SaraMM/PhD/Analises-LitterBags/Dados-Litter.txt',
 reason 'No such file or directory'


 I guess that might be simple...
 ... related to the fact that on PCs we do C:/
 but on the Mac, the only path I found was that one:
 Users/SaraMM(...)...

 Can someone please help me on this?

 Wishes,
 Sara
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


the (absolute) path to your file has to start with `/', i.e. your file 
should be

/Users/SaraMM/PhD/Analises-LitterBags/Dados-Litter.txt
  ^
  |

the leading slash was missing in your read.table call (in the original 
way it is a relative path name and would only work, if your current 
directory within R were `/' itself)
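a self-contained illustration (using a temporary file instead of the real data, so the file name here is made up): `tempdir()' returns an absolute path, i.e. one starting with `/' on unix-like systems:

```r
path <- file.path(tempdir(), "Dados-Litter.txt")   # absolute path
write.table(data.frame(A = 1:3, B = 4:6), path)    # create a small test file
file.exists(path)                                  # the path resolves
dat <- read.table(path, header = TRUE)
```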

joerg

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] About filled.contour function

2006-06-26 Thread Joerg van den Hoff
Prodromos Zanis wrote:
 Dear R-projects users
 
 I would like like to ask if there is any way  to produce a multipanel plot
 with the filled.contour function. In the help information of filled
 contour it is said that this function is restricted to a full page
 display.
 
 With kind regards
 
 Prodromos Zanis
 
 
 
 
`filled.contour' sets up a 2-by-1 grid (for colormap and image) using 
the `layout' function, hence a prior setup of a multipanel layout would 
be (and is) destroyed by the `filled.contour' call.

you might edit a copy of `filled.contour' to modify the `layout' call 
(e.g. to set up a 2-by-2 grid where only the upper row is used by the 
plots generated in filled.contour). I tried this very briefly before 
answering, but it seems that only 'within' the filled.contour copy one 
has access to the other subplots; after return from the function, plot 
focus is again in subplot no. one -- no time to check this further. in 
any case, along this line one should be able to enforce multiple 
subplots where 2 of them are used by the modified `filled.contour'.
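if the color keys are dispensable, a simple workaround is to bypass `filled.contour' altogether and place plain `image' panels in an ordinary multipanel layout (a sketch with invented data):

```r
z1 <- outer(1:20, 1:20, function(i, j) sin(i / 3) * cos(j / 3))

op <- par(mfrow = c(1, 2))   # ordinary 1-by-2 layout, left intact by image()
image(z1, main = "panel 1")
image(-z1, main = "panel 2")
par(op)
```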

joerg

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Basic package structure question

2006-06-23 Thread Joerg van den Hoff
Gabor Grothendieck wrote:
 On 6/22/06, Duncan Murdoch [EMAIL PROTECTED] wrote:
 Jay Emerson wrote:
 At the encouragement of many at UseR, I'm trying to build my first real
 package. I have no C/Fortran code, just plain old R code, so it shouldn't be
 rocket science.  On a Linux box, I used package.skeleton() to create a basic
 package containing just one hello world type of function.  I edited the
 DESCRIPTION file, changing the package name appropriately.  I edited the
 hello.Rd file.  Upon running R CMD check hello, the only warning had to do
 with the fact that src/ was empty (obviously I had no source in such a
 simple package).  I doubt this is a problem.

 I was able to install and use the package successfully on the Linux system
 from the .tar.gz file, so far so good!  Next, on to Windows, where the
 problem arose:

 I created a zip file from inside the package directory: zip -r ../hello.zip
 ./*


 Which package directory, the source or the installed copy?  I think this
 might work in the installed copy, but would not work on the source.
 It's not the documented way to build a binary zip, though.
 When I moved this to my Windows machine and tried to install the package, I
 received the following error:


 utils:::menuInstallLocal()

 Error in unpackPkg(pkgs[i], pkgnames[i], lib, installWithVers) :
 malformed bundle DESCRIPTION file, no Contains field

 I only found one mention of this in my Google search, with no reply to the
 thread.  The Contains field appears to be used for bundles, but I'm trying
 to create a package, not a bundle.  This leads me to believe that a simple
 zipping of the package directory structure is not the correct format for
 Windows.

 Needless to say, there appears to be wide agreement that making packages
 requires precision, but fundamentally a package should (as described in the
 documentation) just be a collection of files and folders organized a certain
 way.  If someone could point me to documentation I may have missed that
 explains this, I would be grateful.

 I think the organized in a certain way part is actually important.
 Using R CMD install --build is the documented way to achieve this.  It's
 not trivial to do this on Windows, because you need to set up a build
 environment first, but it's not horribly difficult.

 Duncan Murdoch
 Regards,

 Jay
 
 One idea that occurred to me in reading this would be to have a server
 that one can send a package to and get back a Windows build to
 eliminate having to set up a development environment.  Not sure if
 this is feasible, particularly security aspects, but if it were it would
 open up package building on Windows to a larger audience.
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

just to confirm duncan murdochs remark:

our Windows machines lack proper development environments (mainly 
missing perl is the problem for pure R-code packages, I believe?) and we 
bypass this (for pure R-code packages only, of course) by

1.) install the package on the unix machine into the desired R library
2.) zip the _installed_ package (not the source tree!) found in the R 
library directory
3.) transfer this archive to the Windows machine
4.) unzip directly into the desired library destination

this procedure up to now has always worked, including properly installed 
man pages (text + html), and I hope this remains the case in the future...

joerg van den hoff

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Basic package structure question

2006-06-23 Thread Joerg van den Hoff
Prof Brian Ripley wrote:
 On Fri, 23 Jun 2006, Joerg van den Hoff wrote:
 
 just to confirm duncan murdochs remark:

 our Windows machines lack proper development environments (mainly
 missing perl is the problem for pure R-code packages, I believe?) and we
 bypass this (for pure R-code packages only, of course) by

 1.) install the package on the unix machine into the desired R library
 2.) zip the _installed_ package (not the source tree!) found in the R
   library directory
 3.) transfer this archive to the Windows machine
 4.) unzip directly into the desired library destination

 this procedure up to now always worked including properly installed
 manpages (text + html (and I hope this remains the case in the future...)
 
  From README.packages:
 
   If your package has no compiled code it is possible that zipping up the
   installed package on Linux will produce an installable package on
   Windows.  (It has always worked for us, but failures have been reported.)
 
 so this is indeed already documented, and does not work for everyone.
 
albeit sparsely...

AFAIKS it's not in the `R Extensions' manual at all. If(!) this approach 
could be made an 'official' workaround and explained in the manual, that 
would be good.

I'd appreciate if someone could tell me:

are the mentioned failures confirmed or are they UFOs?

if so, are (the) reasons for (or circumstances of) failure known (I'm 
always afraid of walking on thin ice when using this transfer strategy)?

what does produce an installable package on Windows in the README text 
mean? I presume it does not mean that R CMD INSTALL (or the Windows 
equivalent) does work? if it really means unzip the package on the 
Windows machine into the library directory, shouldn't the text be altered?

and I forgot to mention in my first mail: I use the described procedure 
for transfer from a non-Intel apple machine under MacOS (a FreeBSD 
descendant) to Windows (and even (unnecessarily, I know) to 
Sun/Solaris). so the strategy is not restricted to transfer from 
Linux to Windows.

and it is useful (if it is not 'accidental' that it works at all): in 
this way one can very easily keep in sync several local incarnations of 
a package across platforms (if network access to a common disk is not 
the way to go): I simply `rsync' the affected (local, R-code only) 
library directories.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] problem with code/documentation mismatch

2006-06-23 Thread Joerg van den Hoff
I have a package with a division method for special objects of class 
RoidataList, i.e. the function is named `/.RoidataList'.
documentation for this is in the file Divide.RoidataList.


R CMD CHECK complains with:

=cut==
* checking for code/documentation mismatches ... WARNING
Functions/methods with usage in documentation object 
'Divide.RoidataList' but not in code:
   /

* checking Rd \usage sections ... WARNING
Objects in \usage without \alias in documentation object 
'Divide.RoidataList':
   /
=cut==

the `usage' section in the Rd file reads

=cut==
\usage{
x/y
}
=cut==

which, of course, is the desired way to use the function.
what am I doing wrong, i.e. how should I modify the Rd file?
maybe obvious, but not to me.

joerg van den hoff

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Function hints

2006-06-20 Thread Joerg van den Hoff
Jonathan Baron wrote:
 On 06/19/06 13:13, Duncan Murdoch wrote:
 `help.search' does not allow full text search in the manpages (I can
 imagine why (1000 hits...), but without such a thing google, for
 instance, would probably not be half as useful as it is, right?) and
 there is no sorting by relevance in the `help.search' output, I think.
 how this sorting could be achieved is a different question, of course.
 You probably want RSiteSearch("keyword", restrict = "functions") (or even
 without the restrict part).
 
 Yes.  The restrict part will speed things up quite a bit, if you
 want to restrict to functions.
 
 Or, alternatively, you could use Namazu (which I use to generate
 what RSiteSearch provides) to generate an index specific to your
 own installed functions and packages.  The trick is to cd to the
 directory /usr/lib/R/library, or the equivalent, and then say
 
 mknmz -q */html
 
 which will pick up the html version of all the man pages
 (assuming you have generated them, and I have no idea whether
 this can be done on Windows).  To update, say
 
 mknmz --update=. -q */html
 
 Then make a bookmark for the Namazu search page in your browser,
 as a local file.  (I haven't given all the details.  You have to
 install Namazu and follow the instructions.)
 
 Or, if you have a web server, you could let Google do it for
 you.  But, I warn you, Google will fill up your web logs pretty
 fast if you don't exclude it with robots.txt.  I don't let it
 search my R stuff.
 
 I think that Macs and various Linux versions also have other
 alternative built-in search capabilities, but I haven't tried
 them.  Beagle is the new Linux search tool, but I don't know what
 it does.
 
 Jon

thanks for these tips. I was not aware of the `RSiteSearch' function 
(I did know of the existence of the web sites, though) and this helps, 
but of course it is dependent on web access (off-line laptop 
usage...) and does not know of 'local' (non-CRAN) packages (and knows of 
maybe too many contributed packages, which I might not want to 
consider for one reason or the other).

thanks also for the hint on `Namazu'. maybe I will do as advised to get an 
index which is aware of my local configuration and private packages. 
(under MacOS there is a very good and fast full-text search engine, but 
it cannot be told to only search the R documentation, for instance, so 
one gets lots of other hits as well.)

what I really would love to see is an improved help.search():
on r-devel I found a reference to the \concept tag in .Rd files and the 
fact that it is rarely used (again: I was not aware of this :-( ...), 
which might serve as a keyword container suitable for improving 
help.search() results. what about changing the syntax here to something like
\concept {
keyword = score,
keyword = score
...
}
where score would be restricted to a small range of values (say, 1-3 or 
1-5). if package maintainers then chose a handful of sensible 
keywords (and scores) for a package and its functions, one could expect 
improved search results. this might be a naive idea, but could a 
sort-by-relevance in the help.search() output profit from this?

to make it short: I'm not happy with the output, for instance, of
help.search("fitting")   #1
vs.
help.search("linear fitting")#2
vs.
help.search("non-linear fitting")#3
I somehow feel that `lm' and `nls' should both be found in the first 
search and that they should be near the top of the lists when they are 
found.

but `lm' is found only in #1 (near the bottom of the list) and `nls' not 
at all (which is really bad). this is partly a problem, of course, of 
inconsistent nomenclature in the manpages, but also due to the fact that 
help.search() only accepts single phrases as pattern (and maybe the 
absence of concept keywords including a score?)
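for reference, the two search functions mentioned in this thread, in runnable form (a sketch; the actual hit lists will of course vary with the R version and the set of installed packages):

```r
## local documentation search; agrep = FALSE turns off fuzzy matching
res <- help.search("fitting", agrep = FALSE)

## the matched help pages (package, topic, title):
head(res$matches)

## RSiteSearch() instead queries the online archives
## (needs web access; opens the hit list in a browser):
## RSiteSearch("non-linear fitting", restrict = "functions")
```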



Re: [R] Function hints

2006-06-20 Thread Joerg van den Hoff
hadley wickham wrote:
 what I really would love to see would be an improved help.search():
 on r-devel I found a reference to the /concept tag in .Rd files and the
 fact that it is rarely used (again: I was not aware of this :-( ...),
 which might serve as keyword container suitable for improving
 help.search() results. what about changing the syntax here to 
 something like
 \concept {
 keyword = score,
 keyword = score
 ...
 }
 where score would be restricted to a small range of values (say, 1-3 or
 1-5). if package maintainer then would choose a handful of sensible
 keywords (and scores) for a package and its functions one could expect
 improved search results. this might be a naive idea, but could a
 sort-by-relevance in the help.search() output profit from this?
 
 This is not something I think you can solve automatically.  Good
 keywording requries a lot of effort, and needs to be consistent to be
 useful.  The only way to achieve consistency is to have only one person
I was thinking of manual keywording (by the package authors, nobody 
else!) as a means to give the search engine (help.search()) reasonable 
information, including a (subjective) relevance score for each keyword.
of course, the problem is the same as with every (especially permuted) 
index: to find the best compromise between indexing next to nothing and 
indexing everything (the best probably meaning to index comprehensively 
but not excessively, with reasonable index terms) in the documents at hand.
sure, consistency could not be enforced, but it's not consistent right 
now, simply because the real \keyword tag is far too restrictive for 
indexing purposes (only a handful of predefined allowed keywords) and 
otherwise only the name/alias and title in the Rd files seem to be 
searched (and here the author must be really aware that these fields are 
at the moment the ones which must contain the relevant 'keywords' if the 
function is to be found by help.search -- this sometimes imposes 
rather artificial constraints on the wording, especially if 
you try to include some general keyword in the title of a very 
specialized function).

looking at the example I gave
(help.search("fitting") etc.) it's quite clear that `nls' simply is not 
found because 'fitting' does not occur in the title, but I trust that, if 
asked to provide, say, three keywords, one of them would contain "fit" 
or "fitting". I mean, every scientific journal asks you to do just this: 
provide some free-text keywords which you think are relevant for the 
paper. there are usually no restrictions/directives, but the purpose 
(to categorize the paper a bit) is served quite well.

and maybe the \concept tag really is meant for something different, I'm 
not sure. what I have in mind really is similar to providing index terms 
(plus scores to guide `help.search' in sorting). to stay with the `nls' 
example:
\concept {
non-linear fitting = 4
non-linear least-squares = 5
non-linear models = 3
parameter estimation = 2
gauss-newton = 1
}
would probably ensure that `nls' is usually found (if this 
syntax were allowed). apart from the scores (which would be nice, I 
think), my main point is that extensive use of \concept (or a new tag 
`\index', for instance, if \concept's purpose is actually different -- 
I'm not sure) should be pushed to get better hits from help.search().

I personally have decided to start using the \concept tag extensively in 
its present form for our local .Rd files, to inject a sufficient number 
of relevant free-text keywords into help.search().
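in its present (score-less) form, that amounts to entries like these in the .Rd file (a sketch; the keywords are my own choice):

```
\concept{non-linear fitting}
\concept{non-linear least-squares}
\concept{parameter estimation}
```

help.search() matches such concept entries in addition to the name, alias, and title fields.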


joerg
 keywording (difficult/expensive), or exhaustively document the process
 of keywording and then require all package authors to read and use
 (impossible).
 
 Hadley



Re: [R] $

2006-06-20 Thread Joerg van den Hoff
Davis, Jacob B. wrote:
 If object is user defined is:
 
  
 
 object$df.residual
 
  
 
 the same thing as
 
  
 
 df.residual(object)
 
  
 
 This is my first time to encounter the $ sign in R, I'm new.  I'm
 reviewing summary.glm and in most cases it looks as though the $ is
 used to extract some characteristic/property of the object, but I'm not
 positive.
 
  
 
 Thanks
 
 ~~
 
  
 
 Jacob Davis
 
 Actuarial Analyst
 
 
   [[alternative HTML version deleted]]
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

`$' is for list component extraction. this is really very basic and, for 
once (I usually don't like this answer...), looking at the very first 
pages of the very first manual would not be so bad an idea:


http://cran.r-project.org/  -- Manuals -- An Introduction to R
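to the original question: for standard model objects the two forms give the same value, but the accessor function is the recommended interface (a minimal sketch using a base R dataset):

```r
fit <- glm(mpg ~ wt, data = mtcars)

## `$' extracts a component from the (list-like) fit object ...
fit$df.residual

## ... the generic accessor returns the same number here, but also
## works for classes that store the value differently
df.residual(fit)

## `$' works on any list:
l <- list(a = 1, b = "two")
l$a   # 1
```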



Re: [R] can I call user-created functions without source() ?

2006-06-19 Thread Joerg van den Hoff
Duncan Murdoch wrote:
 On 6/19/2006 6:25 AM, (Ted Harding) wrote:
 On 19-Jun-06 Rob Campbell wrote:
 Hi,

 I have come to R fairly recently from Matlab, where I have been used to
 organising my own custom functions into directories which are adding to
 the Matlab search path. This enables me to call them as I would one of
 Matlab's built-in functions. I have not found a way of doing the same
 thing in R. I have resorted to using source() on symlinks located in
 the
 current directory. Not very satisfactory.

 I looked at the R homepage and considered making a package of my
 commonly used functions so that I can call them in one go:
 library(myFuncions, lib.loc=/path/to/library) Perhaps this is the
 only
 solution but the docs on the web make the process seem rather
 complicated--I only have a few script files I want to call! Surely
 there's a straightforward solution?

 How have other people solved this problem? Perhaps someone has a simple
 package skeleton into which I can drop my scripts?


 Thanks,

 Rob
 There are pros and cons to this, but on the whole I sympathise
 with you (having pre-R been a heavy matlab/octave user myself).

 Unfortunately (from this perspective) R does not seem to have
 an automatic load-on-demand facility similar to what happens
 in matlab (i.e. you call a function by name, and R would search
 for it in whatever the current search-path is, and load its
 definition plus what else it depends on).

 I have a few definitions which I want in every R session, so
 I have put these in my .Rprofile. But this is loaded from
 my home directory, not from the directory where I was when I
 started R, so it is the same every time. 
 
 Which version of R are you using?  This is not the current documented 
 behaviour.  It looks in the current directory first, and only in your 
 home directory if that fails.
 
 Duncan Murdoch
 
 
 Again, one of the
 conveniences of the matlab/octave approach is that you can
 have a different sub-directory for each project, so if you
 start work in a particular one then you have access to any
 special definitions for that project, and not to others.

 I'm no expert on this aspect of R, but I suspect that the way
 start-up is organised in R does not fit well with the other
 kind of approach. I stand to be corrected, of course ...

 And others may well have formulated their own neat work-rounds,
 so we wait eagerly to hear about these!

 Best wishes,
 Ted.


using `package.skeleton' (as already mentioned) for the package dir 
layout and `prompt' for generating templates of the needed manpages is 
not so bad (once you get used to it), if the things you have in mind are 
at least of some long(er) term value (for you): at least you are forced 
(sort of) to document your software...

for short term usage of some specialized functions I have added some 
lines to the `.Rprofile' in my home(!) directory as follows (probably 
there are smarter solutions, but at least it works):

#source some temporary useful functions:
fl <- dir(path = '~/rfiles/current', patt = '.*\\.R$', full.names = TRUE)
for (i in fl) {cat(paste('source(', i, ')\n', sep = '')); source(i)}
rm(i, fl)

here, I have put all the temporary stuff in a single dedicated dir 
`~/rfiles/current', but of course you can use several dirs in this way. 
all files in this dir with names ending in `.R' are sourced on startup 
of R. this roughly works like one of the directories on MATLAB's search 
path: every function definition in this directory is 'understood' by R 
(but everything is loaded into the workspace on startup, no matter 
whether you really need it in the end: no real `load on demand'). one 
important difference, though: this is only sensible for function 
definitions, not scripts ('executable programs', which would otherwise 
be executed directly on R startup).
and, contrary to matlab/octave, this is not dynamic: everything is read 
in at startup; later modifications to the directories are not recognized 
without explicitly sourcing the files again.

if in addition you want to load definitions from the startup 
directory where you launch R (your project dir), the above could be 
modified to:

#source some temporary useful functions from startup dir:
fl <- dir(path = getwd(), patt = '.*\\.R$', full.names = TRUE)
for (i in fl) {cat(paste('source(', i, ')\n', sep = '')); source(i)}
rm(i, fl)

in this way you at least don't need a separate `.Rprofile' in each 
project dir.



joerg



Re: [R] can I call user-created functions without source() ?

2006-06-19 Thread Joerg van den Hoff
Duncan Murdoch wrote:
 Just a few comments below on alternative ways to do the same things:
 
 On 6/19/2006 8:19 AM, Joerg van den Hoff wrote:
 
 for short term usage of some specialized functions I have added some 
 lines to the `.Rprofile' in my home(!) directory as follows (probably 
 there are smarter solutions, but at least it works):

 #source some temporary useful functions:
 fl <- dir(path = '~/rfiles/current', patt = '.*\\.R$', full.names = TRUE)
 for (i in fl) {cat(paste('source(', i, ')\n', sep = '')); source(i)}
 rm(i, fl)
 
 Another way to do this without worrying about overwriting some existing 
 variables is
 
 local({
 fl <- ...
 for (i in fl) ...
 })

 
 No need to remove fl and i at the end; they were created in a temporary 
 environment, which was deleted at the end.
 
sure, that's better (just one more case where I didn't know of the 
existence of a certain function). but what is the difference (with 
regard to scope) of `i' or `fl' and the functions defined via sourcing? 
aren't both objects defined within `local'? why _are_ the functions 
visible in the workspace? probably I again don't understand the 
`eval'/environment intricacies.

 here, I have put all the temporary stuff in a single dedicated dir 
 `~/rfiles/current', but of course you can use several dirs in this 
 way. all files in this dir with names ending in `.R' are sourced on 
 startup of R. this roughly works like one of the directories on 
 MATLAB's search path: every function definition in this directory is  
 'understood' by R (but everything is loaded into the workspace on 
 startup, no matter, whether you really need it in the end: no real 
 `load on demand'). 
 
 It's possible to have load on demand in R, and this is used in packages. 
  It's probably not worth the trouble to use it unless you're using a 
 package.
 
 
 one
 important difference, though: this is only sensible for function 
 definitions, not scripts ('executable programms' (which would directly 
 be executed on R startup, otherwise).
 and, contrary to matlab/octave, this is not dynamic: everything is 
 read in at startup, later modifications to the directories are not 
 recognized without explicitely sourcing the files again.
 
 There isn't really any reasonable way around this.  I suppose some hook 
 could be created to automatically read the file if the time stamp 
 changes, but that's not really the R way of doing things:  generally in 
 R active things are in the workspace, not on disk.  A good way to work 
 is prepare things on disk, then when they are ready, explicitly import 
 them into R.
 

 if you in addition you want to load definitions from the startup 
 directory where you launch R (your project dir), the above could be 
 modified to:

 #source some temporary useful functions from startup dir:
 fl <- dir(path = getwd(), patt = '.*\\.R$', full.names = TRUE)
 for (i in fl) {cat(paste('source(', i, ')\n', sep = '')); source(i)}
 rm(i, fl)

 in this way you at least don't need a separate `.Rprofile' in each 
 project dir.
 
 Another alternative if you want something special in the project is to 
 create a .Rprofile file there, and put source(~/.Rprofile) into it, so 
 both the local changes and the general ones get loaded.
 
 Duncan Murdoch



 joerg

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] can I call user-created functions without source() ?

2006-06-19 Thread Joerg van den Hoff
Duncan Murdoch wrote:
 On 6/19/2006 10:19 AM, Joerg van den Hoff wrote:
 Duncan Murdoch wrote:
 Just a few comments below on alternative ways to do the same things:

 On 6/19/2006 8:19 AM, Joerg van den Hoff wrote:

 for short term usage of some specialized functions I have added some 
 lines to the `.Rprofile' in my home(!) directory as follows 
 (probably there are smarter solutions, but at least it works):

 #source some temporary useful functions:
 fl <- dir(path = '~/rfiles/current', patt = '.*\\.R$', full.names = TRUE)
 for (i in fl) {cat(paste('source(', i, ')\n', sep = '')); source(i)}
 rm(i, fl)

 Another way to do this without worrying about overwriting some 
 existing variables is

 local({
 fl <- ...
 for (i in fl) ...
 })


 No need to remove fl and i at the end; they were created in a 
 temporary environment, which was deleted at the end.

 sure, that's better (just one more case where I didn't know of the 
 existence of a certain function). but what is the difference (with 
 regard to scope) of `i' or `fl' and the functions defined via 
 sourcing? aren't both objects defined within `local'? why _are_ the 
 functions visible in the workspace? probably I again don't understand 
 the `eval'/environment intricacies.
 
 Whoops, sorry.  Yes, you'd need to be careful to assign them into 
 globalenv().  Those ...'s weren't so simple.
 
no, no, you _were_ right (I tested it), but I didn't understand why it 
works. I found the answer only now on the `source' manpage: there is an 
argument `local = FALSE' which by default enforces assignment into 
globalenv(). so the `...' remain unaltered :-). sorry for the confusion.
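a small demonstration of that `local' argument (a sketch; it writes a one-line function definition to a temporary file and sources it from inside local()):

```r
tf <- tempfile(fileext = ".R")
writeLines("f <- function(x) x + 1", tf)

## default local = FALSE: the sourced definition is assigned in
## globalenv(), even though source() is called inside local()
local(source(tf))
exists("f", envir = globalenv())   # TRUE

## local = TRUE: evaluation happens in the calling (temporary)
## environment, so nothing leaks into the workspace
rm(f)
local(source(tf, local = TRUE))
exists("f", envir = globalenv())   # FALSE
```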

joerg

PS: and as a follow-up on the recent 'setwd vs. cd' thread: as I noted 
in passing, `source' has a further argument `chdir' to change the 
directory prior to sourcing, which is neither the usual name (using 
`cd') nor the R way (using `setwd'), but rather identical to the 
corresponding native matlab/octave command (both of these packages have 
for quite some time accepted `cd' as well). of course, the 
`source' arguments have nothing to do with the commands, but consistency 
wouldn't harm either :-). not important, but a bit funny.

 Duncan Murdoch



Re: [R] Function hints

2006-06-19 Thread Joerg van den Hoff
hadley wickham wrote:
 One of the recurring themes in the recent UserR conference was that
 many people find it difficult to find the functions they need for a
 particular task.  Sandy Weisberg suggested a small idea he would like
 to see: a hints function that given an object, lists likely
 operations.  I've done my best to implement this function using the
 tools currently available in R, and my code is included at the bottom
 of this email (I hope that I haven't just duplicated something already
 present in R).  I think Sandy's idea is genuinely useful, even in the
 limited form provided by my implementation, and I have already
 discovered a few useful functions that I was unaware of.
 
 While developing and testing this function, I ran into a few problems
 which, I think, represent underlying problems with the current
 documentation system.  These are typified by the results of running
 hints on a object produced by glm (having class c(glm, lm)).  I
 have outlined (very tersely) some possible solutions.  Please note
 that while these solutions are largely technological, the problem is
 at heart sociological: writing documentation is no easier (and perhaps
 much harder) than writing a scientific publication, but the rewards
 are fewer.
 
 Problems:
 
  * Many functions share the same description (eg. head, tail).
 Solution: each rdoc file should only describe one method. Problem:
 Writing rdoc files is tedious, there is a lot of information
 duplicated between the code and the documenation (eg. the usage
 statement) and some functions share a lot of similar information.
 Solution: make it easier to write documentation (eg. documentation
 inline with code), and easier to include certain common descriptions
 in multiple methods (eg. new include command)
 
  * It is difficult to tell which functions are commonly
 used/important. Solution: break down by keywords. Problem: keywords
 are not useful at the moment.  Solution:  make better list of keywords
 available and encourage people to use it.  Problem: people won't
 unless there is a strong incentive, plus good keywording requires
 considerable expertise (especially in bulding up list).  This is
 probably insoluable unless one person systematically keywords all of
 the base packages.
 
  * Some functions aren't documented (eg. simulate.lm, formula.glm) -
 typically, these are methods where the documentation is in the
 generic.  Solution: these methods should all be aliased to the generic
 (by default?), and R CMD check should be amended to check for this
 situation.  You could also argue that this is a deficiency with my
 function, and easily fixed by automatically referring to the generic
 if the specific isn't documented.
 
  * It can't supply suggestions when there isn't an explicit method
 (ie. .default is used), this makes it pretty useless for basic
 vectors.  This may not really be a problem, as all possible operations
 are probably too numerous to list.
 
  * Provides full name for function, when best practice is to use
 generic part only when calling function.  However, getting precise
 documentation may requires that full name.  I do the best I can
 (returning the generic if specific is alias to a documentation file
 with the same method name), but this reflects a deeper problem that
 the name you should use when calling a function may be different to
 the name you use to get documentation.
 
  * Can only display methods from currently loaded packages.  This is a
 shortcoming of the methods function, but I suspect it is difficult to
 find S3 methods without loading a package.
 
 Relatively trivial problems:
 
  * Needs wide display to be effective.  Could be dealt with by
 breaking description in a sensible manner (there may already by R code
 to do this.  Please let me know if you know of any)
 
  * Doesn't currently include S4 methods.  Solution: add some more code
 to wrap showMethods
 
  * Personally, I think sentence case is more aesthetically pleasing
 (and more flexible) than title case.
 
 
 Hadley
 
 
  hints <- function(x) {
    db <- eval(utils:::.hsearch_db())
    if (is.null(db)) {
      help.search("abcd!", rebuild = TRUE, agrep = FALSE)
      db <- eval(utils:::.hsearch_db())
    }
 
    base <- db$Base
    alias <- db$Aliases
    key <- db$Keywords
 
    m <- all.methods(class = class(x))
    m_id <- alias[match(m, alias[, 1]), 2]
    keywords <- lapply(m_id, function(id) key[key[, 2] %in% id, 1])
 
    f.names <- cbind(m, base[match(m_id, base[, 3]), 4])
    f.names <- unlist(lapply(1:nrow(f.names), function(i) {
      if (is.na(f.names[i, 2])) return(f.names[i, 1])
      a <- methodsplit(f.names[i, 1])
      b <- methodsplit(f.names[i, 2])
 
      if (a[1] == b[1]) f.names[i, 2] else f.names[i, 1]
    }))
 
    hints <- cbind(f.names, base[match(m_id, base[, 3]), 5])
    hints <- hints[order(tolower(hints[, 1])), ]
    hints <-

Re: [R] Access and assign list sub-elements using a string suchas l$a$b

2006-06-15 Thread Joerg van den Hoff
Petr Pikal wrote:
 Hi
 
 yes you are correct, I remembered there is something with eval from 
 older posts but did not find a connection to parse from eval help 
 page. Shouldn't there be a link? Or even an example?

would be a good thing to do (there only is a link from parse to eval). 
after all, eval(parse(text = some_string)) is what many users (who 
might come from matlab/octave, where this _would_ suffice) want when 
they try out eval(some_string). and a standard eval(parse(text = ...)) 
example on the eval manpage (where most people probably would look) 
would be very good.
 
 Best regards
 Petr
 
 
 On 15 Jun 2006 at 17:21, Dimitris Rizopoulos wrote:
 
 From: Dimitris Rizopoulos [EMAIL PROTECTED]
 To:   Petr Pikal [EMAIL PROTECTED], [EMAIL PROTECTED]
 Copies to:r-help@stat.math.ethz.ch
 Subject:  Re: [R] Access and assign list sub-elements using a 
 string suchas   l$a$b
 Date sent:Thu, 15 Jun 2006 17:21:26 +0200
 
 - Original Message - 
 From: Petr Pikal [EMAIL PROTECTED]
 To: Gregory Jefferis [EMAIL PROTECTED]
 Cc: r-help@stat.math.ethz.ch
 Sent: Thursday, June 15, 2006 4:56 PM
 Subject: Re: [R] Access and assign list sub-elements using a string
 suchas l$a$b


 Hi
 very, very close


 On 15 Jun 2006 at 13:27, Gregory Jefferis wrote:

 Date sent:  Thu, 15 Jun 2006 13:27:05 +0100
 From:   Gregory Jefferis [EMAIL PROTECTED]
 To: [EMAIL PROTECTED] 
 [EMAIL PROTECTED]
 Forwarded to:   r-help@stat.math.ethz.ch
 Forwarded by:   Gregory Jefferis [EMAIL PROTECTED]
 Date forwarded: Thu, 15 Jun 2006 14:54:13 +0100
 Subject:[R] Access and assign list sub-elements using a
 string such as l$a$b

 If I have a list I can set a sub-element as follows on the command
 line:

 people = list()
 people$tom$hair = "brown"
 people

 But what if I have a string containing the name of the sub-element
 that I want to access?

 subel = "people$tom$hair"

 get(subel) # returns error
 assign(subel, "red") # silent but doesn't change list
 people
 See what happens when

 people <- assign(subel, "red")
 but I think this is not what Greg wanted; the above just assigns "red"
 to object 'people' (i.e., check `str(assign(subel, "red"))'). If I
 understood correctly, the following could be of help:

 people <- list()
 people$tom$hair <- "brown"
 people
 #
 subel <- "people$tom$hair"
 eval(parse(text = subel))
 eval(parse(text = paste(subel, "<- 'red'")))
 people


 Best,
 Dimitris


 
 Dimitris Rizopoulos
 Ph.D. Student
 Biostatistical Centre
 School of Public Health
 Catholic University of Leuven

 Address: Kapucijnenvoer 35, Leuven, Belgium
 Tel: +32/(0)16/336899
 Fax: +32/(0)16/337015
 Web: http://med.kuleuven.be/biostat/
  http://www.student.kuleuven.be/~m0390867/dimitris.htm


 HTH
 Petr


 The attempts above using assign/get won't do what I am trying to do
 [nor according to the help should they].  I would be very grateful
 for any suggestions.  Many thanks,

 Greg.

 -- 
 Gregory Jefferis, PhD   and:
 Research Fellow
 Department of Zoology   St John's
 College University of Cambridge Cambridge Downing Street   
CB2 1TP Cambridge, CB2 3EJ United Kingdom

 Lab Tel: +44 (0)1223 336683 Office: +44 (0)1223
 339899 Lab Fax: +44 (0)1223 336676

 http://www.zoo.cam.ac.uk/zoostaff/jefferis.html
 [EMAIL PROTECTED]

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide!
 http://www.R-project.org/posting-guide.html
 Petr Pikal
 [EMAIL PROTECTED]

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html


 Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

 
 Petr Pikal
 [EMAIL PROTECTED]
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html



Re: [R] plot two graphs with different length of data

2006-06-13 Thread Joerg van den Hoff
Eric Hu wrote:
 Hi I am trying to plot two data set in the same picture window without
 overlapping with each other. I am using the format plot(x1,y1,x2,y2)
 but get the following error message:
 
 plot(as.numeric(r0[,4]),as.numeric(r0[,7]),as.numeric(r0[,4]),as.numeric(r0[,7][ind[,1]]))
 Error in plot.window(xlim, ylim, log, asp, ...) :
 invalid 'ylim' value
 
 Can anyone tell me what went wrong? Thanks.
 
 Eric
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


matplot(cbind(x1,x2), cbind(y1,y2)) might be what you want.

(if x1, x2 and y1, y2 are of equal length. otherwise pad the shorter ones 
with NAs, or use `matplot' with type = "n" (to get the scaling of the plot 
right), followed by `lines(x1, y1); lines(x2, y2)')



Re: [R] solving first-order differential equation

2006-06-12 Thread Joerg van den Hoff
ZhanWu Dai wrote:
 I am an initial user of R. Could you give me some explanations or examples on 
 how to solve the first order differential equations by the first-order 
 Runge-Kutta method? 
 
 Thank you very much
 
 Kind regards
 
 
 
 
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

not really an answer, but a remark:

if your ODE is of the form

dy
--- + k y = A f(x)
dx

(k, A const.) it might be a better idea to use the 'analytic' solution
instead of runge-kutta (faster, probably more accurate).
for instance, if the initial condition is

y(x=0) = 0

and you're looking only at x > 0, the solution simply is


y(x) = A f(x) {*} exp(-kx)


where {*} means the finite (continuous) convolution extending from 0 to x:

y(x) = A  integral from z=0 to z=x {f(z) exp(-k(x-z)) dz}


(which, of course, still has to be computed numerically in general.)
this closed-form solution can then
be used, for instance, to determine the unknown parameters (k, A) from a
least-squares fit to measured f(x), y(x)
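the convolution integral can be evaluated with integrate(); a sketch for the test case f(x) = sin(x), k = 2, A = 1 (the example function and constants are my own choice):

```r
k <- 2; A <- 1
f <- function(z) sin(z)

## y(x) = A * integral from 0 to x of f(z) exp(-k (x - z)) dz
y <- function(x)
  sapply(x, function(xi)
    A * integrate(function(z) f(z) * exp(-k * (xi - z)),
                  lower = 0, upper = xi)$value)

## check against the known closed form for f = sin:
## y(x) = (exp(-k x) + k sin(x) - cos(x)) / (k^2 + 1)
x <- seq(0, 5, by = 0.5)
max(abs(y(x) - (exp(-k * x) + k * sin(x) - cos(x)) / (k^2 + 1)))
```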


Re: [R] Building packages in R - 'private' functions

2006-06-07 Thread Joerg van den Hoff
Dan Rabosky wrote:
 Hello.
 
 I am creating an R package that I'd like to submit to CRAN (OS Windows 
 XP).  How do I distinguish among 'public' functions, e.g., those that are 
 intended to be called by users of the package and for which I am providing 
 documentation  examples, and 'private' functions, which are used 
 internally by the 'public' functions, but for which I do not wish to 
 provide documentation?  The private functions are all coded in R (nothing 
 in C or Fortran) and are essential to the operation of several public 
 functions.
 
 I have been unable to find any documentation on this in the 'writing r 
 extensions' manual', on previous posts to R-help, or through any other 
 source.  One possibility is to include the source code for the 'private' 
 functions within the public functions.  However, since multiple public 
 functions utilize the same core set of 'private' functions, this seems 
 unwieldy and redundant at best.
 
 If I simply include the source for the 'private' functions in the R 
 directory (without corresponding *.Rd and *.html documentation in /man), 
 then check the package with R CMD check', it does appear to process the 
 private functions (and successfully builds with R CMD build).  However, I 
 do receive a warning for including undocumented code objects.  Is this the 
 recommended approach and/or is there a better way to do this?  One 
 potential problem with this approach is that - should an error occur within 
 a private function, it may be very difficult for the user to decipher the 
 nature of the problem.
 
 Any suggestions will be greatly appreciated.
 ~Dan Rabosky
 
 
 
 Dan Rabosky
 Department of Ecology and Evolutionary Biology
 237 Corson Hall
 Cornell University
 Ithaca, NY14853-2701 USA
 [EMAIL PROTECTED]
 web: http://www.birds.cornell.edu/evb/Graduates_Dan.htm
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

it's in the 'extensions' manual, including an example, I believe:

1.
create a file `NAMESPACE' in the package top level dir (beside `R' and 
`man') containing the single line

exportPattern("^[^\\.]")

2.
name all private functions with a leading `.' (more precisely: all 
functions starting with a `.' are private in this setting).


of course, you can modify the pattern to suit another naming convention.
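with that NAMESPACE line in place, the naming convention alone decides visibility (a sketch; the function names are my own):

```r
## in R/utils.R of the package:

## exported: the name does not start with a dot,
## so it matches exportPattern("^[^\\.]")
area <- function(r) {
  .check_radius(r)
  pi * r^2
}

## private: the leading dot keeps it unexported
.check_radius <- function(r) {
  if (any(r < 0)) stop("radius must be non-negative")
  invisible(r)
}
```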

joerg



Re: [R] nls fitting

2006-05-22 Thread Joerg van den Hoff
Lorenzo Isella wrote:
 Dear All,
 I may look ridiculous, but I am puzzled at the behavior of the nls with
 a fitting I am currently dealing with.
 My data are:
 
x N
 1   346.4102 145.428256
 2   447.2136 169.530634
 3   570.0877 144.081627
 4   721.1103 106.363316
 5   894.4272 130.390552
 6  1264.9111  36.727069
 7  1788.8544  52.848587
 8  2449.4897  25.128742
 9  3464.1016   7.531766
 10 4472.1360   8.827367
 11 6123.7244   6.600603
 12 8660.2540   4.083339
 
 I would like to fit N as a function of x according to a function
 depending on 9 parameters (A1,A2,A3,mu1,mu2,mu3,myvar1,myvar2,myvar3),
 namely
 N ~
 (log(10)*A1/sqrt(2*pi)/log(myvar1)*exp(-((log(x/mu1))^2)/2/log(myvar1)/log(myvar1))
 
 +log(10)*A2/sqrt(2*pi)/log(myvar2)*exp(-((log(x/mu2))^2)/2/log(myvar2)/log(myvar2))
 
 +log(10)*A3/sqrt(2*pi)/log(myvar3)*exp(-((log(x/mu3))^2)/2/log(myvar3)/log(myvar3)))
 
 (i.e. N is to be seen as a sum of three bells whose parameters I need
 to determine).
 
 
 So I tried:
 out <- nls(N ~ 
 (log(10)*A1/sqrt(2*pi)/log(myvar1)*exp(-((log(x/mu1))^2)/2/log(myvar1)/log(myvar1))
 
 +log(10)*A2/sqrt(2*pi)/log(myvar2)*exp(-((log(x/mu2))^2)/2/log(myvar2)/log(myvar2))
 
 +log(10)*A3/sqrt(2*pi)/log(myvar3)*exp(-((log(x/mu3))^2)/2/log(myvar3)/log(myvar3)))
  ,start=list(A1 = 85,
 A2=23,A3=4,mu1=430,mu2=1670,mu3=4900,myvar1=1.59,myvar2=1.5,myvar3=1.5)
 ,algorithm = "port"
 ,control=list(maxiter=2,tol=1)
 ,lower=c(A1=0.1,A2=0.1,A3=0.1,mu1=0.1,mu2=0.1,mu3=0.1,myvar1=0.1,myvar2=0.1,myvar3=0.1)
 )
 
 getting the error message:
 Error in nls(N ~ (log(10) * A1/sqrt(2 * pi)/log(myvar1) *
 exp(-((log(x/mu1))^2)/2/log(myvar1)/log(myvar1)) +  :
 Convergence failure: singular convergence (7)
 
 
 I tried to adjust tol & maxiter, but unsuccessfully.
 If I try fitting N with only two bells, then nls works:
 
 out <- nls(N ~ 
 (log(10)*A1/sqrt(2*pi)/log(myvar1)*exp(-((log(x/mu1))^2)/2/log(myvar1)/log(myvar1))
 
 +log(10)*A2/sqrt(2*pi)/log(myvar2)*exp(-((log(x/mu2))^2)/2/log(myvar2)/log(myvar2))
 )
  ,start=list(A1 = 85, A2=23,mu1=430,mu2=1670,myvar1=1.59,myvar2=1.5)
 ,algorithm = "port"
 ,control=list(maxiter=2,tol=1)
 ,lower=c(A1=0.1,A2=0.1,mu1=0.1,mu2=0.1,myvar1=0.1,myvar2=0.1)
 )
 
  out
 Nonlinear regression model
   model:  N ~ (log(10) * A1/sqrt(2 * pi)/log(myvar1) *
 exp(-((log(x/mu1))^2)/2/log(myvar1)/log(myvar1)) +  log(10) *
 A2/sqrt(2 * pi)/log(myvar2) *
 exp(-((log(x/mu2))^2)/2/log(myvar2)/log(myvar2)))
data:  parent.frame()
           A1         A2        mu1        mu2     myvar1     myvar2
  84.920085  40.889968 409.656404 933.081936   1.811560   2.389215
  residual sum-of-squares:  2394.876
 
 Any idea about how to get nls working with the whole model?
 I had better luck with the nls.lm package, but it does not allow me to
 introduce any constraints on my fitting parameters.
 I was also suggested to try other packages like optim to do the same
 fitting, but I am a bit unsure about how to set up the problem.
 Any suggestions? BTW, I am working with R Version 2.2.1
 
 Lorenzo
 


apart from the fact that fitting 9 parameters to 12 points quite 
generally will not yield satisfactory results (at least the estimates will 
have huge uncertainties), total failure in your case seems obvious if 
you plot your data: it's not even obvious where the three peaks (means 
of the gaussians) should be: all below x=2000, or is there one peak at 
about x=4500 and one of the 'peaks' below x=2000 is spurious? if you 
can't decide, nls has problems. moreover: how should reliable estimates 
of the standard deviations (widths) of the gaussians result if the peaks 
essentially consist of exactly one point each?

in short: I believe you either need many more data points or you must 
put in substantial a priori information (e.g. either the means or the 
standard deviations of the gaussians).
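to illustrate the second option, a sketch (the fixed peak positions are simply the start values from the post, taken as a priori knowledge; convergence is still not guaranteed with 6 free parameters and 12 points, hence the try()):

```r
# data from the post
x <- c(346.4102, 447.2136, 570.0877, 721.1103, 894.4272, 1264.9111,
       1788.8544, 2449.4897, 3464.1016, 4472.1360, 6123.7244, 8660.2540)
N <- c(145.428256, 169.530634, 144.081627, 106.363316, 130.390552,
       36.727069, 52.848587, 25.128742, 7.531766, 8.827367, 6.600603,
       4.083339)

mu <- c(430, 1670, 4900)   # peak positions held FIXED (a priori guesses)

# sum of three log-normal 'bells' with fixed means; A and s are length 3
bells <- function(x, A, s)
  log(10)/sqrt(2*pi) * rowSums(sapply(1:3, function(i)
    A[i]/log(s[i]) * exp(-log(x/mu[i])^2/(2*log(s[i])^2))))

# 6 instead of 9 free parameters (amplitudes and widths only)
fit <- try(nls(N ~ bells(x, c(A1, A2, A3), c(s1, s2, s3)),
               start = list(A1 = 85, A2 = 23, A3 = 4,
                            s1 = 1.6, s2 = 1.5, s3 = 1.5)),
           silent = TRUE)
```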



Re: [R] problem with pdf() R 2.2.1, os 10.4.6

2006-05-20 Thread Joerg van den Hoff
Marc Schwartz wrote:
 On Fri, 2006-05-19 at 16:36 -0700, Betty Gilbert wrote:
 Hi,
 I'm trying to write a histogram to a pdf

 pdf()
 plot <- hist(c, xlim=c(0.69, 0.84), ylim=c(0,100))

 when I try to open the pdf I can't open it, there is always some 
 error . Is there something I should add to make it run under this 
 operation system? I had problems upgrading to 2.3 (problem 
 downloading packages) so I'm not sure an upgrade will work out with 
 me. I just want a publication quality histogram...

 thank you,
 betty
 
 Betty,
 
 You need to explicitly close the pdf device with:
 
   dev.off()
 
 once the plotting related code has completed. See the example in ?pdf
 for more information.
 
 Otherwise, the result of hist() is not flushed to the disk file and the
 file then properly closed.
 
 HTH,
 
 Marc Schwartz
 


and maybe you are better off with

plot_something
dev.print(pdf)


with the standard settings of pdf() you get more or less a pdf file 
maintaining the current aspect ratio of the plot on the screen prior to 
the dev.print call
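a minimal self-contained sketch of both routes (file names are arbitrary):

```r
# route 1: explicit device, closed with dev.off() (Marc's point)
pdf("hist1.pdf")
hist(rnorm(100), col = "light blue")
dev.off()                      # flushes the plot and closes the file

# route 2: draw first, then copy the current device to a pdf file;
# this keeps the aspect ratio of the plot as currently displayed
hist(rnorm(100), col = "light blue")
dev.print(pdf, file = "hist2.pdf")
```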



Re: [R] help

2006-05-19 Thread Joerg van den Hoff
karim99.karim wrote:
 Dear Sir,
 
 I'm a French student and a new user of the R software.
 
 After using the command x <- read.delim("clipboard") to read a spreadsheet of 
 Excel, I want to run the bds test and calculate the Lyapunov exponent. I have 
 loaded the packages tseries and tseriesChaos into R. When I run 
 bds.test(x, m=2), unfortunately R displays "error in 
 as.vector(x, mode="double"): the object (list) cannot be automatically 
 converted to double". So what shall I do to run these two tests (Lyapunov and 
 bds)? And what is my mistake?
 
 I thank you in advance and I’am waiting forward your e-mail
 
 This si my e-mail: [EMAIL PROTECTED]
 
 
 
 
   [[alternative HTML version deleted]]
 
 
 
 
 

seems that the data import failed. probably the exported excel data are 
not in a format accepted correctly by read.delim. check this.
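the 'object (list)' in the error message hints at one common cause: read.delim() returns a data frame (which is a list), while bds.test() wants a plain numeric vector. a minimal base-R sketch (the column and values are stand-ins, not the poster's data):

```r
# stand-in for the result of x <- read.delim("clipboard"):
x <- data.frame(series = c(1.02, 0.98, 1.11, 0.87))

is.list(x)              # TRUE: a data frame *is* a list, hence the error
# as.vector(x, mode = "double")   # would fail just like bds.test(x, m = 2)

v <- x[[1]]             # extract the (single) column as a numeric vector
is.numeric(v)           # TRUE: this is what bds.test() expects
# bds.test(v, m = 2)    # should now get past the type error
```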


alternative (maybe): there is an additional package 'gdata' on CRAN 
which you can download and install. it contains a function

read.xls

which can read directly the excel binary format (with some limitations).



Re: [R] Can't there be a cd command?

2006-05-17 Thread Joerg van den Hoff
Duncan Murdoch wrote:
 On 5/16/2006 5:46 AM, Joerg van den Hoff wrote:
 Manuel López-Ibáñez wrote:
 Jan T. Kim wrote:
 That's an idea I like very much too -- much better than the currently
 popular idea of protecting users from the unfriendliness of
 programming, anyway...


 It is just my opinion that the amount of mail in R-help speaks 
 volumes about the current friendliness [1], or lack thereof, of R. 
 Perhaps I am just the only one who thinks this way...

 [1] http://en.wikipedia.org/wiki/Usability



 I think you are 100% right: the r-help list says it all. needless to 
 say, R is a great achievment without any doubt, but claiming that it's 
 easy to use (beyond the most basic arithmetics) is really wishful 
 thinking.
 
 This is sloppy thinking.  The volume of mail here shows that there are a 
 lot of questions, perhaps because there are a lot of users.
well, as far as my english goes, 'sloppy' is a strong word (and apart
from mathematicians, physicists (my background) probably are the people
who are most allergic to being accused of it :-)) and it's an overhasty
conclusion on your side, I'd say.

I want to make clear beforehand, that I do _not_ think this a very
important discussion, but rather an informal exchange of opinions, so
maybe this takes us all a bit to far, but anyway:

for one, I myself (and I think manuel, too) was not talking of the sheer
volume of mails (this obviously would have to be 'calibrated' against
the total number of R users and the resulting quantity had to be
compared to other help-lists). at least my impression is that there are
a number of recurring difficulties in the mails which are rather
specific to R's design (whether this situation could or should be
altered: that would be a different topic). certainly, among these are
the subsetting/indexing issues, certainly lazy evaluation, certainly
anything related to environments, namespaces, computing on the language
(substitute, eval, ...).
 You're also misquoting Jan:  he didn't say R was easy to use, he said 
 that the idea of urging people to program is better than trying to be 
 too friendly and protecting them from it.
I didn't quote anyone AFAIKS, so I can't have misquoted anyone (having
misinterpreted someone I cannot rule out). the 'easy to use' did not
refer to a special statement from this thread, but to the general
impression one can get from the list and contributed as well as official
manuals (I only checked now: the 'what is R?' section on the homepage
contains one 'ease', one 'easy', one 'easily' within a total of two or
three paragraphs...).

it is pointless to dwell on this too long: what is easy for you might be
difficult for me or vice versa, depending on the question to answer/
problem to solve. _if_ I take the freedom to interpret it as 'easy for
the pedestrians', the statements are simply not true (easily extended
via packages??).

with reference to the idea of urging people to program: well, the idea
in itself is not objectionable, the question is how realistic the
approach is (i.e. how large will be the success rate of getting people
to program who otherwise wouldn't have done so _and_ is this rate
larger in R than in the other packages?).
 
 I don't think programming R is easier than programming C, for example. 
 
 I do both, and I think R programming is easier.  It has a more sensible 
 idea of scoping, it doesn't have the preprocessor doing bizarre 
 transformations to the text, it doesn't have pointers writing to random 
 memory locations, it can handle strings much more sensibly.
this is all very well, though I only partly agree, but this a very
technical assessment anyway and seems to indicate that a non-programmer
will not be much better off with R as a 'starting language' than with C
(since your criteria mostly will not be initially 'operational'). I
_bet_ this starting phase would be easier with MATLAB/octave (but I'm
not arguing for pushing beginners to MATLAB!).
 
 On the negative side, the vector orientation of R encourages people to 
 come up with clever APL-style one-liners that are unreadable; the lack 
 of type declarations, the weird handling of indexing, the strange object 
 oriented programming models all make R programming hard.
yepp. and cascaded apply/lapply calls, I'd add.
 
 So R is not easy, but it's easier than C.
by a margin, maybe, though I have people in my group who definitely do
object (making especially a point of the fact that they have
difficulties to rapidly read/understand their own R code after a few
month which they do not experience with  their C++ stuff...)
 
 This is not to say that it takes the same time to solve the same 
 problem in both languages, since in R many, many things are already 
 there (either in the language (vectorized computations) or in the 
 packages). but the quantity 'number of new lines of working code per 
 hour' should be about the same.

 I have used MATLAB/octave previously. in comparison to R, the MATLAB 
 language sure

Re: [R] Can't there be a cd command?

2006-05-16 Thread Joerg van den Hoff
Manuel López-Ibáñez wrote:
 Jan T. Kim wrote:
 That's an idea I like very much too -- much better than the currently
 popular idea of protecting users from the unfriendliness of
 programming, anyway...

 
 It is just my opinion that the amount of mail in R-help speaks volumes 
 about the current friendliness [1], or lack thereof, of R. Perhaps I 
 am just the only one who thinks this way...
 
 [1] http://en.wikipedia.org/wiki/Usability
 
   

I think you are 100% right: the r-help list says it all. needless to 
say, R is a great achievment without any doubt, but claiming that it's 
easy to use (beyond the most basic arithmetics) is really wishful thinking.

I don't think programming R is easier than programming C, for example. 
This is not to say that it takes the same time to solve the same problem 
in both languages, since in R many, many things are already there 
(either in the language (vectorized computations) or in the packages). 
but the quantity 'number of new lines of working code per hour' should 
be about the same.

I have used MATLAB/octave previously. in comparison to R, the MATLAB 
language sure is not pretty, not consistent and less powerful, but 
without any question easier. and of course, the interactive use is 
facilitated by the absence of lots of parentheses and quotes and the 
way unknown commands are resolved by looking them up in the search path.

but that's not a situation one should complain about: R simply cannot 
be everybody's darling.

what I always find a bit strange is the explicit claim/aim to blur the 
distinction between users and programmers: one could postulate the same 
aim (or property) for MATLAB, octave, IDL, or even any other language 
providing an interactive interpreter (python, ruby, Tcl, C++/Cint/root, 
...). in my experience the distinction between users (rather: 
non-programmers) and programmers always is quite sharp.

again, in comparison to MATLAB (where I have some experience) I have 
noted that the 'users' (a.k.a. non-programmers) have more difficulties 
in using R interactively, mainly for the following reasons:

- difficulties in understanding the necessity for all those parentheses
   (why can't I just say 'ls'?)
- subsetting ((x), [x], [[x]], $x, $"x", ['x'], ["x"], [["x"]], ... or 
what!?)
- R's strong functional orientation: R encourages writing functions with 
  _lots_ of arguments which you must understand/know of (even if you 
don't touch the defaults) and the user must construct the suitable 
function call and launch it to get his/her results.

of course one can write interactive ('please enter ...') programs, but 
one usually doesn't, since it's much more convenient from the programmer's 
point of view to utilize the functional approach. in a way, R's 
functions behave like UNIX commands including lots of command line 
parameters: they are great for experienced
users but the average guy neither understands

tar[EMAIL PROTECTED][block] [tarfile]
  [exclude-file]  {-I include-file  |  -C directory  |  file |
  file} ...

nor

plot(x, y = NULL, type = "p", xlim = NULL, ylim = NULL,
     log = "", main = NULL, sub = NULL, xlab = NULL, ylab = NULL,
     ann = par("ann"), axes = TRUE, frame.plot = axes,
     panel.first = NULL, panel.last = NULL,
     col = par("col"), bg = NA, pch = par("pch"),
     cex = 1, lty = par("lty"),
     lwd = par("lwd"), asp = NA, ...)

easily.

so, to make a long story short: it wouldn't harm avoiding the claim that 
R is 'easy'.  it's probably more of the 'you either love it or hate it' 
variety (so the unicycle metaphor occurring in this thread fits very 
well, I believe).

joerg



Re: [R] Can't there be a cd command?

2006-05-10 Thread Joerg van den Hoff
Issac Trotts wrote:
 On 5/9/06, Prof Brian Ripley [EMAIL PROTECTED] wrote:
 On Wed, 10 May 2006, Peter Dalgaard wrote:

 Issac Trotts [EMAIL PROTECTED] writes:

 I'm on Linux, so it didn't occur to me to look in the Windows FAQ
 until it came up in the Google search.  Why should Windows users be
 the only ones who learn how to change directories?
 Well, on Linux et al., the normal mode of operation is to start R from
 the commmand line in the directory where you want to be. On Windows
 you start from a desktop icon or a menu entry, which implies that
 you're always in the same directory, and usually the wrong one.
 Exactly.  I don't think I have ever used setwd() on Linux.

 Also, I have never seen this asked before for a Unix-alike, so it seems
 not to be a FAQ.  There is a common tendency for users who run into a
 problem to think everyone does too, and it isn't necessarily so.

 Frequently asked questions do make it to the FAQs: it is a defence
 mechanism for the volunteers supporting R.

 help.search(directory)  gets you there.
 
 OK, good to know.  Thanks for helping.
 


probably this has used up sufficient disk space in the mail archive
already, but anyway a bit of support for the initial request: one of the
very first things I did, when I started using R, was adding

cd <- setwd
pwd <- getwd

to my .Rprofile. and when I encouraged colleagues to try R out, the 'top
three' questions were

how can I get out? I tried ^C, exit(), quit(), but nothing works
how can I change the directory? `cd' doesn't work
where am I? `pwd' doesn't work (`pwd' is something like a 'local
speciality', I presume...)

this is of course a marginal obstacle for new users, but avoidable all
the same. (and maybe most people (like myself) in the first place don't
think it important enough to post it -- maybe it _would_ be in the FAQ
otherwise...)

so, _if_ 'cd' were recognized in future releases, it wouldn't do any
harm, would it?


joerg



Re: [R] boxplot - labelling

2006-05-08 Thread Joerg van den Hoff
Sachin J wrote:
 Hi,

   How can I get the values of mean and median (not only points but values 
 too) on the boxplot. I am using boxplot function from graphics package. 
 Following is my data set

df 
   [1]  5  1  1  0  0 10 38 47  2  5  0 28  5  8 81 21 12  9  1 12  2  4 22  3
 
 mean.val <- sapply(df, mean)
 boxplot(df, las = 1, col = "light blue")
 points(seq(df), mean.val, pch = 19)

   I could get the mean as a dot symbol but I need the values too. Also, how to print the 
 x-axis labels vertically instead of horizontally? Is there any other function 
 to achieve these?

   Thanx in advance.

   Sachin
 


 
 
 
 
   [[alternative HTML version deleted]]
 

use

par(las = 3)
bxp <- boxplot(df, las = 1, col = "light blue")

to control label orientation and access the data displayed in the 
boxplot (see `?par', `?boxplot')

use

text(x, y, 'some_text_or_number')

to add text to the boxplot. for x, use the 'number of the box' (i.e. `1' 
in the example above); for y, use the mean or median or whatever you want.

joerg



[R] symbols plot

2006-03-29 Thread Joerg van den Hoff
before getting scolded for submitting a (non-)bug report:

when using the 'symbols' function for plotting boxplot data (i.e. using 
'boxplots' symbols), I noted that the x/y-position of the symbols is 
associated with the center of the box.

while this is obviously natural for a usual plotting symbol (say a 
circle or a rectangle), it is probably not desired if one uses the 
'boxplots' symbols: looking at such a plot, probably everyone will 
assume that y-position is that of the median within the box (in other 
words: that one can read off the values of the medians from the y-axis).

the current behaviour is counter-intuitive, I believe, if the 
distributions are asymmetrical (and the median is not centered in its 
box). (I even think that such plots are misinterpreted easily: think 
what happens if the median lies very near one of the hinges in one box 
and is centered in another one within the same plot, while the medians are 
actually the same.)

in short: I think the 'boxplots' should not be centered at the specified 
y-coordinates but rather drawn with a y-coordinate of

y + bxh * (0.5 - bxm)

where bxh and bxm are the second and fifth columns of the 'boxplots' 
matrix. in this way, the median position is identical to the specified 
y-coordinate.
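the proposed correction, sketched with made-up numbers (two boxes whose medians should both read as y = 10; with `inches = FALSE' the symbol dimensions are in data units):

```r
# columns: width, height, lower/upper whisker, median position (0..1)
bx <- cbind(w = c(0.3, 0.3), h = c(4, 4),
            lw = c(1, 1), uw = c(1, 1), med = c(0.5, 0.9))
y  <- c(10, 10)

# default behaviour: boxes centered at y, so the skewed median (0.9)
# is drawn at 10 + 4*(0.9 - 0.5) = 11.6, not at 10
symbols(1:2, y, boxplots = bx, inches = FALSE, ylim = c(5, 15))

# correction: shift the boxes so the median line lands exactly at y
yc <- y + bx[, 2] * (0.5 - bx[, 5])
symbols(1:2, yc, boxplots = bx, inches = FALSE, add = TRUE, fg = "red")
```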

at the very least, I think, the manpage should explicitly state that 
all symbols (including boxplots) are positioned with their geometrical 
center at the specified coordinates.

right or wrong?

regards,

joerg



[R] bug in map('world') ?

2006-03-08 Thread Joerg van den Hoff
hi,

didn't see anything in the archive:

map('world',pro='rectangular',para=0)

yields a strange artifact (horizontal bar) extending over the whole map 
at a certain latitude range (approx 65 deg. north), whereas

map('world',pro='rectangular',para=180)

(which should be the same) does not show the artifact.

the artifact shows up in other projections as well, e.g.

map('world',pro='bonne',para=45)


which seems to show that the problem might be in the data base of the 
map (i.e. polygon definition)??


any ideas?

regards,

joerg



[R] inconsistent behaviour of ifelse and if ... else

2005-12-21 Thread Joerg van den Hoff
is the behaviour

val <- ifelse(TRUE, numeric(0), 123)
val  # NA

intended or is it a bug, i.e. should an empty object be returned, as 
might be expected (also in comparison to what an explicit
val <- {if (TRUE) numeric(0) else 123} yields)?
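the difference side by side (ifelse() shapes its result like its first argument, so a length-1 test yields a length-1 result, padded with NA because the 'yes' value is empty):

```r
val1 <- ifelse(TRUE, numeric(0), 123)
val1                 # NA: result has the length of the test (1), and the
length(val1)         # length-0 'yes' value is recycled to NA

val2 <- if (TRUE) numeric(0) else 123
length(val2)         # 0: if/else returns the chosen branch unchanged
```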

thanks,

joerg



[R] nls and weighting

2005-11-30 Thread Joerg van den Hoff
I posted this a week ago on r-devel but to no avail and hope this not 
considered cross-posting:


===cut===
hi everybody,

with each release I hope that the section

weights: an optional numeric vector of (fixed) weights.  When present,
the objective function is weighted least squares. _not yet
implemented_

in the help page of 'nls' is missing the last sentence.

are there any plans to allow/include weighting in the upcoming releases?

modifying the cost function to include the weights is probably not the
problem, I presume. what is the reason for not including the weighting?
is it related to the 'statistical' output (estimation of parameter
uncertainties and significances)?

I know of the `y ~ M' vs. `~ sqrt(W)*(y - M)' work-around suggestion in
MASS for including weighting. (I understand that the resulting error estimates
and confidence intervals from 'nls' are wrong in this case. right?)
wouldn't it be sensible to include weighting in 'nls' at least on this
level, i.e. weighted parameters provided now, correct error estimates
and the like coming later?


regards,
joerg
===cut===

I post this here again hoping to learn about possible work arounds 
(apart from the MASS proposal) for the current situation with 'nls' (no 
weighting allowed).
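for reference, the MASS-style work-around looks like this on simulated data (data and weights are made up for illustration; the formula has no left-hand side, so nls minimizes the sum of squares of the right-hand side):

```r
set.seed(1)
x <- 1:20
y <- 5 * exp(-0.2 * x) + rnorm(20, sd = 0.05)
w <- 1 / (0.05 + 0.1 * abs(y))^2     # hypothetical 1/variance weights

# unweighted form would be:  nls(y ~ A * exp(-k * x), ...)
# weighted work-around: residuals scaled by sqrt(w), no response
fit <- nls(~ sqrt(w) * (y - A * exp(-k * x)),
           start = list(A = 4, k = 0.1))
coef(fit)   # close to the true values A = 5, k = 0.2
```

(as stated above, the parameter estimates are usable but the reported standard errors refer to the transformed problem.)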

thanks in advance

joerg



Re: [R] problem with lapply(x, subset, ...) and variable select argument

2005-10-11 Thread joerg van den hoff
Gabor Grothendieck wrote:
 The problem is that subset looks into its parent frame but in this
 case the parent frame is not the environment in tt but the environment
 in lapply since tt does not call subset directly but rather lapply does.
 
 Try this which is similar except we have added the line beginning
 with environment before the print statement.
 
 tt <- function (n) {
x <- list(data.frame(a=1,b=2), data.frame(a=3,b=4))
environment(lapply) <- environment()
print(lapply(x, subset, select = n))
 }
 
 n <- "b"
 tt("a")
 
 What this does is create a new version of lapply whose
 parent is the environment in tt.
 
 
 On 10/10/05, joerg van den hoff [EMAIL PROTECTED] wrote:
 
I need to extract identically named columns from several data frames in
a list. the column name is a variable (i.e. not known in advance). the
whole thing occurs within a function body. I'd like to use lapply with a
variable 'select' argument.


example:

tt <- function (n) {
   x <- list(data.frame(a=1,b=2), data.frame(a=3,b=4))
   for (xx in x) print(subset(xx, select = n))   ### works
   print (lapply(x, subset, select = "a"))  ### works
   print (lapply(x, subset, select = a))    ### works
   print (lapply(x, subset, select = n))    ### does not work as intended
}
n = "b"
tt("a")  # works (but selects not the intended column)
rm(n)
tt("a")  # no longer works in the lapply call including variable 'n'


question: how  can I enforce evaluation of the variable n such that
the lapply call works? I suspect it has something to do with eval and
specifying the correct evaluation frame, but how? 


many thanks

joerg


 
 

many thanks to thomas and gabor for their help. both solutions solve my 
problem perfectly.

but just as an attempt to improve my understanding of the inner workings 
of R (similar problems are sure to come up ...) two more question:

1.
why does the call of the `[' function (thomas' solution) behave 
differently from `subset' in that the look-up of the variable n works 
without providing lapply with the current environment (which is nice)?
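for reference, the `['-based variant as a self-contained sketch: `[' is an ordinary function, so `n' is matched as a normal argument and evaluated where lapply() supplies it, whereas subset() evaluates its 'select' argument non-standardly in the caller's frame:

```r
x <- list(data.frame(a = 1, b = 2), data.frame(a = 3, b = 4))
n <- "b"

# no environment tricks needed: "[" receives n as a plain value
res <- lapply(x, "[", n)
res[[1]]   # a one-column data frame holding column 'b'
```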

2.
using 'subset' in this context becomes more cumbersome if sapply is 
used. it seems that then I need
...
environment(sapply) <- environment(lapply) <- environment()
sapply(x, subset, select = n)
...
to get it working (and that means you must know that sapply uses 
lapply). or can I somehow avoid the additional explicit definition of 
the lapply-environment?


again: many thanks

joerg



[R] problem with lapply(x, subset, ...) and variable select argument

2005-10-10 Thread joerg van den hoff
I need to extract identically named columns from several data frames in 
a list. the column name is a variable (i.e. not known in advance). the 
whole thing occurs within a function body. I'd like to use lapply with a
variable 'select' argument.


example:

tt <- function (n) {
    x <- list(data.frame(a=1,b=2), data.frame(a=3,b=4))
    for (xx in x) print(subset(xx, select = n))   ### works
    print (lapply(x, subset, select = "a"))  ### works
    print (lapply(x, subset, select = a))    ### works
    print (lapply(x, subset, select = n))    ### does not work as intended
}
n = "b"
tt("a")  # works (but selects not the intended column)
rm(n)
tt("a")  # no longer works in the lapply call including variable 'n'


question: how  can I enforce evaluation of the variable n such that
the lapply call works? I suspect it has something to do with eval and
specifying the correct evaluation frame, but how? 


many thanks

joerg



Re: [R] R graphics

2005-07-21 Thread joerg van den hoff
Sam Baxter wrote:
 Hi
 
 I am trying to set up 16 graphs on one graphics page in R. I have used 
 the mfrow=c(4,4) command. However I get a lot of white space between 
 each graph. Does anyone know how I can reduce this?
 
 Thanks
 
 Sam
 
 


?layout
as an alternative to par(mfrow) might be helpful anyway


too large margins:

?par

reduce value of mar, for instance



Re: [R] Michaelis-menten equation

2005-07-20 Thread joerg van den hoff
I believe the following is correct:
1.
first of all, as peter dalgaard already pointed out, your data Cp(t)
are following a straight line
very closely, i.e. 0th-order kinetics
2.
for your diff. eq. this means that you are permanently in the range Cp
>> Km, so that
dCp/dt = -Vm/Vd = const. =: -b and, therefore, Cp = a - b*t
3.
you can't get any reliable information concerning Km from the fit. the
solution of the diff. eq. (according to Maxima),
namely t + const. = -(Km*log(Cp) + Cp)/(Vm/Vd), tells you the same:
Km*log(Cp) << Cp in your data.
4.
in any case (even in the range Km ~= Cp) you can't determine Vm _and_ Vd
separately according to your diff. eq., you only get the ratio b = Vm/Vd.
this does make sense: what you are measuring is the decreasing plasma
concentration; you don't have any information concerning the relevant
volume fraction, i.e. the Vd, in your data. therefore any variation in the
effective max. velocity can either be ascribed to a variation of Vm or to
a modified Vd. in other words: you should view Vm* = Vm/Vd as your
effective Vm.
5.
x <- PKindex[,1]
y <- PKindex[,2]
res <- lm(y ~ x)   # yields a = 8.561, b = Vm* = 0.279
summary(abs(residuals(res)/y)*100)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
0.00863 0.09340 0.13700 0.26500 0.22900 1.37000

i.e. 0.265 percent deviation on average between data and fit. I believe
that is the max. information you can get from your data
(the result for b accidentally is not so far away
from the ratio of your results Vm = 10.04, Vd = 34.99, which actually
must be completely unstable:
double both parameters and nothing happens to the fit).
6.
you state you simulated the data with km=4.69? using the above Vm and
Vd, the resulting data are (not unexpectedly)
quite different from those you used as input to the fit. maybe you made
an error somewhere upstream?
7.
in conclusion: don't try to fit Vm _and_ Vd separately, check whether
your simulated data are correct, and keep in mind that if Km << Cp, you
can't fit Km (at least not reliably).
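point 4 can be made concrete with a few lines of base R (crude fixed-step Euler integration; parameter values from the thread):

```r
# integrate dCp/dt = -(Vm/Vd) * Cp/(Km + Cp); Vm and Vd enter only
# through their ratio, so (Vm, Vd) and (2*Vm, 2*Vd) produce *identical*
# curves -- the pair is not identifiable from Cp(t) alone
mm <- function(Vm, Vd, Km, Cp0 = 8.57, dt = 0.01, tmax = 24) {
  Cp <- numeric(tmax/dt + 1)
  Cp[1] <- Cp0
  for (i in seq_along(Cp)[-1])
    Cp[i] <- Cp[i-1] - dt * (Vm/Vd) * Cp[i-1]/(Km + Cp[i-1])
  Cp
}
max(abs(mm(10, 35, 4.7) - mm(20, 70, 4.7)))   # 0: indistinguishable
```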



Chun-Ying Lee wrote:
 Hi,
 
We are doing a pharmaockinetic modeling.  This model is
 described as intravenous injection of a certain drug into
 the body.  Then the drug molecule will be eliminated (or decayed)
 from the body.  We here used a MM eq. to describe the elimination
 process and the changes of the drug conc..  So the diff. eq. would
 be: dCp/dt = -Vm/Vd * Cp/(Km+Cp).  Vd is a volume of distribution.
 We used lsoda to solve the diff. eq. first and fit the diff. eq.
 with optim first (Nelder-Mead simplex) and followed by using nls 
 to take over the fitting process of optim.  However, we can not
 obtain the correct value for Km if we used the above model.  The
 correct Km can be obtained only when we modeled the diff eq. with
 dCp/dt= -Vm/Vd * Cp/(Km/vd + Cp).  Now we lost.  The data were
 from simulation with known Vm and Km.  Any idea?  Thanks.
 
 regards,
 --- Chun-ying Lee
 
it is not clear to me what you are trying to do:
you seem to have a time-concentration curve in PKindex and you seem 
to set up a derivative of this time dependency according to some 
model in dCpdt. AFAICS this scenario is not directly related to the 
situation described by the Michaelis-Menten equation, which relates 
some input concentration with some product concentration. If Vm and
Km are meant to be the canonical symbols,
what is Vd, a volume of distribution? it is impossible to see (at least
for me) what exactly you want to achieve.

(and in any case, I would prefer nls for a least squares fit 
instead of 'optim').

joerg



__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
 
 
 
 
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Re: [R] Michaelis-menten equation

2005-07-19 Thread joerg van den hoff
Chun-Ying Lee wrote:
 Dear R users:
   I encountered difficulties with the Michaelis-Menten equation. I found 
 that when I use the right model definition, I get the wrong Km value, 
 and I get the right Km value when I use the wrong model definition. 
 The values of Vd and Vmax are correct in these two models. 
 
 #-right model definition
 PKindex <- data.frame(time=c(0,1,2,4,6,8,10,12,16,20,24),
    conc=c(8.57,8.30,8.01,7.44,6.88,6.32,5.76,5.20,4.08,2.98,1.89))
 mm.model <- function(time, y, parms) { 
    dCpdt <- -(parms["Vm"]/parms["Vd"])*y[1]/(parms["Km"]+y[1]) 
    list(dCpdt)}
 Dose <- 300
 modfun <- function(time,Vm,Km,Vd) { 
    out <- lsoda(Dose/Vd,time,mm.model,parms=c(Vm=Vm,Km=Km,Vd=Vd),
   rtol=1e-8,atol=1e-8)
    out[,2] } 
 objfun <- function(par) { 
    out <- modfun(PKindex$time,par[1],par[2],par[3]) 
    sum((PKindex$conc-out)^2) } 
 fit <- optim(c(10,1,80), objfun, method="Nelder-Mead")
 print(fit$par)
 [1] 10.0390733  0.1341544 34.9891829  #--Km=0.1341544, wrong value--
 
 
 #-wrong model definition
 #-Km should not be divided by Vd--
 PKindex <- data.frame(time=c(0,1,2,4,6,8,10,12,16,20,24),
    conc=c(8.57,8.30,8.01,7.44,6.88,6.32,5.76,5.20,4.08,2.98,1.89))
 mm.model <- function(time, y, parms) { 
    dCpdt <- -(parms["Vm"]/parms["Vd"])*y[1]/(parms["Km"]/parms["Vd"]+y[1]) 
    list(dCpdt)}
 Dose <- 300
 modfun <- function(time,Vm,Km,Vd) { 
    out <- lsoda(Dose/Vd,time,mm.model,parms=c(Vm=Vm,Km=Km,Vd=Vd),
    rtol=1e-8,atol=1e-8)
    out[,2] 
 } 
 objfun <- function(par) { 
    out <- modfun(PKindex$time,par[1],par[2],par[3]) 
    sum((PKindex$conc-out)^2)} 
 fit <- optim(c(10,1,80), objfun, method="Nelder-Mead")
 print(fit$par)
 [1] 10.038821  4.690267 34.989239  #--Km=4.690267, right value--
 
 What did I do wrong, and how to fix it?
 Any suggestions would be greatly appreciated.
 Thanks in advance!!
 
 
 

it is not clear to me what you are trying to do:
you seem to have a time-concentration curve in PKindex and you seem to
set up a derivative of this time dependency according
to some model in dCpdt. AFAICS this scenario is not directly related to
the situation described by the Michaelis-Menten equation, which relates
some input concentration with some product concentration. If Vm and
Km are meant to be the canonical symbols,
what is Vd, a volume of distribution? it is impossible to see (at least
for me) what exactly you want to achieve.

(and in any case, I would prefer nls for a least squares fit instead
of 'optim').
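as a sketch of that suggestion (hypothetical starting values; it reuses modfun and PKindex from the quoted code, so it is not a standalone example):

```r
## sketch: let nls do the least squares fit directly, instead of
## minimising the summed squares with optim
fit.nls <- nls(conc ~ modfun(time, Vm, Km, Vd), data = PKindex,
               start = list(Vm = 10, Km = 1, Vd = 80), trace = TRUE)
summary(fit.nls)   # estimates plus approximate standard errors
```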

joerg
 
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Re: [R] unexpected par('pin') behaviour

2005-07-14 Thread joerg van den hoff
Martin Maechler wrote:
joerg == joerg van den hoff [EMAIL PROTECTED]
on Wed, 13 Jul 2005 16:00:58 +0200 writes:
 
 
 joerg hi everybody,
 joerg I noticed the following: in one of my scripts 'layout' is used to 
 joerg generate a (approx. square) grid of variable dimensions (depending 
 on 
 joerg no. of input files). if the no. of subplots (grid cells) becomes 
 joerg moderately large (say > 9) I use a construct like
 
 joerg   ###layout grid computation and set up occurs here###
 joerg   ...
 joerg   opar <- par(no.readonly = TRUE)
 joerg   on.exit(par(opar))
 joerg   par(mar=c(4.1, 4.1, 1.1, .1))
 joerg   ###plotting occurs here
 joerg   ...
 
 joerg to reduce the figure margins to achieve a more
 joerg compact display. apart from 'mar' no other par()
 joerg setting is modified.
 
 yet another example of using  par('no.readonly') when it's not
 needed and inefficient.

might be. but at least it is immune against modifying some more 'par' 
settings in the course of modifications at some other place in the 
program. inefficiency: should be at the ppm level of total cpu-usage in 
my case, :-). what's so bad with copying back and forth this moderately 
large vector?
 
 Replacing the above by
 
 ###layout grid computation and set up occurs here###
 ...
 op <- par(mar=c(4.1, 4.1, 1.1, .1))
  on.exit(par(op))
 ###plotting occurs here
 

 will be much more efficient and even solve your problem with pin.
 
right (solves the problem). I'll adopt this change for the time being. 
thank you.

 But then, yes, there might be another par() problem hidden in
 your code / example, 
 but unfortunately you have not specified reproducible code.
 
 
 joerg this works fine until the total number of subplots becomes too 
 large 
 joerg (large depending on the current size of the X11() graphics 
 device 
 joerg window, e.g. 7 x 6 subplots for the default size of x11()).
 
 joerg I then get the error message (only _after_ all plots are correctly 
 joerg displayed, i.e. obviously during execution of the above on.exit() 
 call)
 
 joerg Error in par(opar) :
 joerg invalid value specified for graphics parameter pin
 
 joerg and par(pin) yields:
 
 joerg [1]  0.34864 -0.21419
 
 you mean *after* all the plotting, not the pin values in
 'opar', right?

yes
 
 joerg which indeed is invalid (negative 2nd component).
 
 joerg I'm aware of this note from ?par:
 
 joerg The effect of restoring all the (settable) graphics parameters as
 joerg in the examples is hard to predict if the device has been resized.
 joerg Several of them are attempting to set the same things in different
 joerg ways, and those last in the alphabet will win.  In particular, the
 joerg settings of 'mai', 'mar', 'pin', 'plt' and 'pty' interact, as do
 joerg the outer margin settings, the figure layout and figure region
 joerg size.
 
 {{which shows you the known  but not widely known fact that
   traditional par() based graphics are ``flawed by design''
   and that's why there is the package grid for better 
   designed graphics

... which seems to my simple mind a lot more complicated to come to 
terms with than the graphics package. I understand that grid is more 
powerful but the subset of functionality provided by 'graphics' seems 
more difficult to use in 'grid'. wrong or right?
 }}
 
 joerg but my problem occurs without any resizing of the
 joerg x11() window prior to resetting par to par(opar).
 
 It still would be interesting to get a reproducible example
 here, as the posting guide asks for.
 
===cut
graphics.off()

f <- function(n = 7, m = 6) {
    nm <- n*m
    layout(matrix(1:nm, n, m))
    opar <- par(no.readonly = TRUE)
    on.exit(par(opar))
    par(mar = c(4.1, 4.1, 1.1, 0.1))
    for (i in 1:nm) plot(i, pch = (i-1) %% 25 + 1)
    layout(1)
}
f(5) #good
par('pin')
f()  #bad (at least for x11() default size)
par('pin')
===cut
 Martin

thanks for bothering.
joerg
 
 
 joerg any ideas, what is going on?
 
 joerg platform powerpc-apple-darwin7.9.0
 joerg arch powerpc
 joerg os   darwin7.9.0
 joerg system   powerpc, darwin7.9.0
 joerg status   Patched
   ^^^
 I hope that this is not the basic problem 
no, don't think so. that concerned the MacOS GUI, I believe.
 
 joerg major2
 joerg minor1.0
 joerg year 2005
 joerg month05
 joerg day  12
 joerg language R
 
 joerg regards,
 
 joerg joerg
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] unexpected par('pin') behaviour

2005-07-13 Thread joerg van den hoff
hi everybody,

I noticed the following: in one of my scripts 'layout' is used to 
generate a (approx. square) grid of variable dimensions (depending on 
no. of input files). if the no. of subplots (grid cells) becomes 
moderately large (say > 9) I use a construct like

   ###layout grid computation and set up occurs here###
...
   opar <- par(no.readonly = TRUE)
   on.exit(par(opar))
   par(mar=c(4.1, 4.1, 1.1, .1))
   ###plotting occurs here
...

to reduce the figure margins to achieve a more compact display. apart 
from 'mar' no other par() setting is modified.

this works fine until the total number of subplots becomes too large 
(large depending on the current size of the X11() graphics device 
window, e.g. 7 x 6 subplots for the default size fo x11()).

I then get the error message (only _after_ all plots are correctly 
displayed, i.e. obviously during execution of the above on.exit() call)

Error in par(opar) :
invalid value specified for graphics parameter pin


and par(pin) yields:

[1]  0.34864 -0.21419


which indeed is invalid (negative 2nd component).

I'm aware of this note from ?par:

The effect of restoring all the (settable) graphics parameters as
  in the examples is hard to predict if the device has been resized.
  Several of them are attempting to set the same things in different
  ways, and those last in the alphabet will win.  In particular, the
  settings of 'mai', 'mar', 'pin', 'plt' and 'pty' interact, as do
  the outer margin settings, the figure layout and figure region
  size.


but my problem occurs without any resizing of the x11() window prior to 
resetting par to par(opar).

any ideas, what is going on?

platform powerpc-apple-darwin7.9.0
arch powerpc
os   darwin7.9.0
system   powerpc, darwin7.9.0
status   Patched
major2
minor1.0
year 2005
month05
day  12
language R

regards,

joerg

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] x11 and pseudo-color

2005-06-01 Thread joerg van den hoff
for some reason the following message seems not to have reached the list 
 in the first try, at least I can't find it. my apologies if this is my 
fault:


we are running R under Solaris with SunRay Terminals, which are set to 8
bit color to comply with some other software. In this configuration,
X11() opens with colortype="true", i.e., it is not recognized that
actually the display is only 8 bit. This leads to error messages
(advising to use 'pseudo.cube').

question 1: is X11() supposed to recognize  the actual color
capabilities? i.e. is this a bug?

question 2: is there a possibility to query the color capabilities from
within R in order to being able to open the X11() displays always (for
true color as well as for 8 bit) with the correct colortype setting from
within a function?

regards,
joerg

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] nls(() and trace

2005-06-01 Thread joerg van den hoff

hi everybody,

is there a canonical way to get hold of the trace=TRUE output from 
nls, i.e. to copy it to an R variable (or at least to an external log file)?


I have only found the possibility to fix(nlsModel) (and then the 
correct copy of that: namespace function ...) within the R session by 
modifying the trace() definition within nlsModel. not really good for 
everyday use ...
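one workaround (a sketch, not from the original thread; it assumes the trace lines are written to standard output, which capture.output() can intercept):

```r
## sketch: divert the trace=TRUE output of nls() into a character vector
x <- 1:10
y <- 2 * exp(-0.3 * x) + rnorm(10, sd = 0.01)
trace.log <- capture.output(
    fit <- nls(y ~ a * exp(b * x), start = list(a = 1, b = -0.1),
               trace = TRUE)
)
trace.log   # one line per iteration: residual sum of squares and parameters
writeLines(trace.log, "nls_trace.log")   # or copy to an external log file
```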



regards,
joerg


ps: if this comes to douglas bates' attention: wouldn't it be helpful to 
add the trace output as a further component (possibly optional, 
depending on the trace flag) to the nls-object returned by nls()?


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] X11() and pseudo color

2005-05-31 Thread joerg van den hoff
we are running R under Solaris with SunRay Terminals, which are set to 8 
bit color to comply with some other software. In this configuration, 
X11() opens with colortype="true", i.e., it is not recognized that 
actually the display is only 8 bit. This leads to error messages 
(advising to use 'pseudo.cube').


question 1: is X11() supposed to recognize  the actual color 
capabilities? i.e. is this a bug?


question 2: is there a possibility to query the color capabilities from 
within R in order to being able to open the X11() displays always (for 
true color as well as for 8 bit) with the correct colortype setting from 
within a function?
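regarding question 2, one conceivable workaround (a sketch only; it assumes the external xdpyinfo utility is installed and that its output contains a "depth of root window" line):

```r
## sketch: query the X server's root-window depth via xdpyinfo and pick
## the colortype accordingly (xdpyinfo availability is an assumption)
depth.line <- grep("depth of root window", system("xdpyinfo", intern = TRUE),
                   value = TRUE)
depth <- as.numeric(sub(".*depth of root window:[[:space:]]*([0-9]+).*", "\\1",
                        depth.line))
X11(colortype = if (!is.na(depth) && depth <= 8) "pseudo.cube" else "true")
```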


regards,
joerg

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] values of bars in barplot

2005-05-30 Thread joerg van den hoff

luc tardieu wrote:
Hi, 


I couldn't find how to have the values written on the
top of each bar in a barplot. When using hist(), it is
possible to use labels=T, but this option does not
seem to exist for barplot().

Is there a trick I could use to do that ?

Thanks to all

Luc

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html



something like:


x <- 1:3
#ensure some free space above the bar and remember midpoints:
bmp <- barplot(x, ylim = c(0, 1.1*max(x)))

text(bmp, x + .03*max(x), x)


should do.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] How to break an axis?

2005-05-24 Thread joerg van den hoff

Bo Peng wrote:

Dear list,

I need to plot four almost horizontal lines with y-values around
1,3,4, 400. If I plot them directly, the first three lines will be
indiscernible so I am thinking of breaking y-axis into two parts, one
with range (0,5), another (395,400). Is there an easy way to do this?

I can think of two ways: 
1. use two plots and draw axes manually. The plot margins are, however,
difficult to adjust.
2. use one plot, adjust y-values of the lines and draw y-axis
manually. But, how would I break y-axis and add separation symbols
*on* yaxis? (By separation symbol, I mean something like
--//--

Many thanks in davance.
Bo

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


maybe something like

matplot(1:10, rep(1,10) %o% c(1,3,4), col = 1:3, ylim = c(0,20), type = 'b')
par(new = TRUE)
matplot(1:10, rep(400,10), axes = FALSE, ann = FALSE, col = 4, ylim = c(0,400), type = 'b')
axis(4)
legend(par('usr')[2], par('usr')[4], bg = 'white', xjust = 1,
       c('left axis', 'left axis', 'left axis', 'right axis'),
       col = 1:4, pch = as.character(1:4))


solves your problem (double y-axis instead of splitting the axis).

joerg

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] double backslashes in usage section of .Rd files

2005-04-04 Thread joerg van den hoff
I have written a package where a function definition includes a regexp 
pattern containing double backslashes, such as

myfunction <- function (pattern = ".*\\.txt$")

when I R CMD CHECK the corresponding .Rd file, I get warnings 
(code/documentation mismatch) if I enforce two backslashes in the 
documentation printout by

\usage{ myfunction(pattern = ".*\\.txt$") }

do I have to live with this or is there a way to avoid the warnings (apart 
from being satisfied with a wrong manpage ...)?

regards
joerg
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] y-label on right hand side of plot

2005-03-31 Thread joerg van den hoff
Is there a better way than:

par(mar=c(6,6,6,6))
plot(1:10, yaxt="n", ylab="")
axis(4)
text(12, 5.5, 'y-label', xpd=TRUE, srt=90)

to get the y-ticks _and_ the y-label to the rhs of the plot? I did not 
find anything in the 'par', 'plot', 'axis' and 'title' manpages to 
solve the problem. (the above is ugly, because one needs to hardcode the 
text position or needs to calculate it 'manually' from par('usr'). it 
would be much nicer if there were a flag to 'title' controlling where 
the labels occur).
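one alternative (a sketch, not part of the original question): mtext() places text in the figure margins by margin line, so no user-coordinate arithmetic is needed:

```r
## sketch: right-hand axis plus right-hand label via mtext()
par(mar = c(5, 2, 4, 5))              # widen the right margin
plot(1:10, yaxt = "n", ylab = "")
axis(4)                               # ticks on the right
mtext("y-label", side = 4, line = 3)  # side 4 = right margin
```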

thanks
joerg
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] OS X proxy question

2005-03-22 Thread joerg van den hoff
Drew Balazs wrote:
All,
I'm currently using R 2.0.1 on a Powerbook G4 with OS X 10.3.8. So far
the only way I've found to set my proxy is by doing 
Sys.putenv(http_proxy = "insert proxy url:proxy port") every time I
start up R.

This works fine, but I'd like to find a solution that doesn't require
manual input every time I start up R. Does anyone have a better way of
accomplishing this?
Thanks,
Drew Balazs
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
if I understand your problem, you need a way to get some environment 
variables right which usually are set in .profile or .cshrc.

to this end you need a directory `.MacOSX' in your home dir and there a 
file `environment.plist'. this can best be edited with the utility 
`Property Editor' which is in `/Developer/Applications/Utilities'.

see also http://developer.apple.com/qa/qa2001/qa1067.html
hope this helps
joerg
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Installation problem MacOS X

2005-03-18 Thread joerg van den hoff
Hector L. Ayala-del-Rio wrote:
R gurus
   I have tried to install the R 2.0.1 binary for OS X and although the 
installation was successful I can't get the application going.  When I 
double click the icon R tries to load (R window shows briefly) and it 
quits immediately.  This behavior was described in this list before and 
nobody found the answer to the problem. If you try to load the x11 
version by typing R at the command line it loads up with no problem.  
This means that the app is partially working and there are no 
permissions issue.  The most interesting thing is if I log to a 
different account (Dummy) and I double click the application it loads 
with no problem.  This makes me think that there has to be some type of 
user specific file or directory that is causing the gui to quit.  Any 
suggestions on what file(s) could be affecting R?

Thanks
Hector
**
Héctor L. Ayala-del-Río, Ph.D.
Assistant Professor
Department of Biology
University of Puerto Rico at Humacao
CUH postal station
100 road 908
Humacao, PR 00791
Ph: 787-850- x 9001
Fax: 787-850-9439
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! 
http://www.R-project.org/posting-guide.html

you have moved the R app already to its final destination (instead of 
starting it from the mounted disk image)?

I would try the Console utility (in /Applications/Utilities) to have a 
look at console.log and system.log (or inspect the logs directly: 
/private/var/log/system.log and /Library/Logs/Console/vdh/console.log) 
immediately _after_ an  attempt to start the R app. probably it's a 
permission problem all the same.

regards
joerg
ps: there is a R mailing list exclusively for OS X users:
https://stat.ethz.ch/mailman/listinfo/r-sig-mac
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] plot function

2005-01-26 Thread joerg van den hoff
Cuichang Zhao wrote:
Hello, 
how can I use the plot function to change the range of the axes? I want my graph to cover a certain range [a, b], instead of going from the min to the max of the data.
if I want to draw a line instead of dots, should I use both the plot and lines functions?
for example:
plot(x, y);
lines(x, y);
 
things seem not to work if I only use lines(x, y).
 
Thank you so much.
 
Cuichang Zhao
 
Jan 25, 2005


-
[[alternative HTML version deleted]]
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
the first "see also" entry in the help text for "plot" is 
"plot.default". well, do that: try ?plot.default. time estimate for 
finding it yourself: one minute. you really should think about using the 
online help. if everybody used your approach, the list would 
probably get a thousand 'hits' per hour of the quality: "is there a way 
to speed up matrix addition? the for loops run too slow."
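for reference, the relevant plot.default arguments in a minimal sketch (made-up data):

```r
## sketch: explicit axis ranges via xlim/ylim; type="l" draws a line
## directly, so no separate lines() call is needed
x <- 1:10
y <- x^2
plot(x, y, xlim = c(0, 12), ylim = c(0, 150), type = "l")
```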

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] more question

2005-01-25 Thread joerg van den hoff
Cuichang Zhao wrote:
Hello, 
thank you very much for your help in last email. it is very helpful. right now, i have more questions about my project,
 
1. solve can i remove the NA from a vectors:
for exmample, if my vector is:
v <- (NA, 1, 2, 3, 4, 5)
which should read v <- c(NA, 1, 2, 3, 4, 5):

how can I remove the NA from vector v
 
v <- v[!is.na(v)]
2. how I can get input from the user?
try ?readline
 
3. can I read more than one data files in one program? and how i can write something to a file
?read.table, ?write.table, ?scan, ?readLines, ?write
 
4. how i can display/output something on the screen?
?print, ?x11, ?plot, ?Devices
 
thank you so much
 
Best wish
 
C-Ming
 
Jan 24, 2005
one remark: I personally think it's a bad idea to refer people too 
quickly (with implied reproach) to the manuals (if the manuals are good, 
_anything_ is written down _somewhere_, but it might take substantial 
time to find it).
but at the very least your questions 2.-4. (and the solution to no. 1 is 
not so hard to find either) are so elementary that you probably have 
not spent a single minute reading the documentation (online help or e.g. 
"Introduction to R"). and that is in my view not a healthy ratio of "time 
I invested myself to solve the problem" to "time others invest to solve 
my problem".

no offense meant, but really ...
joerg
 
 


-
[[alternative HTML version deleted]]
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] background color for plotting symbols in 'matplot'

2005-01-19 Thread joerg van den hoff
thanks for the response.
Uwe Ligges wrote:
joerg van den hoff wrote:
something like
matplot2(matrix(1:6,3,2),matrix(7:12,3,2),pch=21,bg=c(2,3),type='b')

Where can we find matplot2?
oops. that should have been 'matplot' (not 'matplot2'), of course.

does not yield the expected (at least by me) result: only the points 
on the first line get (successively) background colors for the 
plotting symbols, the second line gets no background color at all for 
its plotting symbols.

I think the natural behaviour should be two curves which (for the 
example given above) symbol-background colors 2 and 3, respectively 
(as would be obtained by a suitable manual combination of 'plot' and 
'lines'). the modification of the matplot code to achieve this 
behaviour is obvious as far as I can see (adding 'bg' to the explicit 
arguments of matplot and handling similar to 'lty', 'cex' and the like 
inside the function including transfer to 'plot' and 'lines' argument 
list).

is the present behaviour a bug of 'matplot' or is it for some reason 
intended behaviour?

The real point is that you might want to mark by rows *or* by columns, 
so it's not that easy to specify a sensible default behaviour, at least 
one has to think about it.
I'm aware of this: any specific behaviour could be the 'best' default 
for someone. in terms of consistency, I would argue that matplot plots 
columns of x against columns of y, so these columns should be 
addressed. that is how 'lty' and 'pch' and 'cex' do it. the present 
behaviour of 'bg' ('bg' interpreted only for column 1 of x against 
column 1 of y) is not sensible.

If you want to implement it for all possible arguments, the well known 
problem of huge number of arguments springs to mind as well.
that is indeed a problem, but I think mainly when reading the help 
pages, which then are cluttered with many not often used graphic parameters.
Since you say the modification [...] is obvious: I think R-core 
welcomes your contribution.
well, I'm not a fluent R programmer. I'm not sure if the simple minded 
modification of 'matplot' would be welcome by R-core. rather, I attach 
here the modified code 'matplot2' (sic!), if someone wants to use it. a 
'diff' vs. the original versions shows easily the few modified lines.

joerg
Uwe Ligges

regards,
joerg
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! 
http://www.R-project.org/posting-guide.html




#clone of the standard 'matplot' version augmented by using 'bg' as an
#additional explicit argument and modifications of the code leading to
#a behaviour of 'bg' similar to 'lty', 'pch', 'cex' et cetera (columnwise
#recycling of 'bg' entries).
matplot2 <- function (x, y, type = "p", lty = 1:5, lwd = 1, pch = NULL, col = 1:6, 
    cex = NULL, xlab = NULL, ylab = NULL, xlim = NULL, ylim = NULL, bg = NULL,
    ..., add = FALSE, verbose = getOption("verbose")) 
{
    paste.ch <- function(chv) paste("\"", chv, "\"", sep = "", 
        collapse = " ")
    str2vec <- function(string) {
        if (nchar(string)[1] > 1) 
            strsplit(string[1], NULL)[[1]]
        else string
    }
    xlabel <- if (!missing(x)) 
        deparse(substitute(x))
    ylabel <- if (!missing(y)) 
        deparse(substitute(y))
    if (missing(x)) {
        if (missing(y)) 
            stop("Must specify at least one of `x' and `y'")
        else x <- 1:NROW(y)
    }
    else if (missing(y)) {
        y <- x
        ylabel <- xlabel
        x <- 1:NROW(y)
        xlabel <- ""
    }
    kx <- ncol(x <- as.matrix(x))
    ky <- ncol(y <- as.matrix(y))
    n <- nrow(x)
    if (n != nrow(y)) 
        stop("`x' and `y' must have same number of rows")
    if (kx > 1 && ky > 1 && kx != ky) 
        stop("`x' and `y' must have only 1 or the same number of columns")
    if (kx == 1) 
        x <- matrix(x, nrow = n, ncol = ky)
    if (ky == 1) 
        y <- matrix(y, nrow = n, ncol = kx)
    k <- max(kx, ky)
    type <- str2vec(type)
    if (is.null(pch)) 
        pch <- c(paste(c(1:9, 0)), letters)[1:k]
    else if (is.character(pch)) 
        pch <- str2vec(pch)
    if (verbose) 
        cat("matplot: doing ", k, " plots with ", paste(" col= (", 
            paste.ch(col), ")", sep = ""), paste(" pch= (", paste.ch(pch), 
            ")", sep = ""), " ...\n\n")
    ii <- match("log", names(xargs <- list(...)), nomatch = 0)
    log <- if (ii != 0) 
        xargs[[ii]]
    xy <- xy.coords(x, y, xlabel, ylabel, log = log)
    xlab <- if (is.null(xlab)) 
        xy$xlab
    else xlab
    ylab <- if (is.null(ylab)) 
        xy$ylab
    else ylab
    xlim <- if (is.null(xlim)) 
        range(xy$x[is.finite(xy$x)])
    else xlim
    ylim <- if (is.null(ylim)) 
        range(xy$y[is.finite(xy$y)])
    else ylim
    if (length(type) < k) 
        type <- rep(type, length.out = k)
    if (length(lty) < k) 
        lty <- rep(lty, length.out = k)
    if (length(lwd) < k) 
        lwd <- rep(lwd, length.out = k)
    if (length(pch) < k) 
        pch <- rep(pch

[R] background color for plotting symbols in 'matplot'

2005-01-18 Thread joerg van den hoff
something like
matplot2(matrix(1:6,3,2),matrix(7:12,3,2),pch=21,bg=c(2,3),type='b')
does not yield the expected (at least by me) result: only the points on 
the first line get (successively) background colors for the plotting 
symbols, the second line gets no background color at all for its 
plotting symbols.

I think the natural behaviour should be two curves which (for the 
example given above) symbol-background colors 2 and 3, respectively (as 
would be obtained by a suitable manual combination of 'plot' and 
'lines'). the modification of the matplot code to achieve this 
behaviour is obvious as far as I can see (adding 'bg' to the explicit 
arguments of matplot and handling similar to 'lty', 'cex' and the like 
inside the function including transfer to 'plot' and 'lines' argument list).

is the present behaviour a bug of 'matplot' or is it for some reason 
intended behaviour?

regards,
joerg
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] german umlaut problem under MacOS

2004-12-15 Thread joerg van den hoff
I did not find this in the archive (hope it isn't there...):
the current release of R (2.0.1) for MacOS (10.3.6) seems not to handle
german special characters like 'ü' correctly:
 f <- 'ü'
can be entered at the prompt, but echoing the variable yields
[1] "\303\274"  (I think the unicode of the character)
and inserting, for instance
text(1,2,f)
in some plot seems to insert two characters (Ã¼) (probably an 
interpretation of the first and second group of the unicode?).

I believe this is an R problem, or is there a simple configuration switch?
thanks
joerg
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R does not support UTF-8 (was german umlaut problem under MacOS)

2004-12-15 Thread joerg van den hoff
Brian D Ripley wrote:
You wrote your mail in UTF-8.  R does not support UTF-8, and that is both
documented and announced on startup in such a locale (at least on OSes
with standard-conforming implementations):
thanks for clarifying this point.
nevertheless:
1. the mail was (on purpose) sent in utf-8 to transport correctly the 
output from the R command window (i.e. the GUI provided with the macOS 
port). it is _this_ GUI (sorry for not explaining this correctly in the 
first place) where the problem occurs. I'm not using (knowingly at 
least) utf-8.
when starting the same binary from the command line in a terminal (where 
I generally use ISO Latin 1 encoding) it is perfectly possible to get 
the special characters into variables and into plots.

2. the OS is macos 10.3, i.e. essentially a FreeBSD derivative, and 
hopefully conforms to the standards. but R on startup in the GUI gives only:
cut=

R : Copyright 2004, The R Foundation for Statistical Computing
Version 2.0.1  (2004-11-15), ISBN 3-900051-07-0
R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.
R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.
Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for a HTML browser interface to help.
Type 'q()' to quit R.
R
cut=
i.e. no announcement whatsoever concerning missing utf-8 support, 
despite the fact that the following input is interpreted in such a way.

so, probably this is more a question to the maintainers of the macOS 
port: _where_ did R (when started with the GUI) get the notion that it 
should interpret keyboard input as utf-8? can I change this (it's not 
in the preferences, for instance)?

gannet% env LANG=en_GB.utf8 R
R : Copyright 2004, The R Foundation for Statistical Computing
Version 2.0.1  (2004-11-15), ISBN 3-900051-07-0
...
WARNING: UTF-8 locales are not currently supported
Solution: do not use an unsupported locale.
On Wed, 15 Dec 2004, joerg van den hoff wrote:

I did not find this in the archive (hope it isn't there...):
the current release of R (2.0.1) for MacOS (10.3.6) seems not to handle
german special characters like 'ü' correctly:

I get two characters (Atilde quarter) here.

 f <- 'ü'
can be entered at the prompt, but echoing the variable yields

You mean printing the contents, I presume.
yes (shell speak).

[1] "\303\274"  (the UTF-8 byte sequence of the character, I think)
and inserting, for instance
text(1,2,f)
in some plot seems to insert two characters (Ã¼) (probably an
interpretation of the first and second byte of the UTF-8 sequence?).
I believe this is an R problem, or is there a simple configuration switch?
thanks
joerg
__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

regards,
joerg


Re: [R] Modifying Code in .rda based packages (e.g. lme4)

2004-06-14 Thread joerg van den hoff
Prof Brian Ripley wrote:
No standard R package is supplied as a .rda, not even lme4.  You
must be looking at a binary installation, and you would do best to
reinstall from the sources.  You could use
R --vanilla
load("/all.rda")
fix(GLMM)
save(ls(all=T), file="/all.rda", compress = TRUE)
q()
but we would not recommend it.  Indeed, we do not recommend your altering
functions in other people's packages.  Why not just make a copy of GLMM
with another name and alter that?
On Fri, 11 Jun 2004, Dieter Menne wrote:
 

assume I want to make a minor local change in a package that is supplied as
.rda. For example, I want to get rid of the non-verbose-protected
Iteration message in GLMM/lme4.
Probably I have to load / change / save the package, but could someone help
me to get the syntax right?
   

 

I think saving needs to be done with
save(list = ls(all=T), file = "/all.rda", compress = TRUE)
otherwise R complains:
   Object "ls(all = T)" not found
(the '...' argument comes first in the 'save' argument list)
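a minimal sketch illustrating the difference (the file name here is chosen via tempfile(), not the original path):

```r
x <- 1
fn <- tempfile(fileext = ".rda")

## correct: the 'list' argument takes a character vector of object names
save(list = ls(all = TRUE), file = fn, compress = TRUE)

## wrong: save()'s '...' expects the objects themselves (as unquoted
## names), so save(ls(all = TRUE), file = fn) tries to save an object
## literally called "ls(all = TRUE)" and fails
file.exists(fn)
```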


[R] nls and R scoping rules

2004-06-10 Thread joerg van den hoff
I apologize for posting this in essence the second time (no light at the 
end of the tunnel yet..):

is there a way to enforce that nls takes both, the data *and* the 
model definition from the parent environment? the following fragment 
shows the problem.

# ==cut here==========
wrapper <- function (choose = 0)
{
   x <- seq(0, 2*pi, len = 100)
   y <- sin(1.5*x)
   y <- rnorm(y, y, .1*max(y))
   if (choose == 0) {
      rm(fifu, pos = 1)
      fifu <- function(w, x) { sin(w*x) }
   }
   else
      assign('fifu', function(w, x) { sin(w*x) }, .GlobalEnv)
   res <- nls(y ~ fifu(w, x), start = list(w = 1))
   res
}
# ==cut here==========
if called as wrapper(1) this runs fine because the fitting function 
'fifu' is assigned in the GlobalEnv.
if called as wrapper(0), fifu is defined only locally and nls 
(actually nlsModel, I think) does not know what I'm talking about.

I understand the problem: the scoping rules are such that nls
does not resolve 'fifu' in the parent environment, but rather in the
GlobalEnv (this is different for the data, which *are* taken from the
parent environment of the nls call).
I tried some variants using eval, but short of modifying 
nls itself there seems to be no way (up to now, anyway).

The solution of assigning 'fifu' directly into the GlobalEnv does work, 
of course, but leads to the undesirable effect of accumulating objects 
in the workspace which are not needed there (and might overwrite 
existing ones).
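one way to keep the workspace clean while still satisfying nls is to assign into the GlobalEnv but remove the object again on exit; a sketch (wrapper2 is my name, and this still temporarily clobbers any existing global 'fifu', so it is a workaround, not a fix):

```r
wrapper2 <- function () {
   x <- seq(0, 2*pi, len = 100)
   y <- rnorm(100, sin(1.5*x), 0.05)
   assign("fifu", function(w, x) sin(w*x), .GlobalEnv)  # visible to nls
   on.exit(rm("fifu", pos = .GlobalEnv))                # clean up afterwards
   nls(y ~ fifu(w, x), start = list(w = 1))
}
fit <- wrapper2()
exists("fifu")   # the workspace stays clean
```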

in response to my first post, I got the hint that for lm the situation 
is different: it handles the above situation as desired (i.e. accepts a 
local model definition). in so far one might even argue that this 
difference between lm and nls leads to inconsistent behaviour of R in 
quite similar situations (e.g. replacing a linear model by a nonlinear 
model at some point in a large project is not achieved by simply 
replacing lm with nls somewhere deep down in the source code).

regards,
joerg


Re: [R] nls and R scoping rules

2004-06-10 Thread joerg van den hoff
Prof Brian Ripley wrote:
On Thu, 10 Jun 2004, Prof Brian Ripley wrote:
 

Around R 1.2.x the notion was introduced that variables should be looked 
for in the environment of a formula.  Functions using model.frame got 
converted to do that, but nls did not.  I guess that the best way forward 
is to ensure that nls (and nlsModel) does search the environment of the 
formula for functions.
   

It transpires that is rather easy to achieve.  At the top of nlsModel and 
nlsModel.plinear use

   env <- new.env(parent = environment(form))

instead of

   env <- new.env()

This then automatically searches for objects used in the formula.
Notes
1) nls is in a namespace, so you need to fix the copy in the namespace or 
fix the source code and rebuild.

2) This will add to the baggage needed when you save() a nls fit in a 
workspace.  I think that is inevitable as we cannot just identify the 
functions used in the formula (as they might call other functions in the 
local environment), and it is necessary to capture the objects needed to 
evaluate the formula for the predict() method to be reliable.

We could call all.names() to get the names of functions used directly in 
the formula, but that seems not good enough (previous para).

Question to the cognoscenti: is the price in 2) too great for this to be 
done for 1.9.1?  I will put the change in R-devel for now.

 

thanks a lot for the two responses. that seems exactly what helps me out.
remaining question: how can I edit (I mean from within R, not by messing 
around with the source code directly) an invisible function object such 
as nlsModel.plinear?
help.search('invisible'), help('function') and help('edit') at least did 
not tell me.
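as far as I know (worth checking against the current documentation), non-exported objects in a namespace can be reached with ':::' or getAnywhere(), and fixInNamespace() opens them in an editor so the in-memory copy can be changed:

```r
## locate the hidden function, wherever it lives:
hit <- getAnywhere("nlsModel.plinear")
f <- hit$objs[[1]]
is.function(f)

## or address the stats namespace directly:
g <- stats:::nlsModel.plinear

## interactive editing of the namespace copy (opens an editor,
## hence commented out here):
# fixInNamespace("nlsModel.plinear", ns = "stats")
```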

joerg


Re: [R] Is there an R-version of rayplot

2004-06-09 Thread joerg van den hoff
maybe this quick-and-dirty try helps?
#=cut here==========
vectorplot <- function (field) {
   # input is an (N x 4) array of N vectors:
   #    field[,1:2] - x/y positions  of the vectors
   #    field[,3:4] - x/y components of the vectors
   # plotted are the 2-D vectors attached to the specified positions
   if (is.null(dim(field)) || dim(field)[2] != 4) stop("N x 4 array expected")
   loc <- field[,1:2]
   val <- field[,3:4]
   alpha <- rbind(loc[,1], loc[,1] + val[,1])
   omega <- rbind(loc[,2], loc[,2] + val[,2])
   matplot(alpha, omega, type = 'l', lty = 1, col = 'black') # the vector lines
   points(loc)                 # the start points
   points(loc + val, pch = 20) # the end points
}
# example:
loc <- matrix(rnorm(100), 50, 2)
field <- cbind(loc, loc)
vectorplot(field)
#=cut here==========
there are no nice arrow heads, of course, but one could construct some...
for now, vectors start with open circles and end with filled circles.
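for proper arrow heads, base graphics already provides arrows(); a hedged variant of the function above (vectorplot2 is my name, not an existing function):

```r
vectorplot2 <- function (field) {
   # same (N x 4) input convention as vectorplot above
   if (is.null(dim(field)) || dim(field)[2] != 4) stop("N x 4 array expected")
   x0 <- field[,1]; y0 <- field[,2]
   x1 <- x0 + field[,3]; y1 <- y0 + field[,4]
   plot(range(c(x0, x1)), range(c(y0, y1)), type = "n",
        xlab = "x", ylab = "y")
   arrows(x0, y0, x1, y1, length = 0.1)  # 'length' sets the head size
}

# example:
vectorplot2(cbind(matrix(rnorm(100), 50, 2), matrix(rnorm(100), 50, 2)))
```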
joerg


[R] scoping rules

2004-06-08 Thread joerg van den hoff
is there a good way to get the following fragment to work when calling 
it as wrapper(1) ?

# ==cut here==========
wrapper <- function (choose = 0)
{
   x <- seq(0, 2*pi, len = 100)
   y <- sin(1.5*x)
   y <- rnorm(y, y, .1*max(y))
   if (choose == 0) {
      rm(fifu, pos = 1)
      fifu <- function(w, x) { sin(w*x) }
   }
   else
      assign('fifu', function(w, x) { sin(w*x) }, .GlobalEnv)
   res <- nls(y ~ fifu(w, x), start = list(w = 1))
   res
}
# ==cut here==========
I understand the problem: the scoping rules are such that nls 
does not resolve 'fifu' in the parent environment, but rather in the 
GlobalEnv (this is different for the data, which *are* taken from the 
parent environment of the nls call).

The solution of assigning 'fifu' directly into the GlobalEnv (which 
happens when calling wrapper(1)) does obviously work, but leads to the 
undesirable effect of accumulating objects in the workspace which are 
not needed there (and might overwrite existing ones).

so: is there a way to enforce that nls takes the model definition from 
the parent environment together with the data?
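for completeness: in R versions where nlsModel searches the environment of the formula, a formula built inside the wrapper already carries that environment along, so the local definition is found; a sketch (wrapper3 is my name, and the explicit environment() assignment only documents the mechanism, since a formula created here has that environment anyway):

```r
wrapper3 <- function () {
   x <- seq(0, 2*pi, len = 100)
   y <- rnorm(100, sin(1.5*x), 0.05)
   fifu <- function(w, x) sin(w*x)
   fo <- y ~ fifu(w, x)
   environment(fo) <- environment()  # nls resolves fifu, x, y here
   nls(fo, start = list(w = 1))
}
fit3 <- wrapper3()
```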

joerg


Re: [R] Confidence intervals for predicted values in nls

2004-06-07 Thread joerg van den hoff
Cristina Silva wrote:
Dear all
I have tried to estimate the confidence intervals for predicted values of a
nonlinear model fitted with nls. The function predict gives the predicted
values and the lower and upper limits of the prediction, when the class of
the object is lm or glm. When the object is derived from nls, the function
predict (or predict.nls) gives only the predicted values. The se.fit and
interval arguments are just ignored.
Could anybody tell me how to estimate the confidence intervals for the
predicted values (not the model parameters), using an object of class nls?
Regards,
Cristina
--
Cristina Silva
IPIMAR - Departamento de Recursos Marinhos
Av. de Brasília, 1449-006 Lisboa
Portugal
Tel.: 351 21 3027096
Fax: 351 21 3015948
[EMAIL PROTECTED]
 

maybe this example helps:
==cut here===
# define a model formula (a and b are the parameters, f is x):
frml <- k1 ~ f*(1 - a*exp(-b/f))
# simulate some data:
a0 <- .6
b0 <- 1.2
f  <- seq(0.01, 4, length = 20)
k1true <- f*(1 - a0*exp(-b0/f))
# include some noise:
amp <- .1
k1 <- rnorm(k1true, k1true, amp*k1true)
# fit:
fifu <- deriv(frml, c("a", "b"), function(a, b, f) {})
rr <- nls(k1 ~ fifu(a, b, f), start = list(a = .5, b = 1))
# the derivatives and variance/covariance matrix:
# (derivs could be extracted from fifu, too)
dk1.da <- D(frml[[3]], 'a')
dk1.db <- D(frml[[3]], 'b')
covar <- vcov(rr)
# gaussian error propagation:
a <- coef(rr)['a']
b <- coef(rr)['b']
vark1 <-
   eval(dk1.da)^2 * covar[1,1] +
   eval(dk1.db)^2 * covar[2,2] +
   2*eval(dk1.da)*eval(dk1.db)*covar[1,2]
errk1 <- sqrt(vark1)
lower.bound <- fitted(rr) - 2*errk1  # two sigma!
upper.bound <- fitted(rr) + 2*errk1  # ditto
plot(f, k1, pch = 1)
ff <- outer(c(1,1), f)
kk <- outer(c(1,1), k1)*c(1-amp, 1+amp)
matlines(ff, kk, lty = 3, col = 1)
matlines(f, cbind(k1true, fitted(rr), lower.bound, upper.bound),
         col = c(1,2,3,3), lty = c(1,1,2,2))
xylim <- par('usr')
xpos <- .1*(xylim[2]-xylim[1]) + xylim[1]
ypos <- xylim[4] - .1*(xylim[4]-xylim[3])
legend(xpos, ypos,
       c('data', 'true', 'fit', 'confidence'),
       pch = c(1,-1,-1,-1),
       lty = c(0,1,1,2),
       col = c(1,1,2,3))
==cut here===
if you put this in a file and source it a few times from within R you'll 
get an impression of how often the fit deviates from the 'true' curve 
more than expected from the shown confidence limits.

I believe this approach is 'nearly' valid as long as gaussian error 
propagation is valid (i.e. only to first order in covar and therefore 
for not too large std. errors, am I right?).
to my simple physicist's mind this should suffice to get 'reasonable' 
(probably, in the strict sense, not completely correct?) confidence 
intervals for the fit/the prediction.
If somebody objects, please let me know!
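incidentally, the per-parameter eval() calls above can be written more compactly using the gradient matrix that deriv() attaches to the model function's value; a sketch of the same delta-method computation (variable names are mine):

```r
set.seed(42)
f  <- seq(0.01, 4, length = 20)
k1 <- rnorm(20, f*(1 - 0.6*exp(-1.2/f)), 0.02)

# deriv() builds a function whose return value carries a "gradient"
# attribute (an n x 2 matrix of d k1 / d a and d k1 / d b):
fifu <- deriv(~ f*(1 - a*exp(-b/f)), c("a", "b"), function(a, b, f) {})
rr <- nls(k1 ~ fifu(a, b, f), start = list(a = .5, b = 1))

G <- attr(fifu(coef(rr)["a"], coef(rr)["b"], f), "gradient")
vark1 <- rowSums((G %*% vcov(rr)) * G)  # diag(G %*% V %*% t(G))
errk1 <- sqrt(vark1)                     # pointwise s.e. of the fit
```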

joerg