[R] nls problem

2012-07-03 Thread joerg van den hoff

hi list,

used versions: 2.12.1 and 2.14.0 under ubuntu and macosx.

I recently stumbled over a problem with `nls', which occurs if the model  
is not specified explicitly but via an evaluation of a 'call' object.  
simple example:


8<------------------

nlsProblem <- function (len = 5) {
#===
   # purpose: to demonstrate an apparent problem with `nls' which occurs,
   # if the model is specified by passing the rhs as an evaled 'call'
   # object.  The problem is related to the way `nls' tries to compute
   # its internal variable `varIndex' which rests on the assumption that
   # the dependent (y) and, possibly, the independent (x) variable
   # are identified by having a length equal to the `nls' variable
   # `respLength'. the problem arises when there are `varNames'
   # components accidentally having this length, too.

   # in the present example, setting the number of data points to
   # len=2 triggers the error since the `call' object `fifu' has this
   # length, too, and `nls' constructs an erroneous `mf$formula' internally.
#===
   #generate some data
   x <- seq(0, 4, len = len)
   y <- exp(-x)
   y <- rnorm(y, y, .001*y)
   data <- list(x = x, y = y)

   #define suitable model
   model <- y ~ exp(-k*x)
   fifu  <- model[[3]]

   #this fit is fine:
   fit1 <- nls(model, data = data, start = c(k = 1))
   print(summary(fit1))

   #this fit crashes `nls' if len = 2:
   fit2 <- nls(y ~ eval(fifu), data = data, start = c(k = 1))
   print(summary(fit2))
}

8<------------------

to see the problem call `nlsProblem(2)'.

as explained in the comments in the example function above, I tracked it
down to the way `nls' identifies x and y in the model expression. the
problem surfaces in the line


varIndex <- n %% respLength == 0

(line 70 in the function listing from within R) which, in the case of
`fit2' in the above example, always returns a single TRUE index as long
as `len != 2' (which seems fine for the further processing) but returns
a TRUE value for the index of `fifu' as well if `len == 2'.
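The length collision can be demonstrated directly (a simplified sketch of the heuristic, not `nls'' actual code): a 'call' object such as `fifu' also has a length, and for `exp(-k*x)' that length is 2 (the function plus one argument).

```r
# the rhs stored in `fifu' is a call of length 2: `exp` and its one argument
fifu <- quote(exp(-k*x))
length(fifu)

# sketch of the heuristic: every variable whose length is a multiple of the
# response length gets flagged -- with len = 2, `fifu' is flagged as well
vars <- list(x = seq(0, 4, len = 2), y = rnorm(2), fifu = fifu)
n <- sapply(vars, length)
respLength <- length(vars$y)
n %% respLength == 0    # TRUE for x and y -- and, spuriously, for fifu
```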


question1: I'm rather sure it worked about 6 months ago with (ca.) 2.11.x
under ubuntu. have there been changes in this area?
question2: is something like the `fit2' line in the example expected to
work or not?
question3: if it is not expected to work, should not the manpage include a
corresponding caveat?
question4: is there a substitute/workaround for the `fit2' line which
still allows specifying the rhs of the model via a variable instead of a
constant (explicit) expression or function call?
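One possible workaround for question 4 (a sketch, not an official fix): splice the stored rhs into a complete formula object before calling `nls', so that `nls' never sees the `eval(fifu)' construct at all.

```r
set.seed(1)
len  <- 2
x    <- seq(0, 4, len = len)
y    <- rnorm(exp(-x), exp(-x), .001*exp(-x))
data <- list(x = x, y = y)

model <- y ~ exp(-k*x)
fifu  <- model[[3]]

# rebuild a full two-sided formula from the stored rhs before fitting:
form <- as.formula(call("~", quote(y), fifu))
fit  <- nls(form, data = data, start = c(k = 1))
coef(fit)
```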


the above example is of course contrived, but in my use case I actually
need this sort of thing. is there any chance that the way `nls' analyzes
its `model' argument can be changed to parse the `eval(fifu)' construct
correctly in all cases?


since I'm currently not subscribed to the list I'd appreciate if responses  
could be Cc'ed to me directly.


thanks in advance,

joerg

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] 'R CMD check' fails with evaluation nested too deeply: infinite recursion

2009-10-28 Thread Joerg van den Hoff
I get the error

Error : evaluation nested too deeply: infinite recursion / 
options(expressions=)?

during a 'R CMD check ...'

on one of my packages. The reason seems to be that this package is
mutually dependent on another one (i.e. the DESCRIPTION file of package
A lists package B under Depends and vice versa). this might be bad
design (having bits in both packages needed by the other), but I believe
prior to R 2.9 this did not cause trouble. now the log file of the
'check' looks something like



Installing *source* package 'roiutils' ...
** R
** exec
** preparing package for lazy loading
Loading required package: roiutils
Loading required package: fzrutils
===CUT (many more of the same) 
Loading required package: roiutils
Loading required package: fzrutils
Loading required package: roiutils
Error : evaluation nested too deeply: infinite recursion / 
options(expressions=)?




i.e. it seems that R loads both packages again and again.

what am I missing/doing wrong?
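The situation described above amounts to DESCRIPTION files of roughly this shape (a sketch reconstructed from the log; the `##` label lines are not valid DESCRIPTION syntax, and version constraints are omitted):

```
## roiutils/DESCRIPTION (fragment, sketch)
Package: roiutils
Depends: fzrutils

## fzrutils/DESCRIPTION (fragment, sketch)
Package: fzrutils
Depends: roiutils
```

One common remedy is to move the shared bits into a third package that both can list under Depends, so the dependency graph becomes acyclic.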

thanks in advance

joerg

PS:

platform   powerpc-apple-darwin8.11.1  
arch   powerpc 
os darwin8.11.1
system powerpc, darwin8.11.1   
status 
major  2   
minor  9.2 
year   2009
month  08  
day24  
svn rev49384   
language   R   
version.string R version 2.9.2 (2009-08-24)



[R] probability density function for maximum values in repeated finite samples from a normal distribution??

2009-09-28 Thread Joerg van den Hoff
this is probably not really an R-specific question, if so apologies for
off-topic posting:

I'm interested in the probability density function of the maximum values
from repeated samples of size N from a normal distribution:

smp <- rnorm(N, meanval, stdev)

with some mean 'meanval' and standard deviation 'stdev'.

I would like to know: what is the frequency distribution of max(smp) if I
draw many such samples?

if I investigate this simply via a simulation, I get of course approximate
results (and see that the resulting distribution is not quite normal
anymore, that the mean increases with increasing N, etc.).
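The simulation just described can be sketched as follows, together with the standard order-statistics result (a known fact, not from the post) that the maximum of N iid normal draws has CDF Phi((x-mu)/sigma)^N and hence density N*dnorm(z)*pnorm(z)^(N-1)/sigma:

```r
meanval <- 0; stdev <- 1; N <- 10; nrep <- 1e5

# simulate many maxima of samples of size N
maxima <- replicate(nrep, max(rnorm(N, meanval, stdev)))

# analytic density of the maximum (order statistics for iid normals)
dmax <- function(x) {
  z <- (x - meanval)/stdev
  N * dnorm(z) * pnorm(z)^(N - 1) / stdev
}

hist(maxima, breaks = 100, freq = FALSE)
curve(dmax, add = TRUE, lwd = 2)   # analytic curve matches the histogram
mean(maxima)                       # grows with N, as observed above
```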

my question: does somebody know whether there exists an analytical
expression for the distribution of max(smp) (or where to look)?

thanks,

joerg



Re: [R] one-site competition data // curve fitting

2008-07-08 Thread Joerg van den Hoff
On Mon, Jul 07, 2008 at 11:15:57AM +0200, Thiemo Schreiber wrote:
 Hello everyone,
 
 I have biological data from a competition experiment where a free ligand is 
 titrated against the binding of a protein.
 
 Now, I would like to fit a standard one-site binding curve to this data in 
 order to obtain the IC50 and Kd values. 
 Unfortunately I have not been able to find a package/function which allows 
 such a fitting and calculates the results.
 
 
 Does anyone know if there is a suitable package available and which function 
 to apply?
 
 Thanks a lot.
 
 Cheers
 Thiemo Schreiber
   [[alternative HTML version deleted]]


this is probably what you are looking for:

let

x = vector of free ligand concentrations
y = vector of corresponding spec. protein bindings

be defined in the R workspace.

then use

res <- nls(y ~ (x * Bmax) / (x + Kd), start = list(Bmax = ..., Kd = ...))

(providing some sensible start values for the free parameters, of course)

cf. `nls' manpage for details
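A runnable version of that recipe with synthetic data (Bmax, Kd, the concentrations, and the noise level are made-up illustration values, not from the experiment):

```r
set.seed(7)
Kd.true <- 5; Bmax.true <- 100

x <- c(0.5, 1, 2, 5, 10, 20, 50, 100)   # free ligand concentrations
y <- Bmax.true * x / (x + Kd.true) + rnorm(length(x), 0, 2)

# crude but serviceable start values: plateau and mid-range concentration
res <- nls(y ~ (x * Bmax) / (x + Kd),
           start = list(Bmax = max(y), Kd = median(x)))
summary(res)$coefficients
```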



Re: [R] Legend problem when exporting a plot to PDF

2008-04-29 Thread Joerg van den Hoff
On Tue, Apr 29, 2008 at 01:49:01PM +0200, Philipp Pagel wrote:
 
  When exporting to PDF a graph with a legend, in the final PDF, the
  text  is going beyond the legend box.
   dev2bitmap("test.pdf", type="pdfwrite", h=6, w=6)
  The legend looks OK on the screen.  I noticed that the size of the
  legend box depends on the size of the  screen window,
 
 As far as I remember, the problem has to do with different font handling
 in different devices. I'm sure someone more familiar with the internals
 will comment on this.
 
 The fix is easy: don't use dev2bitmap but open the desired target device
 before plotting. In your case: 
 
 pdf(file='test.pdf', width=6, height=6)
 plot(...)
 legend(...)
 dev.off()
 
 cu
   Philipp

I'm not sure whether this is the way to go. at least until recently the `pdf'
device of R had a few rough edges which surfaced every now and then. In my
experience using `dev2bitmap(type="pdfwrite", ...)' -- and thus ghostscript
for pdf generation -- resulted in cleaner/better pdf output (actually,
`dev2bitmap' sort of postprocesses output from the `pdf' device a bit).

concerning the original question: yes, the problem is there. I believe it
is related simply to the fact that the same absolute font size used in the
graphic window is used in the pdf output, too, which makes its _relative_
size dependent on the chosen size (width/height) of the pdf output.

my workaround is to ensure that the graphic window's size is exactly the
same as that used in the `dev2bitmap' call. e.g. the X11() device has
defaults of width = 7, height = 7 (but you can enforce other values by
opening a new one with, e.g., X11(width = 6, height = 9)). thus, first plot
to the graphic window, make sure that no interactive resizing is necessary
to get an acceptable display on the monitor (otherwise close it and open a
new X11() with better width/height values). then call dev2bitmap() with the
same width/height settings. that should lead essentially
to WYSIWYG (modulo font substitution, maybe).
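The workaround above can be sketched as a small helper (illustrative names; running it needs an X11 display and ghostscript, so it is only invoked interactively here):

```r
# open the screen device already at the intended output size, then hand
# the identical size to dev2bitmap() -- WYSIWYG apart from font issues
make.pdf <- function(file, w = 6, h = 9) {
  X11(width = w, height = h)            # screen device at the final size
  plot(1:10, main = "demo")
  legend("topleft", legend = c("A", "B"), lty = 1:2)
  dev2bitmap(file, type = "pdfwrite", width = w, height = h)
  dev.off()
}

if (interactive()) make.pdf("test.pdf")
```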

joerg



[R] weighted nonlinear fits: `nls' and `eval'

2008-04-28 Thread Joerg van den Hoff
dear list,

my question concerns the use of `eval' in defining the model formula
for `nls' (version 2.6.2.).

consider the following simple example, where the same model and data
are used to perform unweighted and weighted fits. I intentionally
used very uneven weights to guarantee large differences in the results 


#CUT===
ln  <- 100
x   <- 1:ln
y0  <- exp(-.02*x)
y   <- rnorm(y0, y0, .01)

wts <- rep(1, ln)
y[30]   <- 1.2*y[30]
wts[30] <- 1e3
model   <- y ~ exp(-k*x)

xydata  <- list(x=x, y=y)

#simple unweighted fit works as expected:
r0 <- nls(model, start = c(k=.01), data = list(x=x, y=y))

#simple weighted fit works as expected:
r1 <- nls(model, start = c(k=.01), data = xydata, weights = wts)

#this actually performs an unweighted fit (issuing a warning):
mdl <- model[[3]]
r2  <- nls(y ~ eval(mdl), start = c(k=.01), data = xydata, weights = wts)

#this, too, actually performs an unweighted fit (issuing a warning):
dv1 <- deriv(model, "k")
r3  <- nls(y ~ eval(dv1), start = c(k=.01), data = xydata, weights = wts)

#weighted fit, works as expected
dv2 <- deriv(model, "k", c("k", "x"))
r4  <- nls(y ~ dv2(k, x), start = c(k=.01), data = xydata, weights = wts)
#CUT===

as you can see the fits producing `r2' and `r3' do _not_ do what's intended.
looking at the source code of `nls' I get some ideas where it is going wrong
(in setting up the model frame in `mf'?) but not what exactly is the problem
(scoping rules?).

`r2' and `r3' can be convinced to do what's intended by modifying `xydata' to

xydata  <- list(x=x, y=y, `(weights)` = wts)

i.e. by adding `(weights)` component here, too. but that probably is not
the way to go ...

while it is clear in the case of `r2' that my example is artificial, `r3' is 
what
I have used up to now for unweighted fits without problems. moreover there _are_
circumstances where `eval' constructs for defining the model formula occur quite
naturally.

questions:

1.)
is it actually not intended/supported to use `eval' in defining the
model formula argument of `nls'? if so, why not? or what am I missing (e.g.
the necessity to provide explicitly an `envir' argument to `eval': which one?)?

2.)
could/should not `nls' be modified to cleanly support the use of `eval'
(remarks along the line of "submit changes" would not help much :-)).

3.)
if the policy of the core group is that the current state of affairs is
quite perfect, should not the manpage of `nls' be augmented to warn people
very clearly that any use of `eval' in the model formula is calling
for trouble. I find the example above especially annoying: `r2' and `r3'
run apparently fine, only issuing a warning which surely is cryptic for the
average user. so, if this happens in some wrapper function provided to
others (which I do), they might very well believe they have performed a
weighted fit when they did not.


joerg



Re: [R] efficiently replacing values in a matrix

2008-04-17 Thread Joerg van den Hoff
On Wed, Apr 16, 2008 at 03:56:26PM -0600, Matthew Keller wrote:
 Yes Chuck, you're right.
 

just a comment:

 Thanks for the help. It was a data.frame not a matrix (I had called
 as.matrix() in my script much earlier but that line of code didn't run
 because I misnamed the object!). My bad. Thanks for the help. And I'm
 VERY relieved R isn't that inefficient...

well, it _is_ -- at least when using data frames. and while it is obvious
that operations on lists (data frames are lists in disguise, actually,
right?) are slower than on arrays/matrices, I'm not happy with a
performance drop by a factor of seemingly about 1500 (30 sec vs. 13 h) --
and I have seen similar things even with rather small data sets, where the
difference of using data frame vs. matrix might mean, e.g., overall run
times of 10 sec. vs. 0.1 sec.

where is all this time burned? there _are_ functional languages which
operate efficiently on lists.

I think this extreme performance drop when using an apparently innocent
data structure is really bad. and it's bad that it's not repeatedly stated
in BIG LETTERS in the manuals: use matrices, at least for big arrays,
wherever possible. this message is not at all transferred by the
description in the data.frame manpage, e.g.:

This function creates data frames, tightly coupled collections of
variables which share many of the properties of matrices and of lists,
used as the fundamental data structure by most of R's modeling software

probably 90% (+ x) of all R users are simply that: users and not experts.
when I started using R I exclusively used data frames for purely numerical
data instead of matrices, simply because I could get column n with x[n]
instead of x[,n] and mean(x) worked columnwise (whereas apply(x, 2, 'mean')
is tiresome), thus saving some typing. this is no strong reason in
retrospect but probably quite common. and many then will stick with
data.frames and endure long runtimes for no good reason at all.

another question would be whether homogeneous data frames
could not internally be handled as matrices...
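The performance gap is easy to demonstrate (a sketch; absolute timings depend on machine and R version, and the factor here is far smaller than the 1500x mentioned above): elementwise writes into a data frame are dramatically slower than into a matrix of the same shape.

```r
n <- 5e3
m <- matrix(0, n, 5)
d <- as.data.frame(m)

# elementwise assignment: the data.frame version copies on every write
t.mat <- system.time(for (i in 1:n) m[i, 1] <- i)["elapsed"]
t.df  <- system.time(for (i in 1:n) d[i, 1] <- i)["elapsed"]

c(matrix = t.mat, data.frame = t.df)   # data.frame is typically far slower
```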

joerg

 
 Matt
 
 
 On Wed, Apr 16, 2008 at 3:39 PM, Rolf Turner [EMAIL PROTECTED] wrote:
 
   On 17/04/2008, at 9:33 AM, Charles C. Berry wrote:
 
  snip
 
 
 
   I'll lay odds that Matthew's 'matrix' is actually a data.frame, and I'll
  not be surprised if the columns are factors.
  
 
  snip
 
   I suspect that you're right.
 
   ***Why*** can't people distinguish between data frames and matrices?
   If they were the same expletive deleted thing, there wouldn't be two
   different terms for them, would there?
 
  cheers,
 
  Rolf Turner
 
 
 
 
 
 -- 
 Matthew C Keller
 Asst. Professor of Psychology
 University of Colorado at Boulder
 www.matthewckeller.com
 



Re: [R] ps or pdf

2008-04-01 Thread Joerg van den Hoff
have not followed the thread completely, but:

have you tried `bitmap' with `type = pdfwrite' (or pswrite)
for comparison? at least with `pdf' there are some issues which
can be avoided by using ghostscript via `bitmap'.

joerg

On Mon, Mar 31, 2008 at 04:17:50PM -0400, Francois Pepin wrote:
 Prof Brian Ripley wrote:
  Please see the footer of this message.  
 
 Sorry, here is an example. For some reason, I cannot reproduce it 
 without using actual gene names.
 
 set.seed(1)
 ##The row names were originally obtained using the hgug4112a library 
 ##from bioconductor. I set it manually for people who don't have it 
 ##installed.
 ##library(hgug4112a); row <- sample(na.omit(unlist(as.list(hgug4112aSYMBOL))), 50)
 row <- c("BDNF", "EMX2", "ZNF207", "HELLS", "PWP1", "PDXDC1", "BTD",
 "NETO1", "SLCO4C1", "FZD7", "NICN1", "TMSB4Y", "PSMB7", "CADM2",
 "SIRT3", "ADH6", "TM6SF1", "AARS", "TMEM88", "CP110", "ADORA2A",
 "ATAD3A", "VAPA", "NXPH3", "IL27RA", "NEBL", "FANCF", "PTPRG",
 "HSU79275", "CCDC34", "EPDR1", "FBLN1", "PCAF", "AP1B1", "TXNRD2",
 "MUC20", "MBNL1", "STAU2", "STK32C", "PPIAL4", "TGFBR2", "DPY19L2P3",
 "TMEM50B", "ENY2", "MAN2A2", "ZFYVE26", "TECTA", "CD55", "LOC400794",
 "SLC19A3")
 postscript('/tmp/heatmap.ps',paper='letter',horizontal=F)
 heatmap(matrix(rnorm(2500),50),labRow=row)
 dev.off()
 
  Neither postscript() nor pdf() 
  graphics devices split up strings they are passed (by e.g. text()), so 
  this is being done either by the code used to create the plot (and we 
  have no idea what that is) or by the viewer.  I suspect the problem is 
  rather in the viewer, but without the example we asked for it is 
  impossible to know.
 
 Example of row names that are truncated in Illustrator (* denoting 
 truncation):
 CCDC3*4 (2nd row)
 MUC2*0 (3rd row)
 MBNL*1 (8th row)
 ...
 
 It is likely that Illustrator (CS 3, OS X version) is at fault.  I do 
 not see any truncation if I look at the ps file by hand (lines 4801 and 
 4802):
 
 540.22 545.88 (MUC20) 0 0 0 t
 540.22 553.90 (CCDC34) 0 0 0 t
 
  There also seems to be somewhat arbitrary grouping of the last column
  cells in heatmaps in ps files.
  
  Again, we need an example.
 
 The top right cell (26, TXNRD2) is grouped with the cell just below it 
 (26, CCDC34). It's more of a curiosity than anything else.
 
  I used to prefer the ps because they embed more easily in latex
  documents (although pdf are not difficult and conversions are trivial
  anyhow), but I'm curious if there are other reasons why one format might
  be preferred over the other in this context.
  
  The graphics devices are very similar (they share a lot of code).  One 
  small difference is that PostScript has an arc primitive, and PDF does not.
 
 This is what I thought at first, which is why I found these differences 
 surprising. I think your idea of blaming the viewer is correct. I 
 thought that Adobe of all people could deal with Postscript files 
 properly, but I guess I was overly trusting.
 
 Thanks for the help,
 
 Francois
 



Re: [R] Compare parameter estimates of a nlsList object

2008-03-28 Thread Joerg van den Hoff
On Wed, Mar 26, 2008 at 05:27:22PM +1300, Frank Scherr wrote:
 Hello together,
  
 Is there a tool to test the statistical differences between parameter 
 estimates of a nlsList fit?
  
 I fitted degradation data using the nlsList method and want to find out 
 whether derived rate constants are significantly different from each other at 
 the grouping factors soil and temperature.
  

here is a physicist's (not a mathematician's) answer:

from each nls fit you get an estimate of the std. error of the parameter
estimate.

so you have, e.g., (a1 +/- del_a1) from fit 1 and (a2 +/- del_a2) -- where
a1 and a2 are actually the same parameter in the model -- from fit 2.
since you thus have actual estimated errors, I'd simply ask what is the error
estimate of the difference, i.e.

a1 - a2

and, assuming independent underlying data,
compute this by gaussian error propagation (i.e. assuming normal
distributions of the parameter estimates). here, the variances (squares
of the std. errors) add up:

del_[a1-a2]^2 = del_a1^2 + del_a2^2

if (a1-a2) +/- del_[a1-a2] (or rather 2-3 times that error) is
compatible with zero, a1 and a2 do not differ significantly, else
they do.
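A minimal sketch of this test (the estimates and standard errors below are made-up illustration values, as would come from `summary(fit)$coefficients`):

```r
# hypothetical parameter estimates and std. errors from two nls() fits
a1 <- 0.12; se1 <- 0.010
a2 <- 0.15; se2 <- 0.012

se.diff <- sqrt(se1^2 + se2^2)   # gaussian error propagation
z       <- (a1 - a2) / se.diff   # standardized difference
p       <- 2 * pnorm(-abs(z))    # two-sided p, assuming normality

c(diff = a1 - a2, se = se.diff, z = z, p = p)
```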

HTH

joerg



Re: [R] fitting power model in nls()

2007-12-02 Thread Joerg van den Hoff
On Sun, Dec 02, 2007 at 11:08:01AM -0800, Milton Cezar Ribeiro wrote:
 Dear all,
 I am still fighting against my power model.
 I tried several times to use nls() but I can't run it.
 I am sending my variables and also the model which I would like to fit. 
 As you can see, this power model is not the best model to be fit, but I 
 really need to fit it as well.
 
 The model which I would like to fit is Richness = B*(Area^A)
 
 richness <- c(44,36,31,39,38,26,37,33,34,48,25,22,44,5,9,13,17,15,21,10,16,22,13,20,9,15,14,21,23,23,32,29,20,
 26,31,4,20,25,24,32,23,33,34,23,28,30,10,29,40,10,8,12,13,14,56,47,44,37,27,17,32,31,26,23,31,34,
 37,32,26,37,28,38,35,27,34,35,32,27,22,23,13,28,13,22,45,33,46,37,21,28,38,21,18,21,18,24,18,23,22,
 38,40,52,31,38,15,21)
 area <- c(26.22,20.45,128.68,117.24,19.61,295.21,31.83,30.36,13.57,60.47,205.30,40.21,
 7.99,1.18,5.40,13.37,4.51,36.61,7.56,10.30,7.29,9.54,6.93,12.60,
 2.43,18.89,15.03,14.49,28.46,36.03,38.52,45.16,58.27,67.13,92.33,1.17,
 29.52,84.38,87.57,109.08,72.28,66.15,142.27,76.41,105.76,73.47,1.71,305.75,
 325.78,3.71,6.48,19.26,3.69,6.27,1689.67,95.23,13.47,8.60,96.00,436.97,
 472.78,441.01,467.24,1169.11,1309.10,1905.16,135.92,438.25,526.68,88.88,31.43,21.22,
 640.88,14.09,28.91,103.38,178.99,120.76,161.15,137.38,158.31,179.36,214.36,187.05,
 140.92,258.42,85.86,47.70,44.09,18.04,127.84,1694.32,34.27,75.19,54.39,79.88,
 63.84,82.24,88.23,202.66,148.93,641.76,20.45,145.31,27.52,30.70)
 plot(richness~area)
 
 I tried to fit the following model:
 
 m1 <- nls(richness ~ Const + B*(area^A))
 
 Thanks a lot, 
 miltinho
 Brazil.
 

for easier notation, let y=richness, x=area, C=Const in the following.

then

nls(y ~ B*x^A + C, start = c(A=3.2, B=0.002, C=0))

converges alright. where's the problem (apart from this being not a very good
model for the data)? the critical point is to provide some reasonable
estimates of the parameters as starting values.

to get reasonable start values, I'd use:

y = B*x^A + C  =>  log(y-C) = log(B) + A*log(x)  =>  ly = b + a*lx,

estimate C from the x -> 0 asymptotic value (approx. 0)

and use lm(ly ~ lx), which yields a and b estimates which you could use in `nls'.

and, contrary to other assessments you've received, I definitely would
prefer `nls' for least squares fitting instead of using `optim' or other
general minimization routines.
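A self-contained sketch of this log-linear bootstrap for start values (synthetic data with made-up true parameters, not the poster's dataset):

```r
set.seed(42)
x <- seq(1, 100, length = 50)
y <- 2 * x^0.3 + rnorm(50, 0, 0.1)   # true A = 0.3, B = 2, C = 0

# guess C from the x -> 0 asymptote (here simply 0), then linearize:
C0   <- 0
lfit <- lm(log(y - C0) ~ log(x))
B0   <- exp(coef(lfit)[[1]])         # intercept b = log(B)
A0   <- coef(lfit)[[2]]              # slope a = A

fit <- nls(y ~ B * x^A + C,
           start = list(A = A0, B = B0, C = C0))
round(coef(fit), 3)
```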

hth

joerg



Re: [R] Fwd: Empty list to use in a for cycle

2007-11-26 Thread Joerg van den Hoff
On Mon, Nov 26, 2007 at 04:35:53PM +0100, Niccolò Bassani wrote:
 Dear R-users,
 I'm posting a problem I already asked help for some time ago, because I'm
 facing that problem once again and even because now, reading that old
 e-mail, and the answer recevied, I understand I've not made myself clear.
 
 Here's the question: I need to create an empty list of a specific length to
 fill it with a quite large amount of square matrices, which is 602. The
 question is that these matrices are created inside a for cycle, and I do not
 know how to recall all of them one by one, except by creating an empty list
 before the cycle, then assigning for each value of the i index the matrix
 computed to the corresponding element of the empty list.
 The fact is that: I've tried to create an empty list with
 
 vector("list", 602)
 
 and then putting it in a cycle, but it didn't work. This is the cycle I've
 used. To prove it works (and then the cycle itself is not a problem) there's
 also the output (i.e. the last square matrix computed).
 
 for (i in unique(elio2$id)){
 sub.mu <- exp.mu[exp.mu$id==i,]
 D <- matrix(0, nrow(sub.mu), nrow(sub.mu))
 diag(D) <- sub.mu$deriv.link
 A <- mat.cov[1:nrow(D), 1:nrow(D)]
 R <- corstr[1:nrow(D), 1:nrow(D)]
 W <- solve(D) %*% solve(sqrt(A)) %*% solve(R) %*% solve(sqrt(A)) %*% solve(D)
 }
 
  W
   [,1]  [,2]  [,3]  [,4]
 [1,]  3.492489e+02 -7.9324883721  0.0006286788 -0.0031046240
 [2,] -7.932488e+00 17.4974625191 -1.7575467817  0.0001403319
 [3,]  6.286788e-04 -1.7575467817 17.3227959738 -1.7529916860
 [4,] -3.104624e-03  0.0001403319 -1.7529916860 17.2279244622
 
 
 Does anyone knows how to insert each and every matrix like the one above in
 a omnicomprehensive list? That's because I've to use a function requiring
 me to have the matrices I need inside a list.
 Thanks in advance, hope it's not too stupid a problem!
 niccolò
 

you seem to have all the ingredients, so where is the
problem? it's probably not really faster to preallocate this
(moderately long) list. so something like, e.g.

matlist <- list()
for (i in 1:3) {
   matlist[[i]] = matrix(i, 2, 2)
}

should do what you want: create a list of matrices.
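For 602 matrices one can also preallocate and name the list, so each result stays addressable by its id (a self-contained sketch with toy ids and matrices standing in for the poster's `unique(elio2$id)` and computed `W`):

```r
ids <- c("s1", "s2", "s3")               # stand-ins for unique(elio2$id)
matlist <- vector("list", length(ids))   # preallocated empty list
names(matlist) <- ids

for (i in seq_along(ids)) {
   matlist[[i]] <- diag(i)               # stand-in for the computed W
}

matlist[["s2"]]                          # retrieve one matrix by its id
```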


joerg



Re: [R] strange `nls' behaviour

2007-11-13 Thread Joerg van den Hoff
On Mon, Nov 12, 2007 at 06:59:56PM -0500, Duncan Murdoch wrote:
 On 12/11/2007 2:56 PM, Joerg van den Hoff wrote:
 On Mon, Nov 12, 2007 at 11:09:21AM -0500, Duncan Murdoch wrote:
 On 11/12/2007 9:14 AM, Joerg van den Hoff wrote:
 On Mon, Nov 12, 2007 at 07:36:34AM -0500, Duncan Murdoch wrote:
 On 11/12/2007 6:51 AM, Joerg van den Hoff wrote:
 I initially thought, this should better be posted to r-devel
 but alas! no response. 
 I think the reason there was no response is that your example is too 
 complicated.  You're doing a lot of strange things (fitfunc as a result 
 of deriv, using as.name, as.call, as.formula, etc.)  You should 
 simplify 
 thanks for the feedback.
 
 concerning "lot of strange things": OK. I thought the
 context might be important (why, for heaven's sake, do you
 want to do this!?), but, then, maybe not. so the easiest
 way to trigger a similar (not the identical) behaviour is
 something like
 
 f <- function (n) {
   #-
   #define n data points for a (hardcoded) model:
   #---
   x <- seq(0, 5, length = n)
   y <- 2 * exp(-1*x) + 2
   y <- rnorm(y, y, 0.01*y)

   #the model (same as the above hardcoded one):
   model <- y ~ a * exp (-b*x) + c

   #call nls as usual:
   res1 <- try(nls(model, start = c(a=2, b=1, c=2)))

   #call it a bit differently:
   res2 <- nls(y ~ eval(model[[3]]), start = c(a=2, b=1, c=2))

   list(res1 = res1, res2 = res2)
   #-
 }
 I'd say the problem is relying on the default for the envir parameter to 
 eval.  It's reasonable to expect nls to set things up so that terms in 
 the model formula are taken from the right place, but when your model 
 formula is playing with evaluation, you should be very careful to make 
 sure the evaluation takes place in the right context.
 
 agreed.
 
 The default for envir is parent.frame(), and that can't be right: that 
 will see local variables in whatever function called it, so if one of 
 them has a variable called model, you'll subscript it.
 
 for one, in my actual application, I _do_ specify envir (and
 remember the above was really not the original situation
 where the subsetting of `model' is actually done only in the
 `deriv' call). second, if I'm not missing something, doing
 the evaluation in parent.frame() is fine in the above
 example and does not explain the error, whose triggering
 solely depends on the number of data points used.
 
 You are missing something.  parent.frame() will be evaluated by eval(), 
 so it refers to whatever function calls eval().  You don't know what 
 function that will be, because it's buried deep within nls().

thanks for taking the time.

I see. will remember to be even more careful with `eval' in
the future.

 
 Perhaps you think that the formula
 
 y ~ eval(model[[3]])
 
 is the same as the original one?  It's not.  Try printing it:

no, I understand roughly what lazy evaluation means :-)

 
  y ~ eval(model[[3]])
 y ~ eval(model[[3]])
 
 The eval doesn't get called at this point, it gets called for every step 
 of the least squares minimization.  Who knows what parent.frame() means 
 then?
 
 If I were trying to do what you're doing, I would construct the formula 
 before the call to nls, and pass that.  I.e. something like the 
 following silly code:
 
 model2 <- model
 model2[[3]] <- model[[3]] # The eval() is implicit
 res2 <- nls(model2, start = c(a=2, b=1, c=2))
 
 
 I  was  trying that (more or less) in my original example, 
 I think, but I will reexamine my application and see, whether I can  bypass
 the problem somehow along these lines.
 
 If you really want to put eval() in a formula, then I can't see how you 
 could avoid an explicit specification of the envir parameter.  So I'd 
 call this a bug in your code.
 
 
 as I said, this seems right in principle (still, if the call
 does happen at some well-defined place such as a small
 function local to a user-visible one, the eval without envir
 might be quite OK) but not w.r.t. explaining the nls crash.
 
 No, it's not okay.  Your formula is looking in places it shouldn't.  It 
 makes absolutely no sense for your formula to depend on what function 
 evaluates it.  It should only depend on the data that is available in 
 the data argument to nls() or visible in the current environment.

yes, accepted, as above.

 
 katharine mullen seems to have located the exact spot in
 `nls' where something goes wrong. so, for now I think martin
 might be right (bug in my thinking by assuming I'm allowed
 to use eval in this place), but otherwise it is a property
 (or bug) of `nls'.
 
 She may have discovered where the error call was triggered, but things 
 went wrong in this example when you mis-used eval().  If you think your 

I don't think so (but may be wrong, of course): I again
stepped through the `nls' call with `debug' to see the
effect of katharine's

Re: [R] `eval' and environment question

2007-11-13 Thread Joerg van den Hoff
On Tue, Nov 13, 2007 at 09:04:25AM -0500, Duncan Murdoch wrote:
 On 11/13/2007 8:39 AM, Joerg van den Hoff wrote:
 my understanding of `eval' behaviour is obviously
 incomplete.
 
 my question: usually `eval(expr)' and `eval(expr,
 envir=parent.frame())' should be identical. why does the
 last `eval' (yielding `r3') in this code _not_ evaluate in
 the local environment of function `f' but rather in the
 global environment (if `f' is called from there)?
 
 Because you said parent.frame() as its value.  The values of 
 explicitly passed arguments are evaluated in the frame of the caller, 
 i.e. in the evaluation frame of f().  There, parent.frame() is the frame 
 that called f(), i.e. the global environment.
 
 Default values for parameters are evaluated in the evaluation frame of 
 the function using them.  So if you don't specify envir in a call to 
 eval(), parent.frame() is evaluated in the eval evaluation frame, and it 
 refers to whatever function called eval().
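A minimal sketch (my illustration, not from the original mail) of the rule Duncan describes: an explicitly passed argument is evaluated in the caller's frame, while a default is evaluated inside the callee. The function names `show_env` and `f` are made up for the demo.

```r
show_env <- function(e = parent.frame()) {
  # with no argument, 'parent.frame()' is evaluated here, inside show_env(),
  # so it refers to whoever called show_env()
  e
}

f <- function() {
  x <- "local to f"
  # default: parent.frame() evaluated inside show_env() -> f's frame
  env_default  <- show_env()
  # explicit: parent.frame() evaluated in f's frame -> f's caller
  env_explicit <- show_env(parent.frame())
  list(default  = get("x", envir = env_default),
       explicit = environmentName(env_explicit))
}

f()
# $default is "local to f"; $explicit names f's caller (e.g. "R_GlobalEnv"
# when f is called at the prompt)
```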
 
 
 #--
 f <- function () {
 
    a <- 1
    x <- 1
 
    model <- y ~ a * x
    fitfunc <- deriv(model[[3]], c("a"), c("a", "x"))
    call.fitfunc <- c(list(fitfunc), as.name("a"), as.name("x"))
    call.fitfunc <- as.call(call.fitfunc)
 
    curenv <- environment()
 
    cat("eval(call.fitfunc)\n")
    r1 <- eval(call.fitfunc)
    str(r1)
 
    cat("eval(call.fitfunc, envir = curenv)\n")
    r2 <- eval(call.fitfunc, envir = curenv)
    str(r2)
 
    cat("eval(call.fitfunc, envir = parent.frame())\n")
    r3 <- eval(call.fitfunc, envir = parent.frame())
    str(r3)
 
 }
 #--
 
 
 `args(eval)' yields:
 
 function (expr, envir = parent.frame(), enclos = if (is.list(envir) || 
 is.pairlist(envir)) parent.frame() else baseenv())
 
 
 and the manpage says the same: the default value of `envir'
 is `parent.frame()'. so I would expect (lazy evaluation)
 that providing the default argument explicitly shouldn't
 alter the behaviour. where is my error?
 
 Lazy evaluation affects the order of evaluation, but not the evaluation 
 environment.  If you pass an argument to a function, it will be 
 evaluated in your environment.  If you rely on the function to provide a 
 default, that will be evaluated in the environment of the function.

OK, I see (and it's probably all in the documentation
somewhere). it's obvious once it's explained. but why _does_
the following work?

#
f <- function () {

   x <- 1

   curenv <- environment()

   cat("eval(x)\n")
   r1 <- eval(x)
   str(r1)

   cat("eval(x, envir = curenv)\n")
   r2 <- eval(x, envir = curenv)
   str(r2)

   cat("eval(x, envir = parent.frame())\n")
   r3 <- eval(x, envir = parent.frame())
   str(r3)
}
#

now, the third eval _yields_ a result, although according to
your explanation `x' should be searched for in the global
env. (as was actually the case in my initial example). what
am I missing this time? is `x' copied into the call as a
constant and no longer searched at all or something like
that?
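The answer hiding in the question can be demonstrated directly (my sketch, not from the thread): the argument `x` is itself a promise evaluated in the caller's frame before `eval()` ever consults `envir`; `eval()` thus receives the *value* 1, and evaluating a constant gives the same result in any environment. Only an unevaluated *symbol*, e.g. via `quote(x)`, is actually looked up in `envir`.

```r
f <- function() {
  x <- 1
  # the promise 'x' is forced in f's frame; eval() sees the value 1
  eval(x, envir = parent.frame())
}

g <- function() {
  x <- 1
  # now the symbol 'x' really is looked up in f's caller --
  # and fails there unless the caller happens to have an 'x'
  eval(quote(x), envir = parent.frame())
}

f()   # 1
# g() errors with "object 'x' not found" when no 'x' exists in the caller
```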
 
 It all makes sense:  when you write the function you've got no idea what 
 the caller will be like, so you can't refer to its environment in your 
 defaults.  On the other hand, when you call a function you shouldn't 
 care what its internal environment looks like, so your arguments should 
 be evaluatable in your own environment, and their value shouldn't change 
 just because the implementation of the function changes.

I presume one could argue for the opposite behaviour as
well? maybe there is some language out there actually doing
it the other way (passing the unevaluated argument string
and leaving everything to the called function)? I will try
to remember this detail and then it's fine with me, but
from the outset it's quite irritating that writing down a
call which is identical to the definition of the function
(including its defaults) does not necessarily yield the same
result as when the defaults are really used.

 
 This all gets more complicated with formulas in functions like nls(). 
 Formulas normally have an environment attached to them too, but 
 sometimes people mess with it, and then things can go crazy.
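Duncan's point about formulas carrying an environment can be seen in two lines (my illustration; `make_formula` is a made-up name):

```r
make_formula <- function() {
  a <- 2
  y ~ a * x          # this formula remembers make_formula()'s environment
}

fo <- make_formula()
environment(fo)                      # not the global environment
get("a", envir = environment(fo))    # 2: 'a' stays reachable through the formula
```

This is exactly what modelling functions like nls() rely on to find variables not present in their `data` argument.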

hope you're not meaning me... 


thanks

joerg
 
 Duncan Murdoch
 
 regards,
 
 joerg
 
 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide 
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




[R] strange `nls' behaviour

2007-11-12 Thread Joerg van den Hoff

I initially thought this should better be posted to r-devel
but alas! no response. so I try it here. sorry for the
lengthy explanation but it seems unavoidable. to quickly see
the problem simply copy the little example below and execute

f(n=5)

which crashes. called with n != 5 (and of course n > 3 since
there are 3 parameters in the model...) everything is as it
should be.

in detail:
I stumbled over the following _very_ strange behaviour/error
when using `nls' which I'm tempted (despite the implied
dangers) to call a bug:

I've written a driver for `nls' which allows specifying the
model and the data vectors using arbitrary symbols. these
are internally mapped to consistent names, which poses a
slight complication when using `deriv' to provide analytic
derivatives. the following fragment gives the idea:

#-
f <- function(n = 4) {

   x <- seq(0, 5, length = n)

   y <- 2 * exp(-1*x) + 2
   y <- rnorm(y, y, 0.01*y)

   model <- y ~ a * exp(-b*x) + c

   fitfunc <- deriv(model[[3]], c("a", "b", "c"), c("a", "b", "c", "x"))

   #standard call of nls:
   res1 <- nls(y ~ fitfunc(a, b, c, x), start = c(a=1, b=1, c=1))

   call.fitfunc <-
   c(list(fitfunc), as.name("a"), as.name("b"), as.name("c"), as.name("x"))
   call.fitfunc <- as.call(call.fitfunc)
   frml <- as.formula(y ~ eval(call.fitfunc))

   #computed call of nls:
   res2 <- nls(frml, start = c(a=1, b=1, c=1))

   list(res1 = res1, res2 = res2)
}
#-

the argument `n' defines the number of (simulated) data
points x/y which are going to be fitted by some model (here
y ~ a*exp(-b*x)+c).

the first call to `nls' is the standard way of calling `nls'
when knowing all the variable and parameter names.

the second call (yielding `res2') uses a constructed formula
in `frml' (which in this example is of course not necessary,
but in the general case 'a,b,c,x,y' are not a priori known
names).

now, here is the problem: the call

f(4)

runs fine/consistently, as does every call with n > 5.

BUT: for n = 5 (i.e. issuing f(5))

the second fit leads to the error message:

Error in model.frame(formula, rownames, variables, varnames, extras, 
extranames,  : 
invalid type (language) for variable 'call.fitfunc'

I cornered this to a spot in `nls' where a model frame is
constructed in variable `mf'. the parsing/constructing here
seems simply to be messed up for n = 5: `call.fitfunc' is
interpreted as a variable.

I, moreover, empirically noted that the problem occurs when
the total number of parameters plus dependent/independent
variables equals the number of data points (in the present
example a,b,c,x,y).

so it is not the 'magic' number of 5 but rather the identity
of data vector length and number of parameters+variables in
the model which leads to the problem.

this is with 2.5.0 (which hopefully is not considered
ancient) and MacOSX 10.4.10.

any ideas?

thanks

joerg



Re: [R] strange `nls' behaviour

2007-11-12 Thread Joerg van den Hoff
On Mon, Nov 12, 2007 at 03:25:38PM +0100, Martin Maechler wrote:
  DM == Duncan Murdoch [EMAIL PROTECTED]
  on Mon, 12 Nov 2007 07:36:34 -0500 writes:
 
 DM On 11/12/2007 6:51 AM, Joerg van den Hoff wrote:
  I initially thought, this should better be posted to r-devel
  but alas! no response. 
 
 DM I think the reason there was no response is that your example is too 
 DM complicated.  You're doing a lot of strange things (fitfunc as a result 
 DM of deriv, using as.name, as.call, as.formula, etc.)  You should simplify 
 DM it down to isolate the bug.  That's a lot of work, but you're the one in 
 DM the best position to do it.  I'd say there's at least an even chance 
 DM that the bug is in your code rather than in nls().
 
 yes.. and.. no : 
 - His code is quite peculiar, but I think only slightly too complicated

thanks for this response. I just tried to give an easier
example in response to duncan's mail.

 
 - one could argue that the bug is in Joerg's thinking that
   something like
   nls(y ~ eval(fitfunc), )
 
   should be working at all.
   But then he had found by experiment that it (accidentally I'd say)
   does work in many cases.
 
 DM And 2.5.0 *is* ancient; please confirm the bug exists in R-patched if it 
 DM turns out to be an R bug.
 
 You are right, but indeed (as has Kate just said) it *does*
 exist in current R versions.
 
 I agree that the behavior of nls() here is sub-optimal.
 It *should* be consistent, i.e. work the same for n=4,5,6,..
 
 I had spent about an hour after Joerg's R-devel posting,
 and found to be too busy with more urgent matters --
 unfortunately forgetting to give *some* feedback about my findings.
 
 It may well be that we find that nls() should give an
 (intelligible) error message in such eval() cases - rather than
 only in one case...

well, if at all possible, it would _really_ be nice to have
the opportunity to pass `eval' constructs through the
`model' argument to `nls'. I'm far from understanding what
actually is going on in `model.frame', but if it parses the
thing correctly (accidentally or not) except in one special
case, isn't there a clean way to repair the special case?
in my application I found the ability to use `eval'
constructs here really helpful. at least I did not find
another way to be able to use `deriv' information _and_
allow the user to use arbitrary symbols in specifying the
model formula. at least I'd prefer the present situation of
"works nearly always" to a consistent "works never".

joerg

 
 Martin Maechler
 
 
 DM Duncan Murdoch
 
 
 
 

Re: [R] strange `nls' behaviour

2007-11-12 Thread Joerg van den Hoff
On Mon, Nov 12, 2007 at 11:09:21AM -0500, Duncan Murdoch wrote:
 On 11/12/2007 9:14 AM, Joerg van den Hoff wrote:
 On Mon, Nov 12, 2007 at 07:36:34AM -0500, Duncan Murdoch wrote:
 On 11/12/2007 6:51 AM, Joerg van den Hoff wrote:
 I initially thought, this should better be posted to r-devel
 but alas! no response. 
 
 I think the reason there was no response is that your example is too 
 complicated.  You're doing a lot of strange things (fitfunc as a result 
 of deriv, using as.name, as.call, as.formula, etc.)  You should simplify 
 
 thanks for the feedback.
 
 concerning "lot of strange things": OK. I thought the context
 might be important ("why, for heaven's sake, do you want to do
 this!?"), but, then, maybe not. so the easiest way to trigger a
 similar (not the identical) behaviour is something like
 
 f <- function (n) {
    #-
    #define n data points for a (hardcoded) model:
    #---
    x <- seq(0, 5, length = n)
    y <- 2 * exp(-1*x) + 2
    y <- rnorm(y, y, 0.01*y)
 
    #the model (same as the above hardcoded one):
    model <- y ~ a * exp(-b*x) + c
 
    #call nls as usual:
    res1 <- try(nls(model, start = c(a=2, b=1, c=2)))
 
    #call it a bit differently:
    res2 <- nls(y ~ eval(model[[3]]), start = c(a=2, b=1, c=2))
 
    list(res1 = res1, res2 = res2)
    #-
 }
 
 I'd say the problem is relying on the default for the envir parameter to 
 eval.  It's reasonable to expect nls to set things up so that terms in 
 the model formula are taken from the right place, but when your model 
 formula is playing with evaluation, you should be very careful to make 
 sure the evaluation takes place in the right context.

agreed.

 
 The default for envir is parent.frame(), and that can't be right: that 
 will see local variables in whatever function called it, so if one of 
 them has a variable called model, you'll subscript it.

for one, in my actual application, I _do_ specify envir (and
remember the above was really not the original situation
where the subsetting of `model' is actually done only in the
`deriv' call). second, if I'm not missing something, doing
the evaluation in parent.frame() is fine in the above
example and does not explain the error, whose triggering
solely depends on the number of data points used.

 
 If I were trying to do what you're doing, I would construct the formula 
 before the call to nls, and pass that.  I.e. something like the 
 following silly code:
 
 model2 - model
 model2[[3]] - model[[3]] # The eval() is implicit
 res2 - nls(model2, start = c(a=2, b=1, c=2))


I was trying that (more or less) in my original example,
I think, but I will reexamine my application and see whether
I can bypass the problem somehow along these lines.
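A runnable sketch of Duncan's suggestion, using the variable names from the thread (the data here are simulated by me): splice the computed call directly into the formula's right-hand side before calling nls(), so nls() never sees an eval() and the formula's own environment handles the lookup.

```r
set.seed(1)
x <- seq(0, 5, length = 20)
y <- 2 * exp(-1 * x) + 2
y <- rnorm(y, y, 0.01 * y)

# analytic derivatives for a * exp(-b*x) + c, as in the original driver
fitfunc <- deriv(quote(a * exp(-b * x) + c), c("a", "b", "c"),
                 c("a", "b", "c", "x"))

call.fitfunc <- as.call(c(list(fitfunc),
                          as.name("a"), as.name("b"), as.name("c"),
                          as.name("x")))

frml <- y ~ dummy
frml[[3]] <- call.fitfunc     # rhs is now fitfunc(a, b, c, x) -- no eval()

res <- nls(frml, start = c(a = 1, b = 1, c = 1))
coef(res)                     # close to a = 2, b = 1, c = 2 for any n
```

Because the implicit-eval formula behaves like the plain `y ~ fitfunc(a, b, c, x)` call, the length-counting trap described earlier never comes into play.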

 
 If you really want to put eval() in a formula, then I can't see how you 
 could avoid an explicit specification of the envir parameter.  So I'd 
 call this a bug in your code.


as I said, this seems right in principle (still, if the call
does happen at some well defined place such as a small
function local to a user-visible one, the eval without envir
might be quite OK) but not w.r.t. explaining the nls crash.
katherine mullen seems to have located the exact spot in
`nls' where something goes wrong. so, for now I think martin
might be right (bug in my thinking by assuming I'm allowed
to use eval in this place), but otherwise it is a property
(or bug) of `nls'.

joerg van den hoff

 
 Duncan Murdoch
 
 
 this is without all the overhead of my first example and now
 (since not quite the same) the problem arises at n == 3 where
 the fit can't really proceed (there are also 3 parameters --
 the first example was more relevant in this respect) but
 anyway the second nls-call does not proceed beyond the
 parsing phase of `model.frame'.
 
 so, I can't see room for a real bug in my code, but very well
 a chance that I misuse `nls' (i.e. not understanding what
 really is tolerable for the `model' argument of `nls'). but
 if the latter is not the case, I would think there is a bug
 in `nls' (or, actually, rather in `model.frame' or whatever)
 when parsing the nls call.
 
 
 
 
 it down to isolate the bug.  That's a lot of work, but you're the one in 
 the best position to do it.  I'd say there's at least an even chance 
 that the bug is in your code rather than in nls().
 
 And 2.5.0 *is* ancient; please confirm the bug exists in R-patched if it 
 turns out to be an R bug.
 
 if need be, I'll do that (if I get it compiled under
 macosX). but assuming that you have R-patched installed
 anyway, I would appreciate if you would copy the new example
 above or the old one below to your R prompt and see whether
 it crashes with the same error message if called with the
 special number of data points (3 for the new, 5

Re: [R] How can I fitting this function from values.

2007-11-12 Thread Joerg van den Hoff
On Mon, Nov 12, 2007 at 04:27:26PM -0300, Horacio Castellini wrote:
 Hi all, I'd like to fit this function:
 
 G(t)=A*1/(1+t/B)*1/sqrt(1+t/C)
 
 where A, B and C are fitting constants that one like search. I have
 got a fcs-(t,G(t)) value table in a data frame.
 
 Can anyone help me? Thanks, Horacio.

if your data are in `df' and the columns are labeled `t' and `G'
this seems what you want (cf. `?nls'):

nls(G ~ A*1/(1+t/B)*1/sqrt(1+t/C), start =list(A=1, B=2, C=3), data = df) 

where `start' should be set to reasonable start values.

and I would think it is better to use p1 = 1/B, p2 = 1/C during
the fit (no singularities of G for certain parameter values...).
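The reparameterization suggested above can be sketched as follows (the data frame `df` with columns `t` and `G` is assumed; the values below are simulated by me for illustration):

```r
set.seed(42)
t <- seq(0.1, 10, length = 50)
G <- 3 / ((1 + t/2) * sqrt(1 + t/4))          # true A = 3, B = 2, C = 4
df <- data.frame(t = t, G = G + rnorm(G, 0, 0.01))

# fit with p1 = 1/B, p2 = 1/C -- no pole when the optimizer wanders near
# B = 0 or C = 0
fit <- nls(G ~ A / ((1 + t * p1) * sqrt(1 + t * p2)),
           start = list(A = 1, p1 = 1, p2 = 1), data = df)

# convert back to the original parameterization
c(A = coef(fit)[["A"]],
  B = 1 / coef(fit)[["p1"]],
  C = 1 / coef(fit)[["p2"]])
```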

joerg



Re: [R] a question on impulse responses

2007-10-15 Thread Joerg van den Hoff
On Sat, Oct 13, 2007 at 03:08:58PM +0300, Martin Ivanov wrote:
 
 Dear R users,
 
 I am using the vars package to calculate the impulse response functions and 
 the forecast error variance decomposition of a VAR model. Unfortunately I do 
 not know whether these functions assume unit or one standard deviation 
 shocks. I tried to look into the code of these functions, but in vain: 
 neither irf, nor vars::irf, nor vars:::irf output the code of the functions. 
 Does someone know whether irf and fevd assume one standard deviation or a 
 unit shock? How can I see the code of these functions?

?getAnywhere

should do it
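For instance (my example, using an unexported method from the stats package since the same mechanism applies to vars:::irf):

```r
# getAnywhere() searches all loaded namespaces, including unexported objects
getAnywhere("predict.nls")   # shows the S3 method hidden in the stats namespace

# equivalent, if you already know which namespace holds it:
stats:::predict.nls
```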
 
 Regards,
 Martin Ivanov
 
 -
 
