Quite a long time ago, there was a thread about generalized eigenvalues,
which ended inconclusively.
http://tolstoy.newcastle.edu.au/R/help/05/06/6832.html
For students, a good proposal for the Google Summer of Code (gsoc-r)
would be a nice interface to things like the QZ algorithm and
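A minimal base-R sketch of what such an interface must improve on (my illustration, not the thread's code): with nonsingular B, the problem A x = lambda B x reduces to an ordinary eigenproblem, but this route fails when B is near-singular, which is exactly where QZ earns its keep.
set.seed(1)
A <- matrix(rnorm(9), 3, 3)
B <- crossprod(matrix(rnorm(9), 3, 3)) + diag(3)  # safely nonsingular
eigen(solve(B, A))$values  # generalized eigenvalues of (A, B)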
This thread unfortunately pushes a number of buttons:
- Excel computing a model by linearization which fits to
residual = log(data) - log(model)
rather than
wanted_residual = data - model
The COBB.RES example in my (freely available but rather dated) book
at
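A small constructed illustration of the two objectives (made-up data, not the COBB.RES example):
set.seed(42)
x <- 1:10
ydat <- 5 * exp(0.3 * x) * (1 + 0.05 * rnorm(10))
lfit <- lm(log(ydat) ~ x)   # linearized fit: residuals on the log scale
a0 <- exp(unname(coef(lfit)[1])); b0 <- unname(coef(lfit)[2])
nfit <- nls(ydat ~ a * exp(b * x), start = list(a = a0, b = b0))
rbind(linearized = c(a = a0, b = b0), raw = coef(nfit))  # close, but not equal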
This is quite a general issue that those of us who prepare optimization
tools must deal with often. The minqa package
internal methods were designed to be used with customized controls to
the algorithm, but we had to package them with some more or less OK
compromise settings. If
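For illustration, a hedged sketch of overriding those compromise settings in minqa's bobyqa (rhobeg, rhoend and maxfun are the documented controls; the test function is mine):
library(minqa)
fr <- function(x) sum((x - c(1, 2))^2)  # trivial test function
bobyqa(c(0, 0), fr, control = list(rhobeg = 0.5, rhoend = 1e-8, maxfun = 500))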
In the recent SIAM Review, vol 54, No 3, pp 597-606, Robert Vanderbei
does a nice analysis of daily temperature data. This uses publicly
available data. A version of the paper is available at
http://arxiv.org/pdf/1209.0624
and there is a presentation at
And it could be that you should try nlmrt or minpack.lm.
I don't think you were at my talk in Jena on May 23 -- it might have been
very helpful to you.
JN
On 13-06-20 06:00 AM, r-help-requ...@r-project.org wrote:
Message: 47
Date: Wed, 19 Jun 2013 13:17:29 -0500
From: Adams, Jean <jvad...@usgs.gov>
If preytype is an independent variable, then models based on it should
be OK. If preytype comes into the parameters you are trying to estimate,
then the easiest way is often to generate all the possible combinations
(integers -- a fairly modest number of these) and run all the least
squares fits, as sketched below.
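One reading of that advice, as a heavily hedged sketch (the data frame dat, the factor preytype, and the model formula are all hypothetical):
fits <- lapply(split(dat, dat$preytype), function(d)
  try(nls(y ~ a * exp(b * x), data = d,
          start = list(a = 1, b = 0.1)), silent = TRUE))
ok <- !vapply(fits, inherits, logical(1), "try-error")
vapply(fits[ok], deviance, numeric(1))  # compare residual sums of squares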
Weterings <robbie.weteri...@gmail.com>
To: Prof J C Nash (U30A) <nas...@uottawa.ca>, r-help@r-project.org
Subject: Re: [R] Non-linear modelling with several variables including
a categorical variable
Message-ID:
CAFe5dHZdXFbFtwKmTE1_QPi1rqNGsd+=82tproyfs6mg6zm...@mail.gmail.com
Content-Type
This reply only addresses the NaN in Jacobian matter. I believe it is a
result of getting a perfect fit (0 sum of squares). I have amended the
r-forge version of nlmrt package in routines nlfb and nlxb and did not
get the error running Elizabeth's example. This only answers the
software issue,
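A tiny constructed example of the zero-residual situation; nls() itself also objects to exact-fit data because of its relative-offset convergence test:
x <- 1:5
y <- 2 * exp(0.5 * x)  # exact data: zero residual at the solution
try(nls(y ~ a * exp(b * x), start = list(a = 1, b = 0.4)))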
Considering that I devised the code initially on a computer with only 8K
bytes for program and data, and it appears that your problem has 1
parameters, I'm surprised you got any output. I suspect the printout is
the BUILD phase where each weight is being adjusted in turn by the same
shift.
With minor corrections, the original problem can be solved with nlxb
from nlmrt package.
coef(modeln)
         a           b
-0.8470857 409.5190808
with ssquares = 145585533. But since
svd(modeln$jacobian)$d
[1] 5.128345e+04 6.049076e-14
shows a second singular value that is essentially zero,
I may have made nlmrt too robust.
JN
On 13-07-15
It's possibly the L in L-BFGS-B -- the Limited Memory part -- that is
more interesting for some problems, so running without bounds can make
sense. Unfortunately, the version of L-BFGS-B in R is from the 1990s,
and Nocedal et al. released an update in 2011. Maybe someone will want
to work on
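For reference, a quick unbounded run showing the B part is optional (standard Rosenbrock test function):
fr <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2  # Rosenbrock
optim(c(-1.2, 1), fr, method = "L-BFGS-B")  # no bounds supplied, runs fine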
Someone might be able to come up with a neat expression, but my own
approach would be to write a residual function to create the vector res
from the parameters, for which the core line would be
res <- D1/(p1*((E/(p2-E))^(1/p3))) + D2/(p6*((E/(p2-E))^(1/p4)))
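Wrapped as a residual function for a solver such as nlfb() from nlmrt -- the data vectors D1, D2, E and the parameter indices come from the line above; the rest is my assumption:
resfn <- function(p, D1, D2, E) {
  D1 / (p[1] * ((E / (p[2] - E))^(1 / p[3]))) +
    D2 / (p[6] * ((E / (p[2] - E))^(1 / p[4])))
}
# e.g. nlfb(start = rep(1, 6), resfn = resfn, D1 = D1, D2 = D2, E = E)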
You need to provide reproducible examples if you want to get readers to
actually give you answers.
The two issues both come up from time to time and generally relate to
how the objective function is set up, though sometimes to options in the
call. However, "generally" really isn't good enough.
JN
1) Why use Nelder-Mead via optimx when it is an optim() function? You
are going from New York to Philadelphia via Beijing because of the extra
overhead. The NM method is there for convenience in comparisons.
2) NM cannot work with NA when it wants to compute the centroid of
points and search
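That is, for a one-off Nelder-Mead run the direct route is plain optim(), and the objective must return usable numbers:
fr <- function(x) sum((x - 1)^2)  # must return a number, never NA
optim(c(5, 5), fr)  # Nelder-Mead is optim()'s default method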
error message, just no axis label. And it seems I need to make
sure the labels vector is the same length as the number of ticks -- no recycling.
JN
On 13-08-21 06:37 PM, Jim Lemon wrote:
On 08/22/2013 07:56 AM, Prof J C Nash (U30A) wrote:
There are several items on the web about putting month names as tick
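The usual recipe, as a minimal sketch (month.abb is built in; the labels vector here matches the tick positions exactly, since axis() will not recycle):
plot(1:12, rnorm(12), xaxt = "n", xlab = "Month", ylab = "y")
axis(1, at = 1:12, labels = month.abb)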
This may be one of the many mysteries of the internals of L-BFGS-B,
which I have found fails from time to time. That is one of the reasons
for Rvmmin and Rcgmin (and hopefully sooner rather than later Rtn - a
truncated Newton method, currently working for unconstrained problems,
but still
I use microbenchmark to time various of my code segments and find it
very useful. However, by accident I called it with the expression I
wanted to time quoted. This simply measured the time to evaluate the
quote() call itself, not the underlying expression. The following
illustrates the difference. When explained, the issue is obvious,
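For instance (a reconstruction in the same spirit, not the original code):
library(microbenchmark)
microbenchmark(quote(sum(rnorm(1e5))))  # times constructing the expression
microbenchmark(sum(rnorm(1e5)))         # times the actual computation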
Sometimes one has to really read the manual carefully:
"If non-trivial bounds are supplied, this
method will be selected, with a warning." (re L-BFGS-B)
Several of us have noted problems occasionally with this code.
You might want to look at the box constrained codes offered in optimx
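A one-line demonstration of that documented behaviour:
fr <- function(x) sum((x - 2)^2)
optim(c(0, 0), fr, lower = c(-1, -1), upper = c(3, 3))  # warns and uses L-BFGS-B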
I think you have chosen a model that is ill-suited to the data.
My initial thoughts were simply that the issue was the usual nls()
singular gradient (actually singular Jacobian, if you want to be
understood in the optimization community) woes, but in this case the
Jacobian really is bad.
My quick and
I'm wondering what the purpose of the back-quoting of the name is, since
benchmark seems a valid name. The language reference does mention
back-quoting names to make them syntactic names, but I found no
explanation of why.
Can someone give a concise reason?
JN
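For what it's worth, back-quoting is only required for non-syntactic names; on a syntactic name like benchmark it is legal but redundant:
`benchmark` <- 1  # back-quotes allowed, but unnecessary here
benchmark         # the same object
`my result` <- 2  # required: the space makes the name non-syntactic
`my result`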
And if you need some extra digits:
require(Rmpfr)
testfn <- function(x) { 2^x + 3^x - 13 }
myint <- c(mpfr(-5, precBits = 1000), mpfr(5, precBits = 1000))
myroot <- unirootR(testfn, myint, tol = 1e-30)
myroot
John Nash
On 13-10-11 06:00 AM, r-help-requ...@r-project.org wrote:
Message: 33
Date: Thu, 10 Oct 2013
In order to have a clean workspace at the start of each chapter of a
book I'm knitting, I've written a little script as follows:
# chapclean.R
# This cleans up the R workspace
ilist <- c(".GlobalEnv", "package:stats", "package:graphics",
           "package:grDevices", "package:utils", "package:datasets",
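A hedged sketch of how such a script might continue -- the keep-list and the detach loop are my guesses, not the original chapclean.R:
keep <- c(".GlobalEnv", "package:stats", "package:graphics",
          "package:grDevices", "package:utils", "package:datasets",
          "package:methods", "package:base", "Autoloads")
for (it in setdiff(search(), keep)) detach(it, character.only = TRUE)
rm(list = ls(envir = .GlobalEnv), envir = .GlobalEnv)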
This is one area where more internal communication between the objective
function (inadmissible inputs) and optimizer (e.g., Quasi-Newton) is
needed. This is NOT done at the moment in R, nor in most software. An
area for R&D. In Nash and Walker-Smith (1987) we did some of this in
BASIC back in
The advice given is sensible. For a timing study see
http://rwiki.sciviews.org/doku.php?id=tips:rqcasestudy
We found that for optimization calculations, putting the objective
function calculation or parts thereof in Fortran was helpful. But we
kept those routines pretty small -- less than a page
Because you have y on both sides, and at different times, I think you
are going to have to bite the bullet and write down a residual function.
Suggestion: write it as
res[t+1] = (th1*x1 + R1*x2) * exp(a1*x3) + (1 - th1*x1 + R1*x2)*y[t] - y[t+1]
(cleaning up the indices -- they are surely needed
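A hedged sketch of that residual function -- the variable names follow the line above, the data layout is an assumption:
resfn <- function(p, x1, x2, x3, y) {
  th1 <- p[1]; R1 <- p[2]; a1 <- p[3]
  n <- length(y) - 1
  (th1 * x1[1:n] + R1 * x2[1:n]) * exp(a1 * x3[1:n]) +
    (1 - th1 * x1[1:n] + R1 * x2[1:n]) * y[1:n] - y[2:(n + 1)]
}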
I found mixed (and not always easy to predict) results from the
byte-code compiler. It seems necessary to test whether it helps. On some
calculations, it is definitely worthwhile.
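Testing is cheap; something like this is enough to see whether cmpfun() pays off for a given function:
library(compiler)
f  <- function(x) { s <- 0; for (xi in x) s <- s + xi^2; s }
fc <- cmpfun(f)
v  <- runif(1e6)
system.time(f(v)); system.time(fc(v))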
JN
On 12-12-07 01:57 PM, Berend Hasselman wrote:
On 07-12-2012, at 19:37, Spencer Graves wrote:
On 12/7/2012
Actually, it likely won't matter where you start. The Gauss-Newton
direction is nearly always close to 90 degrees from the gradient, as
can be seen by setting trace=TRUE in nlxb() from package nlmrt, which
does a safeguarded Marquardt calculation. This can be used in place of
nls(), except that nls() is a good deal more efficient than Marquardt
approaches when it works, but suffers from a fairly high failure rate.
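A runnable sketch of such a traced run (constructed data, not the poster's problem):
library(nlmrt)
set.seed(1)
mydata <- data.frame(x = 1:20)
mydata$y <- 3 * exp(0.2 * mydata$x) + rnorm(20)
fit <- nlxb(y ~ a * exp(b * x), data = mydata,
            start = c(a = 1, b = 0.1), trace = TRUE)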
JN
On 13-03-15 10:01 AM, Gabor Grothendieck wrote:
On Fri, Mar 15, 2013 at 9:45 AM, Prof J C Nash (U30A) nas...@uottawa.ca wrote:
Actually, it likely won't matter where you start
I decided to follow up my own suggestion and look at the robustness of
nls vs. nlxb. NOTE: this problem is NOT one that nls() would usually be
applied to. The script below is very crude, but does illustrate that
nls() is unable to find a solution in 70% of tries where nlxb (a
Marquardt
One of the reasons DUD is not available much any more is that methods
have evolved:
- nls (and nlxb() from nlmrt as well as nlsLM from minpack.lm -- which
are more robust but may be less efficient) can use automatic derivative
computation, which removes the motivation for which DUD was written
-
Date: Tue, 2 Apr 2013 06:59:13 -0500
From: Paul Johnson <pauljoh...@gmail.com>
To: qi A <send2...@gmail.com>
Cc: R-help <r-help@r-project.org>
Subject: Re: [R] DUD (Does not Use Derivatives) for nonlinear
regression in R?
Message-ID:
Given that nls has a lot of C code (and is pretty complicated), I doubt
you'll find much joy doing that.
nlxb from my nlmrt package is all in R, but you'll need to do quite a
bit of work at each stage. I don't form the J'J matrix, and do a
Marquardt approximation by appending appropriate rows to the
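The appended-rows device, as a minimal sketch: rather than forming J'J plus a scaled diagonal, solve the stacked least-squares problem, which is numerically kinder:
marq_step <- function(J, res, lambda) {
  A <- rbind(J, sqrt(lambda) * diag(ncol(J)))  # stabilizing rows appended
  b <- c(-res, rep(0, ncol(J)))
  qr.solve(A, b)  # the step delta, without ever forming J'J
}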
The expression has b[1] and b[2] while start has b[2] and b[3].
The expression needs a different form, for example:
# fit <- nlrob(y ~ x1 / (1 + b[1]*x2^b[2]), data = xx, start =
#              list(b[2], b[3]))
fit <- nlrob(y ~ x1 / (1 + b1*x2^b2), data = xx, start =
             list(b1 = b[2], b2 = b[3]))
This works,
There are lots of errors in your code. In particular, the optimization
routines do not like functions that ignore the parameters.
And you have not provided out or out1 to the optimizer -- they are
returned as elements of func(), but not correctly.
Please try some of the examples for optim or
If you run all methods in package optimx, you will see results all over
the western hemisphere. I suspect a problem with some nasty
computational issues. Possibly the replacement of the function with Inf
when any eigenvalues < 0 or nu < 0 is one source of this.
Check the Hessian eigenvalues at
the answers and make sure that they are meaningful.
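For reference, the kind of multi-method run meant above, with a stand-in objective:
library(optimx)
fr  <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
ans <- optimx(c(-1.2, 1), fr, method = c("Nelder-Mead", "BFGS", "nlminb"))
summary(ans, order = value)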
JN
On 13-12-17 11:32 AM, Adelchi Azzalini wrote:
On Tue, 17 Dec 2013 08:27:36 -0500, Prof J C Nash (U30A) wrote:
PJCN If you run all methods in package optimx, you will see results
PJCN all over the western hemisphere. I suspect a problem
.
AA
On 17 Dec 2013, at 18:18, Prof J C Nash (U30A) wrote:
As indicated, if optimizers check Hessians on every occasion, R would
enrich all the computer manufacturers. In this case it is not too large
a problem, so worth doing.
However, for this problem, the Hessian is being evaluated
This is almost always user error in setting up the problem, but
without a reproducible example, you won't get any real help on this list.
JN
On 14-06-24 06:00 AM, r-help-requ...@r-project.org wrote:
Message: 11
Date: Mon, 23 Jun 2014 09:14:23 -0700
From: Ferra Xu ferra...@yahoo.com
To:
You didn't give your results (but DID give a script -- hooray!). I made
a small change -- got rid of the bounds and added trace=TRUE, and got
the output
## after 5001 Jacobian and 6997 function evaluations
## name  coeff  SE  tstat  pval  gradient  JSingval
If it is possible, I think you will need to get the expression for
Puro.fun2 and then (essentially manually) put it into nls (or perhaps
better nlmrt or minpack.lm which have better numerics and allow bounds;
nlmrt even has masks, or temporarily fixed parameters, but I need to
write a vignette
One choice is to add a penalty to the objective to enforce the
constraint(s) along with bounds to keep the parameters from going wild.
This generally works reasonably well. Sometimes it helps to run just a
few iterations with a big penalty scale to force the parameters into a
feasible region,
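A minimal sketch of the two-stage penalty idea (toy objective and constraint; the scale pen is passed through optim's ... argument):
g   <- function(p) p[1] + p[2] - 1  # want g(p) = 0
obj <- function(p, pen) sum((p - c(2, 3))^2) + pen * g(p)^2
p1 <- optim(c(0, 0), obj, method = "L-BFGS-B", lower = -5, upper = 5,
            control = list(maxit = 5), pen = 1e8)$par  # force feasibility
optim(p1, obj, method = "L-BFGS-B", lower = -5, upper = 5, pen = 1e4)$par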
I was looking at the R website (r-project.org).
1) The Books page does not list several books about R, including one of
my own (Nonlinear parameter optimization tools in R) nor that of Karline
Soetaert on differential equations. How is the list updated?
2) The wiki seems to be dead. Is anyone in
If you want this resolved, you are going to have to provide the full
function in a reproducible example. Nearly a half-century with this type
of problem suggests a probability of nearly 1 that nlogL will be poorly
set up.
JN
On 14-12-03 06:00 AM, r-help-requ...@r-project.org wrote:
Message:
This is NOT critical. It arose from fumble fingers when developing
an R example, but it is slightly intriguing.
How could one build a string from substrings with a single backslash (\)
as separator? Here's the reproducible example:
fname <- "John"; lname <- "Smith"
paste(fname, lname)
paste(fname,
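The usual resolution, for the record: the separator is typed with two backslashes in the source, but the resulting string holds just one:
fname <- "John"; lname <- "Smith"
s <- paste(fname, lname, sep = "\\")  # "\\" in code is one backslash
cat(s, "\n")  # John\Smith
nchar(s)      # 10: the backslash counts once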
locale: /en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
loaded via a namespace (and not attached):
[1] tools_3.1.0
On Dec 6, 2014, at 2:00 PM, Prof J C Nash (U30A) <nas...@uottawa.ca> wrote:
I'm attempting to run the following script to allow me to
bring a new computer to a state where it can run a set of scripts
that already runs on another machine.
# listcheckinstall.R
# get a list of R packages from a file,
# check if installed,
# install those not installed
# then update all packages
# Run
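A hedged sketch of a body for such a script; the file name and details are assumptions, not the original:
pkgs <- readLines("pkglist.txt")  # hypothetical file, one package per line
need <- setdiff(pkgs, rownames(installed.packages()))
if (length(need) > 0) install.packages(need)
update.packages(ask = FALSE)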
nlsLM and nls share a numerical gradient approximation and pop up the
singular gradient quite often at the start. Package nlmrt and a very
alpha nls14 (not on CRAN) try to use analytic derivatives for the
Jacobian (most optimization folk will say singular Jacobian rather than
singular gradient)
Of the tools I know (and things change every day!), only package trust
uses the Hessian explicitly.
It would not be too difficult to include explicit Hessian by modifying
Rvmmin which is all in R -- I'm currently doing some cleanup on that, so
ask offline if you choose that route.
Given that
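A minimal sketch of trust's calling convention, which is the unusual part: the objective returns value, gradient, and Hessian together (toy quadratic):
library(trust)
objfun <- function(x) {
  list(value = sum((x - 1)^2),
       gradient = 2 * (x - 1),
       hessian = diag(2, length(x)))
}
trust(objfun, parinit = c(5, -3), rinit = 1, rmax = 10)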
I would not say it is fully solved since in using Nelder-Mead
you did not get the Hessian.
The issue is almost certainly that there is an implicit bound due to
log() or sqrt() where a parameter gets to be near zero and the finite
difference approximation of derivatives steps over the cliff.
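A tiny illustration of that cliff, using a forward and a backward difference at a parameter sitting near the log() boundary:
f <- function(p) log(p) + p^2
p <- 1e-9
h <- 1e-7  # a typical finite-difference step
(f(p + h) - f(p)) / h     # fine
(f(p - h) - f(p)) / (-h)  # NaN: p - h < 0 steps over the cliff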
Some observations -- no solution here though:
1) the code is not executable. I tried. Maybe that makes it reproducible!
Typos such as "stat mod", an undefined Q, etc.
2) My experience is that any setup with a ?apply approach that doesn't
then check to see that the structure of the data is correct
1) It helps to include the require statements for those of us who work
outside your particular box.
lme4 and (as far as I can guess) fastGHQuad
are needed.
2) Most nonlinear functions have domains where they cannot be
evaluated. I'd be richer than Warren Buffett if I got $5 for
each time
Well put. I avoid them too, and go so far as to seek and destroy them so
they don't get loaded unnoticed and cause unwanted consequences.
.RData files (the ones with nothing before the period) are just traps
for your future self, with no documentation. I avoid them like the plague.
JN
On 15-03-11
Andrew's suggestion for Year is a help, but package nlmrt shows the
problem you are trying to solve is truly one where there is a Jacobian
singularity. (nlmrt produces the Jacobian singular values -- but read
the output carefully because these are placed for compact output as if
they correspond to
As another post has suggested, r-sig-debian may be more help.
However, it looks like you need to install a different package from the
one you tried. See
http://ubuntuforums.org/showthread.php?t=1774516
about getting libcurl on 12.04.
FYI Ubuntu 12.04 is now getting dated, and I've found people
Your problem is saying (on my machine) that it cannot compute the
gradient. Since it does this numerically, my guess is that the step to
evaluate the gradient violates the bounds and we get log(-something).
I also get
Warning messages:
1: In dnbinom(x = dummyData[, Y], mu = mu, size =
Most of the stochastic optimization methods are directed at multiple
optima. You appear to have an imprecisely determined function (e.g., the
time taken for a racing driver to get round the track), and indeed this
is a different form of stochastic optimization.
With Harry Joe of UBC I did quite a bit of
It looks like the matrix nhatend (for Numerical Hessian AT END) has some
NAs or Infs.
Suggest you turn off the Hessian calculation by
argument hessian=FALSE (that is the default)
and control=list(kkt=FALSE) (the default is TRUE for small problems)
Then take the resulting final parameters and
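In optimx terms, something like this (stand-in objective):
library(optimx)
fr <- function(x) sum((x - 1)^2)
optimx(c(5, 5), fr, method = "BFGS",
       hessian = FALSE, control = list(kkt = FALSE))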
I have a section (6.4.2) about singular gradient (actually singular
Jacobian to numerical analysts) in my recent book Nonlinear parameter
optimization using R tools. nls() is prone to this, though having all
the starting values the same in many functions can be asking for trouble
of this sort, as
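The classic instance of that trap, as a constructed example -- with b = d at the start the two exponential terms coincide and the Jacobian is rank-deficient:
x <- 1:10
y <- 3 * exp(-0.1 * x) + 2 * exp(-0.5 * x)
try(nls(y ~ a * exp(-b * x) + cc * exp(-d * x),
        start = list(a = 1, b = 0.2, cc = 1, d = 0.2)))  # singular gradient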
I wrote Rcgmin to do bounds constraints only. Linear constraints are
much more complicated to include.
If your constraints are equality ones, you could solve them to eliminate
parameters, but that could make it awkward to evaluate the gradient.
For inequality constraints, especially if there are only a couple, I
think I'd
Package nlmrt (function nlxb) tries to use symbolic derivatives. In
fact, Duncan Murdoch and I have a very slowly developing nls14 package
to substitute for nls that should advance this even further.
nlxb also allows masked (i.e., fixed) parameters, which would let you
combine your runs, fixing
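A hedged sketch of a masked call -- I believe the argument is masked with quoted parameter names, but check ?nlxb; the data and model here are placeholders:
library(nlmrt)
# fit <- nlxb(y ~ a * exp(b * x) + cc, data = dat,
#             start = c(a = 1, b = 0.1, cc = 0),
#             masked = c("cc"))  # hold cc at its starting value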