I might (and that could be a stretch) be expert in unconstrained problems,
but I've nowhere near HWB's experience in constrained ones.
My main reason for wanting gradients is to know when I'm at a solution.
In practice for getting to the solution, I've often found secant methods
work faster, though.
Hi Hans: I can't help as far as the projection of the gradient onto the
constraint, but it may give insight just to see what the value of
the gradient itself is when the optimization stops.
John Nash (definitely one of THE expeRts when it comes to optimization in R)
often strongly recommends to
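One way to act on that advice (a sketch of mine, not code from the thread):
evaluate the gradient numerically at the point where the solver stopped, and
also project it onto the tangent plane of the sphere constraint from the
problem further down in the thread. The stopping point xsol below is assumed
for illustration (it is the analytic minimizer); numDeriv::grad supplies the
numerical gradient.

#-- Sketch: inspect the gradient where a solver stopped
library(numDeriv)
f = function(x) 2 * (x[1]^2 - x[2] * x[3])
xsol = c(0, 1, 1) / sqrt(2)     # assumed stopping point (the true minimizer)
g = grad(f, xsol)               # raw gradient at the stopping point
n = xsol / sqrt(sum(xsol^2))    # unit normal of the sphere at xsol
g_tan = g - sum(g * n) * n      # gradient projected onto the constraint
g_tan                           # ~ c(0, 0, 0) at a constrained stationary point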
Mark, you're right, and it's a bit embarrassing as I thought I had
looked at it closely enough.
This solves the problem for 'alabama::auglag()' in both cases, but NOT for
the following (one of these calls is sketched after the list):
* NlcOptim::solnl -- with x0
* nloptr::auglag -- both x0, x1
* Rsolnp::solnp -- with x0
* Rdonlp2::donlp2
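For reference, a minimal sketch (mine, not the thread's actual code) of how
one of those failing calls might look, using Rsolnp::solnp started from x0;
eqfun/eqB encode the sphere constraint sum(x^2) == 1:

#-- Sketch: the same problem via Rsolnp::solnp, started from x0
library(Rsolnp)
f = function(x) 2 * (x[1]^2 - x[2] * x[3])
sol = solnp(pars = c(1, 0, 0),             # x0
            fun = f,
            eqfun = function(x) sum(x^2),  # equality constraint function
            eqB = 1)                       # required value: sum(x^2) == 1
sol$pars                                   # where solnp stopped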
Hi Hans: I think that you are missing minus signs in the 2nd and 3rd
elements of your gradient.
Also, I don't know how all of the optimization functions work as far as
their arguments go, but it's best to supply
the gradient when possible. I hope it helps.
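For the objective f(x, y, z) = 2 (x^2 - y z) from the original post, the
analytic gradient is (4x, -2z, -2y), so the corrected R version would be
something like:

gr = function(x) c(4 * x[1], -2 * x[3], -2 * x[2])  # minus signs in elements 2 and 3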
On Fri, May 21, 2021 at 11:01 AM Hans W
Just by chance I came across the following example of minimizing
a simple function
(x,y,z) --> 2 (x^2 - y z)
on the unit sphere, which is the only constraint present.
I tried it with two starting points, x0 = (1, 0, 0) and x1 = (0, 0, 1).
#-- Problem definition in R
f = function(x) 2 * (x[1]^2 - x[2]*x[3])  # objective
heq = function(x) sum(x^2) - 1            # unit-sphere constraint
x0 = c(1, 0, 0); x1 = c(0, 0, 1)          # the two starting points
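A runnable sketch of the whole setup (my reconstruction, not the post's full
code), with the corrected gradient supplied to alabama::auglag and both
starting points tried:

#-- Sketch: corrected setup fed to alabama::auglag
library(alabama)
f   = function(x) 2 * (x[1]^2 - x[2] * x[3])
gr  = function(x) c(4 * x[1], -2 * x[3], -2 * x[2])
heq = function(x) sum(x^2) - 1                 # unit sphere
heq.jac = function(x) matrix(2 * x, nrow = 1)  # Jacobian of heq
for (start in list(c(1, 0, 0), c(0, 0, 1))) {  # x0 and x1
  sol = auglag(par = start, fn = f, gr = gr,
               heq = heq, heq.jac = heq.jac,
               control.outer = list(trace = FALSE))
  cat("start:", start, "-> par:", round(sol$par, 4),
      "value:", round(sol$value, 4), "\n")
}
# The analytic minimum is f = -1 at x = (0, 1, 1)/sqrt(2), up to sign.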