Mark, you're right, and it's a bit embarrassing as I thought I had
looked at it closely enough.

This solves the problem for 'alabama::auglag()' in both cases, but NOT for

  * NlcOptim::solnl     -- with x0
  * nloptr::auglag      -- both x0, x1
  * Rsolnp::solnp       -- with x0
  * Rdonlp2::donlp2     -- with x0

as for these solver calls the gradient function g was *not* used.

Actually, 'solnl()' and 'solnp()' do not accept a gradient argument,
'nloptr::auglag()' states that it does not use a supplied gradient, and
'donlp2' does not provide one either.
Gradients, where needed, are computed internally, which in most cases
is sufficient anyway.
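Just to illustrate what "computed internally" amounts to: a sketch of a
forward-difference gradient in base R, similar in spirit to what these
solvers fall back on when no gradient function is supplied. The objective
f below is a made-up toy, not the problem from this thread, and num_grad
is my own helper, not taken from any of the packages above.

```r
## Hypothetical sketch: forward finite-difference gradient in base R.
num_grad <- function(f, x, h = 1e-7) {
  fx <- f(x)
  vapply(seq_along(x), function(i) {
    xi <- x
    xi[i] <- xi[i] + h          # perturb the i-th coordinate
    (f(xi) - fx) / h            # one-sided difference quotient
  }, numeric(1))
}

f <- function(x) sum(x^2)       # toy objective, analytic gradient is 2*x
num_grad(f, c(1, 2))            # close to c(2, 4)
```

The forward difference is cheap but less accurate than central differences;
that accuracy gap is one reason supplying an analytic gradient, where a
solver accepts one, is generally preferable.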

So the question remains:
Is the fact that the projection of the gradient onto the constraint is
zero the reason these solvers do not find the minimum?

And how can this be avoided -- except, maybe, by checking the gradient
against all the given constraints?
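Such a check could look like the following base-R sketch (my own
construction, not code from any of the packages discussed): project the
objective gradient onto the tangent space of the active constraints at the
starting point; a (near-)zero projection means the point already satisfies
the first-order stationarity condition along the constraint, so a solver
may legitimately stop there. The circle example is purely illustrative.

```r
## Hypothetical check: component of the gradient tangent to the constraints.
## grad: objective gradient at x; J: m x n Jacobian of the constraints at x.
proj_grad <- function(grad, J) {
  grad - t(J) %*% solve(J %*% t(J), J %*% grad)
}

## Toy case: f(x) = x1 on the unit circle x1^2 + x2^2 = 1, at x = (1, 0).
grad <- c(1, 0)                     # gradient of f
J    <- matrix(c(2, 0), nrow = 1)   # constraint Jacobian at (1, 0)
proj_grad(grad, J)                  # ~ c(0, 0): stationary on the constraint
```

If this projection is zero at x0 for the actual problem, that would support
the suspicion above: the solvers see a first-order stationary point and
have no downhill direction along the constraint to move in.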

Thanks  --HW



On Fri, 21 May 2021 at 17:58, Mark Leeds <marklee...@gmail.com> wrote:
>
> Hi Hans: I think that you are missing minus signs in the 2nd and 3rd
> elements of your gradient.
> Also, I don't know how all of the optimization functions work as far as
> their arguments, but it's best to supply the gradient when possible.
> I hope it helps.
>

______________________________________________
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
