In case it is of interest, this problem can also be solved with an
unconstrained optimizer (here 'optim'), by projecting onto the unit
sphere inside the objective, like this:
proj <- function(x) x / sqrt(sum(x * x))
opt <- optim(c(0, 0, 1), function(x) f(proj(x)))
proj(opt$par)
## [1] 5.388907e-09 7.071068e-01 7.071068e-01
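For readers outside R, here is a hypothetical SciPy translation of the same projection trick. The objective f(x) = 2*(x[1]^2 - x[2]*x[3]) is an assumption, reconstructed from the f(x, y, sign) code posted elsewhere in this thread; the original message does not show f.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # assumed objective, reconstructed from the thread's f(x, y, sign) code
    return 2.0 * (x[0]**2 - x[1] * x[2])

def proj(x):
    # radial projection onto the unit sphere
    return x / np.sqrt(np.dot(x, x))

# unconstrained Nelder-Mead on the projected objective, as in the R post
res = minimize(lambda x: f(proj(x)), np.array([0.0, 0.0, 1.0]),
               method="Nelder-Mead")
x_hat = proj(res.x)
print(x_hat)        # close to (0, 1/sqrt(2), 1/sqrt(2))
print(f(x_hat))     # close to the analytic minimum, 2*(0 - 1/2) = -1
```

The projected objective is constant along rays from the origin, which derivative-free Nelder-Mead tolerates well; that is why this works as an unconstrained problem.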
On Fri, May 21, 2021
I meant:
x0 = c (1, 1e-3, 0)
Not:
x0 = c (1, 1e6, 0)
So, a large intentional error may work too.
Possibly better...?
On Thu, May 27, 2021 at 6:00 PM Abby Spurdle wrote:
If I can re-answer the original post:
There's a relatively simple solution.
(For these problems, at least).
#wrong
x0 = c (1, 0, 0)
NlcOptim::solnl(x0, objfun = f, confun = conf)$par
Rdonlp2::donlp2(x0, fn = f, nlin = list(heq), nlin.lower = 0,
nlin.upper = 0)$par
#right
x0 = c (1, 1e6, 0)
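A sketch of why these two starting points behave so differently, translated to SciPy's SLSQP rather than the solvers named above, and assuming the objective f(x) = 2*(x[1]^2 - x[2]*x[3]) reconstructed from the f(x, y, sign) code elsewhere in the thread: x0 = (1, 0, 0) already satisfies the KKT conditions of the constrained problem, so a local solver can legitimately stop there, while a slightly perturbed start escapes.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # assumed objective, reconstructed from the thread
    return 2.0 * (x[0]**2 - x[1] * x[2])

# equality constraint: x lies on the unit sphere
con = {"type": "eq", "fun": lambda x: np.dot(x, x) - 1.0}

bad  = minimize(f, [1.0, 0.0, 0.0],  constraints=[con], method="SLSQP")
good = minimize(f, [1.0, 1e-3, 0.0], constraints=[con], method="SLSQP")

print(bad.fun)    # stalls at f = 2: x0 is itself a KKT point
print(good.fun)   # reaches the true minimum, approx -1
```

At (1, 0, 0) the gradient (4, 0, 0) is parallel to the constraint normal, so the first QP step is exactly zero and the solver declares convergence without moving.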
I need to retract my previous post.
(Except the part about R having extremely good numerical capabilities.)
I ran some of the examples, and Hans W was correct.
As someone who works on trying to improve the optimization codes in R,
though mainly in the unconstrained and bounds-constrained area, I think my
experience is more akin to that of HWB. That is, some problems reward a
reworking -- and the example in question does have a reparametrization
that removes the constraint.
I received an off-list email, questioning the relevance of my post.
So, I thought I should clarify.
If an optimization algorithm is dependent on the starting point (or
other user-selected parameters), and then fails to find the "correct"
solution because the starting point (or other user-selected
parameters) was poorly chosen, then that dependence seems directly
relevant here.
Sorry, missed the top line of code.
library (barsurf)
__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
For a start, there are two local minima.
Add to that floating point errors.
And possible assumptions by the package authors.
begin code
f <- function (x, y, sign)
{   unsign.z <- sqrt (1 - x^2 - y^2)
    2 * (x^2 - sign * y * unsign.z)
}
north.f <- function (x, y) f (x, y, +1)
south.f <- function (x, y) f (x, y, -1)
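The hemisphere split above, sketched in Python as a hypothetical translation of the thread's R code: after substituting z = ±sqrt(1 - x^2 - y^2), each hemisphere becomes an unconstrained two-variable problem, and each has one local minimum of value -1.

```python
import numpy as np
from scipy.optimize import minimize

def f2(xy, sign):
    # the thread's f(x, y, sign): objective restricted to one hemisphere
    x, y = xy
    z = np.sqrt(max(1.0 - x*x - y*y, 0.0))   # clamp for round-off at the rim
    return 2.0 * (x*x - sign * y * z)

north = minimize(f2, [0.1,  0.1], args=(+1,), method="Nelder-Mead")
south = minimize(f2, [0.1, -0.1], args=(-1,), method="Nelder-Mead")

print(north.x, north.fun)   # approx (0,  1/sqrt(2)), -1
print(south.x, south.fun)   # approx (0, -1/sqrt(2)), -1
```

The two 2-D minimizers lift back to the two sphere minima (0, 1/sqrt(2), 1/sqrt(2)) and (0, -1/sqrt(2), -1/sqrt(2)), which is the pair of local minima mentioned above.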
Yes. "*on* the unit sphere" means on the surface, as you can guess
from the equality constraint. And 'auglag()' does find the minimum, so
no need for a special approach.
I was/am interested in why all these other good solvers get stuck,
i.e., do not move away from the starting point, and how to avoid that.
Sorry, this might sound like a poor question:
But by "on the unit sphere", do you mean on the ***surface*** of the sphere?
In which case, can't the surface of a sphere be projected onto a pair
of circles?
Where the cost function is reformulated as a function of two (rather
than three) variables.
I might (and that could be a stretch) be expert in unconstrained problems,
but I've nowhere near HWB's experience in constrained ones.
My main reason for wanting gradients is to know when I'm at a solution.
In practice, for getting to the solution, I've often found secant methods
work faster.
Hi Hans: I can't help as far as the projection of the gradient onto the
constraint but it may give insight just to see what the value of
the gradient itself is when the optimization stops.
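Following that suggestion, a small Python sketch (assuming the objective f(x) = 2*(x[1]^2 - x[2]*x[3]) reconstructed from the f(x, y, sign) code elsewhere in the thread): for a sphere constraint, the quantity worth inspecting at the stopping point is the gradient projected onto the tangent plane, and it vanishes both at the true minimum and at the stalling start (1, 0, 0).

```python
import numpy as np

def grad_f(x):
    # analytic gradient of the assumed objective f(x) = 2*(x1^2 - x2*x3)
    return np.array([4.0 * x[0], -2.0 * x[2], -2.0 * x[1]])

def projected_grad(x):
    # tangential component: remove the part normal to the sphere ||x|| = 1
    g = grad_f(x)
    return g - np.dot(g, x) * x

print(projected_grad(np.array([1.0, 0.0, 0.0])))                 # exactly zero
print(projected_grad(np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)))  # zero up to round-off
```

A zero projected gradient only says the solver is at a constrained stationary point; it does not distinguish the minimum from the saddle at (1, 0, 0), which is exactly why the "wrong" start looks converged.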
John Nash (definitely one of THE expeRts when it comes to optimization in
R) often strongly recommends supplying analytic gradients when possible.
Mark, you're right, and it's a bit embarrassing as I thought I had
looked at it closely enough.
This solves the problem for 'alabama::auglag()' in both cases, but NOT for
* NlcOptim::solnl -- with x0
* nloptr::auglag -- both x0, x1
* Rsolnp::solnp -- with x0
Hi Hans: I think that you are missing minus signs in the 2nd and 3rd
elements of your gradient.
Also, I don't know how all of the optimization functions work as far as
their arguments, but it's best to supply the gradient when possible.
I hope it helps.
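A quick way to catch sign errors like that, sketched in Python (assuming the objective f(x) = 2*(x[1]^2 - x[2]*x[3]) reconstructed from elsewhere in the thread): compare the analytic gradient against central finite differences before handing it to a solver.

```python
import numpy as np

def f(x):
    # assumed objective, reconstructed from the thread
    return 2.0 * (x[0]**2 - x[1] * x[2])

def grad_f(x):
    # note the minus signs in the 2nd and 3rd elements
    return np.array([4.0 * x[0], -2.0 * x[2], -2.0 * x[1]])

def num_grad(fun, x, h=1e-6):
    # central finite differences, one coordinate at a time
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (fun(x + e) - fun(x - e)) / (2.0 * h)
    return g

x = np.array([0.3, -0.4, 0.5])
print(np.max(np.abs(grad_f(x) - num_grad(f, x))))   # tiny: gradients agree
```

With the minus signs dropped, the discrepancy at this test point would be of order 1 rather than round-off, so the check flags the bug immediately.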
On Fri, May 21, 2021 at 11:01 AM Hans