Re: [R] Non linear optimization with nloptr package fail to produce true optimal result

2024-12-13 Thread John Fox

Hello Duncan,

On 2024-12-13 5:30 p.m., Duncan Murdoch wrote:

Caution: External email.


On 2024-12-13 5:11 p.m., John Fox wrote:

Dear Daniel,

On 2024-12-13 2:51 p.m., Daniel Lobo wrote:


Looks like the solution 1.576708 6.456606 6.195305 -19.007996 is
the best solution that nloptr can produce by increasing the iteration
count.

A better set of solutions is obtained using the pracma package.


Not if I read the output correctly. As I showed, the result from
pracma::fmincon() produces a larger value of the objective function than
the result I obtained from nloptr().

John Nash (who is an expert on optimization -- I'm not) obtained an even
lower value of the objective function from alabama::auglag().

As others have pointed out, one can't really draw general conclusions
from a particular example, and like others, I don't have the time or
inclination to figure out why your problem appears to be ill-conditioned
(though note that the columns of X, excluding the constant, are highly
correlated).


I would guess that the main difficulty with the example is the "sum of
absolute errors" objective function.  That objective function has a


That was one of my first thoughts, but when I tried substituting the 
least-squares objective function, I observed similar behaviour.



discontinuous derivative which will cause problems for many optimizers.
It may also have many local optima, though I don't know in this case.


The model being fit is pretty simple -- a constrained linear model -- 
but as I said the columns of X are fairly highly correlated (though not 
so much as to create problems for unconstrained least-squares):


> cor(X[, -1])
               x       x^2    log(x)
x      1.0000000 0.9710284 0.9254732
x^2    0.9710284 1.0000000 0.8201191
log(x) 0.9254732 0.8201191 1.0000000

Columns 2 and 3 are x^2 and log(x).
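One quick way to quantify that conditioning (my addition, not part of the original message) is the condition number of the design matrix:

```r
# Rebuild the design matrix from the thread's example
x <- seq(0.5, 20, length.out = 500)
X <- cbind(1, x, x^2, log(x))

# kappa() estimates the condition number of X; large values indicate a
# nearly collinear, ill-conditioned regression problem
kappa(X)
```

With these columns the condition number is large, consistent with the high pairwise correlations above.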

Also, the various solutions aren't all that different, and the 
regression residuals are relatively very small:


> fits <- cbind(X %*% res$solution,
+   X %*% c(0.222, 6.999, 6.17, -19.371),
+   X %*% c(-0.610594, 4.307408, 6.254267, -11.919881),
+   y)
> colnames(fits) = c("nloptr", "fmincon", "auglag", "y")
> cor(fits)
         nloptr fmincon auglag     y
nloptr    1.000   0.997  0.990 0.930
fmincon   0.997   1.000  0.988 0.923
auglag    0.990   0.988  1.000 0.970
y         0.930   0.923  0.970 1.000
> cor(log(fits))
           nloptr   fmincon    auglag         y
nloptr      1.000     0.796 0.9989490 0.9957987
fmincon     0.796     1.000 0.9991778 0.9961371
auglag  0.9989490 0.9991778     1.000 0.9986251
y       0.9957987 0.9961371 0.9986251     1.000
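The solutions can also be compared directly on the objective scale. A sketch assembled from the code and values quoted in this thread (the commented values are the ones the posters reported):

```r
# Reconstruct the thread's data and sum-of-absolute-errors objective
set.seed(1)
x <- seq(0.5, 20, length.out = 500)
y <- 1.34 + 0.5673 * x + 6.356 * x^2 - 1.234 * log(x) + runif(500, 0, 3)
X <- cbind(1, x, x^2, log(x))
f <- function(theta) sum(abs(X %*% theta - y))

f(c(1.576708, 6.456606, 6.195305, -19.008))      # nloptr:  ~1287.7
f(c(0.222, 6.999, 6.17, -19.371))                # fmincon: ~1325.1
f(c(-0.610594, 4.307408, 6.254267, -11.919881))  # auglag:  ~1029.8
```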

And take a look at (graph not shown):

plot(x, log(y), col="lightgray")
lines(x, log(fits[, 1]), col="magenta")
lines(x, log(fits[, 2]), col="blue")
lines(x, log(fits[, 3]), col="orange")
legend("bottomright", inset=0.025, col=c("magenta", "blue", "orange"),
   lty=1, legend=c("nloptr", "fmincon", "auglag"))

There's something peculiar going on at the lower-left.

Best,
 John



Re: [R] Non linear optimization with nloptr package fail to produce true optimal result

2024-12-13 Thread Duncan Murdoch

On 2024-12-13 5:11 p.m., John Fox wrote:

Dear Daniel,

On 2024-12-13 2:51 p.m., Daniel Lobo wrote:


Looks like the solution 1.576708 6.456606 6.195305 -19.007996 is
the best solution that nloptr can produce by increasing the iteration
count.

A better set of solutions is obtained using the pracma package.


Not if I read the output correctly. As I showed, the result from
pracma::fmincon() produces a larger value of the objective function than
the result I obtained from nloptr().

John Nash (who is an expert on optimization -- I'm not) obtained an even
lower value of the objective function from alabama::auglag().

As others have pointed out, one can't really draw general conclusions
from a particular example, and like others, I don't have the time or
inclination to figure out why your problem appears to be ill-conditioned
(though note that the columns of X, excluding the constant, are highly
correlated).


I would guess that the main difficulty with the example is the "sum of 
absolute errors" objective function.  That objective function has a 
discontinuous derivative which will cause problems for many optimizers. 
It may also have many local optima, though I don't know in this case.
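A one-dimensional toy (my illustration, not from the thread) makes the kink concrete: the absolute-value loss is non-differentiable wherever a residual crosses zero, so gradient-based methods can stall there, while derivative-free methods such as COBYLA or Nelder-Mead merely slow down:

```r
# |t - 2| has no derivative at t = 2; central differences show the jump
g <- function(t) abs(t - 2)
h <- 1e-6
(g(1 + h) - g(1 - h)) / (2 * h)  # -1 to the left of the kink
(g(3 + h) - g(3 - h)) / (2 * h)  # +1 to the right of the kink
(g(2 + h) - g(2 - h)) / (2 * h)  #  0 exactly at the kink (misleading)
```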


Duncan Murdoch




__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide https://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Non linear optimization with nloptr package fail to produce true optimal result

2024-12-13 Thread John Fox

Dear Daniel,

On 2024-12-13 2:51 p.m., Daniel Lobo wrote:


Looks like the solution 1.576708 6.456606 6.195305 -19.007996 is
the best solution that nloptr can produce by increasing the iteration
count.

A better set of solutions is obtained using the pracma package.


Not if I read the output correctly. As I showed, the result from
pracma::fmincon() produces a larger value of the objective function than
the result I obtained from nloptr().


John Nash (who is an expert on optimization -- I'm not) obtained an even 
lower value of the objective function from alabama::auglag().


As others have pointed out, one can't really draw general conclusions 
from a particular example, and like others, I don't have the time or 
inclination to figure out why your problem appears to be ill-conditioned 
(though note that the columns of X, excluding the constant, are highly 
correlated).


Best,
 John





Re: [R] Non linear optimization with nloptr package fail to produce true optimal result

2024-12-13 Thread J C Nash




On 2024-12-13 13:55, Daniel Lobo wrote:

1. Why is nloptr() failing where other programs succeed with the
same set of data, numbers, and constraints?
2. Is this enough ground to say that nloptr is inferior and that users
should not use it for complex problems?


As I indicated in a recent response, nloptr has a lot of settings and
possibilities. It is VERY easy to choose poor values. I think Ravi
Varadhan has probably done a lot of work in alabama to set things up so
the user has an easier time. I certainly did. In fact I'm now getting
complaints in your modified script when I try nloptr, but that's likely
some typo or other.

I substituted optim's Nelder-Mead for anms and got essentially the same
results, so you could run with just the old-standard optim(), which is
built into R. (I even wrote the original BASIC code in 1975.)
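A sketch of that optim() route (my reconstruction under the thread's problem definitions; the quadratic-penalty wrapper mimics the conobj() function John Nash posted elsewhere in this thread, and the result is not guaranteed to match his):

```r
# Thread's data, objective, and constraints
set.seed(1)
x <- seq(0.5, 20, length.out = 500)
y <- 1.34 + 0.5673 * x + 6.356 * x^2 - 1.234 * log(x) + runif(500, 0, 3)
X <- cbind(1, x, x^2, log(x))

f   <- function(theta) sum(abs(X %*% theta - y))
hin <- function(theta) abs(sum(X %*% theta) - sum(y)) - 1e-3 + 1e-4
Hx  <- function(theta) sum(X[100, ] * theta) - (120 - 1e-4)

# Quadratic penalty: inequality penalized only when violated
# (hin >= 0 treated as feasible, matching Nash's conobj convention)
pen <- function(theta) {
  ci <- min(hin(theta), 0)
  f(theta) + ci^2 + Hx(theta)^2
}

# Built-in Nelder-Mead; may need restarts or larger penalty weights
sol <- optim(rep(0, 4), pen, method = "Nelder-Mead",
             control = list(maxit = 20000, reltol = 1e-12))
sol$par   # candidate parameter vector
```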

JN



Re: [R] Non linear optimization with nloptr package fail to produce true optimal result

2024-12-13 Thread Ben Bolker
  It's a long way from "X works better than Y on this particular
problem" to "X is superior to Y". It's a somewhat loose analogy, but
the "no free lunch theorem" asserts that if we consider a broad enough
class of optimization problems, *no* optimization algorithm is
uniformly best ...

  That answers question #2 ("no"). Answering question #1 requires a
lot of work digging into the details of the problem, which I don't
have the time & energy to do right now ... someone who knows a lot
more about optimization *might* be able to answer based on previous
experience and background knowledge, but even they might have to spend
a lot of time digging ...




Re: [R] Non linear optimization with nloptr package fail to produce true optimal result

2024-12-13 Thread Daniel Lobo
Hi Duncan,

I take your advice.

I posted here in search of a better answer to my problem, as I could
not get one there.

My questions are:
1. Why is nloptr() failing where other programs succeed with the same
set of data, numbers, and constraints?
2. Is this enough ground to say that nloptr is inferior and that users
should not use it for complex problems?

I would like a thoughtful answer to the above, as my working environment
has only the nloptr package installed; it is an isolated system for
security reasons, and installing a new package requires many approvals
and is time-consuming.

BTW, if anyone is interested, here is my original post:
https://stackoverflow.com/a/79271318/15910619



Re: [R] Non linear optimization with nloptr package fail to produce true optimal result

2024-12-13 Thread J C Nash

Interesting that alabama and nloptr both use auglag but alabama gets a
lower objective fn. I think there could be lots of exploration of
controls and settings to play with to find out what is going on.


alabama::auglag
f, ci, ce, ob, val:  0  -4.71486e-08  1029.77  1029.77  at [1] -0.610594  4.307408  6.254267 -11.919881
> solfox <- c(1.576708, 6.456606, 6.195305, -19.008)
> conobj(solfox)
f, ci, ce, ob, val:  0  -1.031085e-05  1287.707  1287.707  at [1]  1.576708  6.456606  6.195305 -19.008000
         [,1]
[1,] 1287.707

ci is the inequality constraint fn, ce the equality one, ob the raw
objective, and val the penalized one, followed by the 4 parameters.


JN




Re: [R] Non linear optimization with nloptr package fail to produce true optimal result

2024-12-13 Thread Daniel Lobo
Looks like the solution 1.576708 6.456606 6.195305 -19.007996 is
the best solution that nloptr can produce by increasing the iteration
count.

A better set of solutions is obtained using the pracma package.



Re: [R] Non linear optimization with nloptr package fail to produce true optimal result

2024-12-13 Thread John Fox

Dear Daniel et al.,

Following on Duncan's remark and examining the message produced by 
nloptr(), I simply tried increasing the maximum number of function 
evaluations:

-- snip ---

> nloptr(rep(0, 4), f, eval_g_ineq = hin, eval_g_eq = Hx, opts =
+  list("algorithm" = "NLOPT_LN_COBYLA", "xtol_rel" = 1.0e-8,
+   maxeval = 1e5)
+ )

Call:

nloptr(x0 = rep(0, 4), eval_f = f, eval_g_ineq = hin, eval_g_eq = Hx,
opts = list(algorithm = "NLOPT_LN_COBYLA", xtol_rel = 1e-08,
maxeval = 1e+05))


Minimization using NLopt version 2.7.1

NLopt solver status: 4 ( NLOPT_XTOL_REACHED: Optimization stopped
because xtol_rel or xtol_abs (above) was reached. )

Number of Iterations: 46317
Termination conditions:  xtol_rel: 1e-08    maxeval: 1e+05
Number of inequality constraints:  1
Number of equality constraints:    1
Optimal value of objective function:  1287.71725107671
Optimal value of controls: 1.576708 6.456606 6.195305 -19.008

-- snip --

That produces a solution closer to, and better than, the one that you 
suggested (which you obtained how?):


> f(c(0.222, 6.999, 6.17, -19.371))
[1] 1325.076

I hope this helps,
 John
--
John Fox, Professor Emeritus
McMaster University
Hamilton, Ontario, Canada
web: https://www.john-fox.ca/
--
On 2024-12-13 1:45 p.m., Duncan Murdoch wrote:


You posted a version of this question on StackOverflow, and were given
advice there that you ignored.

nloptr() clearly indicates that it is quitting without reaching an
optimum, but you are hiding that message.  Don't do that.

Duncan Murdoch

On 2024-12-13 12:52 p.m., Daniel Lobo wrote:

library(nloptr)

set.seed(1)
A <- 1.34
B <- 0.5673
C <- 6.356
D <- -1.234
x <- seq(0.5, 20, length.out = 500)
y <- A + B * x + C * x^2 + D * log(x) + runif(500, 0, 3)

#Objective function

X <- cbind(1, x, x^2, log(x))
f <- function(theta) {
sum(abs(X %*% theta - y))
}

#Constraint

eps <- 1e-4

hin <- function(theta) {
  abs(sum(X %*% theta) - sum(y)) - 1e-3 + eps
}

Hx <- function(theta) {
  X[100, , drop = FALSE] %*% theta - (120 - eps)
}

#Optimization with nloptr

Sol = nloptr(rep(0, 4), f, eval_g_ineq = hin, eval_g_eq = Hx, opts =
list("algorithm" = "NLOPT_LN_COBYLA", "xtol_rel" = 1.0e-8))$solution
# -0.2186159 -0.5032066  6.4458823 -0.4125948




Re: [R] Non linear optimization with nloptr package fail to produce true optimal result

2024-12-13 Thread J C Nash

Setting penalty scales si, se at 1e+4 gets results somewhat near the
alabama results.

The problem seems quite sensitive to the constraint.

JN
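Concretely, the change JN describes amounts to exposing the penalty weights that are hard-coded at 1 in the conobj() function of the forwarded script. A sketch (my rewrite, with the problem definitions repeated so it is self-contained; 1e4 is the value reported as helpful):

```r
# Thread's data and the forwarded script's objective/constraints
set.seed(1)
x <- seq(0.5, 20, length.out = 500)
y <- 1.34 + 0.5673 * x + 6.356 * x^2 - 1.234 * log(x) + runif(500, 0, 3)
X <- cbind(1, x, x^2, log(x))
flobo   <- function(theta) sum(abs(X %*% theta - y))
hinlobo <- function(theta) abs(sum(X %*% theta) - sum(y)) - 1e-3 + 1e-4
Hxlobo  <- function(theta) sum(X[100, ] * theta) - (120 - 1e-4)

# Penalty weights si, se exposed as arguments instead of hard-coded at 1
conobj2 <- function(tt, si = 1e4, se = 1e4) {
  ci <- min(hinlobo(tt), 0)   # inequality: penalize violations only
  ce <- Hxlobo(tt)            # equality: always penalized
  flobo(tt) + si * ci^2 + se * ce^2
}

# Near the alabama solution both constraints are nearly satisfied, so
# the penalized value stays close to the raw objective (~1029.8 per the thread)
conobj2(c(-0.610594, 4.307408, 6.254267, -11.919881))
```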


 Forwarded Message 
Subject: Re: [R] Non linear optimization with nloptr package fail to produce 
true optimal result
Date: Fri, 13 Dec 2024 14:30:03 -0500
From: J C Nash 
To: r-help@r-project.org

The following may or may not be relevant, but I am definitely getting
somewhat different results. As this was a quick and dirty try while
having a snack, it may have bugs.

# Lobo2412.R  -- from R Help 20241213

#Original artificial data

library(optimx)
library(nloptr)
library(alabama)

set.seed(1)
A <- 1.34
B <- 0.5673
C <- 6.356
D <- -1.234
x <- seq(0.5, 20, length.out = 500)
y <- A + B * x + C * x^2 + D * log(x) + runif(500, 0, 3)

#Objective function

X <- cbind(1, x, x^2, log(x))
flobo <- function(theta) {
sum(abs(X %*% theta - y))
}

#Constraint

eps <- 1e-4

hinlobo <- function(theta) {
  abs(sum(X %*% theta) - sum(y)) - 1e-3 + eps # ?? weird! (1e-4 - 1e-3)
}

Hxlobo <- function(theta) {
  X[100, , drop = FALSE] %*% theta - (120 - eps) # ditto -- also constant
}

conobj <- function(tt){
  ob <- flobo(tt)
  ci <- hinlobo(tt)
  if (ci > 0) {ci <- 0}
  ce <- Hxlobo(tt)
  si <- 1; se <- 1
  val <- ob + si*ci^2 + se*ce^2
  cat("f, ci, ce, ob, val: ", ci, " ", ce, " ", ob, " ", val, " at "); print(tt)
  val
}

t0<-rep(0,4)
conobj(t0)
t1 <- c(2.02, 6.764, 6.186, -20.095)
conobj(t1)
t2 <- c( -0.2186159, -0.5032066,  6.4458823, -0.4125948)
conobj(t2)


solo<-optimr(t0, conobj, gr="grcentral", method="anms", control=list(trace=1))
solo
conobj(solo$par)
#Optimization with nloptr

# Sol = nloptr::auglag(t0, flobo, eval_g_ineq = hinlobo, eval_g_eq = Hxlobo,
#   opts = list("algorithm" = "NLOPT_LN_COBYLA", "xtol_rel" = 1.0e-8, print_level=1))
# -0.2186159 -0.5032066  6.4458823 -0.4125948

sol <- auglag(par=t0, fn=flobo, hin=hinlobo, heq=Hxlobo,
              control.outer=list(trace=TRUE))
sol

#==

J Nash



Re: [R] Non linear optimization with nloptr package fail to produce true optimal result

2024-12-13 Thread J C Nash

The following may or may not be relevant, but I am definitely getting somewhat 
different results.
As this was a quick and dirty try while having a snack, it may have bugs.

# Lobo2412.R  -- from R Help 20241213

#Original artificial data

library(optimx)
library(nloptr)
library(alabama)

set.seed(1)
A <- 1.34
B <- 0.5673
C <- 6.356
D <- -1.234
x <- seq(0.5, 20, length.out = 500)
y <- A + B * x + C * x^2 + D * log(x) + runif(500, 0, 3)

#Objective function

X <- cbind(1, x, x^2, log(x))
flobo <- function(theta) {
sum(abs(X %*% theta - y))
}

#Constraint

eps <- 1e-4

hinlobo <- function(theta) {
  abs(sum(X %*% theta) - sum(y)) - 1e-3 + eps # ?? weird! (1e-4 - 1e-3)
}

Hxlobo <- function(theta) {
  X[100, , drop = FALSE] %*% theta - (120 - eps) # ditto -- also constant
}

conobj<-function(tt){
   ob <- flobo(tt)
   ci <- hinlobo(tt)
   if (ci > 0) {ci <- 0}
   ce <- Hxlobo(tt)
   si<-1; se<-1
   val<-ob+si*ci^2+se*ce^2
   cat("f, ci, ce,ob,val:"," ",ci," ",ce," ",ob," ",val," at "); print(tt)
   val
}

t0<-rep(0,4)
conobj(t0)
t1 <- c(2.02, 6.764, 6.186, -20.095)
conobj(t1)
t2 <- c( -0.2186159, -0.5032066,  6.4458823, -0.4125948)
conobj(t2)


solo<-optimr(t0, conobj, gr="grcentral", method="anms", control=list(trace=1))
solo
conobj(solo$par)
#Optimization with nloptr

# Sol = nloptr::auglag(t0, flobo, eval_g_ineq = hinlobo, eval_g_eq = Hxlobo, 
opts =
# list("algorithm" = "NLOPT_LN_COBYLA", "xtol_rel" = 1.0e-8, print_level=1))
# -0.2186159 -0.5032066  6.4458823 -0.4125948

sol <- auglag(par=t0, fn=flobo, hin=hinlobo, heq=Hxlobo, 
control.outer=list(trace=TRUE))
sol

#==
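One detail worth flagging in the script above (an editor's aside, based on each package's documented conventions, not something stated in the thread): nloptr and alabama use opposite signs for inequality constraints, so passing the same hinlobo() to both does not describe the same feasible region. A minimal check:

```r
# Inequality conventions (assumed from the package docs, worth re-checking):
#   nloptr::nloptr(eval_g_ineq = g): feasible when g(theta) <= 0
#   alabama::auglag(hin = h):        feasible when h(theta) >= 0
set.seed(1)
x <- seq(0.5, 20, length.out = 500)
y <- 1.34 + 0.5673 * x + 6.356 * x^2 - 1.234 * log(x) + runif(500, 0, 3)
X <- cbind(1, x, x^2, log(x))
hinlobo <- function(theta) abs(sum(X %*% theta) - sum(y)) - 1e-3 + 1e-4

g0 <- hinlobo(rep(0, 4))   # large and positive, since sum(y) is large
g0 <= 0   # FALSE: infeasible under nloptr's convention
g0 >= 0   # TRUE:  feasible under alabama's convention
```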

J Nash

On 2024-12-13 13:45, Duncan Murdoch wrote:

You posted a version of this question on StackOverflow, and were given advice 
there that you ignored.

nloptr() clearly indicates that it is quitting without reaching an optimum, but you are hiding that message.  Don't do 
that.


Duncan Murdoch

On 2024-12-13 12:52 p.m., Daniel Lobo wrote:

library(nloptr)

set.seed(1)
A <- 1.34
B <- 0.5673
C <- 6.356
D <- -1.234
x <- seq(0.5, 20, length.out = 500)
y <- A + B * x + C * x^2 + D * log(x) + runif(500, 0, 3)

#Objective function

X <- cbind(1, x, x^2, log(x))
f <- function(theta) {
sum(abs(X %*% theta - y))
}

#Constraint

eps <- 1e-4

hin <- function(theta) {
   abs(sum(X %*% theta) - sum(y)) - 1e-3 + eps
}

Hx <- function(theta) {
   X[100, , drop = FALSE] %*% theta - (120 - eps)
}

#Optimization with nloptr

Sol = nloptr(rep(0, 4), f, eval_g_ineq = hin, eval_g_eq = Hx, opts =
list("algorithm" = "NLOPT_LN_COBYLA", "xtol_rel" = 1.0e-8))$solution
# -0.2186159 -0.5032066  6.4458823 -0.4125948


__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide https://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.




Re: [R] Non linear optimization with nloptr package fail to produce true optimal result

2024-12-13 Thread Duncan Murdoch
You posted a version of this question on StackOverflow, and were given 
advice there that you ignored.


nloptr() clearly indicates that it is quitting without reaching an 
optimum, but you are hiding that message.  Don't do that.
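This point can be made concrete with a small sketch (an editor's addition, not part of the original message; it assumes the standard fields of the object nloptr() returns, namely status, message and solution): keep the whole result object rather than extracting $solution, and print its status.

```r
# Re-run the poster's call, but keep the full result object so the
# convergence status is visible instead of being discarded.
set.seed(1)
x <- seq(0.5, 20, length.out = 500)
y <- 1.34 + 0.5673 * x + 6.356 * x^2 - 1.234 * log(x) + runif(500, 0, 3)
X <- cbind(1, x, x^2, log(x))
f   <- function(theta) sum(abs(X %*% theta - y))
eps <- 1e-4
hin <- function(theta) abs(sum(X %*% theta) - sum(y)) - 1e-3 + eps
Hx  <- function(theta) X[100, , drop = FALSE] %*% theta - (120 - eps)

if (requireNamespace("nloptr", quietly = TRUE)) {
  sol <- nloptr::nloptr(rep(0, 4), f, eval_g_ineq = hin, eval_g_eq = Hx,
                        opts = list(algorithm = "NLOPT_LN_COBYLA",
                                    xtol_rel = 1e-8))
  cat("status:", sol$status, "\n")  # early-stop codes signal non-convergence
  cat(sol$message, "\n")
  print(sol$solution)
}
```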


Duncan Murdoch

On 2024-12-13 12:52 p.m., Daniel Lobo wrote:

library(nloptr)

set.seed(1)
A <- 1.34
B <- 0.5673
C <- 6.356
D <- -1.234
x <- seq(0.5, 20, length.out = 500)
y <- A + B * x + C * x^2 + D * log(x) + runif(500, 0, 3)

#Objective function

X <- cbind(1, x, x^2, log(x))
f <- function(theta) {
sum(abs(X %*% theta - y))
}

#Constraint

eps <- 1e-4

hin <- function(theta) {
   abs(sum(X %*% theta) - sum(y)) - 1e-3 + eps
}

Hx <- function(theta) {
   X[100, , drop = FALSE] %*% theta - (120 - eps)
}

#Optimization with nloptr

Sol = nloptr(rep(0, 4), f, eval_g_ineq = hin, eval_g_eq = Hx, opts =
list("algorithm" = "NLOPT_LN_COBYLA", "xtol_rel" = 1.0e-8))$solution
# -0.2186159 -0.5032066  6.4458823 -0.4125948




Re: [R] Non linear optimization with nloptr package fail to produce true optimal result

2024-12-13 Thread Daniel Lobo
Thanks for your reply.

I have checked the optimized values against the applicable constraints. Both
sets of parameter values satisfy the constraints.

What other solver would you suggest for this problem?

On Fri, 13 Dec 2024 at 23:33, J C Nash  wrote:
>
> COBYLA stands for Constrained Optimization by Linear Approximation.
>
> You seem to have some squares in your functions. Maybe BOBYQA would
> be a better choice, though it only does bounds, so you'd have to introduce
> a penalty, but then more of the optimx solvers would be available. With
> only 4 parameters, possibly one of the Nelder-Mead variants (anms?) would
> be suitable at least for tryout.
>
> Optimizers are like other tools. Some are chainsaws, others are scalpels.
> Don't do neurosurgery with a chainsaw unless you want a mess.
>
> Have you checked that the objective and constraint are computed correctly?
> More than 50% of "your software doesn't work" reports in optimization are due to such errors.
>
> John Nash
>
>
> On 2024-12-13 12:52, Daniel Lobo wrote:
> > Hi,
> >
> > I have below non-linear constraint optimization problem
> >
> > #Original artificial data
> >
> > library(nloptr)
> >
> > set.seed(1)
> > A <- 1.34
> > B <- 0.5673
> > C <- 6.356
> > D <- -1.234
> > x <- seq(0.5, 20, length.out = 500)
> > y <- A + B * x + C * x^2 + D * log(x) + runif(500, 0, 3)
> >
> > #Objective function
> >
> > X <- cbind(1, x, x^2, log(x))
> > f <- function(theta) {
> > sum(abs(X %*% theta - y))
> > }
> >
> > #Constraint
> >
> > eps <- 1e-4
> >
> > hin <- function(theta) {
> >abs(sum(X %*% theta) - sum(y)) - 1e-3 + eps
> > }
> >
> > Hx <- function(theta) {
> >X[100, , drop = FALSE] %*% theta - (120 - eps)
> > }
> >
> > #Optimization with nloptr
> >
> > Sol = nloptr(rep(0, 4), f, eval_g_ineq = hin, eval_g_eq = Hx, opts =
> > list("algorithm" = "NLOPT_LN_COBYLA", "xtol_rel" = 1.0e-8))$solution
> > # -0.2186159 -0.5032066  6.4458823 -0.4125948
> >
> > However, this does not appear to be the optimal value. For example, if I
> > use the set below,
> > 0.222, 6.999, 6.17, -19.371, the value of my objective function is lower
> > than that obtained with nloptr.
> >
> > I just wonder whether the package nloptr is good for non-linear optimization?
> >
>



Re: [R] Non linear optimization with nloptr package fail to produce true optimal result

2024-12-13 Thread Daniel Lobo
Adding R-help back to the thread.

On Fri, 13 Dec 2024 at 23:39, Daniel Lobo  wrote:
>
> If I use "algorithm" = "BOBYQA", the nloptr() fails with below message
>
> Error in is.nloptr(ret) :
>
>   Incorrect algorithm supplied. Use one of the following:
>
> NLOPT_GN_DIRECT
>
> NLOPT_GN_DIRECT_L
>
> NLOPT_GN_DIRECT_L_RAND
>
> NLOPT_GN_DIRECT_NOSCAL
>
> NLOPT_GN_DIRECT_L_NOSCAL
>
> NLOPT_GN_DIRECT_L_RAND_NOSCAL
>
> NLOPT_GN_ORIG_DIRECT
>
> NLOPT_GN_ORIG_DIRECT_L
>
> NLOPT_GD_STOGO
>
> NLOPT_GD_STOGO_RAND
>
> NLOPT_LD_SLSQP
>
> NLOPT_LD_LBFGS_NOCEDAL
>
> NLOPT_LD_LBFGS
>
> NLOPT_LN_PRAXIS
>
> NLOPT_LD_VAR1
>
> NLOPT_LD_VAR2
>
> NLOPT_LD_TNEWTON
>
> NLOPT_LD_TNEWTON_RESTART
>
> NLOPT_LD_TNEWTON_PRECOND
>
> NLOPT_LD_TNEWTON_PRECOND_RESTART
>
> NLOPT_GN_CRS2_LM
>
> NLOPT_GN_MLSL
>
> NLOPT_GD_MLSL
>
> NLOPT_GN_MLSL_LDS
>
> NLOPT_GD_MLSL_LDS
>
> NLOPT_LD_MMA
>
> NLOPT_LD_CCSAQ
>
> NLOPT_LN_COBYLA
>
> NLOPT_LN_NEWUOA
>
> NLOPT_LN_NEWUOA_BOUND
>
> NLOPT_LN_NELDERMEAD
>
> NLOPT_LN_SBPLX
>
> NLOPT_LN_AUGLAG
>
> NLOPT_LD_AUGLAG
>
> NLOPT_LN_AUGLAG_EQ
>
> NLOPT_LD_AUGLAG_EQ
>
> NLOPT_LN_BOBYQA
>
> NLOPT_GN_ISRES
>
> NLOPT_GN_ESCH
>
>
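For what it's worth (an editor's sketch, not part of the original exchange): nloptr() wants the full NLopt identifier, so "NLOPT_LN_BOBYQA" rather than "BOBYQA" is the string to supply, as the error's own list shows. Note too that BOBYQA handles only bound constraints, so the general constraints would still have to be folded into the objective. A minimal unconstrained illustration on a standard test function:

```r
# "NLOPT_LN_BOBYQA" is the identifier nloptr accepts; plain "BOBYQA" is not.
rosen <- function(p) (1 - p[1])^2 + 100 * (p[2] - p[1]^2)^2  # Rosenbrock

if (requireNamespace("nloptr", quietly = TRUE)) {
  sol <- nloptr::nloptr(c(-1.2, 1), rosen,
                        opts = list(algorithm = "NLOPT_LN_BOBYQA",
                                    xtol_rel = 1e-10, maxeval = 5000))
  print(sol$solution)   # should approach c(1, 1)
}
```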
> On Fri, 13 Dec 2024 at 23:33, J C Nash  wrote:
> >
> > COBYLA stands for Constrained Optimization by Linear Approximation.
> >
> > You seem to have some squares in your functions. Maybe BOBYQA would
> > be a better choice, though it only does bounds, so you'd have to introduce
> > a penalty, but then more of the optimx solvers would be available. With
> > only 4 parameters, possibly one of the Nelder-Mead variants (anms?) would
> > be suitable at least for tryout.
> >
> > Optimizers are like other tools. Some are chainsaws, others are scalpels.
> > Don't do neurosurgery with a chainsaw unless you want a mess.
> >
> > Have you checked that the objective and constraint are computed correctly?
> > More than 50% of "your software doesn't work" reports in optimization are
> > due to such errors.
> >
> > John Nash
> >
> >
> > On 2024-12-13 12:52, Daniel Lobo wrote:
> > > Hi,
> > >
> > > I have below non-linear constraint optimization problem
> > >
> > > #Original artificial data
> > >
> > > library(nloptr)
> > >
> > > set.seed(1)
> > > A <- 1.34
> > > B <- 0.5673
> > > C <- 6.356
> > > D <- -1.234
> > > x <- seq(0.5, 20, length.out = 500)
> > > y <- A + B * x + C * x^2 + D * log(x) + runif(500, 0, 3)
> > >
> > > #Objective function
> > >
> > > X <- cbind(1, x, x^2, log(x))
> > > f <- function(theta) {
> > > sum(abs(X %*% theta - y))
> > > }
> > >
> > > #Constraint
> > >
> > > eps <- 1e-4
> > >
> > > hin <- function(theta) {
> > >abs(sum(X %*% theta) - sum(y)) - 1e-3 + eps
> > > }
> > >
> > > Hx <- function(theta) {
> > >X[100, , drop = FALSE] %*% theta - (120 - eps)
> > > }
> > >
> > > #Optimization with nloptr
> > >
> > > Sol = nloptr(rep(0, 4), f, eval_g_ineq = hin, eval_g_eq = Hx, opts =
> > > list("algorithm" = "NLOPT_LN_COBYLA", "xtol_rel" = 1.0e-8))$solution
> > > # -0.2186159 -0.5032066  6.4458823 -0.4125948
> > >
> > > However, this does not appear to be the optimal value. For example, if I
> > > use the set below,
> > > 0.222, 6.999, 6.17, -19.371, the value of my objective function is lower
> > > than that obtained with nloptr.
> > >
> > > I just wonder whether the package nloptr is good for non-linear optimization?
> > >
> >


Re: [R] Non linear optimization with nloptr package fail to produce true optimal result

2024-12-13 Thread Ben Bolker



  Fortune candidate?  (Prof Zeileis, are you still collecting these?)


Optimizers are like other tools. Some are chainsaws, others are scalpels.
Don't do neurosurgery with a chainsaw unless you want a mess.




Re: [R] Non linear optimization with nloptr package fail to produce true optimal result

2024-12-13 Thread Daniel Lobo
A small correction: the combination below

2.02, 6.764, 6.186, -20.095

gives a better result.
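An aside from the editor (not something proposed in the thread): with the constraints set aside, this objective is least-absolute-deviations (L1) regression, and a base-R approximation via iteratively reweighted least squares gives a useful benchmark for what a "better" candidate vector should look like.

```r
# LAD (L1) regression by iteratively reweighted least squares -- a sketch.
set.seed(1)
x <- seq(0.5, 20, length.out = 500)
y <- 1.34 + 0.5673 * x + 6.356 * x^2 - 1.234 * log(x) + runif(500, 0, 3)
X <- cbind(1, x, x^2, log(x))
f <- function(theta) sum(abs(X %*% theta - y))

lad_irls <- function(X, y, iters = 100, delta = 1e-8) {
  beta <- qr.solve(X, y)                       # least-squares start
  for (i in seq_len(iters)) {
    w  <- 1 / pmax(abs(y - X %*% beta), delta) # L1 reweighting
    sw <- sqrt(w)[, 1]
    beta <- qr.solve(sw * X, sw * y)           # weighted least-squares step
  }
  beta
}

b_ols <- qr.solve(X, y)
b_lad <- lad_irls(X, y)
f(b_lad) <= f(b_ols)   # the L1 fit should not be worse in L1 loss
```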

On Fri, 13 Dec 2024 at 23:22, Daniel Lobo  wrote:
>
> Hi,
>
> I have below non-linear constraint optimization problem
>
> #Original artificial data
>
> library(nloptr)
>
> set.seed(1)
> A <- 1.34
> B <- 0.5673
> C <- 6.356
> D <- -1.234
> x <- seq(0.5, 20, length.out = 500)
> y <- A + B * x + C * x^2 + D * log(x) + runif(500, 0, 3)
>
> #Objective function
>
> X <- cbind(1, x, x^2, log(x))
> f <- function(theta) {
> sum(abs(X %*% theta - y))
> }
>
> #Constraint
>
> eps <- 1e-4
>
> hin <- function(theta) {
>   abs(sum(X %*% theta) - sum(y)) - 1e-3 + eps
> }
>
> Hx <- function(theta) {
>   X[100, , drop = FALSE] %*% theta - (120 - eps)
> }
>
> #Optimization with nloptr
>
> Sol = nloptr(rep(0, 4), f, eval_g_ineq = hin, eval_g_eq = Hx, opts =
> list("algorithm" = "NLOPT_LN_COBYLA", "xtol_rel" = 1.0e-8))$solution
> # -0.2186159 -0.5032066  6.4458823 -0.4125948
>
> However, this does not appear to be the optimal value. For example, if I
> use the set below,
> 0.222, 6.999, 6.17, -19.371, the value of my objective function is lower
> than that obtained with nloptr.
>
> I just wonder whether the package nloptr is good for non-linear optimization?



Re: [R] Non linear optimization with nloptr package fail to produce true optimal result

2024-12-13 Thread J C Nash

COBYLA stands for Constrained Optimization by Linear Approximation.

You seem to have some squares in your functions. Maybe BOBYQA would
be a better choice, though it only does bounds, so you'd have to introduce
a penalty, but then more of the optimx solvers would be available. With
only 4 parameters, possibly one of the Nelder-Mead variants (anms?) would
be suitable at least for tryout.
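The penalty-plus-Nelder-Mead route suggested here can be sketched in base R alone (an editor's sketch with an arbitrary penalty weight rho; the optimx/anms route would be analogous):

```r
set.seed(1)
x <- seq(0.5, 20, length.out = 500)
y <- 1.34 + 0.5673 * x + 6.356 * x^2 - 1.234 * log(x) + runif(500, 0, 3)
X <- cbind(1, x, x^2, log(x))
f   <- function(theta) sum(abs(X %*% theta - y))
eps <- 1e-4
hin <- function(theta) abs(sum(X %*% theta) - sum(y)) - 1e-3 + eps
Hx  <- function(theta) drop(X[100, , drop = FALSE] %*% theta) - (120 - eps)

# Quadratic penalty: charge any inequality violation (hin > 0 under the
# nloptr convention) and any equality residual; rho is an assumed weight.
pen <- function(theta, rho = 1000) {
  f(theta) + rho * max(hin(theta), 0)^2 + rho * Hx(theta)^2
}

sol <- optim(rep(0, 4), pen, method = "Nelder-Mead",
             control = list(maxit = 10000, reltol = 1e-12))
sol$par
sol$value
```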

Optimizers are like other tools. Some are chainsaws, others are scalpels.
Don't do neurosurgery with a chainsaw unless you want a mess.

Have you checked that the objective and constraint are computed correctly?
More than 50% of "your software doesn't work" reports in optimization are due to such errors.

John Nash


On 2024-12-13 12:52, Daniel Lobo wrote:

Hi,

I have below non-linear constraint optimization problem

#Original artificial data

library(nloptr)

set.seed(1)
A <- 1.34
B <- 0.5673
C <- 6.356
D <- -1.234
x <- seq(0.5, 20, length.out = 500)
y <- A + B * x + C * x^2 + D * log(x) + runif(500, 0, 3)

#Objective function

X <- cbind(1, x, x^2, log(x))
f <- function(theta) {
sum(abs(X %*% theta - y))
}

#Constraint

eps <- 1e-4

hin <- function(theta) {
   abs(sum(X %*% theta) - sum(y)) - 1e-3 + eps
}

Hx <- function(theta) {
   X[100, , drop = FALSE] %*% theta - (120 - eps)
}

#Optimization with nloptr

Sol = nloptr(rep(0, 4), f, eval_g_ineq = hin, eval_g_eq = Hx, opts =
list("algorithm" = "NLOPT_LN_COBYLA", "xtol_rel" = 1.0e-8))$solution
# -0.2186159 -0.5032066  6.4458823 -0.4125948

However, this does not appear to be the optimal value. For example, if I
use the set below,
0.222, 6.999, 6.17, -19.371, the value of my objective function is lower
than that obtained with nloptr.

I just wonder whether the package nloptr is good for non-linear optimization?
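The claim is easy to check directly (an editor's sketch; the two vectors are the COBYLA output above and the hand-found candidate):

```r
set.seed(1)
x <- seq(0.5, 20, length.out = 500)
y <- 1.34 + 0.5673 * x + 6.356 * x^2 - 1.234 * log(x) + runif(500, 0, 3)
X <- cbind(1, x, x^2, log(x))
f <- function(theta) sum(abs(X %*% theta - y))

f(c(-0.2186159, -0.5032066, 6.4458823, -0.4125948))  # COBYLA's answer
f(c(0.222, 6.999, 6.17, -19.371))                    # hand-found candidate
```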
