Tracing through the code, this appears to be a problem in the original
SLSQP software. SLSQP works by minimizing an approximate quadratic
subproblem at each step. At the first step of Alexander's 1d test
case, SLSQP is minimizing:
min_x (x - s)^2
subject to -1 <= x <= 1
which should give x = 1 for s >= 1 -- i.e., it should converge in one
step. However, for large s, SLSQP (in its subroutine LSQ) seems to be
suffering from extremely large round-off errors, as if it were solving
an ill-conditioned problem or using a numerically unstable
algorithm. I'm not sure why that should be the case here, however;
the problem doesn't seem too badly conditioned even for s=1e6, and the
QR algorithm that LSQ claims to be using should be stable, I think.
For s=1e6 the LSQ routine is "solving" the above subproblem with
x=-356.31518661149312, and that is screwing everything up in
subsequent steps.
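As a sanity check (my own snippet, not code from LSQ): the exact
minimizer of that box-constrained quadratic is just s clamped to the
bounds, so any stable solver should return x = 1 here for every
s >= 1, independent of scale.

# Sanity check (not NLopt code): the exact minimizer of (x - s)^2 on
# [lo, hi] is s clamped to the interval, so the first QP subproblem
# should yield x = 1 for every s >= 1, regardless of how large s is.
def qp_minimizer(s, lo=-1.0, hi=1.0):
    return min(max(s, lo), hi)

for s in (1.0, 1e4, 1e6, 1e8):
    print(s, qp_minimizer(s))  # prints 1.0 each time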
It would be helpful if someone tried Alexander's test problem with
fmin_slsqp from SciPy, which also uses the SLSQP code, to see whether
a similar problem (albeit probably with different symptoms) occurs
there.
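A minimal sketch of that cross-check, assuming the documented
scipy.optimize.fmin_slsqp interface (I have not run this myself); the
value s = 1e8 matches Alexander's failing case below:

# Cross-check in SciPy: fmin_slsqp wraps the same SLSQP Fortran code,
# so any large-gradient misbehavior should be reproducible here too.
from numpy import array
from scipy.optimize import fmin_slsqp

s = 1.0e8

def f(x):
    return -s * x[0]

def fprime(x):
    return array([-s])

x = fmin_slsqp(f, array([0.0]), bounds=[(-1.0, 1.0)],
               fprime=fprime, iprint=2)
print(x)  # should be [1.0] if SLSQP behaves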
Unfortunately, the guts of the LSQ routine in SLSQP are rather painful
to debug; this is pretty ugly spaghetti-style code (even before the
f2c conversion from Fortran to C), and this appears to be a round-off
susceptibility rather than a more straightforward bug. It almost
looks easier to rip it out completely and replace it with something
LAPACK-based (i.e., based on modern linear algebra libraries), but
that is not a small job.
For now, your options seem to be (a) rescale your problem so that
gradients etcetera are of order 1, or (b) use a different algorithm in
NLopt.
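To make option (a) concrete, here is a minimal sketch using the NLopt
Python interface from Alexander's report (the nlopt.LD_SLSQP
construction is my assumption about his setup): dividing the objective
by s leaves the minimizer unchanged and brings the gradient back to
order 1.

# Workaround (a): minimize (-s*x)/s = -x instead of -s*x. The
# minimizer is identical; only the optimum value must be multiplied
# back by s at the end.
import nlopt
from numpy import array

s = 1.0e8

def f_scaled(x, grad):
    if grad.size > 0:
        grad[0] = -1.0      # gradient of the rescaled objective
    return -x[0]            # = (-s * x[0]) / s

opt = nlopt.opt(nlopt.LD_SLSQP, 1)
opt.set_lower_bounds([-1.0])
opt.set_upper_bounds([1.0])
opt.set_min_objective(f_scaled)
opt.set_xtol_rel(1e-10)
xopt = opt.optimize(array([0.0]))
print(xopt, s * opt.last_optimum_value())  # undo the scaling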
Steven
On Feb 16, 2011, at 7:39 AM, [email protected] wrote:
Hi,
I would like to support the request that Alexander Riess has posted
on January 28. Just like him, I keep getting std::runtime_error
exceptions for large gradients. I am using the C++ library, not the
Python interface. I could provide additional test cases if necessary,
but
I suppose the simple example posted by Alexander already illustrates
the problem very well.
Best regards,
Peter
Thanks for the bugfix.
Today I am back with a class of new but similar test problems:
min -s*x on [-1, 1]
with different values of s.
Configuration:
opt.set_lower_bounds( [-1.0 ] )
opt.set_upper_bounds( [ 1.0 ] )
opt.set_min_objective(f)
opt.set_xtol_abs(1e-10)
opt.set_xtol_rel(1e-10)
opt.set_ftol_abs(1e-10)
opt.set_ftol_rel(1e-10)
x0 = array([0.0])
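(For context, a complete script consistent with this configuration
might look like the following; the nlopt.LD_SLSQP construction and the
body of f are my reconstruction from the problem statement above, not
part of the original report.)

# Reconstructed test script: minimize -s*x on [-1, 1] with SLSQP.
import nlopt
from numpy import array

s = 1.0e6  # varied across the tests below

def f(x, grad):
    if grad.size > 0:
        grad[0] = -s       # d/dx (-s*x)
    return -s * x[0]

opt = nlopt.opt(nlopt.LD_SLSQP, 1)
opt.set_lower_bounds([-1.0])
opt.set_upper_bounds([1.0])
opt.set_min_objective(f)
opt.set_xtol_abs(1e-10)
opt.set_xtol_rel(1e-10)
opt.set_ftol_abs(1e-10)
opt.set_ftol_rel(1e-10)
x0 = array([0.0])
xopt = opt.optimize(x0)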
Tests:
1.) s = 1.0
Algorithm Sequential Quadratic Programming (SQP) (local,
derivative) steps:
Iteration x f(x) f'(x)
1 0.0 -0 -1.0
2 1.0 -1.0 -1.0
xopt = 1.0
fopt = -1.0
2.) s = 1.0E4
Algorithm Sequential Quadratic Programming (SQP) (local,
derivative) steps:
Iteration x f(x) f'(x)
1 0.0 -0 -10000.0
2 0.999806062342 -9998.06062342 -10000.0
3 0.999873394578 -9998.73394578 -10000.0
xopt = 0.999873394578
fopt = -9998.73394578
3.) s = 1.0E5
Algorithm Sequential Quadratic Programming (SQP) (local,
derivative) steps:
Iteration x f(x) f'(x)
1 0.0 -0 -100000.0
2 0.562577480974 -56257.7480974 -100000.0
3 1.0 -100000.0 -100000.0
xopt = 1.0
fopt = -100000.0
4.) s = 1.0E6
Algorithm Sequential Quadratic Programming (SQP) (local,
derivative) steps:
Iteration x f(x) f'(x)
1 0.0 -0 -1000000.0
xopt = 0.0
fopt = -0.0
5.) s = 1.0E7
Algorithm Sequential Quadratic Programming (SQP) (local,
derivative) steps:
Iteration x f(x) f'(x)
1 0.0 -0 -10000000.0
xopt = 0.0
fopt = -0.0
6.) s = 1.0E8
Algorithm Sequential Quadratic Programming (SQP) (local,
derivative) steps:
Iteration x f(x) f'(x)
1 0.0 -0 -100000000.0
Traceback (most recent call last):
  File "XXX\minExample.py", line 35, in <module>
    xOpt = opt.optimize(x0)
  File "XXX\Python265\lib\site-packages\nlopt.py", line 231, in optimize
    def optimize(*args): return _nlopt.opt_optimize(*args)
RuntimeError: nlopt failure
Conclusion: SQP fails on this class of test problems if there is a
large gradient (abs(grad) >= 1.0E6). Can you please tell me what is
going wrong?
Note: Using MMA everything works fine.
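(For reference, the MMA variant is a one-line change; nlopt.LD_MMA is
the gradient-based MMA algorithm in NLopt's Python interface, and per
the note above it handles these large gradients without failing.)

import nlopt

# Same setup as above, but with MMA instead of SLSQP:
opt = nlopt.opt(nlopt.LD_MMA, 1)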
Kind regards
Alexander Riess
_______________________________________________
NLopt-discuss mailing list
[email protected]
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/nlopt-discuss