First, I want to thank everyone involved in NLopt development. I've
been using NLopt regularly for many months now (from C)… it's
frequently one of the first tools I apply to almost any optimization
problem: it's very general, easy to use, and gets me results quickly
without having to worry about a lot of details. I'm even prone to
applying it in places where I probably shouldn't… like large linear
problems, or problems that I know have exact algebraic solutions which
I'm too lazy to work out. It's just a very comfortable tool. Unlike
other general optimization tools at my disposal, NLopt is also very
fast and doesn't make it hard for me to use fast (C- or CUDA-based)
objective functions. It's simply a good tool.

One of the limitations in NLopt that still causes me to write my own
optimization software for some problems is that NLopt isn't especially
applicable to large problems where the objective functions are not
themselves parallelizable.

Has any thought been given to an interface for achieving parallel
objective evaluation in NLopt? I would assume that for the stochastic
algorithms this wouldn't be too hard...

An obvious way to provide an API for this would be to keep all the
scheduling internal to NLopt... i.e. it simply invokes the objective
functions in threads. But this would require certain cautions and
concessions in the objective functions (i.e. care with shared data
structures) which might be surprising, and it wouldn't extend to
distribution beyond a single shared-memory machine.
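
To make the caution concrete, here's a sketch of what a user objective
would have to look like if NLopt called it concurrently from several
internal threads. The signature mirrors nlopt_func; the locking around
the shared counter is the part that could surprise existing users
(the struct and its fields are illustrative, not part of NLopt):

```c
#include <pthread.h>

/* Hypothetical user data for an nlopt_func-style callback that NLopt
 * might invoke from several threads at once. */
typedef struct {
    pthread_mutex_t lock;
    unsigned long n_evals;   /* shared mutable state: must be protected */
} eval_stats;

/* Sum-of-squares objective with optional gradient, as for nlopt_func. */
static double objective(unsigned n, const double *x,
                        double *grad, void *data)
{
    eval_stats *stats = data;
    double f = 0.0;
    for (unsigned i = 0; i < n; i++) {
        f += x[i] * x[i];
        if (grad) grad[i] = 2.0 * x[i];
    }
    /* Without this lock, concurrent evaluations would race on n_evals. */
    pthread_mutex_lock(&stats->lock);
    stats->n_evals++;
    pthread_mutex_unlock(&stats->lock);
    return f;
}
```

Today, objectives like this can freely mutate their user data; under
internal threading, every such mutation becomes a potential data race.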

The method I use in my own custom optimization software is to have the
optimizer implement a get_work() API, which returns a structure with
an evaluation location and a request ID, or an error code indicating
that results (synchronisation) are needed before it can continue, and
a finished_work() API which reports back the results for a request. I
then have compute threads prefetch work shortly before finishing, to
hide the communication latency. I don't know if this kind of interface
is something that would make sense for NLopt. It works pretty well
for the mostly stochastic approaches I've implemented, as they are
tolerant of late data and infrequent model updates.

_______________________________________________
NLopt-discuss mailing list
[email protected]
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/nlopt-discuss