Hello,
you might consider several points for parallelization, depending on your
problem. If you can parallelize somewhere inside function evaluation
(independent parts, if there are any), that is probably easiest. Another
point might be the evaluation of the Jacobian, if you must compute it
numerically and optimize many parameters simultaneously. And of course
you can start several optimizations from different initial values at
the same time, if that is useful.
BR,
Tuomo
On 09/21/2012 05:54 PM, Maxime Boissonneault wrote:
Hi,
I have a multidimensional minimization problem for which the function
takes a long time to compute (think hours or days). I coded a
master-slave MPI communication structure to do the work: the master
dispatches sets of parameters to be computed by different slaves, on
different computers.
What I am wondering is whether there is a way to use the GSL
multidimensional minimization algorithms in a "queue many and wait for
result" fashion rather than in an "evaluate the function sequentially"
fashion.
Thanks,
Maxime Boissonneault
--
[email protected]
http://iki.fi/tuomo.keskitalo