Hi everyone,
After extensive tests, here are the results. The tests were run on a
simplified problem with the absolute minimum number of parameters (16)
and without any constraints.
For comparison, I ran the test on the Fletcher-Reeves, Polak-Ribiere,
and vector Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithms, and on an
mm_hess algorithm [kindly provided by James in a private communication;
I hope it makes it into his mlib].
The points at which the different algorithms bailed out are very
close to each other in parameter space.
Convergence was analyzed by recording chi squared versus iteration number.
Fletcher-Reeves: a small plateau at the start, a large drop in chi
squared, another small plateau, then it bailed out. Chi squared reached:
6.54 by iteration 1200.
Polak-Ribiere: 4 plateaus along the way; reached chi squared 6.53 by
iteration 5500.
BFGS: 6 plateaus along the way; reached chi squared 6.53 by iteration 7800.
mm_hess: no plateaus, a smooth curve resembling 1/(iteration number);
reached chi squared 6.51 by iteration 23,300.
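For readers unfamiliar with the first two methods: Fletcher-Reeves and
Polak-Ribiere are both nonlinear conjugate gradient schemes that differ only
in how the direction-update coefficient beta is computed. Below is a minimal,
self-contained Python sketch of that difference, recording chi squared per
iteration as in the tests above. It uses a toy 2-parameter quadratic standing
in for the real 16-parameter chi squared, and all names here are hypothetical
illustrations, not the actual test code or GSL's implementation.

```python
def chisq(x):
    # Toy 2-parameter "chi squared" standing in for the real objective.
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] - 2.0) ** 2

def grad(x):
    # Analytic gradient of the toy objective above.
    return [2.0 * (x[0] - 1.0), 20.0 * (x[1] - 2.0)]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def cg_minimize(beta_rule, x0, iters=200):
    """Nonlinear CG with a backtracking line search.

    beta_rule is "FR" (Fletcher-Reeves) or "PR" (Polak-Ribiere).
    Returns the chi-squared history, one entry per iteration.
    """
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]          # initial direction: steepest descent
    history = [chisq(x)]
    for _ in range(iters):
        if dot(g, g) < 1e-18:      # gradient essentially zero: converged
            break
        if dot(g, d) >= 0.0:       # not a descent direction: restart
            d = [-gi for gi in g]
        # Backtracking (Armijo) line search along d.
        alpha, f0, slope = 1.0, chisq(x), dot(g, d)
        for _ in range(60):
            trial = [xi + alpha * di for xi, di in zip(x, d)]
            if chisq(trial) <= f0 + 1e-4 * alpha * slope:
                break
            alpha *= 0.5
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = grad(x)
        if beta_rule == "FR":
            # Fletcher-Reeves: ratio of new to old gradient norms.
            beta = dot(g_new, g_new) / dot(g, g)
        else:
            # Polak-Ribiere (with the common "PR+" restart safeguard).
            y = [gn - gi for gn, gi in zip(g_new, g)]
            beta = max(0.0, dot(g_new, y) / dot(g, g))
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
        history.append(chisq(x))
    return history
```

Plotting (or just printing) the returned history is exactly the chi squared
versus iteration record used above; plateaus show up as flat stretches in it.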
So far, it seems that the algorithms tend toward the same point, though
none actually converges. mm_hess takes more iterations but finds a better
chi squared, and if one measures stability by the absence of plateaus,
it is a nice method, which, I hope, will be available in James' mlib some
time...
I will run some tests on a problem with constraints, and see how the
different algorithms fare there.
Martin Jansche wrote:
> On 11/29/05, Max Belushkin <[EMAIL PROTECTED]> wrote:
>> James, thank you, I will certainly give it a go in the next couple of
>> days, and will let you know how it works out.
>
> Please share your findings once you have had a chance to try different
> strategies. Another option would be to try the optimizers in the TAO
> toolkit (http://www-unix.mcs.anl.gov/tao/).
>
> -- mj
_______________________________________________
Help-gsl mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/help-gsl