Hi all, I am currently working on a problem involving source extraction from astronomical images, which essentially boils down to fitting a number of 2D Gaussians to the image.
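For context, here is a minimal sketch of the kind of fit involved: a hand-rolled Levenberg-Marquardt loop in pure Python fitting a single 2D Gaussian, with a finite-difference Jacobian. This is an illustration only, not GSL or MINPACK code; the model parameters (amplitude, center, width) and all function names are my own.

```python
import math

def model(p, x, y):
    # p = (amplitude, x0, y0, sigma): one isotropic 2D Gaussian
    A, x0, y0, s = p
    return A * math.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * s * s))

def residuals(p, data):
    # data is a list of (x, y, z) pixel samples
    return [z - model(p, x, y) for x, y, z in data]

def cost(p, data):
    return sum(r * r for r in residuals(p, data))

def jacobian(p, data, h=1e-6):
    # Forward-difference Jacobian of the residual vector, stored
    # column-wise: J[j][i] = d r_i / d p_j
    r0 = residuals(p, data)
    J = []
    for j in range(len(p)):
        q = list(p)
        q[j] += h
        rj = residuals(q, data)
        J.append([(rj[i] - r0[i]) / h for i in range(len(r0))])
    return r0, J

def solve(A, b):
    # Gaussian elimination with partial pivoting for the small normal system
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def lm_fit(p, data, lam=1e-3, iters=40):
    # Basic Levenberg-Marquardt: solve (J^T J + lam*diag(J^T J)) dp = -J^T r
    p = list(p)
    c0 = cost(p, data)
    n = len(p)
    for _ in range(iters):
        r, J = jacobian(p, data)
        m = len(r)
        A = [[sum(J[a][i] * J[b][i] for i in range(m)) for b in range(n)]
             for a in range(n)]
        g = [-sum(J[a][i] * r[i] for i in range(m)) for a in range(n)]
        Ad = [row[:] for row in A]
        for a in range(n):
            Ad[a][a] += lam * (A[a][a] or 1.0)  # Marquardt damping
        dp = solve(Ad, g)
        trial = [p[j] + dp[j] for j in range(n)]
        c1 = cost(trial, data)
        if c1 < c0:
            p, c0, lam = trial, c1, lam / 10  # accept step, relax damping
        else:
            lam *= 10  # reject step, increase damping
        if c0 < 1e-10:
            break
    return p

# Demo: recover known parameters from noiseless synthetic data
true = (2.0, 1.0, -0.5, 1.5)
data = [(i * 0.5, j * 0.5, model(true, i * 0.5, j * 0.5))
        for i in range(-6, 7) for j in range(-6, 7)]
fit = lm_fit([1.5, 0.5, 0.0, 1.0], data)
```

In a production fitter the normal equations would usually be avoided in favor of a QR factorization of the Jacobian (via Householder transforms), which is exactly the step profiled below.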
One of the fitters traditionally used in this field is Levenberg-Marquardt, of which gsl_fdfsolver_lmsder is an implementation. At some point I noticed that for the bigger images (about 550 pixels, 20-30 parameters), GSL's lmsder algorithm spends a large fraction of the run time (about 50%) doing Householder transforms.

While looking around for different minimization algorithms, I made the surprising finding that the original netlib/minpack/lmder is almost twice as fast as GSL's. Could anyone explain such a big difference in performance?

-- 
Best regards,
Alexander.

_______________________________________________
Help-gsl mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/help-gsl
