Sorry to keep bothering you, but I've made an attempt to compensate for changes introduced by the callback in the stepping algorithm during minimization. Quite simply, it adds a vector that acts as a multiplier on the step_size, scaled according to the ratio between the deltas before and after the callback. For example, if a parameter is initially 0.0 and the delta according to the gradient is 1.0, but the callback constrains it to a limit of 0.5, then the step for that particular parameter is scaled by 0.5 in any further steps taken for the given gradient. Once the gradient is updated, the scales are reset to 1.0. This is therefore only useful if the minimizer gets caught up within an iteration - which seems reasonable, but could be adjusted otherwise.
Again, I've attached the patch, which works the same as the previous
implementation. All my tests have been with the fdf minimizers - the
modifications haven't been made to the Nelder-Mead algorithm, but
certainly could be.
Regards,
Tim
--
---------------------------------------------------------
Tim Fenn
[EMAIL PROTECTED]
Stanford University, School of Medicine
James H. Clark Center
318 Campus Drive, Room E300
Stanford, CA 94305-5432
Phone: (650) 736-1714
FAX: (650) 736-1961
---------------------------------------------------------
multimin_bound.patch.gz
_______________________________________________
Help-gsl mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/help-gsl
