Hi, I'm trying to understand the behavior of the v3 optimizers and SingleValuedVnlCostFunctionAdaptor.
The optimizers (e.g. LBFGSOptimizer) scale the initial parameters up by the user-set scales in StartOptimization(). The comments there say:

// We also scale the initial parameters up if scales are defined.
// This compensates for later scaling them down in the cost function
// adaptor and at the end of this function.

Then, in the cost-function adaptor, before the metric is called, the parameters supplied by the optimizer are scaled back down, so the metric sees the original values. The metric is evaluated, and the resulting derivative is also scaled down by the scales. Finally, at the end of optimization, the parameters are scaled down once more, back to their original values.

Does anyone know why this is done? It seems the vnl optimizers are effectively getting a gradient that is scaled down by scales^2 relative to their internal parameters.

Thanks,
Michael
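To make the round-trip concrete, here is a minimal numeric sketch of the scaling steps described above. This is not ITK code; the toy 1-D metric and all names are illustrative assumptions, chosen only to trace where the factors of `scale` appear:

```python
def metric_value_and_deriv(p):
    """Toy stand-in metric: f(p) = (p - 5)^2, df/dp = 2*(p - 5)."""
    return (p - 5.0) ** 2, 2.0 * (p - 5.0)

scale = 10.0   # user-set optimizer scale for this parameter
p0 = 3.0       # initial parameter value

# StartOptimization(): scale the initial parameters UP.
internal = p0 * scale

# Cost-function adaptor, invoked by the vnl optimizer with `internal`:
p = internal / scale                  # scale DOWN -> metric sees p0 again
value, deriv = metric_value_and_deriv(p)
internal_deriv = deriv / scale        # derivative also scaled DOWN

# Chain-rule check: with g(x) = f(x / scale), we have
# dg/dx = (df/dp) * (1/scale) -- a single factor of 1/scale,
# which is exactly what the adaptor computes.
assert p == p0
assert internal_deriv == deriv / scale

# End of optimization: scale the internal parameters DOWN again,
# recovering the original parameterization.
final = internal / scale
assert final == p0
```

Under this reading, the up-scaling at the start and the down-scaling at the end cancel, and the adaptor's single division of the derivative by the scales is the plain chain-rule factor for the internal variable; whether the vnl optimizer additionally sees a scales^2 effect is exactly the question being asked.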