[ https://issues.apache.org/jira/browse/MATH-1656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17761050#comment-17761050 ]

François Laferrière commented on MATH-1656:
-------------------------------------------

I prepared a new patch taking into account most of the remarks above:
 * Javadoc: OK, there was room for enhancement :)
 * MathArrays:
 ** scaleInPlace() has been removed
 ** unitaryVector() has been moved to GradientLikeOptimizer and made package-private.
 * BaseOptimizer::resetCounter is back to private visibility
 * MissingOptimizationDataException has been simplified (to a RuntimeException without localization) and moved to the org.apache.commons.math4.legacy.optim.nonlinear.scalar.gradient package
 * MinDirectionNorm has been documented as
 ** _Minimal value of the direction norm below which convergence is considered reached. The direction is the gradient, or something very much like the gradient. When the norm of the direction is too small, there can be various problems due to round-off error, such as misdirection, failure of the line search, or ill-conditioned matrix problems._ A small sketch of this check appears after this list.
 * commented-out code has been removed
 * {{minGradientNorm}} -> {{MIN_GRADIENT_NORM}}, and made final (but not private, as it is used in subclasses)
 * {{TestFunction.java}} now contains the gradient and Hessian. There are now three generators for each enum value (see the usage sketch below):

{code:java}
public MultivariateFunction functionWithDimension(final int dim);
public MultivariateVectorFunction gradientWithDimension(final int dim);
public MultivariateMatrixFunction hessianWithDimension(final int dim);
{code}

withDimension has been deprecated because it makes less sense now.
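For illustration, here is a minimal usage sketch of the three generators. The enum constant {{ROSENBROCK}} is my assumption here; any value of the enum would do:

{code:java}
// Usage sketch; imports from org.apache.commons.math4.legacy.analysis assumed.
final int dim = 4;
final MultivariateFunction f = TestFunction.ROSENBROCK.functionWithDimension(dim);
final MultivariateVectorFunction g = TestFunction.ROSENBROCK.gradientWithDimension(dim);
final MultivariateMatrixFunction h = TestFunction.ROSENBROCK.hessianWithDimension(dim);

final double[] x = new double[dim];  // evaluation point
final double value = f.value(x);     // scalar objective
final double[] grad = g.value(x);    // gradient, length dim
final double[][] hess = h.value(x);  // Hessian, dim x dim
{code}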
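And here is the small sketch of the MinDirectionNorm check mentioned above. The method and parameter names are hypothetical; only the criterion itself is from the patch:

{code:java}
// Sketch of the convergence test (hypothetical names).
// "direction" is the gradient, or a gradient-like search direction.
private static boolean directionTooSmall(final double[] direction,
                                         final double minDirectionNorm) {
    double sumSq = 0;
    for (final double d : direction) {
        sumSq += d * d;
    }
    // Below minDirectionNorm, round-off dominates: the line search and
    // matrix updates become unreliable, so we declare convergence instead.
    return Math.sqrt(sumSq) < minDirectionNorm;
}
{code}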

*For the points that raise some questions:*
 * visibility of getObjectiveFunction(): as a reminder, it is a replacement for computeObjectiveValue(double[] params); see https://issues.apache.org/jira/browse/MATH-1657. As such, it is called from LineSearch and thus should be public.

 * ObjectiveFunctionDimension: I am still uneasy with the fact that the MultivariateFunction interface does not provide (and perhaps should not provide) any way to know the number of parameters of the function. We could work around this by using the dimension of the initialGuess array, but I think that is an error-prone hack (see the wrapper sketch below).
 * OptimizationStatus: in my use cases, I call the optimizer millions of times on my datasets. I don't really care if, from time to time, it does not converge, but I want to know the number of failures and the reasons for them (a caller-side tally is sketched below). That said, if you have a proposal that deals with my problem in a simple way, why not?
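To illustrate the ObjectiveFunctionDimension point, here is a hypothetical wrapper that carries the dimension explicitly instead of inferring it from initialGuess. All names are mine, and the package of MultivariateFunction is assumed for the 4.0 legacy module:

{code:java}
import org.apache.commons.math4.legacy.analysis.MultivariateFunction;

// Hypothetical wrapper: carry the dimension explicitly and fail fast
// on a mismatched evaluation point.
final class DimensionedFunction implements MultivariateFunction {
    private final MultivariateFunction delegate;
    private final int dimension;

    DimensionedFunction(final MultivariateFunction delegate, final int dimension) {
        this.delegate = delegate;
        this.dimension = dimension;
    }

    int getDimension() {
        return dimension;
    }

    @Override
    public double value(final double[] point) {
        if (point.length != dimension) {
            throw new IllegalArgumentException("Expected " + dimension
                + " parameters, got " + point.length);
        }
        return delegate.value(point);
    }
}
{code}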
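And the caller-side tally I mentioned for OptimizationStatus could look like this sketch, where {{optimizer}} and {{problems}} are assumed to exist, and RuntimeException is caught only for illustration (the concrete exception types depend on the optimizer):

{code:java}
// Tally failures by exception type over many optimize() calls (sketch).
final java.util.Map<String, Integer> failures = new java.util.HashMap<>();
int converged = 0;
for (final OptimizationData[] problem : problems) {
    try {
        optimizer.optimize(problem);
        converged++;
    } catch (final RuntimeException e) {
        failures.merge(e.getClass().getSimpleName(), 1, Integer::sum);
    }
}
{code}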

 * DebugMode: I think that if we want to get rid of debug mode, it should not be replaced by something much more complicated (a callback pattern). In my order of preference, the options are:

 ** Keep DebugMode as is (and possibly use it in other legacy optimizers).

 ** Remove DebugMode and keep track of the trajectory all the time. This is not that expensive (except if the optimization takes zillions of iterations, in which case the problem is probably somewhere else).

 ** Remove DebugMode and do NOT keep track of the trajectory any more.

 ** Use a callback architecture (sketched below).
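For completeness, here is what I mean by a callback architecture. This is a minimal sketch with hypothetical names, not a design proposal:

{code:java}
// Hypothetical callback sketch (names are mine, not from the patch).
// The optimizer would notify the observer once per iteration; DebugMode
// and trajectory tracking then become opt-in observers.
interface IterationObserver {
    void iterationPerformed(int iteration, double[] point, double objectiveValue);
}

final class TrajectoryRecorder implements IterationObserver {
    private final java.util.List<double[]> trajectory = new java.util.ArrayList<>();

    @Override
    public void iterationPerformed(final int iteration,
                                   final double[] point,
                                   final double objectiveValue) {
        trajectory.add(point.clone()); // defensive copy of the iterate
    }

    java.util.List<double[]> getTrajectory() {
        return trajectory;
    }
}
{code}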

> Classical multivariate optimizers (gradient descent, Newton-Raphson, BFGS) 
> are missing
> --------------------------------------------------------------------------------------
>
>                 Key: MATH-1656
>                 URL: https://issues.apache.org/jira/browse/MATH-1656
>             Project: Commons Math
>          Issue Type: Wish
>          Components: legacy
>    Affects Versions: 4.0-beta1
>            Reporter: François Laferrière
>            Priority: Major
>              Labels: features
>         Attachments: MATH-1656-GradientDescent-Newton-BFGS-v2.0.zip, 
> MATH-1658-GradientDescent-Newton-BFGS-v3.0.patch, 
> MATH-1658-GradientDescent-Newton-BFGS.patch, Screenshot from 2023-07-10 
> 12-13-38.png
>
>
> Some classical multivariate optimizers, such as
>  * gradient descent,
>  * Newton-Raphson,
>  * BFGS,
> are missing.


