Very true indeed. I am preparing an article on constrained optimization.
One of the examples I'm discussing is from the Hock-Schittkowski test
suite. The augmented Lagrangian solvers do not find the true minimum
when given exact or very accurate gradients; only by reducing the
accuracy of the numerical gradients do they hit the true optimum. Very
strange, and it shows that careful control of the numerical derivatives
matters in constrained optimization.
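
For anyone who wants to reproduce the effect, a plain central-difference
gradient with an adjustable step size h is enough; the following sketch
is only an illustration (the function f and the step sizes below are
placeholders, not the actual setup from the article):

    # Central-difference gradient of f at x; the step size h controls
    # how accurate the numerical derivative is.
    function fd_gradient(f, x; h = 1e-6)
        g = similar(x, Float64)
        for i in eachindex(x)
            e = zeros(length(x))
            e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2h)
        end
        return g
    end

With something like h = 1e-8 the solvers see a very accurate gradient;
a coarser step such as h = 1e-3 produces the less accurate gradients
that, oddly, lead them to the true optimum in this example.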

On Wednesday, January 22, 2014 6:01:19 PM UTC+1, John Myles White wrote:
>
> This sounds like a great approach, Tim. (And, for the record, I’m 
> legitimately amazed by the amount of functionality you’re successfully 
> maintaining.) 
>
> Since we’re adding feature requests, here’s another one: 
>
> (Feature) Implement lower and upper bounds on FD gradient calculations. If 
> the lower or upper bounds would be violated by the chosen forward or central 
> differencing method, change behavior to stay within the bounds. 
>
> This would make it much easier for us to use finite differencing in 
> constrained optimization problems. 
>
>  — John 
>
>
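
A minimal sketch of the bound-respecting differencing John describes
might look as follows; the name bounded_fd_gradient and its signature
are hypothetical here, not Optim.jl's API. It uses central differences
in the interior and falls back to one-sided differences whenever a step
would leave the box [lower, upper]:

    # Finite-difference gradient that never evaluates f outside the box.
    # Central differences are used where both perturbed points are
    # feasible; otherwise fall back to a one-sided difference.
    function bounded_fd_gradient(f, x, lower, upper; h = 1e-6)
        g = similar(x, Float64)
        fx = f(x)
        for i in eachindex(x)
            e = zeros(length(x))
            e[i] = h
            up_ok   = x[i] + h <= upper[i]
            down_ok = x[i] - h >= lower[i]
            if up_ok && down_ok
                g[i] = (f(x + e) - f(x - e)) / (2h)   # central
            elseif up_ok
                g[i] = (f(x + e) - fx) / h            # forward, at a lower bound
            elseif down_ok
                g[i] = (fx - f(x - e)) / h            # backward, at an upper bound
            else
                error("step h too large for the box in coordinate $i")
            end
        end
        return g
    end

One caveat: the one-sided fallback is only first-order accurate, so the
gradient is noisier near a bound; shrinking h there, or using a
second-order one-sided formula, would reduce that error.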
