Dear Milind,

It's often helpful to try the following (where possible):

1. Generate some synthetic data and check whether the minimum you find 
corresponds to the 'true' solution. You could manually evaluate the objective 
function at the 'true' solution and at a few nearby points to verify that it 
is indeed a minimum.

2. Set up a very simple case where you can plot the objective function and 
verify that the minimum is being located correctly.
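Both checks can be sketched in a few lines. The objective below (a least-squares line fit) and all names in it are hypothetical stand-ins for your own problem, just to illustrate the idea:

```python
import numpy as np

# Hypothetical example: least-squares fit of a line y = a*x + b.
rng = np.random.default_rng(0)
true_params = np.array([2.0, -1.0])  # the 'true' solution (a, b)
x = rng.uniform(-5, 5, size=200)
y = true_params[0] * x + true_params[1] + 0.01 * rng.normal(size=200)

def objective(params):
    a, b = params
    return np.sum((y - (a * x + b)) ** 2)

# Check 1: the objective at the 'true' solution should be lower than at
# randomly chosen nearby points.
f_true = objective(true_params)
for _ in range(10):
    nearby = true_params + 0.1 * rng.normal(size=2)
    assert objective(nearby) > f_true

# Check 2: on a 1-D slice through parameter space, the grid minimum should
# sit near the true value (in practice you would plot this curve).
a_grid = np.linspace(0.0, 4.0, 401)
slice_vals = [objective(np.array([a, true_params[1]])) for a in a_grid]
a_min = a_grid[int(np.argmin(slice_vals))]
assert abs(a_min - true_params[0]) < 0.05
```

If either assertion fails, the objective (or the data generation) is worth a closer look before blaming the optimizer.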

Kind Regards,
James

--
James Barrett
Postdoctoral Research Student
Institute of Mathematical and Molecular Biomedicine
King's College London

On 24 Jun 2013, at 22:26, Steven G. Johnson <[email protected]> wrote:

> 
> On Jun 24, 2013, at 3:57 PM, milind d <[email protected]> wrote: 
>> I have just started to use NLopt. I have written a program that uses NLopt 
>> to minimize a function. My problem is that I get the same objective value 
>> at every optimization cycle; I know this because I print the value inside 
>> my objective subroutine. I am using a gradient-based algorithm (SLSQP), and 
>> I have written a subroutine that calculates the gradient of the function, 
>> which I call from my objective routine. My data set is so large that it is 
>> not actually feasible to validate my gradient against it, although I have 
>> validated it on small data sets. So my question is: what could make my 
>> objective value stay constant over many optimization cycles?
> 
> I would try validating your gradient just by comparing with finite-difference 
> calculations.
> 
> 
> 
> _______________________________________________
> NLopt-discuss mailing list
> [email protected]
> http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/nlopt-discuss
> 
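The finite-difference comparison suggested in the quoted reply can be sketched like this. The objective and analytic gradient here are hypothetical placeholders for your own routines; the central-difference helper is the generic technique:

```python
import numpy as np

def objective(x):
    # Hypothetical smooth test function.
    return np.sum(x ** 2) + np.sin(x[0])

def analytic_grad(x):
    # Hand-derived gradient of the test function above.
    g = 2.0 * x
    g[0] += np.cos(x[0])
    return g

def fd_grad(f, x, h=1e-6):
    """Central-difference approximation to the gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x0 = np.array([0.3, -1.2, 0.7])
assert np.allclose(analytic_grad(x0), fd_grad(objective, x0), atol=1e-5)
```

Checking the two gradients at a handful of random points is usually enough to catch a sign error or a missing term, and it is cheap even when the full data set is too large to validate by hand.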
