Hey Jason:
I’ve added the flag and it works like a charm (see output below). Yaaayyyy! :)
Thanks! /A
P.S.: I still wonder why the successive evaluations of the objective kicked me
out, though (i.e. why I got monotonically *increasing* objective values; I
also tried to switch the search direction using “tao_ls_armijo_nondescending”,
which did not help). Anyway...
Output:
Tao Object: 1 MPI processes
type: nls
Newton steps: 2
BFGS steps: 0
Scaled gradient steps: 0
Gradient steps: 0
nls ksp atol: 0
nls ksp rtol: 2
nls ksp ctol: 0
nls ksp negc: 0
nls ksp dtol: 0
nls ksp iter: 0
nls ksp othr: 0
TaoLineSearch Object: 1 MPI processes
type: armijo
KSP Object: 1 MPI processes
type: cg
total KSP iterations: 31
convergence tolerances: fatol=0, frtol=0
convergence tolerances: gatol=0, steptol=0, gttol=0.0001
Residual in Function/Gradient:=2.69882e-06
Objective value=0.370287
total number of iterations=2, (max: 50)
total number of function evaluations=2, max: 10000
total number of gradient evaluations=2, max: 10000
total number of function/gradient evaluations=1, (max: 10000)
total number of Hessian evaluations=2
Solution converged: ||g(X)||/||g(X0)|| <= gttol
> On Jun 11, 2015, at 4:13 PM, Jason Sarich <[email protected]> wrote:
>
> Hi Andreas,
>
> I'm pretty sure this is a bug on my side, directly related to the ls->reason
>
> Can you try fixing src/tao/linesearch/impls/armijo/armijo.c so it sets this
> correctly?
>
> before (line 275):
> /* Successful termination, update memory */
> armP->lastReference = ref;
>
> after:
> /* Successful termination, update memory */
> ls->reason = TAOLINESEARCH_SUCCESS;
> armP->lastReference = ref;
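>
> Roughly what happens on the calling side (just a sketch from memory, not the
> exact nls.c source) is that the solver applies the line search and then checks
> the returned reason, so if the Armijo code never sets it the step looks like a
> failure:
>
> /* ls is the TaoLineSearch; x, g, stepdir the iterate, gradient and search direction Vecs */
> TaoLineSearchConvergedReason ls_reason;
> PetscReal                    f, steplength;
> PetscErrorCode               ierr;
> ierr = TaoLineSearchApply(ls, x, &f, g, stepdir, &steplength, &ls_reason);CHKERRQ(ierr);
> if (ls_reason != TAOLINESEARCH_SUCCESS && ls_reason != TAOLINESEARCH_SUCCESS_USER) {
>   /* the step is counted as a line-search failure and the solver falls back */
> }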
>
> On Thu, Jun 11, 2015 at 3:26 PM, Andreas Mang <[email protected]> wrote:
> Hey Jason:
>
> Thanks for looking into this. In the meantime I have checked against your
> elliptic tao example using an Armijo linesearch. It works (i.e. converges) as
> you suggested in an earlier email, even though it returns the “wrong” flag.
> For my problem, it chokes even if I set the regularization parameter to
> 1E6 (essentially solving a quadratic problem).
>
> A final question before I continue the struggle by myself: It says “gradient
> steps: 1” instead of “gradient steps: 0” in the outputs (Armijo vs.
> More-Thuente / Unit). Does it start doing gradient evaluations? Maybe this
> helps me to further poke my code.
>
> I’ll continue to look into this. I’ll come back to you if I discover that the
> problem is on the PETSc side of things and I can reproduce the problem with a
> toy example.
>
> Thanks for your time! /Andreas
>
>
>> On Jun 11, 2015, at 3:03 PM, Jason Sarich <[email protected]> wrote:
>>
>> Hi Andreas,
>>
>> I don't see anything obviously wrong. If the function is very flat, you can
>> try setting -tao_ls_armijo_sigma to a smaller number. If you continue to
>> have problems, please let me know. It would definitely help if you have an
>> example you could send me that reproduces this behavior.
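>>
>> For the sigma option I mean something on your command line along these lines
>> (the executable name is just a placeholder):
>>
>>   ./your_app -tao_type nls -tao_ls_type armijo -tao_ls_armijo_sigma 1e-6 -tao_monitor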
>>
>> Jason
>>
>>
>>
>>
>> On Thu, Jun 11, 2015 at 1:12 PM, Andreas Mang <[email protected]> wrote:
>> Hey Jason:
>>
>> The line search fails. If I use Armijo I get
>>
>> TaoLineSearch Object:
>> type: armijo
>> maxf=30, ftol=1e-10, gtol=0.0001
>> Armijo linesearch : alpha=1 beta=0.5 sigma=0.0001 memsize=1
>> maximum function evaluations=30
>> tolerances: ftol=0.0001, rtol=1e-10, gtol=0.9
>> total number of function evaluations=1
>> total number of gradient evaluations=1
>> total number of function/gradient evaluations=0
>> Termination reason: 0
>>
>> The parameters seem to be the default ones also suggested by Nocedal and
>> Wright. So I did not change anything. The termination reason is equivalent
>> to TAOLINESEARCH_CONTINUE_ITERATING. I am not checking the reason directly.
>> I guess it starts reducing the step size after that. I can see that my
>> objective function gets evaluated (as expected); however, the objective
>> values increase (from what I see when monitoring the evaluations of my
>> objective). This leads to a failure in the line search and made (and still
>> makes) me believe there is a bug on my side (which I have not found yet).
>> However, if I use a unit step it converges (relative change of the gradient
>> down to, e.g., 1E-9; see the bottom of this email). If I use More & Thuente,
>> same thing. No reduction in step size is necessary.
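>>
>> (The “monitoring” above is nothing fancy, just a print in my objective/gradient
>> callback, roughly like the sketch below; EvaluateObjectiveAndGradient and AppCtx
>> are my own routine and context, the callback signature is the standard TAO one,
>> registered via TaoSetObjectiveAndGradientRoutine(tao, FormFunctionGradient, &user).)
>>
>> PetscErrorCode FormFunctionGradient(Tao tao, Vec x, PetscReal *f, Vec g, void *ptr)
>> {
>>   AppCtx         *user = (AppCtx*)ptr;  /* my own application context */
>>   PetscErrorCode ierr;
>>
>>   PetscFunctionBegin;
>>   ierr = EvaluateObjectiveAndGradient(user, x, f, g);CHKERRQ(ierr);  /* my own code */
>>   ierr = PetscPrintf(PETSC_COMM_WORLD, "objective evaluation: J = %e\n", (double)(*f));CHKERRQ(ierr);
>>   PetscFunctionReturn(0);
>> }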
>>
>> If you suggest that I should do some further testing on simpler problems,
>> I’m happy to do so. After looking at the code, I just felt like there
>> obviously is something wrong in the line-search implementation.
>>
>> Thanks for your help.
>> /Andreas
>>
>> Here’s the output after the first iteration (where the Armijo line search
>> fails):
>>
>> TaoLineSearch Object:
>> type: armijo
>> maxf=30, ftol=1e-10, gtol=0.0001
>> Armijo linesearch : alpha=1 beta=0.5 sigma=0.0001 memsize=1
>> maximum function evaluations=30
>> tolerances: ftol=0.0001, rtol=1e-10, gtol=0.9
>> total number of function evaluations=1
>> total number of gradient evaluations=1
>> total number of function/gradient evaluations=0
>> Termination reason: 0
>> TaoLineSearch Object:
>> type: armijo
>> maxf=30, ftol=1e-10, gtol=0.0001
>> Armijo linesearch : alpha=1 beta=0.5 sigma=0.0001 memsize=1
>> maximum function evaluations=30
>> tolerances: ftol=0.0001, rtol=1e-10, gtol=0.9
>> total number of function evaluations=30
>> total number of gradient evaluations=0
>> total number of function/gradient evaluations=0
>> Termination reason: 4
>>
>> With final output (end of optimization):
>>
>> Tao Object: 1 MPI processes
>> type: nls
>> Newton steps: 1
>> BFGS steps: 0
>> Scaled gradient steps: 0
>> Gradient steps: 1
>> nls ksp atol: 0
>> nls ksp rtol: 1
>> nls ksp ctol: 0
>> nls ksp negc: 0
>> nls ksp dtol: 0
>> nls ksp iter: 0
>> nls ksp othr: 0
>> TaoLineSearch Object: 1 MPI processes
>> type: armijo
>> KSP Object: 1 MPI processes
>> type: cg
>> total KSP iterations: 21
>> convergence tolerances: fatol=0, frtol=0
>> convergence tolerances: gatol=0, steptol=0, gttol=0.0001
>> Residual in Function/Gradient:=0.038741
>> Objective value=0.639121
>> total number of iterations=0, (max: 50)
>> total number of function evaluations=31, max: 10000
>> total number of gradient evaluations=1, max: 10000
>> total number of function/gradient evaluations=1, (max: 10000)
>> total number of Hessian evaluations=1
>> Solver terminated: -6 Line Search Failure
>>
>> This is without line-search (unit step size):
>>
>> Tao Object: 1 MPI processes
>> type: nls
>> Newton steps: 3
>> BFGS steps: 0
>> Scaled gradient steps: 0
>> Gradient steps: 0
>> nls ksp atol: 0
>> nls ksp rtol: 3
>> nls ksp ctol: 0
>> nls ksp negc: 0
>> nls ksp dtol: 0
>> nls ksp iter: 0
>> nls ksp othr: 0
>> TaoLineSearch Object: 1 MPI processes
>> type: unit
>> KSP Object: 1 MPI processes
>> type: cg
>> total KSP iterations: 71
>> convergence tolerances: fatol=0, frtol=0
>> convergence tolerances: gatol=0, steptol=0, gttol=0.0001
>> Residual in Function/Gradient:=1.91135e-11
>> Objective value=0.160914
>> total number of iterations=3, (max: 50)
>> total number of function/gradient evaluations=4, (max: 10000)
>> total number of Hessian evaluations=3
>> Solution converged: ||g(X)||/||g(X0)|| <= gttol
>>
>>
>>
>>> On Jun 11, 2015, at 12:44 PM, Jason Sarich <[email protected]> wrote:
>>>
>>> Hi Andreas,
>>>
>>> Yes, it looks like a bug that the reason is never set, but the line search
>>> should still terminate. Is the problem you are having with the line search
>>> itself, or is it failing because you are checking ls->reason directly?
>>>
>>> Jason Sarich
>>>
>>>
>>> On Thu, Jun 11, 2015 at 9:53 AM, Andreas Mang <[email protected]> wrote:
>>> Hi guys:
>>>
>>> I have a problem with the TAO Armijo line search (petsc-3.5.4). My
>>> algorithm works if I use the More & Thuente line search (default). I have
>>> numerically checked the gradient of my objective. It’s correct. I am happy
>>> to write a small snippet of code and do an easy test if you guys disagree,
>>> but from what I’ve seen in the line search code it seems obvious to me that
>>> there is a bug. Am I missing something or are you not setting
>>>
>>> ls->reason
>>>
>>> to
>>>
>>> TAOLINESEARCH_SUCCESS
>>>
>>> if the Armijo condition is fulfilled (TaoLineSearchApply_Armijo in
>>> armijo.c, lines 118-302)?!
>>>
>>> It seems to me that ls->reason is, and will remain, set to
>>>
>>> TAOLINESEARCH_CONTINUE_ITERATING
>>>
>>> if everything works (i.e. I don’t hit one of the exceptions). Does this
>>> make sense? If not, I’ll invest the time and put together a simple test case
>>> and, if that works, continue to check my code.
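>>>
>>> (By “numerically checked the gradient” above I mean a directional
>>> finite-difference test roughly like the sketch below; x is the point I test
>>> at, d an arbitrary direction, h a small step, and FormFunctionGradient my own
>>> TAO objective/gradient callback:)
>>>
>>> /* compare (f(x + h d) - f(x - h d)) / (2 h) against g(x)^T d */
>>> Vec            xp, xm, gtmp;
>>> PetscReal      f0, fp, fm, gd, h = 1e-6;
>>> PetscErrorCode ierr;
>>>
>>> ierr = VecDuplicate(x, &xp);CHKERRQ(ierr);
>>> ierr = VecDuplicate(x, &xm);CHKERRQ(ierr);
>>> ierr = VecDuplicate(x, &gtmp);CHKERRQ(ierr);
>>> ierr = FormFunctionGradient(tao, x, &f0, g, &user);CHKERRQ(ierr);   /* fills g at x */
>>> ierr = VecDot(g, d, &gd);CHKERRQ(ierr);
>>> ierr = VecWAXPY(xp, h, d, x);CHKERRQ(ierr);    /* xp = x + h*d */
>>> ierr = VecWAXPY(xm, -h, d, x);CHKERRQ(ierr);   /* xm = x - h*d */
>>> ierr = FormFunctionGradient(tao, xp, &fp, gtmp, &user);CHKERRQ(ierr);
>>> ierr = FormFunctionGradient(tao, xm, &fm, gtmp, &user);CHKERRQ(ierr);
>>> ierr = PetscPrintf(PETSC_COMM_WORLD, "fd = %e  g.d = %e\n", (double)((fp - fm)/(2.0*h)), (double)gd);CHKERRQ(ierr);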
>>>
>>> /Andreas
>>>
>>>
>>>
>>
>>
>
>