Re: [petsc-dev] vector inner products

2018-04-18 Thread Oxberry, Geoffrey Malcolm


> On Apr 18, 2018, at 15:06, Munson, Todd  wrote:
> 
> 
> In the master branch, setting a H0 matrix and the gradient norm appear to be
> correct only for the quasi-Newton code.  Those notions have not been 
> propagated anywhere else.

Yes, that’s my understanding as well. 

> 
> The quasi-Newton approximation, for example, can be used as a preconditioner 
> in the Newton methods.  If the H0 matrix is set and it uses a KSP solver to
> apply the "inverse", then do we need to add flexible variants of NASH, 
> STCG, and GLTR for correctness?

Yes, or disallow quasi-Newton preconditioning for non-identity H0 matrices for 
these methods. 

> 
> The H0 also needs to be symmetric, or act like a symmetric operator, for the
> quasi-Newton approximation to be reasonable, and certainly when it is
> used as a preconditioner in NASH, STCG, and GLTR.  Does this restrict 
> the KSP solvers that can be applied when we need an inv(H0)*vector 
> product?

I don’t see why it would restrict you to methods for symmetric operators.

> 
> At the end of the day, the question becomes: how much thought and effort do
> we want to spend in order to support the H0 matrix?  Or do we want to 
> leave this as supported only for the TAO quasi-Newton method?

I think that’s up to you and the TAO developers.

It’s helpful for mesh-independent convergence — it corresponds to a natural 
preconditioner — but if that feature isn’t critical for you guys in terms of 
funding or customers, I understand not implementing it.   

The abstraction also makes mesh refinement or coarsening during the 
optimization algorithm sensible, because these operations correspond to 
mappings between different subsets of the same Hilbert space, which should be 
endowed with the same norms and inner products. These operations don’t really 
make sense if the decision variables are just an n-tuple of reals with no 
additional structure.
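The mesh-independence point above can be made concrete with a small numpy sketch (illustrative only, not PETSc code; the function and meshes are chosen for the example): discretize g(x) = sin(pi x) on finer and finer uniform meshes. The Euclidean norm of the coefficient vector grows like sqrt(n), while the (lumped) mass-matrix norm approximates the continuous L2 norm, 1/sqrt(2), on every mesh.

```python
import numpy as np

# g(x) = sin(pi x) sampled at the nodes of a uniform mesh on [0, 1].
# With a lumped P1 mass matrix M = h*I, the mass-matrix norm
# sqrt(g^T M g) approximates the continuous L2 norm of g, which is 1/sqrt(2).
for n in (10, 100, 1000):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    g = np.sin(np.pi * x)
    l2_norm = np.linalg.norm(g)        # grows like sqrt(n): mesh-dependent
    mass_norm = np.sqrt(h) * l2_norm   # sqrt(g^T (h*I) g): mesh-independent
    print(n, l2_norm, mass_norm)
```

A tolerance stated against l2_norm must be re-tuned for every mesh; one stated against mass_norm need not be.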

Geoff

> 
> Todd.
> 
>> On Apr 13, 2018, at 9:11 PM, Oxberry, Geoffrey Malcolm  
>> wrote:
>> 
>> 
>> 
>> On 4/12/18, 15:06, "petsc-dev on behalf of Munson, Todd" 
>>  wrote:
>> 
>> 
>>   I am not looking at Geoff's pull request right now.
>> 
>>   Let me try to be clearer, in the master branch, the TaoGradientNorm() 
>> function is only 
>>   used for termination tests inside the optimization methods.  It does not 
>> change anything 
>>   else that goes on inside of the methods.  
>> 
>> Gradients and Hessians also depend on duality pairings, which in turn 
>> affects how users write these quantities, and how Hessians are approximated 
>> in quasi-Newton methods. Related code for the Hilbert space case was 
>> included in PR#347 (h/t Patrick Farrell).
>> 
>>   A user-defined convergence test (presuming we 
>>   can get the callbacks right) would suffice.  As all norms are equivalent 
>> in finite
>>   dimensions, a user could also scale the standard termination tolerance by 
>>   the correct constant.
>> 
>> This decision makes tolerances discretization-dependent, which is a leaky 
>> abstraction, and an unnatural way to encode algorithm convergence criteria. 
>> Scaling the $\ell_{2}$-norm in place of using the correct primal and dual 
>> norms also ignores the metric geometry of the primal and dual spaces, and 
>> will affect algorithm convergence negatively. A callback for a user-defined 
>> convergence test would be preferable. 
>> 
>>   If you need to live in function spaces, which seems to be the argument, 
>> then it seems
>>   that PETSc needs to be changed by more than just a single termination test.
>> 
>> A similar discussion can be found in the discussion of PR#347.
>> 
>>   Thanks,
>>   Todd.
>> 
>>> On Apr 12, 2018, at 3:27 PM, Stefano Zampini  
>>> wrote:
>>> 
>>> The gradient norm is the one induced by the mass matrix of the DM 
>>> associated with the control.
>>> In principle, TaoGradientNorm() can be replaced by DMCreateMassMatrix() + 
>>> solve with the mass matrix.
>>> 
>>> For PDE constrained optimization, the “gradient norm” is crucial, since we 
>>> consider optimization problems in Banach spaces.
>>> We should keep supporting it, maybe differently than as it is now, but keep 
>>> it.
>>> 
 On Apr 12, 2018, at 11:21 PM, Jed Brown  wrote:
 
 Are you thinking about this PR again?
 
 https://bitbucket.org/petsc/petsc/pull-requests/506
 
 There's an issue here that Krylov methods operate in the discrete inner
 product while some higher level operations are of interest in
 (approximations of) continuous inner products (or norms).  The object in
 PETSc that endows continuous attributes (like a hierarchy, subdomains,
 fields) on discrete quantities is DM, so my first inclination is that
 any continuous interpretation of vectors, including inner products and
 norms, belongs in DM.
 
 "Munson, Todd"  writes:
 
> There is a bit of code in TAO that allows the user to change the norm to 
> a matrix norm.  This was introduced to get some mesh independent 
> behavior in one example (tao/examples/tutorials/ex3.c).

Re: [petsc-dev] vector inner products

2018-04-18 Thread Munson, Todd

In the master branch, setting a H0 matrix and the gradient norm appear to be
correct only for the quasi-Newton code.  Those notions have not been 
propagated anywhere else.

The quasi-Newton approximation, for example, can be used as a preconditioner 
in the Newton methods.  If the H0 matrix is set and it uses a KSP solver to
apply the "inverse", then do we need to add flexible variants of NASH, 
STCG, and GLTR for correctness?

The H0 also needs to be symmetric, or act like a symmetric operator, for the
quasi-Newton approximation to be reasonable, and certainly when it is
used as a preconditioner in NASH, STCG, and GLTR.  Does this restrict 
the KSP solvers that can be applied when we need an inv(H0)*vector 
product?
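The symmetry requirement can be checked concretely: the BFGS inverse update H+ = (I - rho*s*y^T) H0inv (I - rho*y*s^T) + rho*s*s^T stays symmetric exactly when the initial inverse H0inv is applied symmetrically, and loses symmetry otherwise. A small numpy sketch (illustrative, not TAO code; the vectors are random stand-ins for a quasi-Newton pair):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
s = rng.standard_normal(n)
y = rng.standard_normal(n)
if s @ y < 0:          # enforce the curvature condition s^T y > 0
    y = -y
rho = 1.0 / (y @ s)
V = np.eye(n) - rho * np.outer(s, y)

def bfgs_inverse_update(H0inv):
    """One BFGS update of the inverse Hessian approximation."""
    return V @ H0inv @ V.T + rho * np.outer(s, s)

A = rng.standard_normal((n, n))
H0inv_spd = A @ A.T + n * np.eye(n)     # symmetric positive definite H0^{-1}
H_sym = bfgs_inverse_update(H0inv_spd)
print("symmetric H0:", np.allclose(H_sym, H_sym.T))

H0inv_bad = H0inv_spd + np.triu(np.ones((n, n)), 1)   # break symmetry
H_bad = bfgs_inverse_update(H0inv_bad)
print("nonsymmetric H0:", np.allclose(H_bad, H_bad.T))
```

The same reasoning applies when inv(H0) is applied inexactly by a KSP: a nonsymmetric application breaks the symmetry the trust-region solvers assume.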

At the end of the day, the question becomes: how much thought and effort do
we want to spend in order to support the H0 matrix?  Or do we want to 
leave this as supported only for the TAO quasi-Newton method?

Todd.

> On Apr 13, 2018, at 9:11 PM, Oxberry, Geoffrey Malcolm  
> wrote:
> 
> 
> 
> On 4/12/18, 15:06, "petsc-dev on behalf of Munson, Todd" 
>  wrote:
> 
> 
>I am not looking at Geoff's pull request right now.
> 
>Let me try to be clearer, in the master branch, the TaoGradientNorm() 
> function is only 
>used for termination tests inside the optimization methods.  It does not 
> change anything 
>else that goes on inside of the methods.  
> 
> Gradients and Hessians also depend on duality pairings, which in turn affects 
> how users write these quantities, and how Hessians are approximated in 
> quasi-Newton methods. Related code for the Hilbert space case was included in 
> PR#347 (h/t Patrick Farrell).
> 
>A user-defined convergence test (presuming we 
>can get the callbacks right) would suffice.  As all norms are equivalent 
> in finite
>dimensions, a user could also scale the standard termination tolerance by 
>the correct constant.
> 
> This decision makes tolerances discretization-dependent, which is a leaky 
> abstraction, and an unnatural way to encode algorithm convergence criteria. 
> Scaling the $\ell_{2}$-norm in place of using the correct primal and dual 
> norms also ignores the metric geometry of the primal and dual spaces, and 
> will affect algorithm convergence negatively. A callback for a user-defined 
> convergence test would be preferable. 
> 
>If you need to live in function spaces, which seems to be the argument, 
> then it seems
>that PETSc needs to be changed by more than just a single termination test.
> 
> A similar discussion can be found in the discussion of PR#347.
> 
>Thanks,
>Todd.
> 
>> On Apr 12, 2018, at 3:27 PM, Stefano Zampini  
>> wrote:
>> 
>> The gradient norm is the one induced by the mass matrix of the DM associated 
>> with the control.
>> In principle, TaoGradientNorm() can be replaced by DMCreateMassMatrix() + 
>> solve with the mass matrix.
>> 
>> For PDE constrained optimization, the “gradient norm” is crucial, since we 
>> consider optimization problems in Banach spaces.
>> We should keep supporting it, maybe differently than as it is now, but keep 
>> it.
>> 
>>> On Apr 12, 2018, at 11:21 PM, Jed Brown  wrote:
>>> 
>>> Are you thinking about this PR again?
>>> 
>>> https://bitbucket.org/petsc/petsc/pull-requests/506
>>> 
>>> There's an issue here that Krylov methods operate in the discrete inner
>>> product while some higher level operations are of interest in
>>> (approximations of) continuous inner products (or norms).  The object in
>>> PETSc that endows continuous attributes (like a hierarchy, subdomains,
>>> fields) on discrete quantities is DM, so my first inclination is that
>>> any continuous interpretation of vectors, including inner products and
>>> norms, belongs in DM.
>>> 
>>> "Munson, Todd"  writes:
>>> 
 There is a bit of code in TAO that allows the user to change the norm to 
 a matrix norm.  This was introduced to get some mesh independent 
 behavior in one example (tao/examples/tutorials/ex3.c).  That 
 norm, however, does not propagate down into the KSP methods
 and is only used for testing convergence of the nonlinear
 problem.
 
 A few questions then:  Is similar functionality needed in SNES?  Are 
 TAO and SNES even the right place for this functionality?  Should 
 it belong to the Vector class so that you can change the inner 
 products and have all the KSP methods (hopefully) work 
 correctly?
 
 Note: that this discussion brings us to the brink of supporting an 
 optimize-then-discretize approach.  I am not convinced we should 
 go down that rabbit hole.
 
 Thanks, Todd.
>> 
> 
> 



Re: [petsc-dev] vector inner products

2018-04-13 Thread Oxberry, Geoffrey Malcolm


On 4/12/18, 15:06, "petsc-dev on behalf of Munson, Todd" 
 wrote:


I am not looking at Geoff's pull request right now.

Let me try to be clearer, in the master branch, the TaoGradientNorm() 
function is only 
used for termination tests inside the optimization methods.  It does not 
change anything 
else that goes on inside of the methods.  

Gradients and Hessians also depend on duality pairings, which in turn affects 
how users write these quantities, and how Hessians are approximated in 
quasi-Newton methods. Related code for the Hilbert space case was included in 
PR#347 (h/t Patrick Farrell).

A user-defined convergence test (presuming we 
can get the callbacks right) would suffice.  As all norms are equivalent in 
finite
dimensions, a user could also scale the standard termination tolerance by 
the correct constant.

This decision makes tolerances discretization-dependent, which is a leaky 
abstraction, and an unnatural way to encode algorithm convergence criteria. 
Scaling the $\ell_{2}$-norm in place of using the correct primal and dual norms 
also ignores the metric geometry of the primal and dual spaces, and will affect 
algorithm convergence negatively. A callback for a user-defined convergence 
test would be preferable. 
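The objection to scaling by a constant can be made concrete (a hedged numpy illustration; the mesh and gradient vectors are made up for the example): the natural norm for a gradient, which lives in the dual space, is sqrt(g^T M^{-1} g) for a mass matrix M. No single constant c makes c*||g||_2 match it for all gradients, because the ratio depends on the frequency content of g.

```python
import numpy as np

n = 100
h = 1.0 / n
# P1 finite-element mass matrix on the interior nodes of a uniform mesh on [0, 1]
M = (h / 6.0) * (4.0 * np.eye(n - 1)
                 + np.eye(n - 1, k=1) + np.eye(n - 1, k=-1))

def dual_norm(g):
    # sqrt(g^T M^{-1} g): M^{-1} applied via a solve, as a KSP would do it
    return np.sqrt(g @ np.linalg.solve(M, g))

x = np.linspace(h, 1.0 - h, n - 1)
g_smooth = np.sin(np.pi * x)              # low-frequency gradient
g_rough = (-1.0) ** np.arange(n - 1)      # highest-frequency gradient

r_smooth = dual_norm(g_smooth) / np.linalg.norm(g_smooth)
r_rough = dual_norm(g_rough) / np.linalg.norm(g_rough)
print(r_smooth, r_rough)   # the ratios differ: no single scaling constant works
```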

If you need to live in function spaces, which seems to be the argument, 
then it seems
that PETSc needs to be changed by more than just a single termination test.

A similar discussion can be found in the discussion of PR#347.

Thanks,
Todd.

> On Apr 12, 2018, at 3:27 PM, Stefano Zampini  
wrote:
> 
> The gradient norm is the one induced by the mass matrix of the DM 
associated with the control.
> In principle, TaoGradientNorm() can be replaced by DMCreateMassMatrix() + 
solve with the mass matrix.
> 
> For PDE constrained optimization, the “gradient norm” is crucial, since 
we consider optimization problems in Banach spaces.
> We should keep supporting it, maybe differently than as it is now, but 
keep it.
> 
>> On Apr 12, 2018, at 11:21 PM, Jed Brown  wrote:
>> 
>> Are you thinking about this PR again?
>> 
>> https://bitbucket.org/petsc/petsc/pull-requests/506
>> 
>> There's an issue here that Krylov methods operate in the discrete inner
>> product while some higher level operations are of interest in
>> (approximations of) continuous inner products (or norms).  The object in
>> PETSc that endows continuous attributes (like a hierarchy, subdomains,
>> fields) on discrete quantities is DM, so my first inclination is that
>> any continuous interpretation of vectors, including inner products and
>> norms, belongs in DM.
>> 
>> "Munson, Todd"  writes:
>> 
>>> There is a bit of code in TAO that allows the user to change the norm 
to 
>>> a matrix norm.  This was introduced to get some mesh independent 
>>> behavior in one example (tao/examples/tutorials/ex3.c).  That 
>>> norm, however, does not propagate down into the KSP methods
>>> and is only used for testing convergence of the nonlinear
>>> problem.
>>> 
>>> A few questions then:  Is similar functionality needed in SNES?  Are 
>>> TAO and SNES even the right place for this functionality?  Should 
>>> it belong to the Vector class so that you can change the inner 
>>> products and have all the KSP methods (hopefully) work 
>>> correctly?
>>> 
>>> Note: that this discussion brings us to the brink of supporting an 
>>> optimize-then-discretize approach.  I am not convinced we should 
>>> go down that rabbit hole.
>>> 
>>> Thanks, Todd.
> 






Re: [petsc-dev] vector inner products

2018-04-12 Thread Munson, Todd

I am not looking at Geoff's pull request right now.

Let me try to be clearer: in the master branch, the TaoGradientNorm() function 
is only used for termination tests inside the optimization methods.  It does 
not change anything else that goes on inside the methods.  A user-defined convergence test 
(presuming we 
can get the callbacks right) would suffice.  As all norms are equivalent in 
finite
dimensions, a user could also scale the standard termination tolerance by 
the correct constant.

If you need to live in function spaces, which seems to be the argument, then it 
seems
that PETSc needs to be changed by more than just a single termination test.

Thanks,
Todd.

> On Apr 12, 2018, at 3:27 PM, Stefano Zampini  
> wrote:
> 
> The gradient norm is the one induced by the mass matrix of the DM associated 
> with the control.
> In principle, TaoGradientNorm() can be replaced by DMCreateMassMatrix() + 
> solve with the mass matrix.
> 
> For PDE constrained optimization, the “gradient norm” is crucial, since we 
> consider optimization problems in Banach spaces.
> We should keep supporting it, maybe differently than as it is now, but keep 
> it.
> 
>> On Apr 12, 2018, at 11:21 PM, Jed Brown  wrote:
>> 
>> Are you thinking about this PR again?
>> 
>> https://bitbucket.org/petsc/petsc/pull-requests/506
>> 
>> There's an issue here that Krylov methods operate in the discrete inner
>> product while some higher level operations are of interest in
>> (approximations of) continuous inner products (or norms).  The object in
>> PETSc that endows continuous attributes (like a hierarchy, subdomains,
>> fields) on discrete quantities is DM, so my first inclination is that
>> any continuous interpretation of vectors, including inner products and
>> norms, belongs in DM.
>> 
>> "Munson, Todd"  writes:
>> 
>>> There is a bit of code in TAO that allows the user to change the norm to 
>>> a matrix norm.  This was introduced to get some mesh independent 
>>> behavior in one example (tao/examples/tutorials/ex3.c).  That 
>>> norm, however, does not propagate down into the KSP methods
>>> and is only used for testing convergence of the nonlinear
>>> problem.
>>> 
>>> A few questions then:  Is similar functionality needed in SNES?  Are 
>>> TAO and SNES even the right place for this functionality?  Should 
>>> it belong to the Vector class so that you can change the inner 
>>> products and have all the KSP methods (hopefully) work 
>>> correctly?
>>> 
>>> Note: that this discussion brings us to the brink of supporting an 
>>> optimize-then-discretize approach.  I am not convinced we should 
>>> go down that rabbit hole.
>>> 
>>> Thanks, Todd.
> 



Re: [petsc-dev] vector inner products

2018-04-12 Thread Jed Brown
"Oxberry, Geoffrey Malcolm"  writes:

> Agreed; we find the Hilbert space inner product improves the convergence a 
> great deal when doing mesh refinement studies with quasi-Newton methods in a 
> discretize-then-optimize approach.
>
> The best example I can think of to argue against “hiding” the inner
> product inside of a  DM

I don't think of it as hiding, just associating.  Making a link from a
Vec to a Mat violates the usual dependency direction.  The Vec can
unwittingly carry a reference to a Mat, but the normal Vec operations
shouldn't be changed.  We do create this sort of dependency inversion
with DM that changes the way VecView behaves, for example.

If DM isn't used, we would need to either create this extra association
for the Mat (gross) or build the plumbing to inform every user of the
Vec about the associated inner product.  But I think every use that
needs this special inner product is either already aware of the DM or
logically should be aware of it.

> instead of a Mat is that it could be used for automatically scaling
> the KKT systems solved in interior point methods (e.g., IPOPT);
> poorly-scaled problems arise sometimes in applications. Admittedly,
> these inner products tend to be diagonal, and thus there may be a
> better interface or abstraction for this functionality.
>
>> On Apr 12, 2018, at 13:28, Stefano Zampini  wrote:
>> 
>> The gradient norm is the one induced by the mass matrix of the DM associated 
>> with the control.
>> In principle, TaoGradientNorm() can be replaced by DMCreateMassMatrix() + 
>> solve with the mass matrix.
>> 
>> For PDE constrained optimization, the “gradient norm” is crucial, since we 
>> consider optimization problems in Banach spaces.
>> We should keep supporting it, maybe differently than as it is now, but keep 
>> it.
>> 
>>> On Apr 12, 2018, at 11:21 PM, Jed Brown  wrote:
>>> 
>>> Are you thinking about this PR again?
>>> 
>>> https://bitbucket.org/petsc/petsc/pull-requests/506
>>> 
>>> There's an issue here that Krylov methods operate in the discrete inner
>>> product while some higher level operations are of interest in
>>> (approximations of) continuous inner products (or norms).  The object in
>>> PETSc that endows continuous attributes (like a hierarchy, subdomains,
>>> fields) on discrete quantities is DM, so my first inclination is that
>>> any continuous interpretation of vectors, including inner products and
>>> norms, belongs in DM.
>>> 
>>> "Munson, Todd"  writes:
>>> 
 There is a bit of code in TAO that allows the user to change the norm to 
 a matrix norm.  This was introduced to get some mesh independent 
 behavior in one example (tao/examples/tutorials/ex3.c).  That 
 norm, however, does not propagate down into the KSP methods
 and is only used for testing convergence of the nonlinear
 problem.
 
 A few questions then:  Is similar functionality needed in SNES?  Are 
 TAO and SNES even the right place for this functionality?  Should 
 it belong to the Vector class so that you can change the inner 
 products and have all the KSP methods (hopefully) work 
 correctly?
 
 Note: that this discussion brings us to the brink of supporting an 
 optimize-then-discretize approach.  I am not convinced we should 
 go down that rabbit hole.
 
 Thanks, Todd.
>> 


Re: [petsc-dev] vector inner products

2018-04-12 Thread Oxberry, Geoffrey Malcolm
Agreed; we find the Hilbert space inner product improves the convergence a 
great deal when doing mesh refinement studies with quasi-Newton methods in a 
discretize-then-optimize approach.

The best example I can think of to argue against “hiding” the inner product 
inside of a  DM instead of a Mat is that it could be used for automatically 
scaling the KKT systems solved in interior point methods (e.g., IPOPT); 
poorly-scaled problems arise sometimes in applications. Admittedly, these inner 
products tend to be diagonal, and thus there may be a better interface or 
abstraction for this functionality.

> On Apr 12, 2018, at 13:28, Stefano Zampini  wrote:
> 
> The gradient norm is the one induced by the mass matrix of the DM associated 
> with the control.
> In principle, TaoGradientNorm() can be replaced by DMCreateMassMatrix() + 
> solve with the mass matrix.
> 
> For PDE constrained optimization, the “gradient norm” is crucial, since we 
> consider optimization problems in Banach spaces.
> We should keep supporting it, maybe differently than as it is now, but keep 
> it.
> 
>> On Apr 12, 2018, at 11:21 PM, Jed Brown  wrote:
>> 
>> Are you thinking about this PR again?
>> 
>> https://bitbucket.org/petsc/petsc/pull-requests/506
>> 
>> There's an issue here that Krylov methods operate in the discrete inner
>> product while some higher level operations are of interest in
>> (approximations of) continuous inner products (or norms).  The object in
>> PETSc that endows continuous attributes (like a hierarchy, subdomains,
>> fields) on discrete quantities is DM, so my first inclination is that
>> any continuous interpretation of vectors, including inner products and
>> norms, belongs in DM.
>> 
>> "Munson, Todd"  writes:
>> 
>>> There is a bit of code in TAO that allows the user to change the norm to 
>>> a matrix norm.  This was introduced to get some mesh independent 
>>> behavior in one example (tao/examples/tutorials/ex3.c).  That 
>>> norm, however, does not propagate down into the KSP methods
>>> and is only used for testing convergence of the nonlinear
>>> problem.
>>> 
>>> A few questions then:  Is similar functionality needed in SNES?  Are 
>>> TAO and SNES even the right place for this functionality?  Should 
>>> it belong to the Vector class so that you can change the inner 
>>> products and have all the KSP methods (hopefully) work 
>>> correctly?
>>> 
>>> Note: that this discussion brings us to the brink of supporting an 
>>> optimize-then-discretize approach.  I am not convinced we should 
>>> go down that rabbit hole.
>>> 
>>> Thanks, Todd.
> 


Re: [petsc-dev] vector inner products

2018-04-12 Thread Stefano Zampini
The gradient norm is the one induced by the mass matrix of the DM associated 
with the control.
In principle, TaoGradientNorm() can be replaced by DMCreateMassMatrix() + solve 
with the mass matrix.

For PDE constrained optimization, the “gradient norm” is crucial, since we 
consider optimization problems in Banach spaces.
We should keep supporting it, maybe differently than as it is now, but keep it.
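In numpy terms, that replacement looks roughly like the sketch below (illustrative only; DMCreateMassMatrix() and KSPSolve() are the real PETSc entry points, everything else here is made up). The gradient is a dual-space object, so its natural norm is sqrt(g^T M^{-1} g); for the load vector of f(x) = 1 on P1 elements this recovers ||f||_L2 = 1 on any mesh, while the plain Euclidean norm of the same vector shrinks like sqrt(h).

```python
import numpy as np

def p1_mass_matrix(n):
    """P1 mass matrix on a uniform mesh of [0, 1] with n elements, n+1 nodes."""
    h = 1.0 / n
    M = np.zeros((n + 1, n + 1))
    elem = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
    for e in range(n):
        M[e:e + 2, e:e + 2] += elem
    return M

def gradient_norm(M, g):
    # sqrt(g^T M^{-1} g): the "solve with the mass matrix" step
    return np.sqrt(g @ np.linalg.solve(M, g))

for n in (10, 100):
    M = p1_mass_matrix(n)
    g = M.sum(axis=1)      # load vector of f(x) = 1: g_i = integral of phi_i
    print(n, gradient_norm(M, g), np.linalg.norm(g))
```

The mass-matrix norm is 1 on both meshes; the Euclidean norm of the same gradient vector changes with n.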

> On Apr 12, 2018, at 11:21 PM, Jed Brown  wrote:
> 
> Are you thinking about this PR again?
> 
> https://bitbucket.org/petsc/petsc/pull-requests/506
> 
> There's an issue here that Krylov methods operate in the discrete inner
> product while some higher level operations are of interest in
> (approximations of) continuous inner products (or norms).  The object in
> PETSc that endows continuous attributes (like a hierarchy, subdomains,
> fields) on discrete quantities is DM, so my first inclination is that
> any continuous interpretation of vectors, including inner products and
> norms, belongs in DM.
> 
> "Munson, Todd"  writes:
> 
>> There is a bit of code in TAO that allows the user to change the norm to 
>> a matrix norm.  This was introduced to get some mesh independent 
>> behavior in one example (tao/examples/tutorials/ex3.c).  That 
>> norm, however, does not propagate down into the KSP methods
>> and is only used for testing convergence of the nonlinear
>> problem.
>> 
>> A few questions then:  Is similar functionality needed in SNES?  Are 
>> TAO and SNES even the right place for this functionality?  Should 
>> it belong to the Vector class so that you can change the inner 
>> products and have all the KSP methods (hopefully) work 
>> correctly?
>> 
>> Note: that this discussion brings us to the brink of supporting an 
>> optimize-then-discretize approach.  I am not convinced we should 
>> go down that rabbit hole.
>> 
>> Thanks, Todd.



Re: [petsc-dev] vector inner products

2018-04-12 Thread Jed Brown
Are you thinking about this PR again?

https://bitbucket.org/petsc/petsc/pull-requests/506

There's an issue here that Krylov methods operate in the discrete inner
product while some higher level operations are of interest in
(approximations of) continuous inner products (or norms).  The object in
PETSc that endows continuous attributes (like a hierarchy, subdomains,
fields) on discrete quantities is DM, so my first inclination is that
any continuous interpretation of vectors, including inner products and
norms, belongs in DM.
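A toy version of the discrete-versus-continuous distinction (a hedged numpy sketch; the functions are chosen for illustration): the raw dot product of nodal coefficient vectors depends on the mesh, while a mass-weighted product approximates the continuous inner product, here the integral of x*sin(pi x) over [0, 1], which is 1/pi.

```python
import numpy as np

# u(x) = sin(pi x), v(x) = x on [0, 1]; the continuous inner product is
#   (u, v)_L2 = \int_0^1 x sin(pi x) dx = 1/pi.
for n in (10, 100, 1000):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    u = np.sin(np.pi * x)
    v = x
    discrete = u @ v            # raw dot product: grows with the mesh size
    continuous = h * (u @ v)    # lumped-mass weighting; u vanishes at the
                                # endpoints, so this is the trapezoid
                                # approximation of the integral
    print(n, discrete, continuous)
```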

"Munson, Todd"  writes:

> There is a bit of code in TAO that allows the user to change the norm to 
> a matrix norm.  This was introduced to get some mesh independent 
> behavior in one example (tao/examples/tutorials/ex3.c).  That 
> norm, however, does not propagate down into the KSP methods
> and is only used for testing convergence of the nonlinear
> problem.
>
> A few questions then:  Is similar functionality needed in SNES?  Are 
> TAO and SNES even the right place for this functionality?  Should 
> it belong to the Vector class so that you can change the inner 
> products and have all the KSP methods (hopefully) work 
> correctly?
>
> Note: that this discussion brings us to the brink of supporting an 
> optimize-then-discretize approach.  I am not convinced we should 
> go down that rabbit hole.
>
> Thanks, Todd.


Re: [petsc-dev] vector inner products

2018-04-12 Thread Munson, Todd

My first choice is to remove the TaoGradientNorm() wrapper if there 
is not anyone using it.  It seems like a hack that would need more 
significant thought to turn it into "not a hack".

Maybe Patrick F. can comment, as he is the one that added it.

FWIW, I think the user can already change the inner products by 
writing their own using a VecShell.  

Todd.

> On Apr 12, 2018, at 2:17 PM, Smith, Barry F.  wrote:
> 
> 
>  I don't know if we are ready for this dramatic change. I wouldn't do it 
> until there was a clear need (not just potential future usage) that would be 
> cumbersome to do without this generality. I am a little scared of this 
> without demonstrated general need.
> 
>  I would start by having an abstract inner product object (which includes a 
> norm function) that has the standard l2 implementation, then another 
> implementation that is based on passing in a matrix, maybe one based on 
> passing in a vector.  Then each solver would have a XXXSetInnerProduct() 
> while defaulting to l2.
> 
>  Barry
> 
> 
>> On Apr 12, 2018, at 12:21 PM, Munson, Todd  wrote:
>> 
>> 
>> There is a bit of code in TAO that allows the user to change the norm to 
>> a matrix norm.  This was introduced to get some mesh independent 
>> behavior in one example (tao/examples/tutorials/ex3.c).  That 
>> norm, however, does not propagate down into the KSP methods
>> and is only used for testing convergence of the nonlinear
>> problem.
>> 
>> A few questions then:  Is similar functionality needed in SNES?  Are 
>> TAO and SNES even the right place for this functionality?  Should 
>> it belong to the Vector class so that you can change the inner 
>> products and have all the KSP methods (hopefully) work 
>> correctly?
>> 
>> Note: that this discussion brings us to the brink of supporting an 
>> optimize-then-discretize approach.  I am not convinced we should 
>> go down that rabbit hole.
>> 
>> Thanks, Todd.
>> 
> 



Re: [petsc-dev] vector inner products

2018-04-12 Thread Smith, Barry F.

  I don't know if we are ready for this dramatic change. I wouldn't do it until 
there was a clear need (not just potential future usage) that would be 
cumbersome to do without this generality. I am a little scared of this without 
demonstrated general need.

  I would start by having an abstract inner product object (which includes a 
norm function) that has the standard l2 implementation, then another 
implementation that is based on passing in a matrix, maybe one based on passing 
in a vector.  Then each solver would have a XXXSetInnerProduct() while 
defaulting to l2.
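Sketched in Python to fix ideas (every class and method name below is invented for illustration, not existing or proposed PETSc API):

```python
import numpy as np

class InnerProduct:
    """Abstract inner product; the default is the standard l2 dot product."""
    def dot(self, u, v):
        return float(np.dot(u, v))
    def norm(self, u):
        return np.sqrt(self.dot(u, u))

class MatrixInnerProduct(InnerProduct):
    """Inner product induced by a symmetric positive definite matrix M."""
    def __init__(self, M):
        self.M = M
    def dot(self, u, v):
        return float(u @ (self.M @ v))

class Solver:
    """Toy solver skeleton following the XXXSetInnerProduct() pattern."""
    def __init__(self):
        self.ip = InnerProduct()      # default to l2
    def set_inner_product(self, ip):
        self.ip = ip
    def converged(self, residual, tol):
        return self.ip.norm(residual) < tol

solver = Solver()
r = np.array([0.3, 0.4])
print(solver.converged(r, 0.6))   # l2 norm of r is 0.5, below the tolerance
solver.set_inner_product(MatrixInnerProduct(4.0 * np.eye(2)))
print(solver.converged(r, 0.6))   # M-norm of r is 1.0, above the tolerance
```

Swapping the inner product changes the termination decision without touching the solver itself, which is the appeal of the abstraction.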

  Barry


> On Apr 12, 2018, at 12:21 PM, Munson, Todd  wrote:
> 
> 
> There is a bit of code in TAO that allows the user to change the norm to 
> a matrix norm.  This was introduced to get some mesh independent 
> behavior in one example (tao/examples/tutorials/ex3.c).  That 
> norm, however, does not propagate down into the KSP methods
> and is only used for testing convergence of the nonlinear
> problem.
> 
> A few questions then:  Is similar functionality needed in SNES?  Are 
> TAO and SNES even the right place for this functionality?  Should 
> it belong to the Vector class so that you can change the inner 
> products and have all the KSP methods (hopefully) work 
> correctly?
> 
> Note: that this discussion brings us to the brink of supporting an 
> optimize-then-discretize approach.  I am not convinced we should 
> go down that rabbit hole.
> 
> Thanks, Todd.
> 



[petsc-dev] vector inner products

2018-04-12 Thread Munson, Todd

There is a bit of code in TAO that allows the user to change the norm to 
a matrix norm.  This was introduced to get some mesh independent 
behavior in one example (tao/examples/tutorials/ex3.c).  That 
norm, however, does not propagate down into the KSP methods
and is only used for testing convergence of the nonlinear
problem.

A few questions then:  Is similar functionality needed in SNES?  Are 
TAO and SNES even the right place for this functionality?  Should 
it belong to the Vector class so that you can change the inner 
products and have all the KSP methods (hopefully) work 
correctly?

Note that this discussion brings us to the brink of supporting an 
optimize-then-discretize approach.  I am not convinced we should 
go down that rabbit hole.

Thanks, Todd.