Re: [petsc-users] Change Amat in FormJacobian

2021-06-14 Thread Dave May
On Mon 14. Jun 2021 at 17:27, Anton Popov  wrote:

>
> On 14.06.21 15:04, Dave May wrote:
>
>
> Hi Anton,
>
> Hi Dave,
>
>
> On Mon, 14 Jun 2021 at 14:47, Anton Popov  wrote:
>
>> Hi Barry & Matt,
>>
>> thanks for your quick response. These options were exactly what I needed
>> and expected:
>>
>> -pc_mg_galerkin pmat
>> -pc_use_amat false
>>
>> I just assumed that it’s a default behavior of the PC object.
>>
>> So to clarify my case, I don't use nonlinear multigrid. Galerkin is
>> expected to deal with Pmat only, and it's enough if Amat implements a
>> matrix-vector product for the Krylov accelerator.
>>
>> Matt, the reason for switching Amat during the iteration is a quite
>> common Picard-Newton combination. Jacobian matrix gives accurate updates
>> close to the solution, but is rather unstable far from the solution. Picard
>> matrix (approximate Jacobian) is quite the opposite – it’s kind of stable,
>> but slow. So the idea is to begin the iteration with Picard matrix, and
>> switch to the Jacobian later.
>>
>> If the assembled matrices are used, then the standard SNES interface is
>> just perfect. I can decide how to fill the matrices. But I don’t bother
>> with  Jacobian assembly and want to use a built-in MFFD approximation
>> instead. I did quite a few tests previously and figured out that MFFD is
>> practically the same as closed-form matrix-free Jacobian for the later
>> stages of the iteration. The Picard matrix still does a good job as a
>> preconditioner. But it is important to start the iteration with Picard and
>> only change to MFFD later.
>>
>> Is my workaround with the shell matrix acceptable, or is there a better
>> solution?
>>
>
> Given what you write, it sounds like you already have a good heuristic for
> when to switch from Picard to Newton; hence I think the simplest option is
> just to use two separate SNES objects
> - one for performing Picard and one for Newton.
>
>
> Yes, I considered this option initially. Sometimes it is necessary to
> switch back and forth between the methods, so it becomes a bit messy to
> organize this in the code.
>
> But maybe if Newton fails after Picard, I should just reduce the time step
> and restart, instead of switching back to Picard. Thanks, Dave.
>
>
Oh yeah, for a nasty multiphysics problem a single SNES is likely not the
end of the story! Definitely there is almost certainly an entire other
outer loop of several nonlinear solver strategies slapped together in some
problem specific manner in an effort to get the monolithic nonlinear
residual down. Aborting the time step and dropping dt is often the thing
you fall back to when all of those fail.

Thanks,
Dave


Thanks,
>
> Anton
>
>
> The stopping condition for the Picard object would encode your heuristic
> to abort earlier when the solution was deemed sufficiently accurate.
>
> Thanks,
> Dave
>
>
>>
>> Thanks,
>> Anton
>> On 13.06.21 20:52, Barry Smith wrote:
>>
>>
>>   Anton,
>>
>>   -pc_mg_galerkin pmat
>>
>>   Though it seems simple, there is some subtlety in swapping out matrices
>> with SNES.
>>
>>   When using multigrid with SNES there are at least five distinct uses of
>> the Jacobian operator.
>>
>>   - Perform matrix-vector product in line search to check Wolfe
>>   sufficient decrease convergence criteria
>>   - Perform the matrix-vector product for the Krylov accelerator of
>>   the system
>>   - Perform smoothing on the finest level of MG
>>   - Perform the matrix-vector product needed on the finest level of
>>   MG to compute the residual that will be restricted to the coarser 
>> level of
>>   multigrid
>>   - When using Galerkin products to compute the coarser grid
>>   operator performing the Galerkin matrix triple product
>>
>>
>> when one swaps out the mat, which of these do they wish to change? The
>> first two seem to naturally go together as do the last three. In your case
>> I presume you want to swap for the first two, but always use pmat for the
>> last three? To achieve this you will also need -pc_use_amat  false
>>
>> If you run with -info and -snes_view it will print out some of the
>> information about which operator it is using for each case, but probably
>> not all of them.
>>
>> Note: if the pmat is actually an accurate computation of the Jacobian
>> then it is likely best not to ever use a matrix-free product. It is only
>> when pmat is approximated in some specific way that using the matrix-free
>> product would be useful to ensure the "Newton" method actually computes a
>> Newton step.
>>
>> Barry
>>
>>
>>
>> On Jun 13, 2021, at 11:21 AM, Matthew Knepley  wrote:
>>
>> On Sun, Jun 13, 2021 at 10:55 AM Anton Popov  wrote:
>>
>>> Hi,
>>>
>>> I want a simple(?) thing. I want to decide and be able to assign the
>>> Jacobian matrix (Amat) on the fly within the FormJacobian function (i.e.
>>> during Newton iteration) to one of the following options:
>>>
>>> 1) built-in MFFD approximation
>>> 2) assembled preconditioner matrix (Pmat)

Re: [petsc-users] Change Amat in FormJacobian

2021-06-14 Thread Anton Popov


On 14.06.21 15:04, Dave May wrote:


Hi Anton,

Hi Dave,


On Mon, 14 Jun 2021 at 14:47, Anton Popov wrote:


Hi Barry & Matt,

thanks for your quick response. These options were exactly what I
needed and expected:

-pc_mg_galerkin pmat
-pc_use_amat false

I just assumed that it’s a default behavior of the PC object.

So to clarify my case, I don't use nonlinear multigrid. Galerkin
is expected to deal with Pmat only, and it's enough if Amat
implements a matrix-vector product for the Krylov accelerator.

Matt, the reason for switching Amat during the iteration is a
quite common Picard-Newton combination. Jacobian matrix gives
accurate updates close to the solution, but is rather unstable far
from the solution. Picard matrix (approximate Jacobian) is quite
the opposite – it’s kind of stable, but slow. So the idea is to
begin the iteration with Picard matrix, and switch to the Jacobian
later.

If the assembled matrices are used, then the standard SNES
interface is just perfect. I can decide how to fill the matrices.
But I don’t bother with  Jacobian assembly and want to use a
built-in MFFD approximation instead. I did quite a few tests
previously and figured out that MFFD is practically the same as
closed-form matrix-free Jacobian for the later stages of the
iteration. The Picard matrix still does a good job as a
preconditioner. But it is important to start the iteration with
Picard and only change to MFFD later.

Is my workaround with the shell matrix acceptable, or is there a
better solution?


Given what you write, it sounds like you already have a good heuristic 
for when to switch from Picard to Newton; hence I think the simplest option
is just to use two separate SNES 
objects - one for performing Picard and one for Newton.



Yes, I considered this option initially. Sometimes it is necessary to 
switch back and forth between the methods, so it becomes a bit messy to 
organize this in the code.


But maybe if Newton fails after Picard, I should just reduce the time 
step and restart, instead of switching back to Picard. Thanks, Dave.


Thanks,

Anton


The stopping condition for the Picard object would encode your 
heuristic to abort earlier when the solution was deemed sufficiently 
accurate.


Thanks,
Dave


Thanks,
Anton

On 13.06.21 20:52, Barry Smith wrote:


  Anton,

  -pc_mg_galerkin pmat

  Though it seems simple, there is some subtlety in swapping out
matrices with SNES.

  When using multigrid with SNES there are at least five distinct
uses of the Jacobian operator.

  * Perform matrix-vector product in line search to check
Wolfe sufficient decrease convergence criteria
  * Perform the matrix-vector product for the Krylov
accelerator of the system
  * Perform smoothing on the finest level of MG
  * Perform the matrix-vector product needed on the finest
level of MG to compute the residual that will be
restricted to the coarser level of multigrid
  * When using Galerkin products to compute the coarser grid
operator performing the Galerkin matrix triple product


when one swaps out the mat, which of these do they wish to
change? The first two seem to naturally go together as do the
last three. In your case I presume you want to swap for the first
two, but always use pmat for the last three? To achieve this you
will also need -pc_use_amat  false

If you run with -info and -snes_view it will print out some of
the information about which operator it is using for each case,
but probably not all of them.

Note: if the pmat is actually an accurate computation of the
Jacobian then it is likely best not to ever use a matrix-free
product. It is only when pmat is approximated in some specific
way that using the matrix-free product would be useful to ensure
the "Newton" method actually computes a Newton step.

Barry




On Jun 13, 2021, at 11:21 AM, Matthew Knepley <knep...@gmail.com> wrote:

On Sun, Jun 13, 2021 at 10:55 AM Anton Popov <po...@uni-mainz.de> wrote:

Hi,

I want a simple(?) thing. I want to decide and be able to
assign the
Jacobian matrix (Amat) on the fly within the FormJacobian
function (i.e.
during Newton iteration) to one of the following options:

1) built-in MFFD approximation
2) assembled preconditioner matrix (Pmat)

I have not found this scenario demonstrated in any example,
therefore
I’m asking how to do that.

Currently I do the following:

1) setup Amat as a shell matrix with a MATOP_MULT operation
that simply
retrieves a matrix object from its context and calls MatMult
on it.

2) if I need MFFD, I put 

Re: [petsc-users] Change Amat in FormJacobian

2021-06-14 Thread Anton Popov


On 14.06.21 15:02, Matthew Knepley wrote:
On Mon, Jun 14, 2021 at 8:45 AM Anton Popov wrote:


Hi Barry & Matt,

thanks for your quick response. These options were exactly what I
needed and expected:

-pc_mg_galerkin pmat
-pc_use_amat false

I just assumed that it’s a default behavior of the PC object.

So to clarify my case, I don't use nonlinear multigrid. Galerkin
is expected to deal with Pmat only, and it's enough if Amat
implements a matrix-vector product for the Krylov accelerator.

Matt, the reason for switching Amat during the iteration is a
quite common Picard-Newton combination. Jacobian matrix gives
accurate updates close to the solution, but is rather unstable far
from the solution. Picard matrix (approximate Jacobian) is quite
the opposite – it’s kind of stable, but slow. So the idea is to
begin the iteration with Picard matrix, and switch to the Jacobian
later.

If the assembled matrices are used, then the standard SNES
interface is just perfect. I can decide how to fill the matrices.
But I don’t bother with  Jacobian assembly and want to use a
built-in MFFD approximation instead. I did quite a few tests
previously and figured out that MFFD is practically the same as
closed-form matrix-free Jacobian for the later stages of the
iteration. The Picard matrix still does a good job as a
preconditioner. But it is important to start the iteration with
Picard and only change to MFFD later.

Is my workaround with the shell matrix acceptable, or is there a
better solution?

It is fine. However, I wonder if you have time to try out a slightly 
different solution. I experimented with this, and it worked as well 
for me as switching between
Picard and Newton. I made a SNESCOMPOSITE with Picard and Newton as 
the two components. I made Newton take a single step, and then 
adjusted the
Picard steps a little. The additiveoptimal variant always converged for 
me, and took the same or less time. I also used left preconditioning 
of Picard by Newton,
but that is more involved, and for my simple problem was not a lot 
better. The advantage I see is that both solves are not plain vanilla, 
and you do not have any

switching logic.

hmmm, this sounds really interesting. I will definitely look into this 
possibility. Thanks Matt.


Thanks,

Anton



  Thanks,

    Matt

Thanks,
Anton

On 13.06.21 20:52, Barry Smith wrote:


  Anton,

  -pc_mg_galerkin pmat

  Though it seems simple, there is some subtlety in swapping out
matrices with SNES.

  When using multigrid with SNES there are at least five distinct
uses of the Jacobian operator.

  * Perform matrix-vector product in line search to check
Wolfe sufficient decrease convergence criteria
  * Perform the matrix-vector product for the Krylov
accelerator of the system
  * Perform smoothing on the finest level of MG
  * Perform the matrix-vector product needed on the finest
level of MG to compute the residual that will be
restricted to the coarser level of multigrid
  * When using Galerkin products to compute the coarser grid
operator performing the Galerkin matrix triple product


when one swaps out the mat, which of these do they wish to
change? The first two seem to naturally go together as do the
last three. In your case I presume you want to swap for the first
two, but always use pmat for the last three? To achieve this you
will also need -pc_use_amat  false

If you run with -info and -snes_view it will print out some of
the information about which operator it is using for each case,
but probably not all of them.

Note: if the pmat is actually an accurate computation of the
Jacobian then it is likely best not to ever use a matrix-free
product. It is only when pmat is approximated in some specific
way that using the matrix-free product would be useful to ensure
the "Newton" method actually computes a Newton step.

Barry




On Jun 13, 2021, at 11:21 AM, Matthew Knepley <knep...@gmail.com> wrote:

On Sun, Jun 13, 2021 at 10:55 AM Anton Popov <po...@uni-mainz.de> wrote:

Hi,

I want a simple(?) thing. I want to decide and be able to
assign the
Jacobian matrix (Amat) on the fly within the FormJacobian
function (i.e.
during Newton iteration) to one of the following options:

1) built-in MFFD approximation
2) assembled preconditioner matrix (Pmat)

I have not found this scenario demonstrated in any example,
therefore
I’m asking how to do that.

Currently I do the following:

1) setup Amat as a shell matrix with a MATOP_MULT operation
that simply
retrieves a matrix object from its context and calls MatMult
 

Re: [petsc-users] Change Amat in FormJacobian

2021-06-14 Thread Dave May
Hi Anton,

On Mon, 14 Jun 2021 at 14:47, Anton Popov  wrote:

> Hi Barry & Matt,
>
> thanks for your quick response. These options were exactly what I needed
> and expected:
>
> -pc_mg_galerkin pmat
> -pc_use_amat false
>
> I just assumed that it’s a default behavior of the PC object.
>
> So to clarify my case, I don't use nonlinear multigrid. Galerkin is
> expected to deal with Pmat only, and it's enough if Amat implements a
> matrix-vector product for the Krylov accelerator.
>
> Matt, the reason for switching Amat during the iteration is a quite common
> Picard-Newton combination. Jacobian matrix gives accurate updates close to
> the solution, but is rather unstable far from the solution. Picard matrix
> (approximate Jacobian) is quite the opposite – it’s kind of stable, but
> slow. So the idea is to begin the iteration with Picard matrix, and switch
> to the Jacobian later.
>
> If the assembled matrices are used, then the standard SNES interface is
> just perfect. I can decide how to fill the matrices. But I don’t bother
> with  Jacobian assembly and want to use a built-in MFFD approximation
> instead. I did quite a few tests previously and figured out that MFFD is
> practically the same as closed-form matrix-free Jacobian for the later
> stages of the iteration. The Picard matrix still does a good job as a
> preconditioner. But it is important to start the iteration with Picard and
> only change to MFFD later.
>
> Is my workaround with the shell matrix acceptable, or is there a better
> solution?
>

Given what you write, it sounds like you already have a good heuristic for
when to switch from Picard to Newton; hence I think the simplest option is
just to use two separate SNES objects
- one for performing Picard and one for Newton.
The stopping condition for the Picard object would encode your heuristic to
abort earlier when the solution was deemed sufficiently accurate.
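
A minimal sketch of the two-SNES idea (a sketch only; the vectors r, x, the matrix P,
the routines FormFunction/FormJacobianPicard and the heuristic switch_to_newton() are
illustrative names assumed to be provided by the application, not something from this
thread):

  SNES snes_picard, snes_newton;
  SNESCreate(PETSC_COMM_WORLD, &snes_picard);
  SNESCreate(PETSC_COMM_WORLD, &snes_newton);
  SNESSetFunction(snes_picard, r, FormFunction, user);
  SNESSetFunction(snes_newton, r, FormFunction, user);
  SNESSetJacobian(snes_picard, P, P, FormJacobianPicard, user); /* assembled Picard matrix */
  SNESSetOptionsPrefix(snes_picard, "picard_");  /* loose tolerances / custom convergence test */
  SNESSetOptionsPrefix(snes_newton, "newton_");  /* e.g. -newton_snes_mf_operator for an MFFD Amat */
  SNESSetFromOptions(snes_picard);
  SNESSetFromOptions(snes_newton);

  SNESSolve(snes_picard, NULL, x);                          /* Picard phase, aborted early */
  if (switch_to_newton(x)) SNESSolve(snes_newton, NULL, x); /* Newton phase from the Picard state */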

Thanks,
Dave


>
> Thanks,
> Anton
> On 13.06.21 20:52, Barry Smith wrote:
>
>
>   Anton,
>
>   -pc_mg_galerkin pmat
>
>   Though it seems simple, there is some subtlety in swapping out matrices
> with SNES.
>
>   When using multigrid with SNES there are at least five distinct uses of
> the Jacobian operator.
>
>   - Perform matrix-vector product in line search to check Wolfe
>   sufficient decrease convergence criteria
>   - Perform the matrix-vector product for the Krylov accelerator of
>   the system
>   - Perform smoothing on the finest level of MG
>   - Perform the matrix-vector product needed on the finest level of
>   MG to compute the residual that will be restricted to the coarser level 
> of
>   multigrid
>   - When using Galerkin products to compute the coarser grid operator
>   performing the Galerkin matrix triple product
>
>
> when one swaps out the mat, which of these do they wish to change? The
> first two seem to naturally go together as do the last three. In your case
> I presume you want to swap for the first two, but always use pmat for the
> last three? To achieve this you will also need -pc_use_amat  false
>
> If you run with -info and -snes_view it will print out some of the
> information about which operator it is using for each case, but probably
> not all of them.
>
> Note: if the pmat is actually an accurate computation of the Jacobian then
> it is likely best not to ever use a matrix-free product. It is only when
> pmat is approximated in some specific way that using the matrix-free
> product would be useful to ensure the "Newton" method actually computes a
> Newton step.
>
> Barry
>
>
>
> On Jun 13, 2021, at 11:21 AM, Matthew Knepley  wrote:
>
> On Sun, Jun 13, 2021 at 10:55 AM Anton Popov  wrote:
>
>> Hi,
>>
>> I want a simple(?) thing. I want to decide and be able to assign the
>> Jacobian matrix (Amat) on the fly within the FormJacobian function (i.e.
>> during Newton iteration) to one of the following options:
>>
>> 1) built-in MFFD approximation
>> 2) assembled preconditioner matrix (Pmat)
>>
>> I have not found this scenario demonstrated in any example, therefore
>> I’m asking how to do that.
>>
>> Currently I do the following:
>>
>> 1) setup Amat as a shell matrix with a MATOP_MULT operation that simply
>> retrieves a matrix object from its context and calls MatMult on it.
>>
>> 2) if I need MFFD, I put a matrix generated with MatCreateSNESMF in the
>> Amat context (of course I also call MatMFFDComputeJacobian before that).
>>
>> 3) if I need Pmat, I simply put Pmat in the Amat context.
>>
>> 4) call MatAssemblyBegin/End on Amat
>>
>> So far so good.
>>
>> However, shell Amat and assembled Pmat generate a problem if Galerkin
>> multigrid is requested as a preconditioner (I just test on 1 CPU):
>>
>> [0]PETSC ERROR: MatPtAP requires A, shell, to be compatible with P,
>> seqaij (Misses composed function MatPtAP_shell_seqaij_C)
>> [0]PETSC ERROR: #1 MatPtAP()
>> [0]PETSC ERROR: #2 MatGalerkin()
>> [0]PETSC ERROR: #3 PCSetUp_MG()
>> [0]PETSC ERROR: #4 PCSetUp()
>> 

Re: [petsc-users] Change Amat in FormJacobian

2021-06-14 Thread Matthew Knepley
On Mon, Jun 14, 2021 at 8:45 AM Anton Popov  wrote:

> Hi Barry & Matt,
>
> thanks for your quick response. These options were exactly what I needed
> and expected:
>
> -pc_mg_galerkin pmat
> -pc_use_amat false
>
> I just assumed that it’s a default behavior of the PC object.
>
> So to clarify my case, I don't use nonlinear multigrid. Galerkin is
> expected to deal with Pmat only, and it's enough if Amat implements a
> matrix-vector product for the Krylov accelerator.
>
> Matt, the reason for switching Amat during the iteration is a quite common
> Picard-Newton combination. Jacobian matrix gives accurate updates close to
> the solution, but is rather unstable far from the solution. Picard matrix
> (approximate Jacobian) is quite the opposite – it’s kind of stable, but
> slow. So the idea is to begin the iteration with Picard matrix, and switch
> to the Jacobian later.
>
> If the assembled matrices are used, then the standard SNES interface is
> just perfect. I can decide how to fill the matrices. But I don’t bother
> with  Jacobian assembly and want to use a built-in MFFD approximation
> instead. I did quite a few tests previously and figured out that MFFD is
> practically the same as closed-form matrix-free Jacobian for the later
> stages of the iteration. The Picard matrix still does a good job as a
> preconditioner. But it is important to start the iteration with Picard and
> only change to MFFD later.
>
> Is my workaround with the shell matrix acceptable, or is there a better
> solution?
>
It is fine. However, I wonder if you have time to try out a slightly
different solution. I experimented with this, and it worked as well for me
as switching between
Picard and Newton. I made a SNESCOMPOSITE with Picard and Newton as the two
components. I made Newton take a single step, and then adjusted the
Picard steps a little. The additiveoptimal variant always converged for me,
and took the same or less time. I also used left preconditioning of Picard
by Newton,
but that is more involved, and for my simple problem was not a lot better.
The advantage I see is that both solves are not plain vanilla, and you do
not have any
switching logic.
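
A hedged sketch of the composite idea described above (assumptions: SNESNRICHARDSON is
used here only as a stand-in for the Picard-like component, and the one-step limit on
the Newton sub-solver is illustrative; the actual choice and tuning of components is
problem specific):

  SNES snes, sub;
  SNESCreate(PETSC_COMM_WORLD, &snes);
  SNESSetType(snes, SNESCOMPOSITE);
  SNESCompositeSetType(snes, SNES_COMPOSITE_ADDITIVEOPTIMAL);
  SNESCompositeAddSNES(snes, SNESNRICHARDSON);  /* Picard-like component */
  SNESCompositeAddSNES(snes, SNESNEWTONLS);     /* Newton component */
  SNESCompositeGetSNES(snes, 1, &sub);          /* limit the Newton component to a single step */
  SNESSetTolerances(sub, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT, 1, PETSC_DEFAULT);
  SNESSetFromOptions(snes);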

  Thanks,

Matt


> Thanks,
> Anton
> On 13.06.21 20:52, Barry Smith wrote:
>
>
>   Anton,
>
>   -pc_mg_galerkin pmat
>
>   Though it seems simple, there is some subtlety in swapping out matrices
> with SNES.
>
>   When using multigrid with SNES there are at least five distinct uses of
> the Jacobian operator.
>
>   - Perform matrix-vector product in line search to check Wolfe
>   sufficient decrease convergence criteria
>   - Perform the matrix-vector product for the Krylov accelerator of
>   the system
>   - Perform smoothing on the finest level of MG
>   - Perform the matrix-vector product needed on the finest level of
>   MG to compute the residual that will be restricted to the coarser level 
> of
>   multigrid
>   - When using Galerkin products to compute the coarser grid operator
>   performing the Galerkin matrix triple product
>
>
> when one swaps out the mat, which of these do they wish to change? The
> first two seem to naturally go together as do the last three. In your case
> I presume you want to swap for the first two, but always use pmat for the
> last three? To achieve this you will also need -pc_use_amat  false
>
> If you run with -info and -snes_view it will print out some of the
> information about which operator it is using for each case, but probably
> not all of them.
>
> Note: if the pmat is actually an accurate computation of the Jacobian then
> it is likely best not to ever use a matrix-free product. It is only when
> pmat is approximated in some specific way that using the matrix-free
> product would be useful to ensure the "Newton" method actually computes a
> Newton step.
>
> Barry
>
>
>
> On Jun 13, 2021, at 11:21 AM, Matthew Knepley  wrote:
>
> On Sun, Jun 13, 2021 at 10:55 AM Anton Popov  wrote:
>
>> Hi,
>>
>> I want a simple(?) thing. I want to decide and be able to assign the
>> Jacobian matrix (Amat) on the fly within the FormJacobian function (i.e.
>> during Newton iteration) to one of the following options:
>>
>> 1) built-in MFFD approximation
>> 2) assembled preconditioner matrix (Pmat)
>>
>> I have not found this scenario demonstrated in any example, therefore
>> I’m asking how to do that.
>>
>> Currently I do the following:
>>
>> 1) setup Amat as a shell matrix with a MATOP_MULT operation that simply
>> retrieves a matrix object from its context and calls MatMult on it.
>>
>> 2) if I need MFFD, I put a matrix generated with MatCreateSNESMF in the
>> Amat context (of course I also call MatMFFDComputeJacobian before that).
>>
>> 3) if I need Pmat, I simply put Pmat in the Amat context.
>>
>> 4) call MatAssemblyBegin/End on Amat
>>
>> So far so good.
>>
>> However, shell Amat and assembled Pmat generate a problem if Galerkin
>> multigrid is requested as a preconditioner (I just test on 1 CPU):
>>
>> 

Re: [petsc-users] Change Amat in FormJacobian

2021-06-14 Thread Anton Popov

Hi Barry & Matt,

thanks for your quick response. These options were exactly what I needed 
and expected:


-pc_mg_galerkin pmat
-pc_use_amat false

I just assumed that it’s a default behavior of the PC object.

So to clarify my case, I don't use nonlinear multigrid. Galerkin is 
expected to deal with Pmat only, and it's enough if Amat implements a 
matrix-vector product for the Krylov accelerator.


Matt, the reason for switching Amat during the iteration is a quite 
common Picard-Newton combination. Jacobian matrix gives accurate updates 
close to the solution, but is rather unstable far from the solution. 
Picard matrix (approximate Jacobian) is quite the opposite – it’s kind 
of stable, but slow. So the idea is to begin the iteration with Picard 
matrix, and switch to the Jacobian later.


If the assembled matrices are used, then the standard SNES interface is 
just perfect. I can decide how to fill the matrices. But I don’t bother 
with  Jacobian assembly and want to use a built-in MFFD approximation 
instead. I did quite a few tests previously and figured out that MFFD is 
practically the same as closed-form matrix-free Jacobian for the later 
stages of the iteration. The Picard matrix still does a good job as a 
preconditioner. But it is important to start the iteration with Picard 
and only change to MFFD later.


Is my workaround with the shell matrix acceptable, or is there a better 
solution?


Thanks,
Anton

On 13.06.21 20:52, Barry Smith wrote:


  Anton,

  -pc_mg_galerkin pmat

  Though it seems simple, there is some subtlety in swapping out 
matrices with SNES.


  When using multigrid with SNES there are at least five distinct uses 
of the Jacobian operator.


  * Perform matrix-vector product in line search to check Wolfe
sufficient decrease convergence criteria
  * Perform the matrix-vector product for the Krylov accelerator
of the system
  * Perform smoothing on the finest level of MG
  * Perform the matrix-vector product needed on the finest level
of MG to compute the residual that will be restricted to the
coarser level of multigrid
  * When using Galerkin products to compute the coarser grid
operator performing the Galerkin matrix triple product


when one swaps out the mat, which of these do they wish to change? The 
first two seem to naturally go together as do the last three. In your 
case I presume you want to swap for the first two, but always use pmat 
for the last three? To achieve this you will also need -pc_use_amat  false


If you run with -info and -snes_view it will print out some of the 
information about which operator it is using for each case, but 
probably not all of them.


Note: if the pmat is actually an accurate computation of the Jacobian 
then it is likely best not to ever use a matrix-free product. It is 
only when pmat is approximated in some specific way that using the 
matrix-free product would be useful to ensure the "Newton" method 
actually computes a Newton step.


Barry



On Jun 13, 2021, at 11:21 AM, Matthew Knepley wrote:


On Sun, Jun 13, 2021 at 10:55 AM Anton Popov wrote:


Hi,

I want a simple(?) thing. I want to decide and be able to assign the
Jacobian matrix (Amat) on the fly within the FormJacobian
function (i.e.
during Newton iteration) to one of the following options:

1) built-in MFFD approximation
2) assembled preconditioner matrix (Pmat)

I have not found this scenario demonstrated in any example,
therefore
I’m asking how to do that.

Currently I do the following:

1) setup Amat as a shell matrix with a MATOP_MULT operation that
simply
retrieves a matrix object from its context and calls MatMult on it.

2) if I need MFFD, I put a matrix generated with MatCreateSNESMF
in the
Amat context (of course I also call MatMFFDComputeJacobian before
that).

3) if I need Pmat, I simply put Pmat in the Amat context.

4) call MatAssemblyBegin/End on Amat

So far so good.

However, shell Amat and assembled Pmat generate a problem if
Galerkin
multigrid is requested as a preconditioner (I just test on 1 CPU):

[0]PETSC ERROR: MatPtAP requires A, shell, to be compatible with P,
seqaij (Misses composed function MatPtAP_shell_seqaij_C)
[0]PETSC ERROR: #1 MatPtAP()
[0]PETSC ERROR: #2 MatGalerkin()
[0]PETSC ERROR: #3 PCSetUp_MG()
[0]PETSC ERROR: #4 PCSetUp()
[0]PETSC ERROR: #5 KSPSetUp()
[0]PETSC ERROR: #6 KSPSolve()
[0]PETSC ERROR: #7 SNESSolve_NEWTONLS()
[0]PETSC ERROR: #8 SNESSolve()

It seems like PETSc tries to apply Galerkin coarsening to the
shell Amat
matrix instead of the assembled Pmat. Why?

Pmat is internally generated by SNES using a DMDA attached to
SNES, so
it has everything to define Galerkin coarsening. And it actually
works
if I just get rid of the shell Amat.

 

Re: [petsc-users] Change Amat in FormJacobian

2021-06-13 Thread Barry Smith

  Anton,

  -pc_mg_galerkin pmat

  Though it seems simple, there is some subtlety in swapping out matrices with 
SNES.

  When using multigrid with SNES there are at least five distinct uses of the 
Jacobian operator. 
Perform matrix-vector product in line search to check Wolfe sufficient decrease 
convergence criteria
Perform the matrix-vector product for the Krylov accelerator of the system
Perform smoothing on the finest level of MG
Perform the matrix-vector product needed on the finest level of MG to compute 
the residual that will be restricted to the coarser level of multigrid
When using Galerkin products to compute the coarser grid operator performing 
the Galerkin matrix triple product

when one swaps out the mat, which of these do they wish to change? The first 
two seem to naturally go together as do the last three. In your case I presume 
you want to swap for the first two, but always use pmat for the last three? To 
achieve this you will also need -pc_use_amat  false
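
For example, a minimal sketch of the combined options (assuming a plain SNES/KSP solve
with a PCMG preconditioner; exact option prefixes depend on how the solver objects are
set up in the application):

  -pc_type mg
  -pc_mg_galerkin pmat     (coarse operators formed from Pmat via the Galerkin triple product)
  -pc_use_amat false       (use Pmat rather than Amat for MG smoothing and fine-level residuals)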

If you run with -info and -snes_view it will print out some of the information 
about which operator it is using for each case, but probably not all of them.

Note: if the pmat is actually an accurate computation of the Jacobian then it 
is likely best not to ever use a matrix-free product. It is only when pmat is 
approximated in some specific way that using the matrix-free product would be 
useful to ensure the "Newton" method actually computes a Newton step.

Barry



> On Jun 13, 2021, at 11:21 AM, Matthew Knepley  wrote:
> 
> On Sun, Jun 13, 2021 at 10:55 AM Anton Popov wrote:
> Hi,
> 
> I want a simple(?) thing. I want to decide and be able to assign the 
> Jacobian matrix (Amat) on the fly within the FormJacobian function (i.e. 
> during Newton iteration) to one of the following options:
> 
> 1) built-in MFFD approximation
> 2) assembled preconditioner matrix (Pmat)
> 
> I have not found this scenario demonstrated in any example, therefore 
> I’m asking how to do that.
> 
> Currently I do the following:
> 
> 1) setup Amat as a shell matrix with a MATOP_MULT operation that simply 
> retrieves a matrix object from its context and calls MatMult on it.
> 
> 2) if I need MFFD, I put a matrix generated with MatCreateSNESMF in the 
> Amat context (of course I also call MatMFFDComputeJacobian before that).
> 
> 3) if I need Pmat, I simply put Pmat in the Amat context.
> 
> 4) call MatAssemblyBegin/End on Amat
> 
> So far so good.
> 
> However, shell Amat and assembled Pmat generate a problem if Galerkin 
> multigrid is requested as a preconditioner (I just test on 1 CPU):
> 
> [0]PETSC ERROR: MatPtAP requires A, shell, to be compatible with P, 
> seqaij (Misses composed function MatPtAP_shell_seqaij_C)
> [0]PETSC ERROR: #1 MatPtAP()
> [0]PETSC ERROR: #2 MatGalerkin()
> [0]PETSC ERROR: #3 PCSetUp_MG()
> [0]PETSC ERROR: #4 PCSetUp()
> [0]PETSC ERROR: #5 KSPSetUp()
> [0]PETSC ERROR: #6 KSPSolve()
> [0]PETSC ERROR: #7 SNESSolve_NEWTONLS()
> [0]PETSC ERROR: #8 SNESSolve()
> 
> It seems like PETSc tries to apply Galerkin coarsening to the shell Amat 
> matrix instead of the assembled Pmat. Why?
> 
> Pmat is internally generated by SNES using a DMDA attached to SNES, so 
> it has everything to define Galerkin coarsening. And it actually works 
> if I just get rid of the shell Amat.
> 
> I can get around this problem by wrapping a PC object in a shell matrix 
> and passing it as Pmat. This is a rather ugly approach, so I wonder how to 
> resolve the situation in a better way, if it is possible.
> 
> Hi Anton,
> 
> You are correct that there is no specific test, so I can believe that a bug 
> might be lurking here.
> I believe the way it is supposed to work is that you set the type of Galerkin 
> coarsening that you
> want
> 
>   
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCMGSetGalerkin.html
>  
> 
> 
> So I am thinking you want 'pmat' only, and you would be in charge of making 
> the coarse MFFD operator
> and telling PCMG about it. I could imagine that you might want us to 
> automatically do that, but we would
> need some way to know that it is MFFD, and with the scheme above we do not 
> have that.
> 
> What is the advantage of switching representations during the Newton 
> iteration?
> 
>   Thanks,
> 
>  Matt
>  
> Thanks.
> Anton
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ 



Re: [petsc-users] Change Amat in FormJacobian

2021-06-13 Thread Matthew Knepley
On Sun, Jun 13, 2021 at 10:55 AM Anton Popov  wrote:

> Hi,
>
> I want a simple(?) thing. I want to decide and be able to assign the
> Jacobian matrix (Amat) on the fly within the FormJacobian function (i.e.
> during Newton iteration) to one of the following options:
>
> 1) built-in MFFD approximation
> 2) assembled preconditioner matrix (Pmat)
>
> I have not found this scenario demonstrated in any example, therefore
> I’m asking how to do that.
>
> Currently I do the following:
>
> 1) setup Amat as a shell matrix with a MATOP_MULT operation that simply
> retrieves a matrix object from its context and calls MatMult on it.
>
> 2) if I need MFFD, I put a matrix generated with MatCreateSNESMF in the
> Amat context (of course I also call MatMFFDComputeJacobian before that).
>
> 3) if I need Pmat, I simply put Pmat in the Amat context.
>
> 4) call MatAssemblyBegin/End on Amat
>
> So far so good.
>
> However, shell Amat and assembled Pmat generate a problem if Galerkin
> multigrid is requested as a preconditioner (I just test on 1 CPU):
>
> [0]PETSC ERROR: MatPtAP requires A, shell, to be compatible with P,
> seqaij (Misses composed function MatPtAP_shell_seqaij_C)
> [0]PETSC ERROR: #1 MatPtAP()
> [0]PETSC ERROR: #2 MatGalerkin()
> [0]PETSC ERROR: #3 PCSetUp_MG()
> [0]PETSC ERROR: #4 PCSetUp()
> [0]PETSC ERROR: #5 KSPSetUp()
> [0]PETSC ERROR: #6 KSPSolve()
> [0]PETSC ERROR: #7 SNESSolve_NEWTONLS()
> [0]PETSC ERROR: #8 SNESSolve()
>
> It seems like PETSc tries to apply Galerkin coarsening to the shell Amat
> matrix instead of the assembled Pmat. Why?
>
> Pmat is internally generated by SNES using a DMDA attached to SNES, so
> it has everything to define Galerkin coarsening. And it actually works
> if I just get rid of the shell Amat.
>
> I can get around this problem by wrapping a PC object in a shell matrix
> and passing it as Pmat. This is a rather ugly approach, so I wonder how to
> resolve the situation in a better way, if it is possible.
>

Hi Anton,

You are correct that there is no specific test, so I can believe that a bug
might be lurking here.
I believe the way it is supposed to work is that you set the type of
Galerkin coarsening that you
want


https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCMGSetGalerkin.html

So I am thinking you want 'pmat' only, and you would be in charge of making
the coarse MFFD operator
and telling PCMG about it. I could imagine that you might want us to
automatically do that, but we would
need some way to know that it is MFFD, and with the scheme above we do not
have that.
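
For reference, a minimal sketch of selecting this in code rather than via
-pc_mg_galerkin pmat (assuming snes is the application's SNES and the
preconditioner is PCMG):

  KSP ksp;
  PC  pc;
  SNESGetKSP(snes, &ksp);
  KSPGetPC(ksp, &pc);
  PCMGSetGalerkin(pc, PC_MG_GALERKIN_PMAT);  /* Galerkin coarse operators built from Pmat only */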

What is the advantage of switching representations during the Newton
iteration?

  Thanks,

 Matt


> Thanks.
> Anton
>
-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


[petsc-users] Change Amat in FormJacobian

2021-06-13 Thread Anton Popov

Hi,

I want a simple(?) thing. I want to decide and be able to assign the 
Jacobian matrix (Amat) on the fly within the FormJacobian function (i.e. 
during Newton iteration) to one of the following options:


1) built-in MFFD approximation
2) assembled preconditioner matrix (Pmat)

I have not found this scenario demonstrated in any example, therefore 
I’m asking how to do that.


Currently I do the following:

1) setup Amat as a shell matrix with a MATOP_MULT operation that simply 
retrieves a matrix object from its context and calls MatMult on it.


2) if I need MFFD, I put a matrix generated with MatCreateSNESMF in the 
Amat context (of course I also call MatMFFDComputeJacobian before that).


3) if I need Pmat, I simply put Pmat in the Amat context.

4) call MatAssemblyBegin/End on Amat

So far so good.
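
In code, the workaround in steps 1)-4) looks roughly like the sketch below (a sketch
only, not the actual application code; the context struct ShellCtx, its fields, and
the heuristic use_mffd() are illustrative names; Amat is assumed to be a MATSHELL whose
MATOP_MULT was set to ShellMult with MatShellSetOperation, and Jmf to have been created
beforehand with MatCreateSNESMF):

  typedef struct { Mat active; Mat Jmf; } ShellCtx;

  static PetscErrorCode ShellMult(Mat A, Vec x, Vec y)
  {
    ShellCtx *ctx;
    MatShellGetContext(A, &ctx);        /* context stored in the shell Amat */
    return MatMult(ctx->active, x, y);  /* forward to whichever matrix is currently active */
  }

  PetscErrorCode FormJacobian(SNES snes, Vec x, Mat Amat, Mat Pmat, void *ptr)
  {
    ShellCtx *ctx = (ShellCtx*)ptr;
    /* ... assemble the Picard matrix into Pmat here ... */
    if (use_mffd(x)) {                                        /* later iterations: built-in MFFD */
      MatMFFDComputeJacobian(snes, x, ctx->Jmf, NULL, NULL);
      ctx->active = ctx->Jmf;
    } else {
      ctx->active = Pmat;                                     /* early iterations: Picard matrix as Amat */
    }
    MatAssemblyBegin(Amat, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(Amat, MAT_FINAL_ASSEMBLY);
    return 0;
  }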

However, shell Amat and assembled Pmat generate a problem if Galerkin 
multigrid is requested as a preconditioner (I just test on 1 CPU):


[0]PETSC ERROR: MatPtAP requires A, shell, to be compatible with P, 
seqaij (Misses composed function MatPtAP_shell_seqaij_C)

[0]PETSC ERROR: #1 MatPtAP()
[0]PETSC ERROR: #2 MatGalerkin()
[0]PETSC ERROR: #3 PCSetUp_MG()
[0]PETSC ERROR: #4 PCSetUp()
[0]PETSC ERROR: #5 KSPSetUp()
[0]PETSC ERROR: #6 KSPSolve()
[0]PETSC ERROR: #7 SNESSolve_NEWTONLS()
[0]PETSC ERROR: #8 SNESSolve()

It seems like PETSc tries to apply Galerkin coarsening to the shell Amat 
matrix instead of the assembled Pmat. Why?


Pmat is internally generated by SNES using a DMDA attached to SNES, so 
it has everything to define Galerkin coarsening. And it actually works 
if I just get rid of the shell Amat.


I can get around this problem by wrapping a PC object in a shell matrix 
and passing it as Pmat. This is a rather ugly approach, so I wonder how to 
resolve the situation in a better way, if it is possible.


Thanks.
Anton