[petsc-users] MatFDColoringSetUp with Periodic BC

2023-02-14 Thread Pierre Bernigaud

Dear all,

I hope this email finds you well.
We are currently working on a solver employing DMDA with SNES. 
The Jacobian is computed via FDColoring, i.e.:


call DMDACreate1d(PETSC_COMM_WORLD, DM_BOUNDARY_GHOSTED, NY, NC, NGc, &
                  PETSC_NULL_INTEGER, dmF, ierr)

! - Some steps ...

call DMCreateColoring(dmF, IS_COLORING_GLOBAL, iscoloring, ierr)
call MatFDColoringCreate(Jac, iscoloring, matfdcoloring, ierr)
call MatFDColoringSetFunction(matfdcoloring, FormFunction, CTX, ierr)
call MatFDColoringSetUp(Jac, iscoloring, matfdcoloring, ierr)
call SNESSetJacobian(snes, Jac, Jac, SNESComputeJacobianDefaultColor, &
                     matfdcoloring, ierr)


Everything is running smoothly.
Recently, we modified the boundary conditions to use periodic BCs:


call DMDACreate1d(PETSC_COMM_WORLD, DM_BOUNDARY_PERIODIC, NY, NC, NGc, &
                  PETSC_NULL_INTEGER, dmF, ierr)


We then encountered frequent crashes when calling MatFDColoringSetUp, 
depending on the number of cells NY. While looking for a solution, I 
found this old thread: 
https://lists.mcs.anl.gov/pipermail/petsc-users/2013-May/017449.html
It appears that with periodic BCs, FDColoring can only be used if the 
number of cells is divisible by 2*NGc+1. Even though this is only a 
slight annoyance, I was wondering whether you are working on this 
matter, or have a quick fix at hand? At any rate, I think it would be 
nice if a warning were added to the FDColoring documentation.
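For concreteness, the constraint from that thread can be expressed as a quick check. This is a plain-Python sketch rather than our Fortran code, and the helper name `periodic_coloring_ok` is made up purely for illustration (it is not a PETSc routine):

```python
def periodic_coloring_ok(NY: int, NGc: int) -> bool:
    """With periodic BCs, the coloring of a 1D stencil touching NGc
    neighbours on each side wraps around the domain, so (per the thread
    above) it stays consistent only when the number of cells NY divides
    evenly into groups of stencil width 2*NGc+1."""
    stencil_width = 2 * NGc + 1
    return NY % stencil_width == 0

# With NGc = 1 (stencil width 3): NY = 9 is fine, NY = 10 is not.
```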


Respectfully,
Pierre Bernigaud

Re: [petsc-users] PETSc / AMReX

2022-06-22 Thread Pierre Bernigaud

Mark,

Thank you for this roadmap. It should be doable to go from a DMDA to a 
DMPlex code.
I wasn't aware of the existence of p4est. From what I've seen, it should 
fulfil our needs.


I will contact you again if we encounter any trouble.

Thanks again,
Pierre

On 2022-06-21 19:57, Mark Adams wrote:


(keep on the list, you will need Matt and Toby soon anyway).

So you want to add AMReX to your code.

I think the first thing that you want to do is move your DMDA code into 
a DMPlex code. You can create a "box" mesh, and it is not hard.
Others like Matt can give advice on how to get started on that 
translation.
From the DMPlex, there is a simple step to create the DMForest 
(p4est/p8est) that Matt mentioned.


Now at this point you can run your current SNES tests and get back to 
where you started, but AMR is easy now.

Or as easy as it gets.

As for AMReX, well, it's not clear what AMReX does for you at this 
point.

You don't seem to have AMReX code that you want to reuse.
If there is some functionality that you need, then we can talk about 
it; or if you have some programmatic reason to use it (e.g., they are 
paying you) then, again, we can talk about it.


PETSc/p4est and AMReX are similar, with different strengths and 
designs; you could use both, but that would complicate things.


Hope that helps,
Mark

On Tue, Jun 21, 2022 at 1:18 PM Bernigaud Pierre wrote:


Hello Mark,

We have a working solver employing SNES, with an attached DMDA to 
handle ghost cells / data sharing between processors for flux 
evaluation (using DMGlobalToLocalBegin / DMGlobalToLocalEnd). We are 
considering adding an AMReX layer to the solver, but no work has been 
done yet, as we are currently evaluating whether it would be feasible 
without too much trouble.


Our main concern is understanding how to correctly interface PETSc 
(SNES+DMDA) with AMReX, as AMReX also appears to have its own methods 
for parallel data management. Hence our inquiry for examples, just to 
get a feel for how it would work out.


Best,

Pierre

On 21/06/2022 at 18:00, Mark Adams wrote:
Hi Bernigaud,

To be clear, you have SNES working with DMDA in AMReX, but without 
adapting dynamically, and you want to know what to do next.


Is that right?

Mark

On Tue, Jun 21, 2022 at 11:46 AM Bernigaud Pierre wrote:

Greetings,


I hope you are doing great.

We are currently working on a parallel solver employing PETSc for the 
main numerical methods (GMRES, Newton-Krylov method). We would be 
interested in combining the PETSc solvers with the AMR framework 
provided by the AMReX library (https://amrex-codes.github.io/amrex/). 
I know that within the AMReX framework the KSP solvers provided by 
PETSc can be used, but what about the SNES solvers? More specifically, 
we are using a DMDA to manage parallel communications during the SNES 
calculations, and I am wondering how it would behave in a context 
where the data layout between processors is modified by the AMR code 
when refining the grid.

Would you have any experience on this matter? Is there any 
collaboration going on between PETSc and AMReX, or would you know of a 
code using both of them?

Respectfully,

Pierre Bernigaud

[petsc-users] Nested SNES in FormFunction

2021-05-21 Thread Pierre Bernigaud
Greetings, 

I am currently working on a CFD solver using PETSc. I have a nonlinear 
system which is solved using 2D_DMDA/SNES, subject to boundary 
conditions that are treated implicitly and updated in the 
FormFunction. The calculation of one of these boundary conditions 
requires the solution of another nonlinear system.

I am hence using a nested 1D_DMDA/SNES system within the FormFunction 
of my main SNES solver to solve for this boundary condition. This is 
working fine, but a scalability study showed that it causes the code 
to exhibit sub-par parallel speedup.

Have you ever encountered this kind of nested SNES application, and 
are there critical points to be aware of in order to avoid a loss of 
performance? For instance, the sub 1D_DMDA/SNES objects are created 
and destroyed at each update of the boundary, hence at each call to 
FormFunction, which results in a large number of object creations and 
destructions. Could this be a problem?
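For concreteness, the alternative we are considering is to keep the sub-solver in the user context so it is built once and reused at every FormFunction call, rather than recreated each time. The sketch below is plain Python, not PETSc Fortran or petsc4py, and all names (`SubSolver`, `UserContext`, the `setups` counter) are stand-ins made up purely to illustrate the reuse pattern:

```python
class SubSolver:
    """Stand-in for the nested 1D_DMDA/SNES pair; __init__ plays the
    role of the expensive setup (DMDACreate1d, SNESCreate, ...)."""
    setups = 0  # counts how many times the costly setup has run

    def __init__(self):
        SubSolver.setups += 1  # expensive object creation happens here

    def solve(self, bc_data):
        return bc_data  # placeholder for the boundary-condition solve

class UserContext:
    """Context handed to FormFunction; owns the sub-solver across calls."""
    def __init__(self):
        self.sub = SubSolver()  # created once, up front

def form_function(ctx, x):
    # Reuse ctx.sub instead of building and destroying a SubSolver here.
    return ctx.sub.solve(x)

ctx = UserContext()
for step in range(100):       # 100 FormFunction evaluations
    form_function(ctx, step)
assert SubSolver.setups == 1  # setup ran once, not 100 times
```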

Furthermore, the sub 1D_DMDA/SNES allows multiple processors to solve 
for the boundary condition, composed of, say, N cells. When running 
the code with M > N processors, everything works great, but I am 
curious about the state of the (M-N) processors which aren't working 
on the boundary condition problem. Do they just stay idle?

Thank you for your help. 
Respectfully, 
Pierre Bernigaud