Re: [petsc-users] Null space and preconditioners

2022-03-22 Thread Marco Cisternino
Thank you, Matt.
I rechecked and the mean value of the solution is 1.0e-15!!
Clumsily, I was checking an integral average, which on an octree mesh is not
exactly the same thing.

Thanks again!

Marco Cisternino


From: Matthew Knepley 
Sent: Tuesday, 22 March 2022 15:22
To: Marco Cisternino 
Cc: Barry Smith ; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Null space and preconditioners

On Tue, Mar 22, 2022 at 9:55 AM Marco Cisternino <marco.cistern...@optimad.it> wrote:
Thank you Barry!
No, no reason for FGMRES (some old tests showed shorter wall-times relative to
GMRES); I'm going to use GMRES.
I tried GMRES with GAMG using PCSVD on the coarsest level on real cases, larger
than the toy case. I cannot get a machine-epsilon mean value of the solution as
in the toy case, but about 1.0e-5, which is much better than before. The rest of
GAMG uses the default configuration. The mesh is much more complicated, being an
octree with immersed geometries. I get the same behaviour using GMRES+ASM+ILU
as well.

This does not seem possible. Setting the null space will remove the mean value. 
Something is not configured correctly here. Can you show -ksp_view
for this solve?

  Thanks,

Matt

I cannot really see why the solution does not have zero mean value. I'm using
rtol~1.0e-8 but I get the same mean value using rtol~1.0e-6; I tried 1.0e-12 too,
but no improvement in the constant, definitely tiny but not zero.
Thanks.

Marco Cisternino

From: Barry Smith <bsm...@petsc.dev>
Sent: Monday, 21 March 2022 19:49
To: Marco Cisternino <marco.cistern...@optimad.it>
Cc: Mark Adams <mfad...@lbl.gov>; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Null space and preconditioners


  Marco,

 I have confirmed your results.

 Urgg, it appears we do not have something well documented. The removal of 
the null space only works for left preconditioned solvers and FGMRES only works 
with right preconditioning. Here is the reasoning.

 The Krylov space for left preconditioning is built from [r, BAr, (BA)^2 r,
...] and the solution is built from this basis. If A has a null space spanned by
n, then the left-preconditioned Krylov methods simply remove n from the "full"
Krylov space after each application of the preconditioner B, and the resulting
"reduced" Krylov space has no component in the direction of n; hence the
solution built by GMRES naturally has no component in n.

 But with right preconditioning the Krylov space is [s, ABs, (AB)^2 s, ...].
We would need to remove B^-1 n from the Krylov space so that (AB) B^-1 n = 0.
In general we don't have any way of applying B^-1 to a vector, so we cannot
create the appropriate "reduced" Krylov space.
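(A compact restatement, as a sketch only; P = I - nn^T is the orthogonal
projector removing a normalized null direction n:

  \mathcal{K}_m = \mathrm{span}\{\, PBr,\ (PBA)PBr,\ \dots,\ (PBA)^{m-1}PBr \,\} \subset n^{\perp}
  \;\Rightarrow\; n^{T} x_m = 0 \quad \forall\, x_m \in \mathcal{K}_m ,

while for right preconditioning the direction to remove would be B^{-1}n,
because (AB)(B^{-1}n) = An = 0, and B^{-1} is not available as an operation.)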

 If I run with GMRES (which defaults to left preconditioning) and the
options ./testPreconditioners -pc_type gamg -ksp_type gmres
-ksp_monitor_true_residual -ksp_rtol 1.e-12 -ksp_view -mg_coarse_pc_type svd

then it handles the null space correctly and the solution has Solution mean
= 4.51028e-17

Is there any reason to use FGMRES instead of GMRES? You just cannot use GMRES
as the smoother inside GAMG if you use GMRES on the outside, but for pressure
equations you don't want to use such a strong smoother anyway.
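(In code, a minimal sketch of the configuration described here; it assumes a
recent PETSc where PetscCall() exists, with ksp already created:

PetscCall(KSPSetType(ksp, KSPGMRES));
PetscCall(KSPSetPCSide(ksp, PC_LEFT)); /* GMRES default; null space removal works on this side */
/* FGMRES forces PC_RIGHT, where the attached null space cannot be removed */

)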

  Barry

  I feel we should add some information to the user's manual on the removal of
the null space when using right preconditioning, and maybe even have an error
check in the code so that people don't fall into this trap. But I am not sure
exactly what to do. When A and B are both symmetric I think special stuff
happens that doesn't require providing a null space; but I am not sure.





On Mar 21, 2022, at 12:41 PM, Marco Cisternino <marco.cistern...@optimad.it> wrote:

Thank you, Mark.
However, doing this with my toy code
mpirun -n 1 ./testpreconditioner -pc_type gamg 
-pc_gamg_use_parallel_coarse_grid_solver -mg_coarse_pc_type jacobi 
-mg_coarse_ksp_type cg

I get 16 inf elements. Am I missing anything?

Thanks again

Marco Cisternino


From: Mark Adams <mfad...@lbl.gov>
Sent: Monday, 21 March 2022 17:31
To: Marco Cisternino <marco.cistern...@optimad.it>
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Null space and preconditioners

And for GAMG you can use:

-pc_gamg_use_parallel_coarse_grid_solver -mg_coarse_pc_type jacobi 
-mg_coarse_ksp_type cg

Note that if you are using more than one MPI process you can use 'lu' instead of
'jacobi'.

If GAMG converges fast enough it can solve before the constant creeps in and 
works without cleaning in the KSP method.

On Mon, Mar 21, 2022 at 12:06 PM Mark Adams <mfad...@lbl.gov> wrote:
The solution for Neumann problems can "float away" if the constant is not
controlled in some way, because floating point errors can introduce it even if
your RHS is exactly orthogonal to it.

Re: [petsc-users] Null space and preconditioners

2022-03-22 Thread Marco Cisternino
Chebyshev; it should be the default, shouldn't it? I checked for it in the toy code.
I'm preparing a launch using -ksp_view to check the real case;
I will prepare a monitor computing the solution average if -ksp_view says
nothing relevant.

Thanks

Marco Cisternino


From: Barry Smith 
Sent: Tuesday, 22 March 2022 15:27
To: Marco Cisternino 
Cc: Mark Adams ; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Null space and preconditioners




On Mar 22, 2022, at 9:55 AM, Marco Cisternino <marco.cistern...@optimad.it> wrote:

Thank you Barry!
No, no reason for FGMRES (some old tests showed shorter wall-times relative to
GMRES); I'm going to use GMRES.
I tried GMRES with GAMG using PCSVD on the coarsest level on real cases, larger
than the toy case. I cannot get a machine-epsilon mean value of the solution as
in the toy case, but about 1.0e-5, which is much better than before. The rest of
GAMG uses the default configuration. The mesh is much more complicated, being an
octree with immersed geometries. I get the same behaviour using GMRES+ASM+ILU
as well.
I cannot really see why the solution does not have zero mean value. I'm using
rtol~1.0e-8 but I get the same mean value using rtol~1.0e-6; I tried 1.0e-12 too,
but no improvement in the constant, definitely tiny but not zero.
Thanks.

   Are you using GMRES in the smoother or Chebyshev? Recommend using Chebyshev.
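   (For reference, a hedged example: the usual GAMG smoother can be forced
explicitly with the standard options

   -mg_levels_ksp_type chebyshev -mg_levels_pc_type jacobi

and -ksp_view reports what is actually used.)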

   I have no explanation why the average is not much closer to zero.

   Here is one thing to try. Use KSPMonitorSet() to provide a monitor that 
calls KSPBuildSolution() and then computes the average and prints it. Is the 
average always around 1e-5 from the first iteration or does it start close to 
1e-12 and get larger with more iterations?
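   (A minimal sketch of such a monitor; MeanMonitor is a hypothetical name, and
PetscCall() assumes a recent PETSc:

static PetscErrorCode MeanMonitor(KSP ksp, PetscInt it, PetscReal rnorm, void *ctx)
{
  Vec         x;
  PetscScalar sum;
  PetscInt    N;

  PetscFunctionBeginUser;
  PetscCall(KSPBuildSolution(ksp, NULL, &x)); /* current solution estimate */
  PetscCall(VecSum(x, &sum));
  PetscCall(VecGetSize(x, &N));
  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "it %d: ||r|| %g, mean %g\n",
                        (int)it, (double)rnorm, (double)PetscRealPart(sum) / N));
  PetscFunctionReturn(PETSC_SUCCESS);
}

registered, e.g. after KSPSetFromOptions(), with
PetscCall(KSPMonitorSet(ksp, MeanMonitor, NULL, NULL));)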






Re: [petsc-users] Null space and preconditioners

2022-03-21 Thread Marco Cisternino

From: Matthew Knepley 
Sent: Monday, 21 March 2022 18:31
To: Marco Cisternino 
Cc: Mark Adams ; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Null space and preconditioners

On Mon, Mar 21, 2022 at 1:25 PM Marco Cisternino <marco.cistern...@optimad.it> wrote:
Thank you, Matt.

  1.  I already set the null space and tested it in the toy code
If you set this and it is not working, something is wrong. This will remove 
null space components from each update in the Krylov space.
This is done in an example where I check convergence to the exact solution: 
https://gitlab.com/petsc/petsc/-/blob/main/src/snes/tutorials/ex69.c

Do I have to set a special null space when I use GAMG?
The toy code works for PCNONE and PCILU, giving a zero-mean solution with the
first PC and an almost zero-mean solution with the second one.
GAMG floats away, quoting Mark.
Looking at what I do:
MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, nullptr, &nullspace);
MatSetNullSpace(matrix, nullspace);
it is more or less what you do at lines 3231-3233 of your reference. Am I wrong?
What about lines 3220-3223? What is the difference between nullSpace and
nullSpacePres?

  2.  I tried your suggestion: the norm and the mean of the solution using
-mg_coarse_pc_type svd with PCGAMG are much closer to those of PCNONE (the
norm is the same up to the 6th digit; the mean is about 1.0e-4 with "svd" PCGAMG
and 1.0e-17 with PCNONE). I'm going to try with the real code and see what
happens on larger meshes.
This is not perfect since null space components can be introduced by the rest 
of the preconditioner, but when I use range-space smoothers and
local interpolation it tends to be much better for me. Maybe it is just my 
problems.

Is there a way to set -mg_coarse_pc_type svd via the API, inside the code?
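(One possible sketch of an answer, not from the thread: the option can be
injected programmatically before the solver reads its options,

PetscCall(PetscOptionsSetValue(NULL, "-mg_coarse_pc_type", "svd"));
PetscCall(KSPSetFromOptions(ksp)); /* GAMG picks this up when it builds the hierarchy */

or, after setup, one could fetch the coarse KSP with PCMGGetCoarseSolve() and
set PCSVD on its PC directly.)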

Thanks,

Marco

  Thanks,

 Matt

Thank you all.

Marco Cisternino


From: Matthew Knepley <knep...@gmail.com>
Sent: Monday, 21 March 2022 18:16
To: Mark Adams <mfad...@lbl.gov>
Cc: Marco Cisternino <marco.cistern...@optimad.it>; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Null space and preconditioners

On Mon, Mar 21, 2022 at 12:06 PM Mark Adams <mfad...@lbl.gov> wrote:
The solution for Neumann problems can "float away" if the constant is not 
controlled in some way because floating point errors can introduce it even if 
your RHS is exactly orthogonal to it.

You should use a special coarse grid solver for GAMG but it seems to be working 
for you.

I have lost track of the simple way to have the KSP solver clean the constant
out, which is what you want.

Can someone help Marco?

I have not had time to look at the code. However, here are two ways we use to 
fix the pure Neumann solution:

1) Attach a null space to the operator using 
https://petsc.org/main/docs/manualpages/Mat/MatSetNullSpace.html

2) Use a coarse grid solver that does least-squares

  -mg_coarse_pc_type svd
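(A sketch of option 1, assuming the operator A of the pure Neumann problem and
a recent PETSc:

MatNullSpace nullspace;
PetscCall(MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE /* the constant vector */, 0, NULL, &nullspace));
PetscCall(MatSetNullSpace(A, nullspace)); /* Krylov updates are projected off the constant */
PetscCall(MatNullSpaceDestroy(&nullspace)); /* the matrix keeps its own reference */

)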

  Thanks,

 Matt

Mark









--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/





[petsc-users] Null space and preconditioners

2022-03-21 Thread Marco Cisternino
Good morning,
I'm observing an unexpected (to me) behaviour of my code.
I tried to reduce the problem to the toy code attached here.
The toy code archive contains a small main, a matrix and a rhs.
The toy code solves the linear system and checks the norms and the mean of the
solution.
The problem in the matrix and the rhs is the finite volume discretization of
the pressure equation of an incompressible NS solver.
It has been cooked as tiny as possible (16 cells!).
It is important to say that it is an elliptic problem with homogeneous Neumann
boundary conditions only; for this reason the toy code sets a null space
containing the constant.

The unexpected (to me) behaviour is evident by launching the code with
different preconditioners, using -pc_type.
I tested using PCNONE ("none"), PCGAMG ("gamg") and PCILU ("ilu"). The default
solver is KSPFGMRES.
Using the three PCs, I get 3 different solutions. It seems to me that they
differ in the mean value, but GAMG is impressive.
PCNONE gives me the zero-mean solution I expected. What about the others?

Asking for the residual monitor, the ratio ||r||/||b|| shows convergence for
PCNONE and PCILU (~10^-16), but it stalls for PCGAMG (~10^-4).
I cannot see why. Am I doing anything wrong, or thinking incorrectly about the
expected behaviour?

Generalizing to larger meshes, the behaviour is similar.

Thank you for any help.

Marco Cisternino




toyCode.tar.gz
Description: toyCode.tar.gz


Re: [petsc-users] Nullspaces

2022-01-19 Thread Marco Cisternino
Thank you, Matthew.
I'm going to pay attention to our non-dimensionalization; avoiding division by
the cell volume helps a lot.

Sorry, Mark, I cannot get your point: which 1D problem are you referring to? 
The case I’m talking about is based on a 3D octree mesh.

Thank you all for your support.

Bests,
Marco Cisternino

From: Mark Adams 
Sent: Wednesday, 19 January 2022 15:16
To: Matthew Knepley 
Cc: Marco Cisternino ; petsc-users 

Subject: Re: [petsc-users] Nullspaces

ILU is LU for this 1D problem, and it's singular.
ILU might have some logic to deal with a singular system, I'm not sure; LU should
fail.
You might try -ksp_type preonly and -pc_type lu.
And -pc_type jacobi should not have any numerical problems. Try that.

On Wed, Jan 19, 2022 at 7:19 AM Matthew Knepley <knep...@gmail.com> wrote:

This sounds like a dimensionalization problem to me. It is best to choose 
length (and other) units that make the matrix entries about 1. It seems
like you are far from this, and it is causing a loss of accuracy (your null 
vector has a residual of about 1e-7).

  Thanks,

  Matt



Re: [petsc-users] Nullspaces

2022-01-19 Thread Marco Cisternino
Thank you Matthew.
But I still cannot get the point. I got the point about the test, but to try to
explain my doubt I'm going to prepare another toy code.

In words…
I usually have a finite volume discretization of the Laplace operator with
homogeneous Neumann BC on an octree mesh, and it reads
Aij * xj = bi,
being the discretization of
Int|Vi(nabla^2 pi dV) = Int|Vi(nabla dot ui dV)
(I omit constants), where Int|Vi(…dV) is a volume integral on the i-th cell, pi is
the cell pressure, ui is the cell velocity.
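(The same relation typeset, constants omitted as in the message, with p the
cell pressure and u the cell velocity:

  \int_{V_i} \nabla^{2} p \, dV = \int_{V_i} \nabla \cdot \mathbf{u} \, dV . )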
The computational domain contains 2 separated sub-domains.
Let's consider 4 cells in the whole domain and 2 cells for each sub-domain.
I would expect a null space made of 2 vectors, and from your patch they should
look like [1/sqrt(2) 1/sqrt(2) 0 0] and [0 0 1/sqrt(2) 1/sqrt(2)], i.e. norm2 =
1 for both.
With MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace)
I'm saying that [1/sqrt(4) 1/sqrt(4) 1/sqrt(4) 1/sqrt(4)] is the null space,
which it is not, but it is in the null space.
But this is not the case I sent you, not exactly.

The case I sent is 1/Vi * Aij * xj = 1/Vi * bi, where Vi is the volume of
cell i.
Let's admit that yj is in the null space of Aij; it should be in the null
space of 1/Vi * Aij, therefore Aij*yj = 0 and 1/Vi * Aij*yj = 0 too.
But in the framework of the test, this is true only with infinite precision.
What happens if norm2(Aij*yj) = 10^-15 and Vi = 10^-5? norm2(1/Vi * Aij * yj) =
10^-10!!! Is yj still in the null space numerically?
Let's say yj is the constant vector over the whole domain, i.e. [1/sqrt(4)
1/sqrt(4) 1/sqrt(4) 1/sqrt(4)]. This should be in the null space of 1/Vi * Aij,
shouldn't it?
An analogous argument holds for the compatibility condition that concerns
bi.
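(Compactly, with D = diag(V_i):

  A y = 0 \;\Rightarrow\; D^{-1} A y = 0 \quad \text{exactly, but in floating point} \quad
  \| D^{-1} A y \|_2 \le \frac{\| A y \|_2}{\min_i V_i},
  \quad \text{e.g. } 10^{-15} / 10^{-5} = 10^{-10} . )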

My current problem is that properly creating the null space for Aij, i.e.
[1/sqrt(2) 1/sqrt(2) 0 0] and [0 0 1/sqrt(2) 1/sqrt(2)], allows me to solve and
find xi, but multiplying by 1/Vi, I cannot get any solution using either
FGMRES+ILU or FGMRES+GAMG.

The tiny problem will load Aij, Vi and bi and show the problem by testing the 
proper null space and trying to solve. I will include the patch to my PETSc 
version. I hope to come back to you very soon.

Thank you very much for your support!


Marco Cisternino

From: Matthew Knepley 
Sent: Tuesday, 18 January 2022 21:25
To: Marco Cisternino 
Cc: petsc-users 
Subject: Re: [petsc-users] Nullspaces

I made a fix for this:

  https://gitlab.com/petsc/petsc/-/merge_requests/4729

  Thanks,

 Matt

On Tue, Jan 18, 2022 at 3:20 PM Matthew Knepley <knep...@gmail.com> wrote:
On Thu, Dec 16, 2021 at 11:09 AM Marco Cisternino <marco.cistern...@optimad.it> wrote:
Hello Matthew,
as promised I prepared a minimal example (112960 rows; I'm not able to produce
anything smaller than this that triggers the issue) of the behavior I was
talking about some days ago.
What I did is to produce matrix, right hand side and initial solution of the 
linear system.

As I told you before, this linear system is the discretization of the pressure 
equation of a predictor-corrector method for NS equations in the framework of 
finite volume method.
This case has homogeneous Neumann boundary conditions. Computational domain has 
two independent and separated sub-domains.
I discretize the weak formulation and I divide every row of the linear system 
by the volume of the relative cell.
The underlying mesh is not uniform, therefore cells have different volumes.
The issue I’m going to explain does not show up if the mesh is uniform, same 
volume for all the cells.

I usually build the null space sub-domain by sub-domain with
MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, nConstants, constants,
&nullspace);
Where nConstants = 2 and constants contains two normalized arrays with constant 
values on degrees of freedom relative to the associated sub-domain and zeros 
elsewhere.

However, as a test I tried the constant over the whole domain using 2 
alternatives that should produce the same null space:

  1.  MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace);
  2.  Vec* nsp;
      VecDuplicateVecs(solution, 1, &nsp);
      VecSet(nsp[0], 1.0);
      VecNormalize(nsp[0], nullptr);
      MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, nsp, &nullspace);


Once I created the null space, I test it using:
MatNullSpaceTest(nullspace, m_A, &isNullSpaceValid);

Case 1 passes the test while case 2 doesn't.

I have a small code for matrix loading, null spaces creation and testing.
Unfortunately I cannot implement a small code able to produce that linear 
system.

As an attachment you can find an archive containing the matrix, the initial
solution (used to manually build the null space) and the rhs (not used in the 
test code) in binary format.
You can also find the testing code in the same archive.
I used petsc 3.12 (gcc+OpenMPI) and petsc 3.15.2 (Intel oneAPI); same results.
If the attachment is not delivered, I can share a link to it.

Marco, please forgive me for taking so long to get to your issue. It has

Re: [petsc-users] Nullspaces

2022-01-12 Thread Marco Cisternino
Hello Barry,
To answer your question, both eigenvectors contain only two distinct values: the
entries differ between the two eigenvectors but are consistent with which
sub-domain each entry belongs to.

However, I was able to get the same behavior from MatNullSpaceTest using the
two ways of creating the null space.

To summarize the issue, I did:

- MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace);
(CASE 1)

- Vec* nsp;
  VecDuplicateVecs(m_rhs, 1, &nsp);
  VecSet(nsp[0], 1.0);
  VecNormalize(nsp[0], nullptr);
  MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, nsp, &nullspace); (CASE 2)

and then I tested with MatNullSpaceTest(nullspace, m_A, &isNullSpaceValid).
CASE 1 gave isNullSpaceValid=true but CASE 2 gave isNullSpaceValid=false.
I found the problem to be the normalization of the Vec nsp[0].

Modifying CASE 2 like this:

Vec* nsp;
VecDuplicateVecs(m_rhs, 1, &nsp);
PetscInt N;
VecGetSize(nsp[0], &N);
VecSet(nsp[0], 1.0 / N);
MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, nsp, &nullspace);

I can get isNullSpaceValid=true even for CASE 2.
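(A possible explanation, assuming MatNullSpaceTest compares \| A y \|_2 against
a fixed tolerance: for any scalar \alpha,

  \| A(\alpha y) \|_2 = |\alpha| \, \| A y \|_2 ,

so the vector with entries 1/N (norm 1/\sqrt{N}) produces a residual \sqrt{N}
times smaller than the normalized vector with entries 1/\sqrt{N}, which can
push the test below its tolerance.)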

But most importantly, I can get isNullSpaceValid=true even when creating the true
2-dimensional null space with explicit normalization (using the number of
non-zero entries sub-domain per sub-domain) and not VecNormalize:

MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, nConstants, constants,
&nullspace);
where nConstants=2 and constants[0] (constants[1]) has been set to 1/N0
(1/N1) in the entries relative to sub-domain 0 (1).
I'm going to check the impact on the CFD solution.
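(A hedged sketch of this construction; inDomain0(i), N0 and N1 are hypothetical
names for mesh-provided data, and PetscCall() assumes a recent PETSc. The
entries are deliberately 1/N0 and 1/N1 rather than the 1/sqrt(N0), 1/sqrt(N1)
of an orthonormal basis, mirroring the message:

Vec          constants[2];
MatNullSpace nullspace;
PetscInt     rStart, rEnd;

PetscCall(MatCreateVecs(m_A, &constants[0], NULL));
PetscCall(VecDuplicate(constants[0], &constants[1]));
PetscCall(VecSet(constants[0], 0.0));
PetscCall(VecSet(constants[1], 0.0));
PetscCall(VecGetOwnershipRange(constants[0], &rStart, &rEnd));
for (PetscInt i = rStart; i < rEnd; i++) {
  if (inDomain0(i)) PetscCall(VecSetValue(constants[0], i, 1.0 / N0, INSERT_VALUES));
  else              PetscCall(VecSetValue(constants[1], i, 1.0 / N1, INSERT_VALUES));
}
for (PetscInt k = 0; k < 2; k++) {
  PetscCall(VecAssemblyBegin(constants[k]));
  PetscCall(VecAssemblyEnd(constants[k]));
}
PetscCall(MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 2, constants, &nullspace));

)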

Any comment on this would be really appreciated.

Thank you all.

Marco Cisternino 

-Original Message-
From: Barry Smith  
Sent: Wednesday, 5 January 2022 23:17
To: Marco Cisternino 
Cc: Jose E. Roman ; petsc-users 
Subject: Re: [petsc-users] Nullspaces


  What do you get for the two eigenvectors?




Re: [petsc-users] Nullspaces

2022-01-05 Thread Marco Cisternino
Hello Jose and Stefano.
Thank you, Jose, for your hints.
I computed the two smallest eigenvalues of my operator and they are tiny but
not zero.
The smallest eigenvalue 0 is (4.71506e-08, 0) with abs error = 3.95575e-07.
The smallest eigenvalue 1 is (1.95628e-07, 0) with abs error = 4.048e-07.
As Stefano remarked, I would have expected much tinier values, closer to zero.
Probably something is wrong in what I do:
EPS eps;
EPSCreate(PETSC_COMM_WORLD, &eps);
EPSSetOperators( eps, matrix, NULL );
EPSSetWhichEigenpairs(eps, EPS_SMALLEST_MAGNITUDE);
EPSSetProblemType( eps, EPS_NHEP );
EPSSetConvergenceTest(eps, EPS_CONV_ABS);
EPSSetTolerances(eps, 1.0e-10, 1000);
EPSSetDimensions(eps, 2, PETSC_DEFAULT, PETSC_DEFAULT);
EPSSetFromOptions( eps );
EPSSolve( eps );

Even commenting " EPSSetTolerances(eps, 1.0e-10, 1000);" and use default 
values, the results are exactly the same.

Am I correctly computing the 2 smallest eigenvalues?

They should be zeros but they are not. Any suggestions on how to understand
why?

In a previous email Mark remarked: "Also you say you divide by the cell volume.
Maybe I am not understanding this but that is basically diagonal scaling and
that will change the null space (ie, not a constant anymore)"; therefore, why
does the null space built with MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE,
0, nullptr, &nullspace); pass the MatNullSpaceTest?

Thank you all!

Marco Cisternino 

-Original Message-
From: Jose E. Roman  
Sent: Tuesday, 4 January 2022 19:30
To: Marco Cisternino 
Cc: Stefano Zampini ; petsc-users 

Subject: Re: [petsc-users] Nullspaces

To compute more than one eigenpair, call 
EPSSetDimensions(eps,nev,PETSC_DEFAULT,PETSC_DEFAULT).

To compute zero eigenvalues you may want to use an absolute convergence 
criterion, with EPSSetConvergenceTest(eps,EPS_CONV_ABS), but then a tolerance 
of 1e-12 is probably too small. You can try without this, anyway.

Jose


> On 4 Jan 2022, at 18:44, Marco Cisternino <marco.cistern...@optimad.it>
> wrote:
> 
> Hello Stefano and thank you for your support.
> I never used SLEPc before but I did this:
> right after the matrix loading from file I added the following lines to my 
> shared tiny code
> MatLoad(matrix, v);
>  
> EPS eps;
> EPSCreate(PETSC_COMM_WORLD, &eps);
> EPSSetOperators( eps, matrix, NULL );
> EPSSetWhichEigenpairs(eps, EPS_SMALLEST_MAGNITUDE);
> EPSSetProblemType( eps, EPS_NHEP );
> EPSSetTolerances(eps, 1.0e-12, 1000);
> EPSSetFromOptions( eps );
> EPSSolve( eps );
>  
> Vec xr, xi; /* eigenvector, x */
> PetscScalar kr, ki; /* eigenvalue, k */
> PetscInt j, nconv;
> PetscReal error;
> EPSGetConverged( eps, &nconv );
> for (j=0; j<nconv; j++) {
> EPSGetEigenpair( eps, j, &kr, &ki, xr, xi );
> EPSComputeError( eps, j, EPS_ERROR_ABSOLUTE, &error );
> std::cout << "The smallest eigenvalue is = (" << kr << ", " << ki <<
> ") with error = " << error << std::endl;
> }
>  
> I launched using
> mpirun -n 1 ./testnullspace -eps_monitor
>  
> and the output is
>  
>   1 EPS nconv=0 first unconverged value (error) -1499.29 (6.57994794e+01)
>   2 EPS nconv=0 first unconverged value (error) -647.468 (5.39939262e+01)
>   3 EPS nconv=0 first unconverged value (error) -177.157 (9.49337698e+01)
>   4 EPS nconv=0 first unconverged value (error) 59.6771 (1.62531943e+02)
>   5 EPS nconv=0 first unconverged value (error) 41.755 (1.41965990e+02)
>   6 EPS nconv=0 first unconverged value (error) -11.5462 (3.60453662e+02)
>   7 EPS nconv=0 first unconverged value (error) -6.04493 (4.60890030e+02)
>   8 EPS nconv=0 first unconverged value (error) -22.7362 (8.67630086e+01)
>   9 EPS nconv=0 first unconverged value (error) -12.9637 
> (1.08507821e+02)
> 10 EPS nconv=0 first unconverged value (error) 7.7234 (1.53561979e+02) 
> …
> 111 EPS nconv=0 first unconverged value (error) -2.27e-08 
> (6.84762319e+00)
> 112 EPS nconv=0 first unconverged value (error) -2.60619e-08 
> (4.45245528e+00)
> 113 EPS nconv=0 first unconverged value (error) -5.49592e-09 
> (1.87798984e+01)
> 114 EPS nconv=0 first unconverged value (error) -9.9456e-09 
> (7.96711076e+00)
> 115 EPS nconv=0 first unconverged value (error) -1.89779e-08 
> (4.15471472e+00)
> 116 EPS nconv=0 first unconverged value (error) -2.05288e-08 
> (2.52953194e+00)
> 117 EPS nconv=0 first unconverged value (error) -2.02919e-08 
> (2.90090711e+00)
> 118 EPS nconv=0 first unconverged value (error) -3.8706e-08 
> (8.03595736e-01)
> 119 EPS nconv=1 first unconverged value (error) -61751.8 (9.58036571e-07)
> Computed 1 pairs
> The smallest eigenvalue is = (-3.8706e-08, 0) with error = 4.9707e-07
>  
> Am I using SLEPc in the right way at least for the first smallest eigenvalue? 
> If I’m on the right way I can find out how to compute the second one.
>  
> Thanks a lot
>  
> Marco Cisternino



Re: [petsc-users] Nullspaces

2022-01-04 Thread Marco Cisternino
Hello Stefano and thank you for your support.
I have never used SLEPc before, but I did this:
right after the matrix loading from file, I added the following lines to my
shared tiny code
MatLoad(matrix, v);

EPS eps;
EPSCreate(PETSC_COMM_WORLD, &eps);
EPSSetOperators( eps, matrix, NULL );
EPSSetWhichEigenpairs(eps, EPS_SMALLEST_MAGNITUDE);
EPSSetProblemType( eps, EPS_NHEP );
EPSSetTolerances(eps, 1.0e-12, 1000);
EPSSetFromOptions( eps );
EPSSolve( eps );

Vec xr, xi; /* eigenvector, x */
PetscScalar kr, ki; /* eigenvalue, k */
PetscInt j, nconv;
PetscReal error;
EPSGetConverged( eps, &nconv );
for (j=0; j<nconv; j++) {
EPSGetEigenpair( eps, j, &kr, &ki, xr, xi );
EPSComputeError( eps, j, EPS_ERROR_ABSOLUTE, &error );
std::cout << "The smallest eigenvalue is = (" << kr << ", " << ki << ") with error = " << error << std::endl;
}

Stefano Zampini had written:
Try running slepc to find the smallest eigenvalues. There should be two zero
eigenvalues, then inspect the eigenvectors.

Re: [petsc-users] Nullspaces

2022-01-04 Thread Marco Cisternino
Hello Mark,
I analyzed the codes with valgrind, both the real code and the tiny one.
I used the memcheck tool with full leak check, compiling the codes with debug
info.
Not considering OpenMPI events (I have no wrappers on the machine I used for
the analysis), the real code gave zero errors and the tiny one gave this:
==17911== 905,536 (1,552 direct, 903,984 indirect) bytes in 1 blocks are 
definitely lost in loss record 33 of 33
==17911==at 0x483E340: memalign (in 
/usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==17911==by 0x49CB672: PetscMallocAlign (in 
/usr/lib/x86_64-linux-gnu/libpetsc_real.so.3.12.4)
==17911==by 0x49CBE1D: PetscMallocA (in 
/usr/lib/x86_64-linux-gnu/libpetsc_real.so.3.12.4)
==17911==by 0x4B26187: VecCreate (in 
/usr/lib/x86_64-linux-gnu/libpetsc_real.so.3.12.4)
==17911==by 0x10940D: main (testNullSpace.cpp:30)

due to the fact that I forgot to destroy the solution Vec (adding
VecDestroy() at the end of the main, the error disappears).
For both codes, I analyzed the two ways of passing the constant to the null
space of the operator: no memory errors, but still the same results from
MatNullSpaceTest, i.e.

MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, nullptr, &nullspace);

passes the test while

Vec* nsp;
VecDuplicateVecs(solution, 1, &nsp);
VecSet(nsp[0], 1.0);
VecNormalize(nsp[0], nullptr);
MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, 1, nsp, &nullspace);
VecDestroyVecs(1, &nsp);
PetscFree(nsp);

does not.
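(For reference, a self-contained sketch of the two constructions being
compared, with A the loaded matrix and solution a matching Vec; hedged,
recent-PETSc style:

MatNullSpace ns1, ns2;
Vec          *nsp;
PetscBool    ok1, ok2;

PetscCall(MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, NULL, &ns1)); /* CASE 1 */

PetscCall(VecDuplicateVecs(solution, 1, &nsp));                             /* CASE 2 */
PetscCall(VecSet(nsp[0], 1.0));
PetscCall(VecNormalize(nsp[0], NULL));
PetscCall(MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, 1, nsp, &ns2));

PetscCall(MatNullSpaceTest(ns1, A, &ok1)); /* reported true in this thread */
PetscCall(MatNullSpaceTest(ns2, A, &ok2)); /* reported false in this thread */

)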

I hope this can satisfy your doubt about the memory behavior, but please do not 
hesitate to ask for more analysis if it cannot.

As Matthew said some weeks ago, something should be wrong in the code, I would
say in the matrix; that's why I provided the matrix and the way I test it.
Unfortunately, it is hard (read: impossible) for me to share the code producing
the matrix. I hope the minimal code I provided is enough to understand
something.

Thank you all.

Marco Cisternino


Re: [petsc-users] Nullspaces

2022-01-03 Thread Marco Cisternino
We usually analyze the code with valgrind when important changes are
implemented.
I have to admit that this analysis is still not automatic and the case we are 
talking about is not a test case for our workload.
The test cases we have give no errors in valgrind analysis.

However, I will analyze both the real code and the tiny one for this case with 
valgrind and report the results.

Thank you,


Marco Cisternino

From: Mark Adams 
Sent: Monday, January 3, 2022 15:50
To: Marco Cisternino 
Cc: Matthew Knepley ; petsc-users 
Subject: Re: [petsc-users] Nullspaces

I have not looked at your code, but as a general observation you want to have 
some sort of memory checker, like valgrind for CPUs, in your workflow.
It is the fastest way to find some classes of bugs.

On Mon, Jan 3, 2022 at 8:47 AM Marco Cisternino <marco.cistern...@optimad.it> wrote:
Are you talking about the code that produces the linear system or about the tiny code that tests the null space?
In the first case, it is absolutely possible, but I would expect no problem in 
the tiny code, do you agree?
It is important to remark that the real code and the tiny one behave in the 
same way when testing the null space of the operator. I can analyze with 
valgrind and I will, but I would not expect great insights.

Thanks,

Marco Cisternino, PhD
marco.cistern...@optimad.it
__
Optimad Engineering Srl
Via Bligny 5, Torino, Italia.
+3901119719782
www.optimad.it

From: Mark Adams <mfad...@lbl.gov>
Sent: Monday, January 3, 2022 14:42
To: Marco Cisternino <marco.cistern...@optimad.it>
Cc: Matthew Knepley <knep...@gmail.com>; petsc-users <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] Nullspaces

There could be a memory bug that does not cause a noticeable problem until it 
hits some vital data and valgrind might find it on a small problem.

However, you might have a bug, like a hardwired buffer size that overflows, that does not show up until you get to this large size; in that case valgrind would need to be run on the large case and would have a good chance of finding it.


On Mon, Jan 3, 2022 at 4:42 AM Marco Cisternino <marco.cistern...@optimad.it> wrote:
My comments are between Mark’s lines and they start with “#”

Marco Cisternino

From: Mark Adams <mfad...@lbl.gov>
Sent: Saturday, December 25, 2021 14:59
To: Marco Cisternino <marco.cistern...@optimad.it>
Cc: Matthew Knepley <knep...@gmail.com>; petsc-users <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] Nullspaces

If  "triggering the issue" requires a substantial mesh, that makes me think 
there is a logic bug somewhere. Maybe use valgrind.

# Are you suggesting using valgrind on this tiny toy code or on the original one? However, considering the purpose of the tiny code, i.e. testing the constant null space, why should there be a logic bug? Case 1 passes and case 2 should be exactly the same, shouldn’t it?

Also you say you divide by the cell volume. Maybe I am not understanding this 
but that is basically diagonal scaling and that will change the null space (ie, 
not a constant anymore)

# I agree on this, but it raises a question: why does case 1 pass the test?
# Thank you, Mark.
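
A note that may help make the diagonal-scaling point precise: in exact arithmetic, dividing each row by its cell volume is left multiplication by a diagonal matrix and does not change the right null space, since

\[ (D^{-1}A)\,\mathbf{1} \;=\; D^{-1}(A\,\mathbf{1}) \;=\; \mathbf{0}, \qquad D = \mathrm{diag}(V_1,\dots,V_N). \]

What it does change is the floating-point residual: the entries of A*1 are divided by the cell volumes, so on a strongly non-uniform octree the scaled residual of the constant vector can land outside the tolerance of a numerical test even though the constant is still a null vector analytically.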

On Thu, Dec 16, 2021 at 11:11 AM Marco Cisternino <marco.cistern...@optimad.it> wrote:
Hello Matthew,
as promised, I prepared a minimal example (112960 rows; I'm not able to produce anything smaller than this that still triggers the issue) of the behavior I was talking about some days ago.
What I did is produce the matrix, right-hand side, and initial solution of the linear system.

As I told you before, this linear system is the discretization of the pressure equation of a predictor-corrector method for NS equations in the framework of the finite volume method.
This case has homogeneous Neumann boundary conditions. The computational domain has two independent, separated sub-domains.
I discretize the weak formulation and I divide every row of the linear system by the volume of the corresponding cell.
The underlying mesh is not uniform, therefore cells have different volumes.
The issue I'm going to explain does not show up if the mesh is uniform, i.e. the same volume for all the cells.

I usually build the null space sub-domain by sub-domain with
MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, nConstants, constants, &nullspace);
where nConstants = 2 and constants contains two normalized vectors with constant values on the degrees of freedom of the associated sub-domain and zeros elsewhere.

However, as a test I tried the constant over the whole domain using 2 
alternatives that should produce the same null space:

  1.  MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace);
  2.  Vec* nsp;

VecDuplicateVecs(solution, 1, &nsp);

VecSet(nsp[0],1.0);

VecNormalize(nsp[0], nullptr);

Re: [petsc-users] Nullspaces

2022-01-03 Thread Marco Cisternino
Are you talking about the code that produces the linear system or about the tiny code that tests the null space?
In the first case, it is absolutely possible, but I would expect no problem in 
the tiny code, do you agree?
It is important to remark that the real code and the tiny one behave in the 
same way when testing the null space of the operator. I can analyze with 
valgrind and I will, but I would not expect great insights.

Thanks,

Marco Cisternino, PhD
marco.cistern...@optimad.it
__
Optimad Engineering Srl
Via Bligny 5, Torino, Italia.
+3901119719782
www.optimad.it

From: Mark Adams 
Sent: Monday, January 3, 2022 14:42
To: Marco Cisternino 
Cc: Matthew Knepley ; petsc-users 
Subject: Re: [petsc-users] Nullspaces

There could be a memory bug that does not cause a noticeable problem until it 
hits some vital data and valgrind might find it on a small problem.

However, you might have a bug, like a hardwired buffer size that overflows, that does not show up until you get to this large size; in that case valgrind would need to be run on the large case and would have a good chance of finding it.


On Mon, Jan 3, 2022 at 4:42 AM Marco Cisternino <marco.cistern...@optimad.it> wrote:
My comments are between Mark’s lines and they start with “#”

Marco Cisternino

From: Mark Adams <mfad...@lbl.gov>
Sent: Saturday, December 25, 2021 14:59
To: Marco Cisternino <marco.cistern...@optimad.it>
Cc: Matthew Knepley <knep...@gmail.com>; petsc-users <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] Nullspaces

If  "triggering the issue" requires a substantial mesh, that makes me think 
there is a logic bug somewhere. Maybe use valgrind.

# Are you suggesting using valgrind on this tiny toy code or on the original one? However, considering the purpose of the tiny code, i.e. testing the constant null space, why should there be a logic bug? Case 1 passes and case 2 should be exactly the same, shouldn’t it?

Also you say you divide by the cell volume. Maybe I am not understanding this 
but that is basically diagonal scaling and that will change the null space (ie, 
not a constant anymore)

# I agree on this, but it raises a question: why does case 1 pass the test?
# Thank you, Mark.

On Thu, Dec 16, 2021 at 11:11 AM Marco Cisternino <marco.cistern...@optimad.it> wrote:
Hello Matthew,
as promised, I prepared a minimal example (112960 rows; I'm not able to produce anything smaller than this that still triggers the issue) of the behavior I was talking about some days ago.
What I did is produce the matrix, right-hand side, and initial solution of the linear system.

As I told you before, this linear system is the discretization of the pressure equation of a predictor-corrector method for NS equations in the framework of the finite volume method.
This case has homogeneous Neumann boundary conditions. The computational domain has two independent, separated sub-domains.
I discretize the weak formulation and I divide every row of the linear system by the volume of the corresponding cell.
The underlying mesh is not uniform, therefore cells have different volumes.
The issue I'm going to explain does not show up if the mesh is uniform, i.e. the same volume for all the cells.

I usually build the null space sub-domain by sub-domain with
MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, nConstants, constants, &nullspace);
where nConstants = 2 and constants contains two normalized vectors with constant values on the degrees of freedom of the associated sub-domain and zeros elsewhere.

However, as a test I tried the constant over the whole domain using 2 
alternatives that should produce the same null space:

  1.  MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace);
  2.  Vec* nsp;

VecDuplicateVecs(solution, 1, &nsp);

VecSet(nsp[0],1.0);

VecNormalize(nsp[0], nullptr);

MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, nsp, &nullspace);


Once I created the null space I test it using:
MatNullSpaceTest(nullspace, m_A, &isNull);

Case 1 passes the test while case 2 doesn't.
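
As a cross-check, what MatNullSpaceTest measures can be reproduced by hand (a sketch only, reusing m_A from the test code in the attached archive; error checking omitted):

Vec c, r;
PetscReal nrm;
MatCreateVecs(m_A, &c, &r);     // c compatible with A's columns, r with its rows
VecSet(c, 1.0);
VecNormalize(c, nullptr);       // the same normalized constant as in case 2
MatMult(m_A, c, r);             // r = A*c, ~0 if c is (numerically) a null vector
VecNorm(r, NORM_2, &nrm);
PetscPrintf(PETSC_COMM_WORLD, "||A*c|| = %g\n", (double)nrm);
VecDestroy(&c);
VecDestroy(&r);

Printing this norm for the uniform and the octree matrices shows directly how far the constant is from the numerical null space in each case.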

I have a small code for matrix loading, null space creation, and testing.
Unfortunately I cannot implement a small code able to produce that linear system.

As an attachment, you can find an archive containing the matrix, the initial solution (used to manually build the null space) and the rhs (not used in the test code) in binary format.
You can also find the testing code in the same archive.
I used PETSc 3.12 (gcc+OpenMPI) and PETSc 3.15.2 (Intel oneAPI) with the same results.
If the attachment is not delivered, I can share a link to it.

Thanks for any help.

Marco Cisternino


Marco Cisternino, PhD
marco.cistern...@optimad.it
__
Optimad Engineering Srl
Via Bligny 5, Torino, Italia.
+3901119719782
www.optimad.it

Re: [petsc-users] Nullspaces

2022-01-03 Thread Marco Cisternino
My comments are between Mark’s lines and they start with “#”

Marco Cisternino

From: Mark Adams 
Sent: Saturday, December 25, 2021 14:59
To: Marco Cisternino 
Cc: Matthew Knepley ; petsc-users 
Subject: Re: [petsc-users] Nullspaces

If  "triggering the issue" requires a substantial mesh, that makes me think 
there is a logic bug somewhere. Maybe use valgrind.

# Are you suggesting using valgrind on this tiny toy code or on the original one? However, considering the purpose of the tiny code, i.e. testing the constant null space, why should there be a logic bug? Case 1 passes and case 2 should be exactly the same, shouldn’t it?

Also you say you divide by the cell volume. Maybe I am not understanding this 
but that is basically diagonal scaling and that will change the null space (ie, 
not a constant anymore)

# I agree on this, but it raises a question: why does case 1 pass the test?
# Thank you, Mark.

On Thu, Dec 16, 2021 at 11:11 AM Marco Cisternino <marco.cistern...@optimad.it> wrote:
Hello Matthew,
as promised, I prepared a minimal example (112960 rows; I'm not able to produce anything smaller than this that still triggers the issue) of the behavior I was talking about some days ago.
What I did is produce the matrix, right-hand side, and initial solution of the linear system.

As I told you before, this linear system is the discretization of the pressure equation of a predictor-corrector method for NS equations in the framework of the finite volume method.
This case has homogeneous Neumann boundary conditions. The computational domain has two independent, separated sub-domains.
I discretize the weak formulation and I divide every row of the linear system by the volume of the corresponding cell.
The underlying mesh is not uniform, therefore cells have different volumes.
The issue I'm going to explain does not show up if the mesh is uniform, i.e. the same volume for all the cells.

I usually build the null space sub-domain by sub-domain with
MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, nConstants, constants, &nullspace);
where nConstants = 2 and constants contains two normalized vectors with constant values on the degrees of freedom of the associated sub-domain and zeros elsewhere.

However, as a test I tried the constant over the whole domain using 2 
alternatives that should produce the same null space:

  1.  MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace);
  2.  Vec* nsp;

VecDuplicateVecs(solution, 1, &nsp);

VecSet(nsp[0],1.0);

VecNormalize(nsp[0], nullptr);

MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, nsp, &nullspace);


Once I created the null space I test it using:
MatNullSpaceTest(nullspace, m_A, &isNull);

Case 1 passes the test while case 2 doesn't.

I have a small code for matrix loading, null space creation, and testing.
Unfortunately I cannot implement a small code able to produce that linear system.

As an attachment, you can find an archive containing the matrix, the initial solution (used to manually build the null space) and the rhs (not used in the test code) in binary format.
You can also find the testing code in the same archive.
I used PETSc 3.12 (gcc+OpenMPI) and PETSc 3.15.2 (Intel oneAPI) with the same results.

Thanks for any help.

Marco Cisternino


Marco Cisternino, PhD
marco.cistern...@optimad.it
__
Optimad Engineering Srl
Via Bligny 5, Torino, Italia.
+3901119719782
www.optimad.it

From: Marco Cisternino <marco.cistern...@optimad.it>
Sent: Tuesday, December 7, 2021 19:36
To: Matthew Knepley <knep...@gmail.com>
Cc: petsc-users <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] Nullspaces

I will, as soon as possible...

Get Outlook for Android<https://aka.ms/AAb9ysg>

From: Matthew Knepley <knep...@gmail.com>
Sent: Tuesday, December 7, 2021 7:25:43 PM
To: Marco Cisternino <marco.cistern...@optimad.it>
Cc: petsc-users <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] Nullspaces

On Tue, Dec 7, 2021 at 11:19 AM Marco Cisternino <marco.cistern...@optimad.it> wrote:

Good morning,

I’m still struggling with the Poisson equation with Neumann BCs.

I discretize the equation by the finite volume method and I divide every row of
the linear system by the volume of the cell. I could avoid this division, but 
I’m trying to understand.

My mesh is not uniform, i.e. cells have different volumes (it is an octree 
mesh).

Moreover, in my computational domain there are 2 separated sub-domains.

I build the null space and then I use MatNullSpaceTest to check it.



If I do this:

MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace);

It works

This produces the normalized constant vector.


If I do this:

Vec nsp;

VecDuplicate(m_rhs, &nsp);

VecSet(nsp,1.0);

VecNormalize(nsp, nullptr);

Re: [petsc-users] Nullspaces

2022-01-03 Thread Marco Cisternino
My comments are between Barry’s lines

Marco Cisternino

From: Barry Smith 
Sent: Friday, December 24, 2021 23:57
To: Marco Cisternino 
Cc: Matthew Knepley ; petsc-users 
Subject: Re: [petsc-users] Nullspaces


   I tried your code but it appears the ./gyroid_solution.txt contains a vector 
of all zeros. Is this intended?

Yes it is, it should be the initial guess and not the solution of the linear 
system. It is read just to build a null space vector of the right size in case 
2.
However, the purpose here is not to solve the linear system but to understand 
why the constant null space is working (it passes the MatNullSpaceTest) in case 
1 and it is not working in case 2. That was what Matthew asked me in his last 
message to this thread.

   Actually VecDuplicateVecs() does not copy the values in the vector so your 
nsp[0] will contain the zero vector anyways.

I know, that’s why there is a VecSet on the nsp Vec. Is it not correct? It should
be a Vec of 1s …

  Would you be able to send the data that indicates what rows of the vector are 
associated with each subdomain? For example a vector with all 1s on the first 
domain and all 2s on the second domain?  I think with this one should be able 
to construct the 2 dimensional null space.

I would, but again the case was meant to test the two ways of building the 
constant null space. The idea is to understand why case 1 passes the 
MatNullSpaceTest and case 2 doesn’t.

  Barry

Thank you, Barry. As soon as the issue Matthew noted is understood, I will come 
back to the 2-dimensional null space.


On Dec 16, 2021, at 11:09 AM, Marco Cisternino <marco.cistern...@optimad.it> wrote:

Hello Matthew,
as promised, I prepared a minimal example (112960 rows; I'm not able to produce anything smaller than this that still triggers the issue) of the behavior I was talking about some days ago.
What I did is produce the matrix, right-hand side, and initial solution of the linear system.

As I told you before, this linear system is the discretization of the pressure equation of a predictor-corrector method for NS equations in the framework of the finite volume method.
This case has homogeneous Neumann boundary conditions. The computational domain has two independent, separated sub-domains.
I discretize the weak formulation and I divide every row of the linear system by the volume of the corresponding cell.
The underlying mesh is not uniform, therefore cells have different volumes.
The issue I'm going to explain does not show up if the mesh is uniform, i.e. the same volume for all the cells.

I usually build the null space sub-domain by sub-domain with
MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, nConstants, constants, &nullspace);
where nConstants = 2 and constants contains two normalized vectors with constant values on the degrees of freedom of the associated sub-domain and zeros elsewhere.

However, as a test I tried the constant over the whole domain using 2 
alternatives that should produce the same null space:

  1.  MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace);
  2.  Vec* nsp;
VecDuplicateVecs(solution, 1, &nsp);
VecSet(nsp[0],1.0);
VecNormalize(nsp[0], nullptr);
MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, nsp, &nullspace);

Once I created the null space I test it using:
MatNullSpaceTest(nullspace, m_A, &isNull);

Case 1 passes the test while case 2 doesn't.

I have a small code for matrix loading, null space creation, and testing.
Unfortunately I cannot implement a small code able to produce that linear system.

As an attachment, you can find an archive containing the matrix, the initial solution (used to manually build the null space) and the rhs (not used in the test code) in binary format.
You can also find the testing code in the same archive.
I used PETSc 3.12 (gcc+OpenMPI) and PETSc 3.15.2 (Intel oneAPI) with the same results.
If the attachment is not delivered, I can share a link to it.

Thanks for any help.

Marco Cisternino


Marco Cisternino, PhD
marco.cistern...@optimad.it
__
Optimad Engineering Srl
Via Bligny 5, Torino, Italia.
+3901119719782
www.optimad.it

From: Marco Cisternino <marco.cistern...@optimad.it>
Sent: Tuesday, December 7, 2021 19:36
To: Matthew Knepley <knep...@gmail.com>
Cc: petsc-users <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] Nullspaces

I will, as soon as possible...

Get Outlook for Android<https://aka.ms/AAb9ysg>

From: Matthew Knepley <knep...@gmail.com>
Sent: Tuesday, December 7, 2021 7:25:43 PM
To: Marco Cisternino <marco.cistern...@optimad.it>
Cc: petsc-users <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] Nullspaces

On Tue, Dec 7, 2021 at 11:19 AM Marco Cisternino <marco.cistern...@optimad.it> wrote:

Good morning,

I’m still struggling with the Poisson equation with Neumann BCs.

I discretize the equation by the finite volume method and I divide every row of the linear system by the volume of the cell.

Re: [petsc-users] Nullspaces

2021-12-07 Thread Marco Cisternino
I will, as soon as possible...

Get Outlook for Android<https://aka.ms/AAb9ysg>

From: Matthew Knepley 
Sent: Tuesday, December 7, 2021 7:25:43 PM
To: Marco Cisternino 
Cc: petsc-users 
Subject: Re: [petsc-users] Nullspaces

On Tue, Dec 7, 2021 at 11:19 AM Marco Cisternino <marco.cistern...@optimad.it> wrote:

Good morning,

I’m still struggling with the Poisson equation with Neumann BCs.

I discretize the equation by the finite volume method and I divide every row of
the linear system by the volume of the cell. I could avoid this division, but 
I’m trying to understand.

My mesh is not uniform, i.e. cells have different volumes (it is an octree 
mesh).

Moreover, in my computational domain there are 2 separated sub-domains.

I build the null space and then I use MatNullSpaceTest to check it.



If I do this:

MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace);

It works

This produces the normalized constant vector.


If I do this:

Vec nsp;

VecDuplicate(m_rhs, &nsp);

VecSet(nsp,1.0);

VecNormalize(nsp, nullptr);

MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, &nsp, &nullspace);

It does not work

This is also the normalized constant vector.

So you are saying that these two vectors give different results with 
MatNullSpaceTest()?
Something must be wrong in the code. Can you send a minimal example of this? I 
will go
through and debug it.

  Thanks,

 Matt


Probably I have wrong expectations, but shouldn't it be the same?



Thanks



Marco Cisternino, PhD
marco.cistern...@optimad.it

__

Optimad Engineering Srl

Via Bligny 5, Torino, Italia.
+3901119719782
www.optimad.it




--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/<http://www.cse.buffalo.edu/~knepley/>


Re: [petsc-users] Nullspaces

2021-12-07 Thread Marco Cisternino
Thanks Barry.
I already did it, I mean a constant Vec per domain with 1 on the rows of that domain and zero on the rows of the other one; specifically, I did this:
Vec *constants = nullptr;
PetscInt nConstants = nActiveDomains;
VecDuplicateVecs(m_solution, nConstants, &constants);
for (PetscInt i = 0; i < nConstants; ++i) {
    VecSet(constants[i], 0.0);
}

std::vector<PetscScalar *> rawConstants(nConstants);
for (PetscInt i = 0; i < nConstants; ++i) {
    VecGetArray(constants[i], &rawConstants[i]);
}

long nRows = getRowCount();
const CBaseLinearSolverMapping *cellRowMapping = getMapping();
for (long cellRow = 0; cellRow < nRows; ++cellRow) {
    std::size_t cellRawId = cellRowMapping->getRowRawCell(cellRow);
    int cellDomain = m_grid->getCellStorage().getDomains().rawAt(cellRawId);
    if (m_grid->isCellDomainActive(cellDomain)) {
        std::size_t activeDomainId = activeDomainIds[cellDomain];
        rawConstants[activeDomainId][cellRow] = 1.;
    }
}

for (PetscInt i = 0; i < nConstants; ++i) {
    VecRestoreArray(constants[i], &rawConstants[i]);
    VecAssemblyBegin(constants[i]);
    VecAssemblyEnd(constants[i]);
    VecNormalize(constants[i], nullptr);
}

MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, nConstants, constants, &nullspace);

But this does not pass the test if I divide every row of the linear system by the cell volume.
It works if I do not perform the division.

Thanks


Marco Cisternino, PhD
marco.cistern...@optimad.it
__
Optimad Engineering Srl
Via Bligny 5, Torino, Italia.
+3901119719782
www.optimad.it

From: Barry Smith 
Sent: Tuesday, December 7, 2021 17:53
To: Marco Cisternino 
Cc: petsc-users 
Subject: Re: [petsc-users] Nullspaces


  A side note: The MatNullSpaceTest tells you that the null space you provided 
is in the null space of the operator, it does not say if you have found the 
entire null space. In your case with two subdomains the null space is actually 
two dimensional; all constant on one domain (0 on the other) and 0 on the first 
domain and all constant on the second. So you need to pass two vectors into the 
MatNullSpaceCreate().  But regardless the test should work for all constant on 
both domains.


On Dec 7, 2021, at 11:19 AM, Marco Cisternino <marco.cistern...@optimad.it> wrote:

Good morning,
I’m still struggling with the Poisson equation with Neumann BCs.
I discretize the equation by the finite volume method and I divide every row of
the linear system by the volume of the cell. I could avoid this division, but 
I’m trying to understand.
My mesh is not uniform, i.e. cells have different volumes (it is an octree 
mesh).
Moreover, in my computational domain there are 2 separated sub-domains.
I build the null space and then I use MatNullSpaceTest to check it.

If I do this:
MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace);
It works

If I do this:
Vec nsp;
VecDuplicate(m_rhs, &nsp);
VecSet(nsp,1.0);
VecNormalize(nsp, nullptr);
MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, &nsp, &nullspace);
It does not work

Probably I have wrong expectations, but shouldn't it be the same?

Thanks

Marco Cisternino, PhD
marco.cistern...@optimad.it
__
Optimad Engineering Srl
Via Bligny 5, Torino, Italia.
+3901119719782
www.optimad.it



Re: [petsc-users] Nullspaces

2021-12-07 Thread Marco Cisternino
I’m sorry, I believed it was clear:
“it works” means MatNullSpaceTest returns true
“it does not work” means MatNullSpaceTest returns false

Is it enough?
Thanks

Marco Cisternino, PhD
marco.cistern...@optimad.it
__
Optimad Engineering Srl
Via Bligny 5, Torino, Italia.
+3901119719782
www.optimad.it

From: Mark Adams 
Sent: Tuesday, December 7, 2021 17:47
To: Marco Cisternino 
Cc: petsc-users 
Subject: Re: [petsc-users] Nullspaces

Can you please give more details on what 'does not work' means.
More detail on how you judge what works would also be useful.
Mark

On Tue, Dec 7, 2021 at 11:19 AM Marco Cisternino <marco.cistern...@optimad.it> wrote:
Good morning,
I’m still struggling with the Poisson equation with Neumann BCs.
I discretize the equation by the finite volume method and I divide every row of
the linear system by the volume of the cell. I could avoid this division, but 
I’m trying to understand.
My mesh is not uniform, i.e. cells have different volumes (it is an octree 
mesh).
Moreover, in my computational domain there are 2 separated sub-domains.
I build the null space and then I use MatNullSpaceTest to check it.

If I do this:
MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace);
It works

If I do this:
Vec nsp;
VecDuplicate(m_rhs, &nsp);
VecSet(nsp,1.0);
VecNormalize(nsp, nullptr);
MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, &nsp, &nullspace);
It does not work

Probably I have wrong expectations, but shouldn't it be the same?

Thanks

Marco Cisternino, PhD
marco.cistern...@optimad.it
__
Optimad Engineering Srl
Via Bligny 5, Torino, Italia.
+3901119719782
www.optimad.it



[petsc-users] Nullspaces

2021-12-07 Thread Marco Cisternino
Good morning,
I'm still struggling with the Poisson equation with Neumann BCs.
I discretize the equation by the finite volume method and I divide every row of
the linear system by the volume of the cell. I could avoid this division, but 
I'm trying to understand.
My mesh is not uniform, i.e. cells have different volumes (it is an octree 
mesh).
Moreover, in my computational domain there are 2 separated sub-domains.
I build the null space and then I use MatNullSpaceTest to check it.

If I do this:
MatNullSpaceCreate(getCommunicator(), PETSC_TRUE, 0, nullptr, &nullspace);
It works

If I do this:
Vec nsp;
VecDuplicate(m_rhs, &nsp);
VecSet(nsp,1.0);
VecNormalize(nsp, nullptr);
MatNullSpaceCreate(getCommunicator(), PETSC_FALSE, 1, &nsp, &nullspace);
It does not work

Probably I have wrong expectations, but shouldn't it be the same?

Thanks

Marco Cisternino, PhD
marco.cistern...@optimad.it
__
Optimad Engineering Srl
Via Bligny 5, Torino, Italia.
+3901119719782
www.optimad.it



Re: [petsc-users] Disconnected domains and Poisson equation

2021-10-06 Thread Marco Cisternino
Hello Barry.
I tried to force the solver to start from an initial guess which is not the 
solution of the problem. For the sake of completeness, the solution has to be a
constant field.
With this initial condition, the solver iterates to a solution which is 
constant in the 2 sub-domains but

  *   the constants do not have the same value
  *   they are not close to zero (minimal-norm solution)
  *   they are not opposite (zero-average solution over the whole domain, like 3 and -3)
After 20 CFD iterations my pressure is 32 in one sub-domain and 2.2 in the other one, and their norms are increasing.
How can I force the solver to give me the minimal-norm solution, or in other words the zero constant?
I can do it myself, anchoring the solution domain by domain by removing its local average, but I was wondering if the solver can do this for me.
In some way, giving a null space made of 2 vectors (1 on the dofs living in one sub-domain and zero elsewhere), I would expect a solution with zero average in each of the 2 sub-domains separately, but probably I'm wrong.
Finally, what is the closure of the problem that defines the value of the constant? The zero-average condition, the minimal-norm condition, or neither?
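
If the 2-vector null space is attached to the operator, one way to enforce the closure explicitly is to project it out of the computed solution (a sketch, assuming the nullspace object and the solution Vec discussed in this thread):

MatNullSpaceRemove(nullspace, solution);   // removes the per-sub-domain constant components

This is the same projection the Krylov methods apply internally when the null space is provided, which is what yields the minimal-norm solution.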

Thanks!

Bests,

Marco Cisternino, PhD
marco.cistern...@optimad.it
__
Optimad Engineering Srl
Via Bligny 5, Torino, Italia.
+3901119719782
www.optimad.it

From: Barry Smith 
Sent: Friday, October 1, 2021 16:56
To: Marco Cisternino 
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Disconnected domains and Poisson equation




On Oct 1, 2021, at 6:38 AM, Marco Cisternino <marco.cistern...@optimad.it> wrote:

Thank you Barry.
I added a custom atol = 1.0e-12 and this makes the CFD stable with all the linear solver types. The CFD solution is good and the pressure is a good “zero” field at every CFD iteration.
I did the same test using ASM+ILU+FGMRES (BCGS and GMRES) and the behaviour is the same.
During some CFD iterations the residual of the linear system starts slightly higher than atol and the linear solver makes a few iterations (2-3) before it stops because of atol.
The pressure is still different in the 2 sub-domains (order 1.0e-14 because of those few linear solver iterations), therefore no symmetry of the solution in the 2 sub-domains.
I think it is a matter of round-off, do you agree? Or do I need to treat this difference as a symptom of something wrong?

  Yes, if the differences in the two solutions are order 1.e-14 that is very 
good, one cannot expect them to be identical.


Thank you for your support.

Marco Cisternino

From: Barry Smith <bsm...@petsc.dev>
Sent: Thursday, September 30, 2021 16:39
To: Marco Cisternino <marco.cistern...@optimad.it>
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Disconnected domains and Poisson equation


   It looks like the initial solution (guess) is, up to round-off, already the solution of the linear system: 9.010260489109e-14

0 KSP unpreconditioned resid norm 9.010260489109e-14 true resid norm 
9.010260489109e-14 ||r(i)||/||b|| 2.021559024868e+00
  0 KSP Residual norm 9.010260489109e-14 % max 1.e+00 min 
1.e+00 max/min 1.e+00
  1 KSP unpreconditioned resid norm 4.918108339808e-15 true resid norm 
4.918171792537e-15 ||r(i)||/||b|| 1.103450292594e-01
  1 KSP Residual norm 4.918108339808e-15 % max 9.566256813737e-01 min 
9.566256813737e-01 max/min 1.e+00
  2 KSP unpreconditioned resid norm 1.443599554690e-15 true resid norm 
1.444867143493e-15 ||r(i)||/||b|| 3.241731154382e-02
  2 KSP Residual norm 1.443599554690e-15 % max 9.614019380614e-01 min 
7.360950481750e-01 max/min 1.306083963538e+00

Thus the Krylov solver will not be able to improve the solution; it gets stuck trying to improve it but cannot because of round-off.

In other words, the algorithm has converged (even at the initial solution, i.e. the guess) and should stop immediately.

You can use -ksp_atol 1.e-12 to get it to stop immediately without iterating if 
the initial residual is less than 1e-12.
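
The same tolerance can also be set in code (a sketch; ksp is the KSP object, the other tolerances are left at their defaults):

KSPSetTolerances(ksp, PETSC_DEFAULT, 1.e-12, PETSC_DEFAULT, PETSC_DEFAULT);   // rtol, atol, dtol, maxits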

Barry





On Sep 30, 2021, at 4:16 AM, Marco Cisternino <marco.cistern...@optimad.it> wrote:

Hello Barry.
This is the output of ksp_view using fgmres and gamg. It has to be said that the solution of the linear system should be a zero-valued field. As you can see, both the unpreconditioned residual and r/b converge at this iteration of the CFD solver. During the time integration of the CFD, I can observe the pressure linear solver residuals behaving in a different way: the unpreconditioned residual still converges but r/b stalls. After the output of ksp_view I add the output of ksp_monitor_true_residual for one of these iterations where r/b stalls.
Thanks,

KSP Object: 1 MPI processes
  type: fgmres
restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement

Re: [petsc-users] Disconnected domains and Poisson equation

2021-10-01 Thread Marco Cisternino
Thank you Barry.
I added a custom atol = 1.0e-12 and this makes the CFD stable with all the linear solver types. The CFD solution is good and the pressure is a good “zero” field at every CFD iteration.
I did the same test using ASM+ILU+FGMRES (BCGS and GMRES) and the behaviour is the same.
During some CFD iterations the residual of the linear system starts slightly higher than atol and the linear solver makes a few iterations (2-3) before it stops because of atol.
The pressure is still different in the 2 sub-domains (order 1.0e-14 because of those few linear solver iterations), therefore no symmetry of the solution in the 2 sub-domains.
I think it is a matter of round-off, do you agree? Or do I need to treat this difference as a symptom of something wrong?

Thank you for your support.

Marco Cisternino

From: Barry Smith 
Sent: Thursday, September 30, 2021 16:39
To: Marco Cisternino 
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Disconnected domains and Poisson equation


   It looks like the initial solution (guess) is, up to round-off, already the solution of the linear system: 9.010260489109e-14

0 KSP unpreconditioned resid norm 9.010260489109e-14 true resid norm 
9.010260489109e-14 ||r(i)||/||b|| 2.021559024868e+00
  0 KSP Residual norm 9.010260489109e-14 % max 1.e+00 min 
1.e+00 max/min 1.e+00
  1 KSP unpreconditioned resid norm 4.918108339808e-15 true resid norm 
4.918171792537e-15 ||r(i)||/||b|| 1.103450292594e-01
  1 KSP Residual norm 4.918108339808e-15 % max 9.566256813737e-01 min 
9.566256813737e-01 max/min 1.e+00
  2 KSP unpreconditioned resid norm 1.443599554690e-15 true resid norm 
1.444867143493e-15 ||r(i)||/||b|| 3.241731154382e-02
  2 KSP Residual norm 1.443599554690e-15 % max 9.614019380614e-01 min 
7.360950481750e-01 max/min 1.306083963538e+00

Thus the Krylov solver will not be able to improve the solution; it gets stuck trying to improve it but cannot because of round-off.

In other words, the algorithm has converged (even at the initial solution, i.e. the guess) and should stop immediately.

You can use -ksp_atol 1.e-12 to get it to stop immediately without iterating if 
the initial residual is less than 1e-12.

Barry




On Sep 30, 2021, at 4:16 AM, Marco Cisternino <marco.cistern...@optimad.it> wrote:

Hello Barry.
This is the output of ksp_view using fgmres and gamg. It has to be said that the solution of the linear system should be a zero-valued field. As you can see, both the unpreconditioned residual and r/b converge at this iteration of the CFD solver. During the time integration of the CFD, I can observe the pressure linear solver residuals behaving in a different way: the unpreconditioned residual still converges but r/b stalls. After the output of ksp_view I add the output of ksp_monitor_true_residual for one of these iterations where r/b stalls.
Thanks,

KSP Object: 1 MPI processes
  type: fgmres
restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization 
with no iterative refinement
happy breakdown tolerance 1e-30
  maximum iterations=100, nonzero initial guess
  tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
  right preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
  type: gamg
type is MULTIPLICATIVE, levels=4 cycles=v
  Cycles per PCApply=1
  Using externally compute Galerkin coarse grid matrices
  GAMG specific options
Threshold for dropping small values in graph on each level =   0.02   
0.02
Threshold scaling factor for each level not specified = 1.
AGG specific options
  Symmetric graph true
  Number of levels to square graph 1
  Number smoothing steps 0
  Coarse grid solver -- level ---
KSP Object: (mg_coarse_) 1 MPI processes
  type: preonly
  maximum iterations=1, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
  left preconditioning
  using NONE norm type for convergence test
PC Object: (mg_coarse_) 1 MPI processes
  type: bjacobi
number of blocks = 1
Local solve is same for all blocks, in the following KSP and PC objects:
KSP Object: (mg_coarse_sub_) 1 MPI processes
  type: preonly
  maximum iterations=1, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
  left preconditioning
  using DEFAULT norm type for convergence test
PC Object: (mg_coarse_sub_) 1 MPI processes
  type: lu
  PC has not been set up so information may be incomplete
out-of-place factorization
tolerance for zero pivot 2.22045e-14
using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
matrix ordering: nd
  linear system matrix = precond matrix:
  Mat Object: 1 MPI processes

Re: [petsc-users] Disconnected domains and Poisson equation

2021-09-30 Thread Marco Cisternino
 
4.918171792537e-15 ||r(i)||/||b|| 1.103450292594e-01
  1 KSP Residual norm 4.918108339808e-15 % max 9.566256813737e-01 min 
9.566256813737e-01 max/min 1.e+00
  2 KSP unpreconditioned resid norm 1.443599554690e-15 true resid norm 
1.444867143493e-15 ||r(i)||/||b|| 3.241731154382e-02
  2 KSP Residual norm 1.443599554690e-15 % max 9.614019380614e-01 min 
7.360950481750e-01 max/min 1.306083963538e+00
  3 KSP unpreconditioned resid norm 6.623206616803e-16 true resid norm 
6.654132553541e-16 ||r(i)||/||b|| 1.492933720678e-02
  3 KSP Residual norm 6.623206616803e-16 % max 9.764112945239e-01 min 
4.911485418014e-01 max/min 1.988016274960e+00
  4 KSP unpreconditioned resid norm 6.551896936698e-16 true resid norm 
6.646157296305e-16 ||r(i)||/||b|| 1.491144376933e-02
  4 KSP Residual norm 6.551896936698e-16 % max 9.883425885532e-01 min 
1.461270778833e-01 max/min 6.763582786091e+00
  5 KSP unpreconditioned resid norm 6.97644887e-16 true resid norm 
1.720560536914e-15 ||r(i)||/||b|| 3.860282047823e-02
  5 KSP Residual norm 6.97644887e-16 % max 1.000409371755e+00 min 
4.989767363560e-03 max/min 2.004921870829e+02
  6 KSP unpreconditioned resid norm 6.496945794974e-17 true resid norm 
2.031914800253e-14 ||r(i)||/||b|| 4.558842341106e-01
  6 KSP Residual norm 6.496945794974e-17 % max 1.004914985753e+00 min 
1.459258738706e-03 max/min 6.886475709192e+02
  7 KSP unpreconditioned resid norm 1.965237342540e-17 true resid norm 
1.684522207337e-14 ||r(i)||/||b|| 3.779425772373e-01
  7 KSP Residual norm 1.965237342540e-17 % max 1.005737762541e+00 min 
1.452603803766e-03 max/min 6.923689446035e+02
  8 KSP unpreconditioned resid norm 1.627718951285e-17 true resid norm 
1.958642967520e-14 ||r(i)||/||b|| 4.394448276241e-01
  8 KSP Residual norm 1.627718951285e-17 % max 1.006364278765e+00 min 
1.452081813014e-03 max/min 6.930492963590e+02
  9 KSP unpreconditioned resid norm 1.616577677764e-17 true resid norm 
2.019110946644e-14 ||r(i)||/||b|| 4.530115373837e-01
  9 KSP Residual norm 1.616577677764e-17 % max 1.006648747131e+00 min 
1.452031376577e-03 max/min 6.932692801059e+02
10 KSP unpreconditioned resid norm 1.285788988203e-17 true resid norm 
2.065082694477e-14 ||r(i)||/||b|| 4.633258453698e-01
10 KSP Residual norm 1.285788988203e-17 % max 1.007469033514e+00 min 
1.433291867068e-03 max/min 7.029057072477e+02
11 KSP unpreconditioned resid norm 5.490854431580e-19 true resid norm 
1.798071628891e-14 ||r(i)||/||b|| 4.034187394623e-01
11 KSP Residual norm 5.490854431580e-19 % max 1.008058905554e+00 min 
1.369401685301e-03 max/min 7.361309076612e+02
12 KSP unpreconditioned resid norm 1.371754802104e-20 true resid norm 
1.965688920064e-14 ||r(i)||/||b|| 4.410256708163e-01
12 KSP Residual norm 1.371754802104e-20 % max 1.008409402214e+00 min 
1.369243011779e-03 max/min 7.364721919624e+02
Linear solve converged due to CONVERGED_RTOL iterations 12



Marco Cisternino

From: Barry Smith 
Sent: Wednesday, September 29, 2021 18:34
To: Marco Cisternino 
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Disconnected domains and Poisson equation




On Sep 29, 2021, at 11:59 AM, Marco Cisternino <marco.cistern...@optimad.it> wrote:

For the sake of completeness, explicitly building the null space using a vector per sub-domain makes the CFD runs using BCGS and GMRES more stable, but still slower than FGMRES.

  Something is strange. Please run with -ksp_view and send the output on the 
solver details.


I had divergence using BCGS and GMRES setting the null space with only one 
constant.
Thanks

Marco Cisternino

From: Marco Cisternino
Sent: Wednesday, September 29, 2021 17:54
To: Barry Smith <bsm...@petsc.dev>
Cc: petsc-users@mcs.anl.gov
Subject: RE: [petsc-users] Disconnected domains and Poisson equation

Thank you Barry for the quick reply.
About the null space: I already tried what you suggest, building 2 Vec 
(constants) with 0 and 1 chosen by sub-domain, normalizing them and setting the 
null space like this
MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, nconstants, constants, &nullspace);
The solution is slightly different in values but it is still different in the 
two sub-domains.
About the solver: I tried BCGS, GMRES and FGMRES. The linear system is a pressure system in a Navier-Stokes solver and only solving with FGMRES makes the CFD stable; with BCGS and GMRES the CFD solution diverges. Moreover, in the same case but with a single domain, the CFD solution is stable using all the solvers, but FGMRES converges in far fewer iterations than the others.

Marco Cisternino

From: Barry Smith <bsm...@petsc.dev>
Sent: Wednesday, September 29, 2021 15:59
To: Marco Cisternino <marco.cistern...@optimad.it>
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Disconnected domains and Poisson equation


  The problem actually has a two dimensional null space; constant on each domain but possibly different constants.

Re: [petsc-users] Disconnected domains and Poisson equation

2021-09-29 Thread Marco Cisternino
For the sake of completeness, explicitly building the null space using a vector per sub-domain makes the CFD runs using BCGS and GMRES more stable, but still slower than FGMRES.
I had divergence using BCGS and GMRES setting the null space with only one 
constant.
Thanks

Marco Cisternino

From: Marco Cisternino
Sent: Wednesday, September 29, 2021 17:54
To: Barry Smith 
Cc: petsc-users@mcs.anl.gov
Subject: RE: [petsc-users] Disconnected domains and Poisson equation

Thank you Barry for the quick reply.
About the null space: I already tried what you suggest, building 2 Vec 
(constants) with 0 and 1 chosen by sub-domain, normalizing them and setting the 
null space like this
MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, nconstants, constants, &nullspace);
The solution is slightly different in values but it is still different in the 
two sub-domains.
About the solver: I tried BCGS, GMRES and FGMRES. The linear system is a pressure system in a Navier-Stokes solver and only solving with FGMRES makes the CFD stable; with BCGS and GMRES the CFD solution diverges. Moreover, in the same case but with a single domain, the CFD solution is stable using all the solvers, but FGMRES converges in far fewer iterations than the others.

Marco Cisternino

From: Barry Smith <bsm...@petsc.dev>
Sent: Wednesday, September 29, 2021 15:59
To: Marco Cisternino <marco.cistern...@optimad.it>
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Disconnected domains and Poisson equation


  The problem actually has a two dimensional null space; constant on each 
domain but possibly different constants. I think you need to build the 
MatNullSpace by explicitly constructing two vectors, one with 0 on one domain 
and constant value on the other and one with 0 on the other domain and constant 
on the first.

   Separate note: why use FGMRES instead of just GMRES? If the problem is 
linear and the preconditioner is linear (no GMRES inside the smoother) then you 
can just use GMRES and it will save a little space/work and be conceptually 
clearer.

  Barry

On Sep 29, 2021, at 8:46 AM, Marco Cisternino <marco.cistern...@optimad.it> wrote:

Good morning,
I want to solve the Poisson equation on a 3D domain with 2 non-connected 
sub-domains.
I am using FGMRES+GAMG and I have no problem if the two sub-domains see a 
Dirichlet boundary condition each.
On the same domain I would like to solve the Poisson equation imposing periodic boundary conditions in one direction and homogeneous Neumann boundary conditions in the other two directions. The two sub-domains are symmetric with respect to
the separation between them and the operator discretization and the right hand 
side are symmetric as well. It would be nice to have the same solution in both 
the sub-domains.
Setting the null space to the constant, the solver converges to a solution 
having the same gradients in both sub-domains but different values.
Am I doing something wrong with the null space? I’m not setting a block matrix (one
block for each sub-domain), should I?
I tested the null space against the matrix using MatNullSpaceTest and the 
answer is true. Can I do something more to have a symmetric solution as outcome 
of the solver?
Thank you in advance for any comments and hints.

Best regards,

Marco Cisternino



Re: [petsc-users] Disconnected domains and Poisson equation

2021-09-29 Thread Marco Cisternino
Thank you Barry for the quick reply.
About the null space: I already tried what you suggest, building 2 Vec 
(constants) with 0 and 1 chosen by sub-domain, normalizing them and setting the 
null space like this
MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, nconstants, constants, &nullspace);
The solution is slightly different in values but it is still different in the 
two sub-domains.
About the solver: I tried BCGS, GMRES and FGMRES. The linear system is a pressure system in a Navier-Stokes solver and only solving with FGMRES makes the CFD stable; with BCGS and GMRES the CFD solution diverges. Moreover, in the same case but with a single domain, the CFD solution is stable using all the solvers, but FGMRES converges in far fewer iterations than the others.

Marco Cisternino

From: Barry Smith 
Sent: Wednesday, September 29, 2021 15:59
To: Marco Cisternino 
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Disconnected domains and Poisson equation


  The problem actually has a two dimensional null space; constant on each 
domain but possibly different constants. I think you need to build the 
MatNullSpace by explicitly constructing two vectors, one with 0 on one domain 
and constant value on the other and one with 0 on the other domain and constant 
on the first.

   Separate note: why use FGMRES instead of just GMRES? If the problem is 
linear and the preconditioner is linear (no GMRES inside the smoother) then you 
can just use GMRES and it will save a little space/work and be conceptually 
clearer.

  Barry


On Sep 29, 2021, at 8:46 AM, Marco Cisternino <marco.cistern...@optimad.it> wrote:

Good morning,
I want to solve the Poisson equation on a 3D domain with 2 non-connected 
sub-domains.
I am using FGMRES+GAMG and I have no problem if the two sub-domains see a 
Dirichlet boundary condition each.
On the same domain I would like to solve the Poisson equation imposing periodic boundary conditions in one direction and homogeneous Neumann boundary conditions in the other two directions. The two sub-domains are symmetric with respect to
the separation between them and the operator discretization and the right hand 
side are symmetric as well. It would be nice to have the same solution in both 
the sub-domains.
Setting the null space to the constant, the solver converges to a solution 
having the same gradients in both sub-domains but different values.
Am I doing something wrong with the null space? I’m not setting a block matrix (one
block for each sub-domain), should I?
I tested the null space against the matrix using MatNullSpaceTest and the 
answer is true. Can I do something more to have a symmetric solution as outcome 
of the solver?
Thank you in advance for any comments and hints.

Best regards,

Marco Cisternino



[petsc-users] FGMRES and BCGS

2021-09-29 Thread Marco Cisternino
Good Morning,
I usually solve a non-symmetric discretization of the Poisson equation using 
GAMG+FGMRES.
In the last days I tried to use BCGS in place of FGMRES, still using GAMG as 
preconditioner.
No problem in finding the solution but I'm experiencing something I didn't 
expect.
The test case is a 25 millions cells domain with Dirichlet and Neumann boundary 
conditions.
Both the solvers are able to solve the problem with an increasing number of MPI 
processes, but:

  *   FGMRES is about 25% faster than BCGS for all process counts
  *   Both solvers have the same scalability from 48 to 384 processes
  *   Both solvers use almost the same amount of memory (FGMRES uses restart=30)
Am I wrong to expect less memory consumption and more performance from BCGS with respect to FGMRES?
Thank you in advance for any help.

Best regards,
Marco Cisternino




[petsc-users] Disconnected domains and Poisson equation

2021-09-29 Thread Marco Cisternino
Good morning,
I want to solve the Poisson equation on a 3D domain with 2 non-connected 
sub-domains.
I am using FGMRES+GAMG and I have no problem if the two sub-domains see a 
Dirichlet boundary condition each.
On the same domain I would like to solve the Poisson equation imposing periodic boundary conditions in one direction and homogeneous Neumann boundary conditions in the other two directions. The two sub-domains are symmetric with respect to
the separation between them and the operator discretization and the right hand 
side are symmetric as well. It would be nice to have the same solution in both 
the sub-domains.
Setting the null space to the constant, the solver converges to a solution 
having the same gradients in both sub-domains but different values.
Am I doing something wrong with the null space? I'm not setting a block matrix (one
block for each sub-domain), should I?
I tested the null space against the matrix using MatNullSpaceTest and the 
answer is true. Can I do something more to have a symmetric solution as outcome 
of the solver?
Thank you in advance for any comments and hints.

Best regards,

Marco Cisternino



[petsc-users] R: AMG Variable block preconditioner

2019-10-28 Thread Marco Cisternino via petsc-users
Hi Stefano,
thanks for your interest.
Mainly, elliptic operators both in the context of methods in NS solvers and in 
the context of mesh deformation.
I didn't think about AMG on each block. What I was thinking about is an AMG preconditioner aware of the variable-size blocks.
I hope it is clearer than before.
Thanks,


Marco Cisternino, PhD
marco.cistern...@optimad.it
___
OPTIMAD Engineering srl
Via Giacinto Collegno 18, Torino, Italia.
+3901119719782
www.optimad.it



From: Stefano Zampini 
Sent: Monday, October 28, 2019 17:09
To: Marco Cisternino
Cc: petsc-users
Subject: Re: [petsc-users] AMG Variable block preconditioner

Marco,

you should define what you mean by "AMG block preconditioning"; separate AMG on each block? What operator do you plan to precondition?
AFAIK, MatSetVariableBlockSizes is just exploited by the Jacobi solver.

Thanks
Stefano

On Mon, Oct 28, 2019 at 7:05 PM Marco Cisternino via petsc-users <petsc-users@mcs.anl.gov> wrote:
Hello everybody.
We would like to set up an algebraic multi-grid block preconditioner and I would
like to be able to define custom blocks of variable sizes. Reading the 
documentation, it seems that this can be achieved using the "PCGAMG" 
preconditioner and defining the blocks with the function 
"MatSetVariableBlockSizes". Is this correct?
Are the variable blocks exploited by PCGAMG algorithms? We suppose this PC 
structure is already exploited in PCVPBJACOBI.
Are the different options like PCHYPRE (setting "-pc_hypre_type" to 
"boomeramg") suitable for AMG preconditioner with variable size blocks?
If none of the above solutions are available, could anyone suggest any other 
strategy in order to get a preconditioner like this using PETSc?
Any help is much appreciated.
Thanks.

Best regards,

Marco Cisternino


--
Stefano


[petsc-users] AMG Variable block preconditioner

2019-10-28 Thread Marco Cisternino via petsc-users
Hello everybody.
We would like to set up an algebraic multi-grid block preconditioner and I would
like to be able to define custom blocks of variable sizes. Reading the 
documentation, it seems that this can be achieved using the "PCGAMG" 
preconditioner and defining the blocks with the function 
"MatSetVariableBlockSizes". Is this correct?
Are the variable blocks exploited by PCGAMG algorithms? We suppose this PC 
structure is already exploited in PCVPBJACOBI.
Are the different options like PCHYPRE (setting "-pc_hypre_type" to 
"boomeramg") suitable for AMG preconditioner with variable size blocks?
If none of the above solutions are available, could anyone suggest any other 
strategy in order to get a preconditioner like this using PETSc?
Any help is much appreciated.
Thanks.

Best regards,

Marco Cisternino

[petsc-users] R: Multiple linear solver defined at command line

2019-09-24 Thread Marco Cisternino via petsc-users
Thank you, Lawrence.
Cool! That's perfect!


Bests,
Marco Cisternino



From: Lawrence Mitchell 
Sent: Tuesday, September 24, 2019 13:59
To: Marco Cisternino
Cc: petsc-users
Subject: Re: [petsc-users] Multiple linear solver defined at command line

Dear Marco,

> On 24 Sep 2019, at 12:06, Marco Cisternino via petsc-users 
>  wrote:
>
> Good morning,
> in my code I need to solve 2 linear systems. I would like to use different 
> solvers for the 2 systems and most of all I would like to choose the single 
> solver by flags from command line, is it possible?
> I can call PetscInitialize/PetscFinalize multiple times passing 
> PetscInitialize different argc and argv. What happens if I call the second 
> PetscInitialize before the first PetscFinalize with different argc and argv?

The way you should do this is by giving your two different solvers two 
different options prefixes:

Assuming they are KSP objects call:

KSPSetOptionsPrefix(ksp1, "solver1_");
KSPSetOptionsPrefix(ksp2, "solver2_");

Now you can configure ksp1 with:

-solver1_ksp_type ... -solver1_pc_type ...

And ksp2 with:

-solver2_ksp_type ... -solver2_pc_type ...

In general, all PETSc objects can be given such an options prefix so that they 
may be controlled separately.
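
A minimal sketch of that pattern (error checking omitted):

KSP ksp1, ksp2;
KSPCreate(PETSC_COMM_WORLD, &ksp1);
KSPCreate(PETSC_COMM_WORLD, &ksp2);
KSPSetOptionsPrefix(ksp1, "solver1_");
KSPSetOptionsPrefix(ksp2, "solver2_");
KSPSetFromOptions(ksp1);   // picks up -solver1_ksp_type, -solver1_pc_type, ...
KSPSetFromOptions(ksp2);   // picks up -solver2_ksp_type, -solver2_pc_type, ...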

Thanks,

Lawrence



[petsc-users] Multiple linear solver defined at command line

2019-09-24 Thread Marco Cisternino via petsc-users
Good morning,
in my code I need to solve 2 linear systems. I would like to use different 
solvers for the 2 systems and most of all I would like to choose the single 
solver by flags from command line, is it possible?
I can call PetscInitialize/PetscFinalize multiple times passing PetscInitialize 
different argc and argv. What happens if I call the second PetscInitialize
before the first PetscFinalize with different argc and argv? 
Thanks.

Bests,
Marco Cisternino

Re: [petsc-users] Elliptic operator with Neumann conditions

2018-02-07 Thread Marco Cisternino
Barry, thanks a lot!
Exactly what I wanted to understand and clearly explained.
Again thank you very much.

Marco

Get Outlook for Android<https://aka.ms/ghei36>


From: Smith, Barry F. <bsm...@mcs.anl.gov>
Sent: Wednesday, February 7, 2018 8:24:22 PM
To: Marco Cisternino
Cc: petsc-users
Subject: Re: [petsc-users] Elliptic operator with Neumann conditions


   A square matrix with a null space results in an underdetermined system, that is, a system with more than one solution. The solutions can be written as x + alpha_1 v_1 + ... + alpha_n v_n, where the v_i form an orthonormal basis for the null space and x is orthogonal to the null space.

   When you provide the null space, KSP Krylov methods find the norm-minimizing solution (x), that is, the x with the smallest norm that satisfies the system. This is exactly the same as saying you take any solution of the system and remove all the components in the directions of the null space.

   If you do not provide the null space then the Krylov method may find you a 
solution that is not the norm-minimizing one, so that solution has a 
component of the null space within it. Which component of the null space the 
solution contains depends on what you use for an initial guess and right-hand side.

   When you have a preconditioner, things can get trickier, because the 
preconditioner can introduce (unless you remove them) components in the 
direction of the null space. These components can get amplified with each 
iteration of the Krylov method, so it looks like the Krylov method is not 
converging, since the norm of the solution is getting larger and larger (these 
growing components are in the null space). This is why one should always 
provide the null space when solving singular systems.
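A minimal sketch of providing the constant null space (assuming an assembled 
singular matrix A; MatSetNullSpace is the current interface, older codes 
attached the null space to the KSP instead):

    MatNullSpace nsp;
    MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, NULL, &nsp);
    MatSetNullSpace(A, nsp);     // the KSP will now remove the constant part
    MatNullSpaceDestroy(&nsp);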

  Barry


> On Feb 7, 2018, at 11:43 AM, Marco Cisternino <marco.cistern...@optimad.it> 
> wrote:
>
> Hi everybody,
> I would like to ask what solution is computed if I try to solve the linear 
> system relative to the problem in the subject without creating the null space.
> I tried with and without the call to
> MatNullSpaceCreate(m_communicator, PETSC_TRUE, 0, NULL, &nullspace);
> and I get a zero-mean solution with it, and the same solution plus a constant 
> without.
> How does PETSc work in the second case?
> Does it check the matrix for singularity? And is it able to create the null 
> space with the constant automatically?
> Thanks.
>
>
> Marco Cisternino, PhD
> marco.cistern...@optimad.it
> ___
> OPTIMAD Engineering srl
> Via Giacinto Collegno 18, Torino, Italia.
> +3901119719782
> www.optimad.it<http://www.optimad.it>
>



Re: [petsc-users] Elliptic operator with Neumann conditions

2018-02-07 Thread Marco Cisternino
I'm sorry, Matt, but I cannot understand what flexible GMRES computes when no 
null space is provided.
Could you give me some hints, please? Even in very simple cases...
Thanks


Marco Cisternino, PhD
marco.cistern...@optimad.it
___
OPTIMAD Engineering srl
Via Giacinto Collegno 18, Torino, Italia.
+3901119719782
www.optimad.it



From: Matthew Knepley <knep...@gmail.com>
Sent: Wednesday, 7 February 2018 19:38:08
To: Marco Cisternino
Cc: petsc-users
Subject: Re: [petsc-users] Elliptic operator with Neumann conditions

On Thu, Feb 8, 2018 at 5:29 AM, Marco Cisternino 
<marco.cistern...@optimad.it<mailto:marco.cistern...@optimad.it>> wrote:
Thanks Matt,
I'm using KSPFGMRES and I'm sorry, but what do you mean by the initial residual?
I also force a non-zero initial guess.

If your initial residual has a component in the null space of the operator, it 
is likely to stay there.

   Matt

Thanks again


Marco Cisternino, PhD
marco.cistern...@optimad.it<mailto:marco.cistern...@optimad.it>
___
OPTIMAD Engineering srl
Via Giacinto Collegno 18, Torino, Italia.
+3901119719782<tel:%2B3901119719782>
www.optimad.it<http://www.optimad.it>



From: Matthew Knepley <knep...@gmail.com<mailto:knep...@gmail.com>>
Sent: Wednesday, 7 February 2018 18:57:56
To: Marco Cisternino
Cc: petsc-users
Subject: Re: [petsc-users] Elliptic operator with Neumann conditions

On Thu, Feb 8, 2018 at 4:43 AM, Marco Cisternino 
<marco.cistern...@optimad.it<mailto:marco.cistern...@optimad.it>> wrote:
Hi everybody,
I would like to ask what solution is computed if I try to solve the linear 
system relative to the problem in the subject without creating the null space.
I tried with and without the call to
MatNullSpaceCreate(m_communicator, PETSC_TRUE, 0, NULL, &nullspace);
and I get a zero-mean solution with it, and the same solution plus a constant 
without.
How does PETSc work in the second case?

It depends on the Krylov method you use and the initial residual. We do not do 
anything special.

  Thanks,

 Matt

Does it check the matrix for singularity? And is it able to create the null 
space with the constant automatically?
Thanks.


Marco Cisternino, PhD
marco.cistern...@optimad.it<mailto:marco.cistern...@optimad.it>
___
OPTIMAD Engineering srl
Via Giacinto Collegno 18, Torino, Italia.
+3901119719782<tel:%2B3901119719782>
www.optimad.it<http://www.optimad.it>




--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/<http://www.caam.rice.edu/~mk51/>



--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/<http://www.caam.rice.edu/~mk51/>


Re: [petsc-users] Elliptic operator with Neumann conditions

2018-02-07 Thread Marco Cisternino
Thanks Matt,
I'm using KSPFGMRES and I'm sorry, but what do you mean by the initial residual?
I also force a non-zero initial guess.
Thanks again


Marco Cisternino, PhD
marco.cistern...@optimad.it
___
OPTIMAD Engineering srl
Via Giacinto Collegno 18, Torino, Italia.
+3901119719782
www.optimad.it



From: Matthew Knepley <knep...@gmail.com>
Sent: Wednesday, 7 February 2018 18:57:56
To: Marco Cisternino
Cc: petsc-users
Subject: Re: [petsc-users] Elliptic operator with Neumann conditions

On Thu, Feb 8, 2018 at 4:43 AM, Marco Cisternino 
<marco.cistern...@optimad.it<mailto:marco.cistern...@optimad.it>> wrote:
Hi everybody,
I would like to ask what solution is computed if I try to solve the linear 
system relative to the problem in the subject without creating the null space.
I tried with and without the call to
MatNullSpaceCreate(m_communicator, PETSC_TRUE, 0, NULL, &nullspace);
and I get a zero-mean solution with it, and the same solution plus a constant 
without.
How does PETSc work in the second case?

It depends on the Krylov method you use and the initial residual. We do not do 
anything special.

  Thanks,

 Matt

Does it check the matrix for singularity? And is it able to create the null 
space with the constant automatically?
Thanks.


Marco Cisternino, PhD
marco.cistern...@optimad.it<mailto:marco.cistern...@optimad.it>
___
OPTIMAD Engineering srl
Via Giacinto Collegno 18, Torino, Italia.
+3901119719782<tel:%2B3901119719782>
www.optimad.it<http://www.optimad.it>




--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/<http://www.caam.rice.edu/~mk51/>


[petsc-users] Elliptic operator with Neumann conditions

2018-02-07 Thread Marco Cisternino
Hi everybody,
I would like to ask what solution is computed if I try to solve the linear 
system relative to the problem in the subject without creating the null space.
I tried with and without the call to
MatNullSpaceCreate(m_communicator, PETSC_TRUE, 0, NULL, &nullspace);
and I get a zero-mean solution with it, and the same solution plus a constant 
without.
How does PETSc work in the second case?
Does it check the matrix for singularity? And is it able to create the null 
space with the constant automatically?
Thanks.


Marco Cisternino, PhD
marco.cistern...@optimad.it
___
OPTIMAD Engineering srl
Via Giacinto Collegno 18, Torino, Italia.
+3901119719782
www.optimad.it



[petsc-users] customized MatMult

2015-01-19 Thread Marco Cisternino
Good morning, 
I'm writing my own MatMult function, but I don't know how to treat the ghost 
elements of the result vec. Let's say myMatMult(A,x,y). 

First question: if I use my MatMult in a KSP to solve a linear system, do I 
have to update the ghost elements in the result vec of MatMult? Consider that any 
time the code enters myMatMult, the vec x is copied into my container and the 
ghosts of my container are updated. I compute the matrix-vector product and I 
copy the results into vec y, without considering the ghost elements of y. 

Second question: if I have to, how? 
Let's say I write: 
ierr = myMatMult(A,x,y); 
and x has been built outside myMatMult with VecCreateGhost. 
How do I update ghost elements of y in myMatMult? 
VecGhostUpdateBegin/VecGhostUpdateEnd in myMatMult? 

Any hints would be appreciated. 
Thanks in advance for your advice. 

Bests, 

Marco 


Re: [petsc-users] customized MatMult

2015-01-19 Thread Marco Cisternino
Thank you Matthew, 
If I have understood: 
- I have to pass global vecs to myMatMult. And Vecs created with VecCreateGhost 
are global, right? 
- In myMatMult I use VecGetArray, I store my result in the associated C array, 
I call VecRestoreArray and the global vec is done! Right? 
- If I re-use VecGetArray/VecRestoreArray, the ghosted locations in the C array 
are updated, right? 
I'm sorry about confusion and thanks, again. 

Marco 




-- 
--- 
Marco Cisternino, PhD 
Software Developer 
OPTIMAD Engineering s.r.l. 
Via Giacinto Collegno 18 
10143 Torino - Italy 
www.optimad.it 
marco.cistern...@optimad.it 
+39 011 19719782 
--- 

- Original message -

From: Matthew Knepley knep...@gmail.com 
To: Marco Cisternino marco.cistern...@optimad.it 
Cc: petsc-users@mcs.anl.gov 
Sent: Monday, 19 January 2015 16:40:28 
Subject: Re: [petsc-users] customized MatMult 




On Mon, Jan 19, 2015 at 9:28 AM, Marco Cisternino  marco.cistern...@optimad.it 
 wrote: 




Good morning, 
I'm writing my own MatMult function, but I don't know how to treat the ghost 
elements of the result vec. Let's say myMatMult(A,x,y). 

First question: if I use my MatMult in a KSP to solve a linear system, do I 
have to update the ghost elements in the result vec of MatMult? Consider that any 
time the code enters myMatMult, the vec x is copied into my container and the 
ghosts of my container are updated. I compute the matrix-vector product and I 
copy the results into vec y, without considering the ghost elements of y. 

Second question: if I have to, how? 
Let's say I write: 
ierr = myMatMult(A,x,y); 
and x has been built outside myMatMult with VecCreateGhost. 
How do I update ghost elements of y in myMatMult? 
VecGhostUpdateBegin/VecGhostUpdateEnd in myMatMult? 

Any hints would be appreciated. 





There is some confusion here about the spaces that Vecs represent. Vecs in the 
solver are globally assembled vectors, which come from the global space. Vecs 
used in calculating integrals, differences, etc. for assembly often have ghost 
entries. We say that these come from the local space. 


MatMult takes in a global Vec x and returns a global Vec y. 
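A minimal sketch of such a MatMult callback (hypothetical body; it reads and 
writes only the owned entries of the global vectors): 

    PetscErrorCode MyMatMult(Mat A, Vec x, Vec y)
    {
      const PetscScalar *xa;
      PetscScalar       *ya;
      PetscInt          i, n;

      VecGetLocalSize(x, &n);
      VecGetArrayRead(x, &xa);    // owned entries of the global input
      VecGetArray(y, &ya);        // owned entries of the global output
      for (i = 0; i < n; i++) ya[i] = 2.0 * xa[i];  // stand-in for the real operator
      VecRestoreArrayRead(x, &xa);
      VecRestoreArray(y, &ya);
      return 0;
    }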


Hopefully that is clear, 


Matt 



Thanks in advance for your advice. 

Bests, 

Marco 





-- 

What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener 


Re: [petsc-users] customized MatMult

2015-01-19 Thread Marco Cisternino
Understood! 
Thanks, 

Marco 

- Original message -

From: Barry Smith bsm...@mcs.anl.gov 
To: Marco Cisternino marco.cistern...@optimad.it 
Cc: Matthew Knepley knep...@gmail.com, petsc-users@mcs.anl.gov 
Sent: Monday, 19 January 2015 17:38:34 
Subject: Re: [petsc-users] customized MatMult 


 On Jan 19, 2015, at 10:09 AM, Marco Cisternino marco.cistern...@optimad.it 
 wrote: 
 
 Thank you Matthew, 
 If I have understood: 
 - I have to pass global vecs to myMatMult. And Vecs created with 
 VecCreateGhost are global, right? 
 - In myMatMult I use VecGetArray, I store my result in the associated 
 C array, I call VecRestoreArray and the global vec is done! Right? 
 - If I re-use VecGetArray/VecRestoreArray, the ghosted locations in the C array 
 are updated, right? 

VecGet/RestoreArray() does not trigger updating of the ghost points. You need to 
use VecGhostUpdateBegin/End() to update the ghost points; see the manual page. 
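A minimal sketch (assuming y was created with VecCreateGhost): 

    // push the owned values of y to the ghost copies on neighboring ranks
    VecGhostUpdateBegin(y, INSERT_VALUES, SCATTER_FORWARD);
    VecGhostUpdateEnd(y, INSERT_VALUES, SCATTER_FORWARD);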

barry 



 I'm sorry about confusion and thanks, again. 
 
 Marco 
 
 
 
 -- 
 --- 
 Marco Cisternino, PhD 
 Software Developer 
 OPTIMAD Engineering s.r.l. 
 Via Giacinto Collegno 18 
 10143 Torino - Italy 
 www.optimad.it 
 marco.cistern...@optimad.it 
 +39 011 19719782 
 --- 
 
 From: Matthew Knepley knep...@gmail.com 
 To: Marco Cisternino marco.cistern...@optimad.it 
 Cc: petsc-users@mcs.anl.gov 
 Sent: Monday, 19 January 2015 16:40:28 
 Subject: Re: [petsc-users] customized MatMult 
 
 On Mon, Jan 19, 2015 at 9:28 AM, Marco Cisternino 
 marco.cistern...@optimad.it wrote: 
 Good morning, 
 I'm writing my own MatMult function, but I don't know how to treat the ghost 
 elements of the result vec. Let's say myMatMult(A,x,y). 
 
 First question: if I use my MatMult in a KSP to solve a linear system, do 
 I have to update the ghost elements in the result vec of MatMult? Consider that 
 any time the code enters myMatMult, the vec x is copied into my container and 
 the ghosts of my container are updated. I compute the matrix-vector product 
 and I copy the results into vec y, without considering the ghost elements of y. 
 
 Second question: if I have to, how? 
 Let's say I write: 
 ierr = myMatMult(A,x,y); 
 and x has been built outside myMatMult with VecCreateGhost. 
 How do I update ghost elements of y in myMatMult? 
 VecGhostUpdateBegin/VecGhostUpdateEnd in myMatMult? 
 
 Any hints would be appreciated. 
 
 There is some confusion here about the spaces that Vecs represent. Vecs in 
 the solver are globally assembled vectors, which come from the global space. 
 Vecs used in calculating integrals, differences, etc. for assembly often have 
 ghost entries. We say that these come from the local space. 
 
 MatMult takes in a global Vec x and returns a global Vec y. 
 
 Hopefully that is clear, 
 
 Matt 
 
 Thanks in advance for your advice. 
 
 Bests, 
 
 Marco 
 
 
 
 -- 
 What most experimenters take for granted before they begin their experiments 
 is infinitely more interesting than any results to which their experiments 
 lead. 
 -- Norbert Wiener 




[petsc-users] template function as argument to MatShellSetOperation

2015-01-12 Thread Marco Cisternino
Hi PETSc users, 
I'm trying to set my matrix multiplication with MatShellSetOperation. 
The question is: is it possible to pass a C++ template function as my matrix 
multiplication to MatShellSetOperation? 
And if yes, how? 
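A sketch of one possibility (our assumption, not an answer from the thread): an 
explicit instantiation of a template is an ordinary function, so its address 
can be registered like any other callback. 

    // hypothetical names, assuming A was created with MatCreateShell
    template <typename Scalar>
    PetscErrorCode MyMult(Mat A, Vec x, Vec y)
    {
      // ... apply the operator for this Scalar type ...
      return 0;
    }

    // register the PetscScalar instantiation as the MatMult callback
    MatShellSetOperation(A, MATOP_MULT, (void (*)(void)) MyMult<PetscScalar>);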
Thanks for any suggestions. 

Marco 



[petsc-users] KSP_DIVERGED_NAN

2011-10-13 Thread Marco Cisternino
Hi to everybody,
I wrote an interface elliptic problem solver in 2D and 3D. The linear 
system I assemble depends on the geometrical characteristics of a level 
set function. I tested it with several, more and more complex 
interfaces, and now I am trying it with a real 3D interface. I 
probably have some problem with the computation of a good level set 
function; maybe the cause is the low resolution of the original 
interface. The evidence of my problem is in the subject: no matter which 
iterative PETSc solver I try, the run stops at the first iteration 
with a NaN residual. I don't want to ask you to understand the problem, 
but to give me some information about how to debug a situation like 
this. The interface is too complex to be checked in depth, so I hope 
that looking at how the KSP yields a NaN residual could help me.
Thank you so much.
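One way to localize the first NaN (a hedged suggestion on our part, assuming a 
PETSc build with floating-point trapping support): make PETSc trap 
floating-point exceptions, so a debugger stops at the operation that generates 
the NaN.

    // roughly what the -fp_trap startup option does
    PetscSetFPTrap(PETSC_FP_TRAP_ON);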

Best regards,

 Marco

-- 
Marco Cisternino
PhD Student
Politecnico di Torino
Mobile:+393281189696
Email:marco.cisternino at polito.it



[petsc-users] Neumann BC for interface elliptic problem

2011-09-13 Thread Marco Cisternino
Hi to everybody,
I'm trying to solve this equation:
div(k*grad(u)) = f,
with homogeneous Neumann boundary conditions, where k is piecewise 
constant and f is a zero-mean source (compatibility is respected).
To do this I solve a linear system extended by transmission conditions, 
obtained with a second-order FD scheme.
Everything works fine with Dirichlet BCs, and with Neumann BCs too, if the 
interface where k jumps cuts the computational domain.
But if the interface is in the middle of the computational domain, 
something goes wrong where the processes overlap, with an overall 
loss of symmetry, even though the source, the interface and the BCs are 
symmetric. I use GMRES with ASM. These are the lines that create the null space:

   has_cnst=PETSC_TRUE
   call 
MatNullSpaceCreate(MPI_CART_COMM,has_cnst,0,PETSC_NULL_OBJECT,nsppoi,ierr)
   call KSPSetNullSpace(ksppoi,nsppoi,ierr)

Ask me everything you need to better understand the problem.
Could you help me?
Thanks.

 Marco

-- 
Marco Cisternino
PhD Student
Politecnico di Torino
Email:marco.cisternino at polito.it



[petsc-users] Neumann BC for interface elliptic problem

2011-09-13 Thread Marco Cisternino
Thanks, Matthew, for the quick reply.
My matrices are not symmetric. The differential problem is symmetric, 
but the matrices are not.
This is not only because of the transmission conditions at the 
interface, but also because I explicitly put the discretization of the 
forward or backward second-order Neumann boundary conditions as rows in 
my matrix. So even if I solve a problem without interfaces, my 
matrix is not symmetric. And even in this case I get problems along the 
process boundaries and a loss of symmetry in the solution.
Yet the matrix structure is exactly the same when I solve the problem 
with the interface cutting the domain (always with the same 
implementation of the Neumann boundary conditions), and there I have no problem.
I can eliminate the BC rows by nesting them in the discretization of the 
Laplacian, but the interface will always give me an asymmetric matrix.
Most of all, I can't understand why everything works for one kind of 
interface and not for the other.
Thanks again.

 Marco



[petsc-users] scattering and communications

2011-07-08 Thread Marco Cisternino
Thanks for the reply, Matt.
I tried what you suggested, but I saw no inconsistency in the portion 
given to GP or extended_P.
Moreover, I tried to see what happens with no interface, i.e. a simple 
Laplace equation.
In this case GP and extended_P have exactly the same structure and 
partitioning. And nothing changes.
All the communication happens in VecScatterCreate: the actual scatter 
(begin and end) uses almost no time and no communication, but 
VecScatterCreate does.
Could you explain to me why?
The two ISs I use to create the scatter context are built from sequential C 
vectors as large as the entire computational domain. Could this be the 
problem?
If I used smaller, local vectors to create the ISs and then the 
scatter context, would VecScatterCreate perform better?
Thanks a lot.

 Marco


 On Thu, Jul 7, 2011 at 4:55 PM, Marco Cisternino marco.cisternino at polito.it
 wrote:
 Hi,
 I would like to understand better how VecScatterBegin and VecScatterEnd
 work.
 I solve an interface elliptic problem on a Cartesian grid introducing extra
 unknowns (intersections between the interface and the grid axes).
  Therefore, my linear system is augmented with extra conditions on these new
  unknowns.
 I use a DA to manage the grid, but I have to use MatCreateMPIAIJ to build
 the matrix because of the nature of the problem.
    MatCreateMPIAIJ(proc->cart_comm,proc->intersections[proc->rank],
                    proc->intersections[proc->rank],g_rows,g_cols,
                    21,PETSC_NULL,21,PETSC_NULL,&fsolv->AA);
    VecCreateMPI(proc->cart_comm,proc->intersections[proc->rank],
                 PETSC_DECIDE,&fsolv->extended_P);
    VecCreateMPI(proc->cart_comm,proc->intersections[proc->rank],
                 PETSC_DECIDE,&fsolv->extended_RHS);

 where
 proc->intersections[proc->rank] is the total number of unknowns for each
 processor in its sub-domain (grid points + intersections).
 g_rows=g_cols is the total number of unknowns in the entire computational
 domain (grid points + intersections).
 cart_comm is a Cartesian communicator.

 The arrangement of the unknowns is such that every processor has the rows
 of the matrix and of extended_P (the solution) relative to the actual
 unknowns in its sub-domain.
 I solve the system and then I call VecScatterBegin and VecScatterEnd:

    ierr=VecScatterCreate(fsolv->extended_P,scatter->from,vec->GP,
                          scatter->to,&scatter->scatt);
    ierr=VecScatterBegin(scatter->scatt,fsolv->extended_P,vec->GP,
                         INSERT_VALUES,SCATTER_FORWARD);
    ierr=VecScatterEnd(scatter->scatt,fsolv->extended_P,vec->GP,
                       INSERT_VALUES,SCATTER_FORWARD);

 in order to get in GP (made using DACreateGlobalVector(grid->da,&vec->GP);)
 only the solution on the grid points.
 It works, I mean I can get the right solution in GP, but the scattering
 doesn't scale at all!
 I would expect no communications during the scattering, doing what I do,
 but -log_summary shows me a number of MPI messages growing with the number of
 processors.
 Every portion of GP contains only the grid points of a processor, while
 every portion of extended_P contains the same grid points plus the
 intersections in the relative sub-domain. What is the need for
 communication in such a scattering?

 There is just a mismatch somewhere in your indices. It should be easy to
 check the locally owned indices in
 GP using


 http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/manualpages/Vec/VecGetOwnershipRange.html

 and compare to what you have.
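 A minimal sketch of that check (hypothetical variable names, current PETSc API):

    PetscMPIInt rank;
    PetscInt    rstart, rend;
    MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
    VecGetOwnershipRange(vec->GP, &rstart, &rend);   // owned rows of GP
    PetscSynchronizedPrintf(PETSC_COMM_WORLD, "[%d] GP owns rows %d..%d\n",
                            (int)rank, (int)rstart, (int)(rend - 1));
    PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT);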

 Matt


 I don't know if I was clear enough. Please, ask me what you need to
 understand my problem.
 Thanks a lot.

 Best regards,

 Marco

 --
 Marco Cisternino
 PhD Student
 Politecnico di Torino
 Mobile:+393281189696
 Email: marco.cisternino at polito.it




-- 
Marco Cisternino
PhD Student
Politecnico di Torino
Mobile:+393281189696
Email:marco.cisternino at polito.it



[petsc-users] scattering and communications

2011-07-07 Thread Marco Cisternino
Hi,
I would like to understand better how VecScatterBegin and VecScatterEnd 
work.
I solve an interface elliptic problem on a Cartesian grid introducing 
extra unknowns (intersections between the interface and the grid axes).
Therefore, my linear system is augmented with extra conditions on these 
new unknowns.
I use a DA to manage the grid, but I have to use MatCreateMPIAIJ to 
build the matrix because of the nature of the problem.
   
MatCreateMPIAIJ(proc->cart_comm,proc->intersections[proc->rank],
                proc->intersections[proc->rank],g_rows,g_cols,
                21,PETSC_NULL,21,PETSC_NULL,&fsolv->AA);
VecCreateMPI(proc->cart_comm,proc->intersections[proc->rank],
             PETSC_DECIDE,&fsolv->extended_P);
VecCreateMPI(proc->cart_comm,proc->intersections[proc->rank],
             PETSC_DECIDE,&fsolv->extended_RHS);
 


where
proc->intersections[proc->rank] is the total number of unknowns for each 
processor in its sub-domain (grid points + intersections).
g_rows=g_cols is the total number of unknowns in the entire 
computational domain (grid points + intersections).
cart_comm is a Cartesian communicator.

The arrangement of the unknowns is such that every processor has the 
rows of the matrix and of extended_P (the solution) relative to the 
actual unknowns in its sub-domain.
I solve the system and then I call VecScatterBegin and VecScatterEnd:

   
ierr=VecScatterCreate(fsolv->extended_P,scatter->from,vec->GP,
                      scatter->to,&scatter->scatt);
ierr=VecScatterBegin(scatter->scatt,fsolv->extended_P,vec->GP,
                     INSERT_VALUES,SCATTER_FORWARD);
ierr=VecScatterEnd(scatter->scatt,fsolv->extended_P,vec->GP,
                   INSERT_VALUES,SCATTER_FORWARD);

in order to get in GP (made using 
DACreateGlobalVector(grid->da,&vec->GP);) only the solution on the grid 
points.
It works, I mean I can get the right solution in GP, but the scattering 
doesn't scale at all!
I would expect no communications during the scattering, doing what I do, 
but -log_summary shows me a number of MPI messages growing with the 
number of processors.
Every portion of GP contains only the grid points of a processor, while 
every portion of extended_P contains the same grid points plus the 
intersections in the relative sub-domain. What is the need for 
communication in such a scattering?
I don't know if I was clear enough. Please, ask me what you need to 
understand my problem.
Thanks a lot.

Best regards,

 Marco

-- 
Marco Cisternino
PhD Student
Politecnico di Torino
Mobile:+393281189696
Email:marco.cisternino at polito.it