Re: [petsc-users] An issue of extraction of factorization matrices in sparse direct solver

2022-10-24 Thread Xiaoye S. Li
There are some utility routines for printing L\U in superlu_dist:
SRC/dutil_dist.c

   1. You can output the L factor to a file with the triplet format by using

   
https://github.com/xiaoyeli/superlu_dist/blob/324d65fced6ce8abf0eb900223cba0207d538db7/SRC/dutil_dist.c#L675
   but use line 755 instead of line 753.
   2. You can convert the L factor to CSR or triplet using
   
https://github.com/xiaoyeli/superlu_dist/blob/324d65fced6ce8abf0eb900223cba0207d538db7/SRC/dutil_dist.c#L815

   
https://github.com/xiaoyeli/superlu_dist/blob/324d65fced6ce8abf0eb900223cba0207d538db7/SRC/dutil_dist.c#L1075
   but you need to make sure you call superlu_dist with only 1 MPI process.
   3. You can modify

   
https://github.com/xiaoyeli/superlu_dist/blob/324d65fced6ce8abf0eb900223cba0207d538db7/SRC/dutil_dist.c#L1229
   to generate CSR/triplet for the U factor as well.
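
On the PETSc side, a rough sketch of first getting a handle on the factored
matrix (assuming a KSP named ksp that has already been set up with
-pc_type lu -pc_factor_mat_solver_type superlu_dist; error checking omitted):

  PC  pc;
  Mat F;                        /* the factored matrix held by the PC */
  KSPGetPC(ksp, &pc);
  PCFactorGetMatrix(pc, &F);
  /* F wraps SuperLU_DIST's internal factorization; writing L and U to
     files still requires SuperLU_DIST's own routines, such as the
     dutil_dist.c utilities linked above, since the factors are not
     exposed through the public PETSc API */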


Sherry Li

On Sun, Oct 23, 2022 at 3:38 AM Matthew Knepley  wrote:

> On Sun, Oct 23, 2022 at 2:58 AM 赵刚  wrote:
>
>> Dear developers,
>>
>> I have another question. How can I get the L and U matrices and store
>> them in a file when I call SuperLU through PETSc? Thanks.
>
>
> SuperLU stores these matrices in its own format. If you want to do I/O
> with them, you would probably have to extract them from the PETSc Mat
> and call SuperLU I/O functions, if they exist.
>
>   Thanks,
>
>  Matt
>
>
>> Best Regards,
>> Gang
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>


Re: [petsc-users] A question about solving a saddle point system with a direct solver

2022-10-24 Thread Jau-Uei Chen
I see. Thanks so much for the comment.

On Mon, Oct 24, 2022 at 10:47 AM Jed Brown  wrote:

> You can get lucky with null spaces even with factorization
> preconditioners, especially if the right hand side is orthogonal to the
> null space. But it's fragile and you shouldn't rely on that being true as
> you change the problem. You can either remove the null space in your
> problem formulation (maybe) or use iterative solvers/fieldsplit
> preconditioning (which can use a direct solver on nonsingular blocks).
>
> Jau-Uei Chen  writes:
>
> > To whom it may concern,
> >
> > I am writing to ask about using PETSc with a direct solver to solve a
> > linear system where a single zero-value eigenvalue exists.
> >
> > Currently, I am working on developing a finite-element solver for a
> > linearized incompressible MHD equation. The code is based on an
> > open-source library called MFEM which has its own wrapper for PETSc
> > and is used in my
> > code. From analysis, I already know that the linear system (Ax=b) to be
> > solved is a saddle point system. By using the flags "solver_pc_type svd"
> > and "solver_pc_svd_monitor", I indeed observe it. Here is an example
> > of an output:
> >
> > SVD: condition number 3.271390119581e+18, 1 of 66 singular values are
> > (nearly) zero
> > SVD: smallest singular values: 3.236925932523e-17 3.108788619412e-04
> > 3.840514506502e-04 4.599292003910e-04 4.909419974671e-04
> > SVD: largest singular values : 4.007319935079e+00 4.027759008411e+00
> > 4.817755760754e+00 4.176127583956e+01 1.058924751347e+02
> >
> >
> > However, what surprises me is that the numerical solutions are still
> > relatively accurate compared to the exact ones (i.e. manufactured
> > solutions) when I perform convergence tests, even though I am using a
> > direct solver (i.e. -solver_ksp_type preonly -solver_pc_type lu
> > -solver_pc_factor_mat_solver_type mumps). My question is: why doesn't
> > the direct solver break down in this context? I understand that it
> > won't be an issue for iterative solvers such as GMRES [1][2], but I am
> > not sure why it won't cause trouble for direct solvers.
> >
> > Any comments or suggestions are greatly appreciated.
> >
> > Best Regards,
> > Jau-Uei Chen
> >
> > Reference:
> > [1] Benzi, Michele, et al. “Numerical Solution of Saddle Point Problems.”
> > Acta Numerica, vol. 14, May 2005, pp. 1–137. DOI.org (Crossref),
> > https://doi.org/10.1017/S0962492904000212.
> > [2] Elman, Howard C., et al. Finite Elements and Fast Iterative Solvers:
> > With Applications in Incompressible Fluid Dynamics. Second edition,
> > Oxford University Press, 2014.
>


Re: [petsc-users] Clarification on MatMPIBAIJSetPreallocation (d_nnz and o_nnz)

2022-10-24 Thread Jed Brown
This looks like one block row per process? (BAIJ formats store explicit zeros 
that appear within nonzero blocks.) You'd use d_nnz[] = {1}, o_nnz[] = {1} on 
each process.

If each of the dummy numbers there were replaced by a nonzero block (so the
diagram would be sketching nonzero 3x3 blocks of an 18x18 matrix), then you'd 
have bs=3 with:

rank 0:
d_nnz[] = {2,2,1}, o_nnz[] = {1,1,2};
rank 1:
d_nnz[] = {1,1,1}, o_nnz[] = {2,1,1};
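
A minimal sketch of how those counts might be passed to
MatMPIBAIJSetPreallocation, assuming the 18x18 / bs=3 layout above, exactly
2 MPI ranks, and with error checking omitted:

  Mat         A;
  PetscInt    bs = 3;
  PetscMPIInt rank;
  /* number of nonzero blocks per block row, split into the portions that
     fall inside (d) and outside (o) this rank's diagonal block column */
  PetscInt d0[] = {2, 2, 1}, o0[] = {1, 1, 2};   /* rank 0 */
  PetscInt d1[] = {1, 1, 1}, o1[] = {2, 1, 1};   /* rank 1 */

  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, 9, 9, 18, 18);        /* each rank owns 3 block rows of size 3 */
  MatSetType(A, MATMPIBAIJ);
  MatMPIBAIJSetPreallocation(A, bs, 0, rank ? d1 : d0, 0, rank ? o1 : o0);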

Edoardo alinovi  writes:

> Thank you Jed for the hint.
>
> So just to understand it with an example. Say I have this matrix here,
> which has 4 3x3 blocks
>
> 1 2 0 | 0 5 0 |
> 0 2 3 | 0 0 1 |   < Proc 1
> 0 0 1 | 0 2 2 |
> ------|-------|
> 1 2 0 | 0 5 0 |
> 0 2 0 | 0 0 1 |   < Proc 2
> 0 0 1 | 0 0 2 |
> ------|-------|
>
> This can be represented as a collection of submatrices like:
> A  B
> C  D
>
> A and D are the diagonal blocks, while B and C are the off-diagonal ones.
> How should I set d_nnz and o_nnz in this case?


Re: [petsc-users] Clarification on MatMPIBAIJSetPreallocation (d_nnz and o_nnz)

2022-10-24 Thread Edoardo alinovi
Thank you Jed for the hint.

So just to understand it with an example. Say I have this matrix here,
which has 4 3x3 blocks

1 2 0 | 0 5 0 |
0 2 3 | 0 0 1 |   < Proc 1
0 0 1 | 0 2 2 |
------|-------|
1 2 0 | 0 5 0 |
0 2 0 | 0 0 1 |   < Proc 2
0 0 1 | 0 0 2 |
------|-------|

This can be represented as a collection of submatrices like:
A  B
C  D

A and D are the diagonal blocks, while B and C are the off-diagonal ones.
How should I set d_nnz and o_nnz in this case?


Re: [petsc-users] A question about solving a saddle point system with a direct solver

2022-10-24 Thread Jed Brown
You can get lucky with null spaces even with factorization preconditioners, 
especially if the right hand side is orthogonal to the null space. But it's 
fragile and you shouldn't rely on that being true as you change the problem. 
You can either remove the null space in your problem formulation (maybe) or use 
iterative solvers/fieldsplit preconditioning (which can use a direct solver on 
nonsingular blocks).
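
As a rough illustration only (assuming your existing "solver_" option prefix
and that the splits have been defined, e.g. through a DM or
PCFieldSplitSetIS; the split names/indices and the choice of Schur
complement preconditioner depend on your discretization), a fieldsplit setup
with a direct solver on the nonsingular block could look like:

  -solver_ksp_type fgmres
  -solver_pc_type fieldsplit
  -solver_pc_fieldsplit_type schur
  -solver_pc_fieldsplit_schur_fact_type full
  -solver_fieldsplit_0_ksp_type preonly
  -solver_fieldsplit_0_pc_type lu
  -solver_fieldsplit_0_pc_factor_mat_solver_type mumps
  -solver_fieldsplit_1_ksp_type gmres
  -solver_fieldsplit_1_pc_type jacobi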

Jau-Uei Chen  writes:

> To whom it may concern,
>
> I am writing to ask about using PETSc with a direct solver to solve a
> linear system where a single zero-value eigenvalue exists.
>
> Currently, I am working on developing a finite-element solver for a
> linearized incompressible MHD equation. The code is based on an open-source
> library called MFEM which has its own wrapper for PETSc and is used in my
> code. From analysis, I already know that the linear system (Ax=b) to be
> solved is a saddle point system. By using the flags "solver_pc_type svd"
> and "solver_pc_svd_monitor", I indeed observe it. Here is an example of an
> output:
>
> SVD: condition number 3.271390119581e+18, 1 of 66 singular values are
> (nearly) zero
> SVD: smallest singular values: 3.236925932523e-17 3.108788619412e-04
> 3.840514506502e-04 4.599292003910e-04 4.909419974671e-04
> SVD: largest singular values : 4.007319935079e+00 4.027759008411e+00
> 4.817755760754e+00 4.176127583956e+01 1.058924751347e+02
>
>
> However, what surprises me is that the numerical solutions are still
> relatively accurate compared to the exact ones (i.e. manufactured
> solutions) when I perform convergence tests, even though I am using a
> direct solver (i.e. -solver_ksp_type preonly -solver_pc_type lu
> -solver_pc_factor_mat_solver_type mumps). My question is: why doesn't the
> direct solver break down in this context? I understand that it won't be an
> issue for iterative solvers such as GMRES [1][2], but I am not sure why it
> won't cause trouble for direct solvers.
>
> Any comments or suggestions are greatly appreciated.
>
> Best Regards,
> Jau-Uei Chen
>
> Reference:
> [1] Benzi, Michele, et al. “Numerical Solution of Saddle Point Problems.”
> Acta Numerica, vol. 14, May 2005, pp. 1–137. DOI.org (Crossref),
> https://doi.org/10.1017/S0962492904000212.
> [2] Elman, Howard C., et al. Finite Elements and Fast Iterative Solvers:
> With Applications in Incompressible Fluid Dynamics. Second edition, Oxford
> University Press, 2014.


Re: [petsc-users] Clarification on MatMPIBAIJSetPreallocation (d_nnz and o_nnz)

2022-10-24 Thread Jed Brown
I recommend calling this one preallocation function, which will preallocate 
scalar and block formats. It takes one value per block row, counting in blocks.

https://petsc.org/release/docs/manualpages/Mat/MatXAIJSetPreallocation/
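
For instance, a minimal sketch (assuming bs=3 and the rank-0 block-row
counts from the other thread, with the Mat A already created and sized, and
error checking omitted):

  PetscInt bs = 3;
  /* one entry per local block row, counted in blocks */
  PetscInt d_nnz[] = {2, 2, 1}, o_nnz[] = {1, 1, 2};
  MatSetType(A, MATMPIBAIJ);   /* the same call also covers MATMPIAIJ */
  MatXAIJSetPreallocation(A, bs, d_nnz, o_nnz, NULL, NULL);

The last two arguments are the upper-triangular counts used only by the
SBAIJ formats, so they can be NULL here.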

Edoardo alinovi  writes:

> Hello Barry,
>
> I am doing some preliminary work to add a coupled solver in my code.  I am
> reading some documentation on MPIBAIJ as the matrix will be composed of
> blocks of size 3x3 in 2D and 4x4 in 3D for each cell in the domain.
>
> I can't quite understand how to set d_nnz and o_nnz. What is their size?
> Should I provide a number of non-zero entries for each block or should I do
> it line by line as in MATAIJ?
>
> Would you be able to provide me with a silly example?
>
> Thank you!


[petsc-users] A question about solving a saddle point system with a direct solver

2022-10-24 Thread Jau-Uei Chen
To whom it may concern,

I am writing to ask about using PETSc with a direct solver to solve a
linear system where a single zero-value eigenvalue exists.

Currently, I am working on developing a finite-element solver for a
linearized incompressible MHD equation. The code is based on an open-source
library called MFEM which has its own wrapper for PETSc and is used in my
code. From analysis, I already know that the linear system (Ax=b) to be
solved is a saddle point system. By using the flags "solver_pc_type svd"
and "solver_pc_svd_monitor", I indeed observe it. Here is an example of an
output:

SVD: condition number 3.271390119581e+18, 1 of 66 singular values are
(nearly) zero
SVD: smallest singular values: 3.236925932523e-17 3.108788619412e-04
3.840514506502e-04 4.599292003910e-04 4.909419974671e-04
SVD: largest singular values : 4.007319935079e+00 4.027759008411e+00
4.817755760754e+00 4.176127583956e+01 1.058924751347e+02


However, what surprises me is that the numerical solutions are still
relatively accurate compared to the exact ones (i.e. manufactured
solutions) when I perform convergence tests, even though I am using a
direct solver (i.e. -solver_ksp_type preonly -solver_pc_type lu
-solver_pc_factor_mat_solver_type mumps). My question is: why doesn't the
direct solver break down in this context? I understand that it won't be an
issue for iterative solvers such as GMRES [1][2], but I am not sure why it
won't cause trouble for direct solvers.

Any comments or suggestions are greatly appreciated.

Best Regards,
Jau-Uei Chen

Reference:
[1] Benzi, Michele, et al. “Numerical Solution of Saddle Point Problems.”
Acta Numerica, vol. 14, May 2005, pp. 1–137. DOI.org (Crossref),
https://doi.org/10.1017/S0962492904000212.
[2] Elman, Howard C., et al. Finite Elements and Fast Iterative Solvers:
With Applications in Incompressible Fluid Dynamics. Second edition, Oxford
University Press, 2014.


[petsc-users] Clarification on MatMPIBAIJSetPreallocation (d_nnz and o_nnz)

2022-10-24 Thread Edoardo alinovi
Hello Barry,

I am doing some preliminary work to add a coupled solver in my code.  I am
reading some documentation on MPIBAIJ as the matrix will be composed of
blocks of size 3x3 in 2D and 4x4 in 3D for each cell in the domain.

I can't quite understand how to set d_nnz and o_nnz. What is their size?
Should I provide a number of non-zero entries for each block or should I do
it line by line as in MATAIJ?

Would you be able to provide me with a silly example?

Thank you!