Re: [petsc-users] Kokkos Interface for PETSc

2022-02-17 Thread Richard Tran Mills via petsc-users

Hi Philip,

Sorry to be a bit late in my reply. Jed has explained the gist of what's 
involved with using the Kokkos/Kokkos-kernels back-end for the PETSc 
solves, though, depending on exactly how Xolotl creates its vectors, 
there may be a bit of work required to ensure that the command-line 
options specifying the matrix and GPU types get applied to the right 
objects, and that non-GPU types are not being hardcoded somewhere (by a 
call like "DMSetMatType(dm,MATAIJ)").


In addition to looking at the -log_view output, since Xolotl uses TS you 
can specify "-ts_view" and look at the output that describes the solver 
hierarchy that Xolotl sets up. If matrix types are being set correctly, 
you'll see things like


  Mat Object: 1 MPI processes
    type: seqaijkokkos

(I note that I've also sent a related message about getting Xolotl 
working with Kokkos back-ends on Summit to you, Sophie, and Phil in 
reply to an old thread about this.)


Were you also asking about how to use Kokkos for PETSc matrix assembly, 
or is that a question for later?


Cheers,
Richard

On 2/15/22 09:07, Satish Balay via petsc-users wrote:

Also - perhaps the following info might be useful

Satish



balay@sb /home/balay/petsc (main=)
$ git grep -l download-kokkos-kernels config/examples
config/examples/arch-ci-freebsd-cxx-cmplx-pkgs-dbg.py
config/examples/arch-ci-linux-cuda-double.py
config/examples/arch-ci-linux-gcc-ifc-cmplx.py
config/examples/arch-ci-linux-hip-double.py
config/examples/arch-ci-linux-pkgs-dbg-ftn-interfaces.py
config/examples/arch-ci-linux-pkgs-valgrind.py
config/examples/arch-ci-osx-cxx-pkgs-opt.py
config/examples/arch-nvhpc.py
config/examples/arch-olcf-crusher.py
config/examples/arch-olcf-spock.py
balay@sb /home/balay/petsc (main=)
$ git grep -l "requires:.*kokkos_kernels"
src/ksp/ksp/tests/ex3.c
src/ksp/ksp/tests/ex43.c
src/ksp/ksp/tests/ex60.c
src/ksp/ksp/tutorials/ex7.c
src/mat/tests/ex123.c
src/mat/tests/ex132.c
src/mat/tests/ex2.c
src/mat/tests/ex250.c
src/mat/tests/ex251.c
src/mat/tests/ex252.c
src/mat/tests/ex254.c
src/mat/tests/ex5.c
src/mat/tests/ex62.c
src/mat/tutorials/ex5k.kokkos.cxx
src/snes/tests/ex13.c
src/snes/tutorials/ex13.c
src/snes/tutorials/ex3k.kokkos.cxx
src/snes/tutorials/ex56.c
src/ts/utils/dmplexlandau/tutorials/ex1.c
src/ts/utils/dmplexlandau/tutorials/ex1f90.F90
src/ts/utils/dmplexlandau/tutorials/ex2.c
src/vec/vec/tests/ex21.c
src/vec/vec/tests/ex22.c
src/vec/vec/tests/ex23.c
src/vec/vec/tests/ex28.c
src/vec/vec/tests/ex34.c
src/vec/vec/tests/ex37.c
src/vec/vec/tests/ex38.c
src/vec/vec/tests/ex4.c
src/vec/vec/tests/ex43.c
src/vec/vec/tests/ex60.c
src/vec/vec/tutorials/ex1.c
balay@sb /home/balay/petsc (main=)
$

On Tue, 15 Feb 2022, Satish Balay via petsc-users wrote:


Also - best to use petsc repo - 'main' branch.

And for install on crusher - check config/examples/arch-olcf-crusher.py

Satish

On Tue, 15 Feb 2022, Jed Brown wrote:


We need to make these docs more explicit, but the short answer is configure with 
--download-kokkos --download-kokkos-kernels and run almost any example with 
-dm_mat_type aijkokkos -dm_vec_type kokkos. If you run with -log_view, you should 
see that all the flops take place on the device and there are few host->device 
transfers. Message packing is done on the device and it'll use GPU-aware MPI. 
There are a few examples of residual evaluation and matrix assembly on the device 
using Kokkos. You can also see libCEED examples for assembly on the device into 
Kokkos matrices and vectors without touching host memory.

"Fackler, Philip via petsc-users"  writes:


We're intending to transition the Xolotl interfaces with PETSc.

I am hoping someone can point us to some documentation (and examples) for 
using PETSc's Kokkos-based interface. If this does not yet exist, then perhaps 
some slides (like the ones Richard Mills showed at the NE-SciDAC all-hands 
meeting) showing some examples could get us started.

Thanks for any help that can be provided,

Philip Fackler
Research Software Engineer, Application Engineering Group
Advanced Computing Systems Research Section
Computer Science and Mathematics Division
Oak Ridge National Laboratory


Re: [petsc-users] (no subject)

2022-02-17 Thread Mark Adams
Please keep this on the list.

On Thu, Feb 17, 2022 at 12:36 PM Bojan Niceno <
bojan.niceno.scient...@gmail.com> wrote:

> Dear Mark,
>
> Sorry for mistakenly calling you Adam before.
>
> I was thinking about the o_nnz as you suggested, but then something else
> occurred to me.  So, I determine the d_nnz and o_nnz based on the METIS domain
> decomposition, which I perform outside of PETSc, before I even call
> PetscInitialize.  Hence, if PETSc works out its own domain decomposition
>

PETSc does not work out its own decomposition. You specify the
decomposition completely with
https://petsc.org/main/docs/manualpages/Mat/MatCreateAIJ.html#MatCreateAIJ
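
For example (a minimal sketch, not your code), the row distribution is
exactly the local sizes you pass in, and you can query it back:

  MatSetSizes(A, m, m, M, M);
  MatSetType(A, MATAIJ);
  MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);
  MatSeqAIJSetPreallocation(A, 0, d_nnz);
  MatGetOwnershipRange(A, &rstart, &rend); /* this rank owns global rows [rstart, rend) */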



> and communication patterns, they might be different from mine, and
> therefore MatSetValue misses some entries.  Will PETSc follow the same
> domain decomposition that it works out from the calls to MatSetValue from
> different processors, or will it re-shuffle the matrix entries?
>
> Cheers,
>
> Bojan
>
> On Thu, Feb 17, 2022 at 6:14 PM Bojan Niceno <
> bojan.niceno.scient...@gmail.com> wrote:
>
>> Thanks a lot for the hints Adam :-)
>>
>> Cheers,
>>
>> Bojan
>>
>> On Thu, Feb 17, 2022 at 6:05 PM Mark Adams  wrote:
>>
>>>
>>>
>>> On Thu, Feb 17, 2022 at 11:46 AM Bojan Niceno <
>>> bojan.niceno.scient...@gmail.com> wrote:
>>>
 Dear all,


 I am experiencing difficulties when using PETSc in parallel in an
 unstructured CFD code.  It uses the CRS format to store its matrices.  I use
 the following sequence of PETSc calls in the hope of getting PETSc to solve my
 linear systems in parallel.  Before I continue, I would just like to say
 that the code has been MPI parallel for a long time; it performs its own
 domain decomposition through METIS and works out its own communication
 patterns, which work with its home-grown (non-PETSc) linear solvers.
 Anyhow, I issue the following calls:

 err = PetscInitialize(0, NULL, (char*)0, help);

 err = MatCreate(MPI_COMM_WORLD, A);
 In the above, I use MPI_COMM_WORLD instead of PETSC_COMM_SELF because
 the call to MPI_Init is invoked outside of PETSc, from the main program.

 err = MatSetSizes(A, m, m, M, M);
 Since my matrices are exclusively square, M is set to the total number
 of computational cells, while m is equal to the number of computational
 cells within each subdomain/processor.  (Not all processors necessarily
 have the same m, it depends on domain decomposition.)  I do not distinguish
 between m (M) and n (N) since matrices are all square.  Am I wrong to
 assume that?

 err = MatSetType(A, MATAIJ);
 I set the matrix to be of type MATAIJ, to cover runs on one and on more
 processors.  By the way, on one processor everything works fine.

 err = MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);
 err = MatSeqAIJSetPreallocation(A, 0, d_nnz);
 The two lines above specify matrix preallocation.  Since d_nz and o_nz
 vary from cell to cell (row to row), I set them to zero and provide arrays
 with the number of diagonal and off-diagonal nonzeros instead.  To my
 understanding, that is legitimate since d_nz and o_nz are ignored if d_nnz and
 o_nnz are provided.  Am I wrong?

 Finally, inside a loop through rows and columns I call:

 err = MatSetValue(A, row, col, value, INSERT_VALUES);
 Here I make sure that row and col point to global cell (unknown)
 numbers.

 Yet, when I run the code on more than one processor, I get the error:

 [3]PETSC ERROR: - Error Message
 --
 [3]PETSC ERROR: Argument out of range
 [3]PETSC ERROR: New nonzero at (21,356) caused a malloc
 Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to
 turn off this check

 [3]PETSC ERROR: #1 MatSetValues_MPIAIJ() at
 /home/niceno/Development/petsc-debug/src/mat/impls/aij/mpi/mpiaij.c:517
 [3]PETSC ERROR: #2 MatSetValues() at
 /home/niceno/Development/petsc-debug/src/mat/interface/matrix.c:1398
 [3]PETSC ERROR: #3 MatSetValues_MPIAIJ() at
 /home/niceno/Development/petsc-debug/src/mat/impls/aij/mpi/mpiaij.c:517
 [3]PETSC ERROR: #4 MatSetValues() at
 /home/niceno/Development/petsc-debug/src/mat/interface/matrix.c:1398

 and so forth, for roughly 10% of all matrix entries.  I checked if
 these errors occur only for off-diagonal parts of the matrix entries, but
 that is not the case.

 Error code is 63; PETSC_ERR_ARG_OUTOFRANGE

 Does anyone have an idea what I am doing wrong?  Are any of my
 assumptions above (like thinking n (N) is always m (M) for square matrices,
 or that I can pass zeros as d_nz and o_nz if I provide arrays d_nnz[] and
 o_nnz[]) wrong?

>>>
>>> That is correct.
>>>
>>>
 Any idea how to debug it, where to look for an error?


>>> I would guess that you are counting your o_nnz 

Re: [petsc-users] (no subject)

2022-02-17 Thread Mark Adams
On Thu, Feb 17, 2022 at 11:46 AM Bojan Niceno <
bojan.niceno.scient...@gmail.com> wrote:

> Dear all,
>
>
> I am experiencing difficulties when using PETSc in parallel in an
> unstructured CFD code.  It uses the CRS format to store its matrices.  I use
> the following sequence of PETSc calls in the hope of getting PETSc to solve my
> linear systems in parallel.  Before I continue, I would just like to say
> that the code has been MPI parallel for a long time; it performs its own
> domain decomposition through METIS and works out its own communication
> patterns, which work with its home-grown (non-PETSc) linear solvers.
> Anyhow, I issue the following calls:
>
> err = PetscInitialize(0, NULL, (char*)0, help);
>
> err = MatCreate(MPI_COMM_WORLD, A);
> In the above, I use MPI_COMM_WORLD instead of PETSC_COMM_SELF because the
> call to MPI_Init is invoked outside of PETSc, from the main program.
>
> err = MatSetSizes(A, m, m, M, M);
> Since my matrices are exclusively square, M is set to the total number of
> computational cells, while m is equal to the number of computational cells
> within each subdomain/processor.  (Not all processors necessarily have the
> same m, it depends on domain decomposition.)  I do not distinguish between
> m (M) and n (N) since matrices are all square.  Am I wrong to assume that?
>
> err = MatSetType(A, MATAIJ);
> I set the matrix to be of type MATAIJ, to cover runs on one and on more
> processors.  By the way, on one processor everything works fine.
>
> err = MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);
> err = MatSeqAIJSetPreallocation(A, 0, d_nnz);
> The two lines above specify matrix preallocation.  Since d_nz and o_nz
> vary from cell to cell (row to row), I set them to zero and provide arrays
> with the number of diagonal and off-diagonal nonzeros instead.  To my
> understanding, that is legitimate since d_nz and o_nz are ignored if d_nnz and
> o_nnz are provided.  Am I wrong?
>
> Finally, inside a loop through rows and columns I call:
>
> err = MatSetValue(A, row, col, value, INSERT_VALUES);
> Here I make sure that row and col point to global cell (unknown) numbers.
>
> Yet, when I run the code on more than one processor, I get the error:
>
> [3]PETSC ERROR: - Error Message
> --
> [3]PETSC ERROR: Argument out of range
> [3]PETSC ERROR: New nonzero at (21,356) caused a malloc
> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn
> off this check
>
> [3]PETSC ERROR: #1 MatSetValues_MPIAIJ() at
> /home/niceno/Development/petsc-debug/src/mat/impls/aij/mpi/mpiaij.c:517
> [3]PETSC ERROR: #2 MatSetValues() at
> /home/niceno/Development/petsc-debug/src/mat/interface/matrix.c:1398
> [3]PETSC ERROR: #3 MatSetValues_MPIAIJ() at
> /home/niceno/Development/petsc-debug/src/mat/impls/aij/mpi/mpiaij.c:517
> [3]PETSC ERROR: #4 MatSetValues() at
> /home/niceno/Development/petsc-debug/src/mat/interface/matrix.c:1398
>
> and so forth, for roughly 10% of all matrix entries.  I checked if these
> errors occur only for off-diagonal parts of the matrix entries, but that is
> not the case.
>
> Error code is 63; PETSC_ERR_ARG_OUTOFRANGE
>
> Does anyone have an idea what I am doing wrong?  Are any of my assumptions
> above (like thinking n (N) is always m (M) for square matrices, or that I can
> pass zeros as d_nz and o_nz if I provide arrays d_nnz[] and o_nnz[]) wrong?
>

That is correct.


> Any idea how to debug it, where to look for an error?
>
>
I would guess that you are counting your o_nnz incorrectly. It looks like
a small number of equations per process, because the 4th process has row 21,
apparently. Does that sound right?

And column 356 is going to be in the off-diagonal block (i.e., "o").  I would
start with a serial matrix and run with -info. This will be noisy, but you
will see things like "number of unneeded..." so that you can verify that you
have set d_nnz perfectly (there should be 0 unneeded).
Then try two processors. If it fails, you could add a print statement
every time the row (e.g., 21) is added to and check what your code for
computing o_nnz is doing.
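
Something like this sketch would do (with "rank" obtained from MPI_Comm_rank;
not your actual code):

  if (row == 21) printf("rank %d inserting (%d, %d)\n", rank, (int)row, (int)col);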

> I am carefully checking all the data I send to PETSc functions and it looks
> correct to me, but maybe I lack some fundamental understanding of what
> should be provided to PETSc, as I write above?
>

It is a bit confusing at first. The man page gives a concrete example
https://petsc.org/main/docs/manualpages/Mat/MatCreateAIJ.html#MatCreateAIJ
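
In case it helps, here is a minimal sketch of the usual d_nnz/o_nnz counting.
It assumes (my assumption, not your code) that each local row is stored in CSR
arrays ia[] (row pointers) and ja[] (global column indices), and that m, d_nnz,
and o_nnz are PetscInt as in your calls:

  PetscInt rstart, rend, i, k;
  MPI_Scan(&m, &rend, 1, MPIU_INT, MPI_SUM, MPI_COMM_WORLD);
  rstart = rend - m;                     /* this rank owns global rows/columns [rstart, rend) */
  for (i = 0; i < m; i++) {
    d_nnz[i] = o_nnz[i] = 0;
    for (k = ia[i]; k < ia[i + 1]; k++) {
      if (ja[k] >= rstart && ja[k] < rend) d_nnz[i]++; /* column owned here -> "d" block      */
      else                                 o_nnz[i]++; /* column owned elsewhere -> "o" block */
    }
  }
  MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);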


>
>
> Best regards,
>
>
> Bojan Niceno
>
>


[petsc-users] (no subject)

2022-02-17 Thread Bojan Niceno
Dear all,


I am experiencing difficulties when using PETSc in parallel in an
unstructured CFD code.  It uses the CRS format to store its matrices.  I use
the following sequence of PETSc calls in the hope of getting PETSc to solve my
linear systems in parallel.  Before I continue, I would just like to say
that the code has been MPI parallel for a long time; it performs its own
domain decomposition through METIS and works out its own communication
patterns, which work with its home-grown (non-PETSc) linear solvers.
Anyhow, I issue the following calls:

err = PetscInitialize(0, NULL, (char*)0, help);

err = MatCreate(MPI_COMM_WORLD, A);
In the above, I use MPI_COMM_WORLD instead of PETSC_COMM_SELF because the
call to MPI_Init is invoked outside of PETSc, from the main program.

err = MatSetSizes(A, m, m, M, M);
Since my matrices are exclusively square, M is set to the total number of
computational cells, while m is equal to the number of computational cells
within each subdomain/processor.  (Not all processors necessarily have the
same m, it depends on domain decomposition.)  I do not distinguish between
m (M) and n (N) since matrices are all square.  Am I wrong to assume that?

err = MatSetType(A, MATAIJ);
I set the matrix to be of type MATAIJ, to cover runs on one and on more
processors.  By the way, on one processor everything works fine.

err = MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);
err = MatSeqAIJSetPreallocation(A, 0, d_nnz);
The two lines above specify matrix preallocation.  Since d_nz and o_nz vary
from cell to cell (row to row), I set them to zero and provide arrays with
the number of diagonal and off-diagonal nonzeros instead.  To my understanding,
that is legitimate since d_nz and o_nz are ignored if d_nnz and o_nnz are
provided.  Am I wrong?

Finally, inside a loop through rows and columns I call:

err = MatSetValue(A, row, col, value, INSERT_VALUES);
Here I make sure that row and col point to global cell (unknown) numbers.

Yet, when I run the code on more than one processor, I get the error:

[3]PETSC ERROR: - Error Message
--
[3]PETSC ERROR: Argument out of range
[3]PETSC ERROR: New nonzero at (21,356) caused a malloc
Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn
off this check

[3]PETSC ERROR: #1 MatSetValues_MPIAIJ() at
/home/niceno/Development/petsc-debug/src/mat/impls/aij/mpi/mpiaij.c:517
[3]PETSC ERROR: #2 MatSetValues() at
/home/niceno/Development/petsc-debug/src/mat/interface/matrix.c:1398
[3]PETSC ERROR: #3 MatSetValues_MPIAIJ() at
/home/niceno/Development/petsc-debug/src/mat/impls/aij/mpi/mpiaij.c:517
[3]PETSC ERROR: #4 MatSetValues() at
/home/niceno/Development/petsc-debug/src/mat/interface/matrix.c:1398

and so forth, for roughly 10% of all matrix entries.  I checked if these
errors occur only for off-diagonal parts of the matrix entries, but that is
not the case.

Error code is 63; PETSC_ERR_ARG_OUTOFRANGE

Does anyone have an idea what I am doing wrong?  Are any of my assumptions
above (like thinking n (N) is always m (M) for square matrices, or that I can
pass zeros as d_nz and o_nz if I provide arrays d_nnz[] and o_nnz[]) wrong?
Any idea how to debug it, where to look for an error?

I am carefully checking all the data I send to PETSc functions and it looks
correct to me, but maybe I lack some fundamental understanding of what
should be provided to PETSc, as I write above?


Best regards,


Bojan Niceno


Re: [petsc-users] Why is there MATMPIAIJ

2022-02-17 Thread Matthew Knepley
On Thu, Feb 17, 2022 at 7:01 AM Bojan Niceno <
bojan.niceno.scient...@gmail.com> wrote:

> Dear all,
>
> I am coupling my unstructured CFD solver with PETSc.  At the moment, the
> sequential version is working fine, but I obviously want to migrate to MPI
> parallel.  My code has been MPI parallel for ages.
>
> Anyhow, as a part of the migration to parallel, I changed the matrix type
> from MATSEQAIJ to MATMPIAIJ.  The code compiled, but when I executed it on one
> processor, I received an error message that this matrix format does not
> support the BICG solver and PCILU preconditioner.  I took a look at the
> compatibility matrix (
> https://petsc.org/release/overview/linear_solve_table/#preconditioners)
> and noticed that MATMPIAIJ supports only the MKL CPardiso preconditioner, which
> seems to belong to Intel.
>
> I did some more reading and realised that I should probably continue with
> MATAIJ (which should work in sequential and parallel), but I am wondering
> why there would even be MATMPIAIJ if it supports only one third-party
> preconditioner?
>

1) MATAIJ is not a concrete type, it just creates MATSEQAIJ in serial and
MATMPIAIJ in parallel

2) MATMPIAIJ supports many parallel direct solvers (see the end of
https://petsc.org/main/docs/manual/ksp/), including

  MUMPS
  SuperLU_dist
  Hypre (Euclid)
  CPardiso

There are also parallel AMG solvers, parallel DD solvers, and Krylov
solvers.

The complaint you got said that a serial LU was being used with a parallel
matrix type, so using AIJ is the right solution.
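
As a sketch of the usual pattern (not your exact code): with MATAIJ the same
source runs on one rank or many, and the solver is chosen on the command line,
e.g. -ksp_type bicg -pc_type ilu in serial, or
-ksp_type bicg -pc_type bjacobi -sub_pc_type ilu in parallel:

  MatSetType(A, MATAIJ);       /* MATSEQAIJ on 1 rank, MATMPIAIJ on more */
  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPSetFromOptions(ksp);      /* reads -ksp_type, -pc_type, ... */
  KSPSolve(ksp, b, x);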

  Thanks,

 Matt


> Cheers,
>
> Bojan Niceno
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


[petsc-users] Why is there MATMPIAIJ

2022-02-17 Thread Bojan Niceno
Dear all,

I am coupling my unstructured CFD solver with PETSc.  At the moment, the
sequential version is working fine, but I obviously want to migrate to MPI
parallel.  My code has been MPI parallel for ages.

Anyhow, as a part of the migration to parallel, I changed the matrix type
from MATSEQAIJ to MATMPIAIJ.  The code compiled, but when I executed it on one
processor, I received an error message that this matrix format does not
support the BICG solver and PCILU preconditioner.  I took a look at the
compatibility matrix (
https://petsc.org/release/overview/linear_solve_table/#preconditioners) and
noticed that MATMPIAIJ supports only the MKL CPardiso preconditioner, which
seems to belong to Intel.

I did some more reading and realised that I should probably continue with
MATAIJ (which should work in sequential and parallel), but I am wondering
why there would even be MATMPIAIJ if it supports only one third-party
preconditioner?

Cheers,

Bojan Niceno


[petsc-users] Reuse symbolic factorization with petsc - mumps

2022-02-17 Thread 459543524 via petsc-users
Thank you, sir.
I have now modified my code as follows.
Everything works well.
 -
// stage 1:

Vec x1, b1;
Vec x2, b2;
Mat A, P, F;

PC pc;
// solve first system
MatCreateAIJ(A, ...)
MatSetValues(A, ...)
MatAssemblyBegin(A, ...)
MatAssemblyEnd(A, ...)

KSPSetOperators(ksp, A, A);
KSPSetType(ksp, KSPPREONLY);
KSPGetPC(ksp, &pc);
PCSetType(pc, PCLU);
PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);
PCFactorSetUpMatSolverType(pc);
PCFactorGetMatrix(pc, &F);
MatMumpsSetIcntl(F, 7, 5); // configure mumps.
KSPSolve(ksp, b1, x1);

// solve second system
MatZeroEntries(A);
MatSetValues(A, ...);
MatAssemblyBegin(A, ...);
MatAssemblyEnd(A, ...);

KSPSetOperators(ksp, A, A);
KSPSolve(ksp, b2, x2);
  -


------------------ Original message ------------------
From: "Jose E. Roman"
https://lists.mcs.anl.gov/pipermail/petsc-users/2013-March/016646.html
  
  I have seen that there exist the APIs MatLUFactorSymbolic() and
MatLUFactorNumeric(), but I have no idea how to call them.
  
  Could you please give me an example how to reuse the symbolic 
factorization when using MUMPS in petsc?
  
  Thanks for your time.
  
  Xu Hui
  
  
 

Re: [petsc-users] Reuse symbolic factorization with petsc - mumps

2022-02-17 Thread Jose E. Roman
Please always respond to the list.

Yes, those lines are not needed every time, just the first one. Anyway, they do 
not imply a big overhead.
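
In other words, the second (and any later) solve can be reduced to just this
(a sketch based on your code):

  MatZeroEntries(A);
  MatSetValues(A, ...);          /* same nonzero pattern as before */
  MatAssemblyBegin(A, ...);
  MatAssemblyEnd(A, ...);
  KSPSetOperators(ksp, A, A);    /* pattern unchanged, so the symbolic factorization is reused */
  KSPSolve(ksp, b2, x2);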

Jose


> On 17 Feb 2022, at 11:45, 459543524 <459543...@qq.com> wrote:
> 
> Thanks for your reply sir. 
> 
> I can now reuse the sparsity pattern.
> I solve two linear systems and found that 'MatLUFactorSym' is called 1 time and
> 'MatLUFactorNum' 2 times.
> I modified my code as follows.
> 
> > -
> // stage 1:
> 
> Vec x1, b1;
> Vec x2, b2;
> Mat A, P, F;
> PC pc;
> 
> // solve first system
> MatCreateAIJ(A, ...)
> MatSetValues(A, ...)
> MatAssemblyBegin(A, ...)
> MatAssemblyEnd(A, ...)
> 
> KSPSetOperators(ksp, A, A);
> KSPSetType(ksp, KSPPREONLY);
> KSPGetPC(ksp, &pc);
> PCSetType(pc, PCLU);
> PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);
> PCFactorSetUpMatSolverType(pc);
> PCFactorGetMatrix(pc, &F);
> MatMumpsSetIcntl(F, 7, 5); // configure mumps.
> KSPSolve(ksp, b1, x1);
> 
> // solve second system
> MatZeroEntries(A);
> MatSetValues(A, ...);
> MatAssemblyBegin(A, ...);
> MatAssemblyEnd(A, ...);
> 
> KSPSetOperators(ksp, A, A);
> KSPSetType(ksp, KSPPREONLY);
> KSPGetPC(ksp, &pc);
> PCSetType(pc, PCLU);
> PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);
> PCFactorSetUpMatSolverType(pc);
> PCFactorGetMatrix(pc, &F);
> MatMumpsSetIcntl(F, 7, 5); // configure mumps.
> KSPSolve(ksp, b2, x2);
> > -
> 
> My question is: in the code, we call the following code block twice
> ---
> KSPSetType(ksp, KSPPREONLY);
> KSPGetPC(ksp, &pc);
> PCSetType(pc, PCLU);
> PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);
> PCFactorSetUpMatSolverType(pc);
> PCFactorGetMatrix(pc, &F);
> MatMumpsSetIcntl(F, 7, 5); // configure mumps.
> ---
> Does this introduce an unnecessarily big computational overhead?
> Can the code be further simplified for better performance?
> 
> Thanks for your time.
>  
> 
> 
> -- Original message --
> From: "Jose E. Roman" ;
> Sent: Thursday, 17 February 2022, 5:17 PM
> To: "459543524"<459543...@qq.com>;
> Cc: "petsc-users";
> Subject: Re: [petsc-users] Reuse symbolic factorization with petsc - mumps
> 
> Since version 3.5, KSPSetOperators() will check if the passed matrix has the 
> same sparse pattern as the previously set one, so you don't have to do 
> anything.
> 
> The list of changes in version 3.5 has this note:
> "KSPSetOperators() no longer has the MatStructure argument. The Mat objects 
> now track that information themselves. Use KSP/PCSetReusePreconditioner() to 
> prevent the recomputation of the preconditioner if the operator changed in 
> the way that SAME_PRECONDITIONER did with KSPSetOperators()."
> 
> You don't call MatLUFactorSymbolic() yourself, it is called internally. You 
> can check with -log_view if the number of calls to MatLUFactorSymbolic() is 
> as expected.
> 
> Jose
> 
> 
> 
> > On 17 Feb 2022, at 9:42, 459543524 via petsc-users 
> >  wrote:
> > 
> > Sir, I have a problem when using petsc.
> > 
> > I want to solve a series of linear equations.
> > 
> > A1*x1=b1, A2*x2=b2, A3*x3=b3 ...
> > 
> > The A1, A2, A3 have the same sparsity pattern.
> > 
> > I want to use MUMPS to solve the system.
> > In order to enhance performance, I want to reuse the symbolic factorization.
> > 
> > Here is my code for solving a single linear system:
> > -
> > Mat A, P, F;
> > PC pc;
> > Vec rhs_vec, result_vec;
> > KSPSetOperators(ksp, A, A);
> > KSPSetType(ksp, KSPPREONLY);
> > KSPGetPC(ksp, &pc);
> > PCSetType(pc, PCLU);
> > PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);
> > PCFactorSetUpMatSolverType(pc);
> > PCFactorGetMatrix(pc, &F);
> > MatMumpsSetIcntl(F, 7, 5); // configure mumps.
> > KSPSolve(ksp, rhs_vec, result_vec);
> > -
> > 
> > 
> > I have no idea how to reuse symbolic factorization when using MUMPS.
> > 
> > I have seen information on the internet. The PETSc developers have 
> > suggested using:
> > KSPSetOperators(KSP_A, A, A, DIFFERENT_NONZERO_PATTERN)
> > KSPSetOperators(KSP_A, A, A, SAME_NONZERO_PATTERN)
> > However, this API seems deprecated.
> > see https://lists.mcs.anl.gov/pipermail/petsc-users/2013-March/016646.html
> > 
> > I have seen that there exist the APIs MatLUFactorSymbolic() and MatLUFactorNumeric(), but 
> > I have no idea how to call them.
> > 
> > Could you please give me an example how to reuse the symbolic factorization 
> > when using MUMPS in petsc?
> > 
> > Thanks for your time.
> > 
> > Xu Hui
> > 
> > 
> >  



Re: [petsc-users] Reuse symbolic factorization with petsc - mumps

2022-02-17 Thread Jose E. Roman
Since version 3.5, KSPSetOperators() will check if the passed matrix has the 
same sparse pattern as the previously set one, so you don't have to do anything.

The list of changes in version 3.5 has this note:
"KSPSetOperators() no longer has the MatStructure argument. The Mat objects now 
track that information themselves. Use KSP/PCSetReusePreconditioner() to 
prevent the recomputation of the preconditioner if the operator changed in the 
way that SAME_PRECONDITIONER did with KSPSetOperators()."
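
(If you ever do want to keep using the existing factorization even after the
matrix values change, which is what SAME_PRECONDITIONER used to do, a minimal
sketch would be

  KSPSetReusePreconditioner(ksp, PETSC_TRUE);  /* keep the current factorization */
  KSPSetOperators(ksp, A, A);
  KSPSolve(ksp, b, x);

but that is not needed for simply reusing the symbolic factorization.)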

You don't call MatLUFactorSymbolic() yourself, it is called internally. You can 
check with -log_view if the number of calls to MatLUFactorSymbolic() is as 
expected.

Jose



> On 17 Feb 2022, at 9:42, 459543524 via petsc-users 
>  wrote:
> 
> Sir, I have a problem when using petsc.
> 
> I want to solve a series of linear equations.
> 
> A1*x1=b1, A2*x2=b2, A3*x3=b3 ...
> 
> The A1, A2, A3 have the same sparsity pattern.
> 
> I want to use MUMPS to solve the system.
> In order to enhance performance, I want to reuse the symbolic factorization.
> 
> Here is my code for solving a single linear system:
> -
> Mat A, P, F;
> PC pc;
> Vec rhs_vec, result_vec;
> KSPSetOperators(ksp, A, A);
> KSPSetType(ksp, KSPPREONLY);
> KSPGetPC(ksp, &pc);
> PCSetType(pc, PCLU);
> PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);
> PCFactorSetUpMatSolverType(pc);
> PCFactorGetMatrix(pc, &F);
> MatMumpsSetIcntl(F, 7, 5); // configure mumps.
> KSPSolve(ksp, rhs_vec, result_vec);
> -
> 
> 
> I have no idea how to reuse symbolic factorization when using MUMPS.
> 
> I have seen information on the internet. The PETSc developers have suggested 
> using:
> KSPSetOperators(KSP_A, A, A, DIFFERENT_NONZERO_PATTERN)
> KSPSetOperators(KSP_A, A, A, SAME_NONZERO_PATTERN)
> However, this API seems deprecated.
> see https://lists.mcs.anl.gov/pipermail/petsc-users/2013-March/016646.html
> 
> I have seen that there exist the APIs MatLUFactorSymbolic() and MatLUFactorNumeric(), but I 
> have no idea how to call them.
> 
> Could you please give me an example how to reuse the symbolic factorization 
> when using MUMPS in petsc?
> 
> Thanks for your time.
> 
> Xu Hui
> 
> 
>  



Re: [petsc-users] DMView and DMLoad

2022-02-17 Thread Berend van Wachem

Dear Koki,

Many thanks for your help and sorry for the slow reply.

I haven't been able to get it to work successfully. I have attached a 
small example that replicates the main features of our code. In this 
example a Box with one random field is generated, saved and loaded. The 
case works for non-periodic domains and fails for periodic ones. I've 
also included the error output at the bottom of this email.
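
(For reference, the save/load in the example follows the generic
DMView/DMLoad-with-HDF5 pattern, roughly like the sketch below; the actual
attached code uses the corresponding DMPlex*Load calls, as the trace at the
bottom shows. File and variable names here are placeholders.)

  PetscViewerHDF5Open(PETSC_COMM_WORLD, "box.h5", FILE_MODE_WRITE, &viewer);
  DMView(dm, viewer);
  PetscViewerDestroy(&viewer);

  PetscViewerHDF5Open(PETSC_COMM_WORLD, "box.h5", FILE_MODE_READ, &viewer);
  DMCreate(PETSC_COMM_WORLD, &dmload);
  DMSetType(dmload, DMPLEX);
  DMLoad(dmload, viewer);
  PetscViewerDestroy(&viewer);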


To switch between periodic and non-periodic, please comment/uncomment 
lines 47 to 52 in src/main.c. To compile, the files "compile" and 
"CMakeLists.txt" are included in a separate tar file, if you want to use 
this. Your library paths should be updated in the latter file. The PETSc 
main distribution is used.


Many thanks for your help!

Thanks and best regards,

Berend.



The error message with --with-debugging=no --with-errorchecking=no:

[0]PETSC ERROR: - Error Message 


[0]PETSC ERROR: Invalid argument
[0]PETSC ERROR: Number of coordinates loaded 3168 does not match number 
of vertices 1000

[0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.16.2-499-g9039b796d1 
GIT Date: 2021-12-24 23:23:09 +
[0]PETSC ERROR: ./bin/restart_periodic on a arch-linux-c-opt named james 
by serbenlo Thu Dec 30 20:53:22 2021
[0]PETSC ERROR: Configure options --with-debugging=no 
--with-errorchecking=no --with-clean --download-metis=yes 
--download-parmetis=yes --download-hdf5 --download-p4est 
--download-triangle --download-tetgen 
--with-zlib-lib=/usr/lib/x86_64-linux-gnu/libz.a 
--with-zlib-include=/usr/include --with-mpi=yes --with-mpi-dir=/usr 
--with-mpiexec=/usr/bin/mpiexec
[0]PETSC ERROR: #1 DMPlexCoordinatesLoad_HDF5_V0_Private() at 
/usr/local/petsc_main/src/dm/impls/plex/plexhdf5.c:1387
[0]PETSC ERROR: #2 DMPlexCoordinatesLoad_HDF5_Internal() at 
/usr/local/petsc_main/src/dm/impls/plex/plexhdf5.c:1419
[0]PETSC ERROR: #3 DMPlexCoordinatesLoad() at 
/usr/local/petsc_main/src/dm/impls/plex/plex.c:2070
[0]PETSC ERROR: #4 main() at 
/media/MANNHEIM/Arbeit/OvGU_PostDoc_2021/Projects/MF_Restart/Periodic_DM/restart-periodic/src/main.c:229

[0]PETSC ERROR: No PETSc Option Table entries
[0]PETSC ERROR: End of Error Message ---send entire 
error message to petsc-ma...@mcs.anl.gov--

application called MPI_Abort(MPI_COMM_WORLD, 62) - process 0


The error message with --with-debugging=yes --with-errorchecking=yes:

[0]PETSC ERROR: - Error Message 
-

[0]PETSC ERROR: Null argument, when expecting valid pointer
[1]PETSC ERROR: - Error Message 
-

[1]PETSC ERROR: Null argument, when expecting valid pointer
[1]PETSC ERROR: Null Object: Parameter # 1
[1]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
[1]PETSC ERROR: Petsc Development GIT revision: v3.16.2-499-g9039b796d1 
GIT Date: 2021-12-24 23:23:09 +
[1]PETSC ERROR: ./bin/restart_periodic on a arch-linux-c-opt named james 
by serbenlo Thu Dec 30 20:17:22 2021
[1]PETSC ERROR: Configure options --with-debugging=yes 
--with-errorchecking=yes --with-clean --download-metis=yes 
--download-parmetis=yes --download-hdf5 --download-p4est 
--download-triangle --download-tetgen 
--with-zlib-lib=/usr/lib/x86_64-linux-gnu/libz.a 
--with-zlib-include=/usr/include --with-mpi=yes --with-mpi-dir=/usr 
--with-mpiexec=/usr/bin/mpiexec
[1]PETSC ERROR: #1 PetscSectionGetDof() at 
/usr/local/petsc_main/src/vec/is/section/interface/section.c:807

[1]PETSC ERROR: [0]PETSC ERROR: Null Object: Parameter # 1
[0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.16.2-499-g9039b796d1 
GIT Date: 2021-12-24 23:23:09 +
[0]PETSC ERROR: ./bin/restart_periodic on a arch-linux-c-opt named james 
by serbenlo Thu Dec 30 20:17:22 2021
[0]PETSC ERROR: Configure options --with-debugging=yes 
--with-errorchecking=yes --with-clean --download-metis=yes 
--download-parmetis=yes --download-hdf5 --download-p4est 
--download-triangle --download-tetgen 
--with-zlib-lib=/usr/lib/x86_64-linux-gnu/libz.a 
--with-zlib-include=/usr/include --with-mpi=yes --with-mpi-dir=/usr 
--with-mpiexec=/usr/bin/mpiexec
#2 DMDefaultSectionCheckConsistency_Internal() at 
/usr/local/petsc_main/src/dm/interface/dm.c:4489
[1]PETSC ERROR: #3 DMSetGlobalSection() at 
/usr/local/petsc_main/src/dm/interface/dm.c:4583
[1]PETSC ERROR: [0]PETSC ERROR: #1 PetscSectionGetDof() at 
/usr/local/petsc_main/src/vec/is/section/interface/section.c:807
[0]PETSC ERROR: #4 main() at 
/media/MANNHEIM/Arbeit/OvGU_PostDoc_2021/Projects/MF_Restart/Periodic_DM/restart-periodic/src/main.c:164

[1]PETSC ERROR: No PETSc Option Table entries
[1]PETSC ERROR: #2 DMDefaultSectionCheckConsistency_Internal() at 
/usr/local/petsc_main/src/dm/interface/dm.c:4489

[petsc-users] Reuse symbolic factorization with petsc - mumps

2022-02-17 Thread 459543524 via petsc-users
Sir, I have a problem when using PETSc.


I want to solve a series of linear equations.


A1*x1=b1, A2*x2=b2, A3*x3=b3 ...


The A1, A2, A3 have the same sparsity pattern.


I want to use MUMPS to solve the system.
In order to enhance performance, I want to reuse the symbolic factorization.


Here is my code for solving a single linear system:
-
Mat A, P, F;
PC pc;
Vec rhs_vec, result_vec;
KSPSetOperators(ksp, A, A);
KSPSetType(ksp, KSPPREONLY);
KSPGetPC(ksp, &pc);
PCSetType(pc, PCLU);
PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);
PCFactorSetUpMatSolverType(pc);
PCFactorGetMatrix(pc, &F);
MatMumpsSetIcntl(F, 7, 5); // configure mumps.
KSPSolve(ksp, rhs_vec, result_vec);
-




I have no idea how to reuse symbolic factorization when using MUMPS.


I have seen information on the internet. The PETSc developers have suggested 
using:
KSPSetOperators(KSP_A, A, A, DIFFERENT_NONZERO_PATTERN)
KSPSetOperators(KSP_A, A, A, SAME_NONZERO_PATTERN)
However, this API seems deprecated.
see https://lists.mcs.anl.gov/pipermail/petsc-users/2013-March/016646.html


I have seen that there exist the APIs MatLUFactorSymbolic() and MatLUFactorNumeric(), 
but I have no idea how to call them.


Could you please give me an example of how to reuse the symbolic factorization 
when using MUMPS in PETSc?


Thanks for your time.


Xu Hui