Re: [petsc-users] Kokkos Interface for PETSc

2022-02-17 Thread Richard Tran Mills via petsc-users
Hi Philip, Sorry to be a bit late in my reply. Jed has explained the gist of what's involved with using the Kokkos/Kokkos-kernels back-end for the PETSc solves, though, depending on exactly how Xolotl creates its vectors, there may be a bit of work required to ensure that the command-line
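For orientation, a minimal sketch of what run-time selection of the Kokkos back-end might look like, assuming the application creates its objects through the usual *SetFromOptions() path; the matrix sizes are placeholders and error checking is omitted for brevity:

  #include <petsc.h>

  int main(int argc, char **argv)
  {
    Mat A;
    Vec x, b;

    PetscInitialize(&argc, &argv, NULL, NULL);

    /* Let the back-end be chosen on the command line, e.g.
         ./app -vec_type kokkos -mat_type aijkokkos          */
    MatCreate(PETSC_COMM_WORLD, &A);
    MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, 100, 100);
    MatSetFromOptions(A);
    MatSetUp(A);

    /* Vectors created from the matrix inherit a compatible type */
    MatCreateVecs(A, &x, &b);

    /* ... assemble A and b, then KSPSolve() as usual ... */

    VecDestroy(&x); VecDestroy(&b); MatDestroy(&A);
    PetscFinalize();
    return 0;
  }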

Re: [petsc-users] (no subject)

2022-02-17 Thread Mark Adams
Please keep this on the list. On Thu, Feb 17, 2022 at 12:36 PM Bojan Niceno < bojan.niceno.scient...@gmail.com> wrote: > Dear Mark, > > Sorry for mistakenly calling you Adam before. > > I was thinking about the o_nnz as you suggested, but then something else > occurred to me. So, I determine the d_nnz
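For reference, a hedged sketch of how d_nnz/o_nnz can be counted from a rank-local CRS structure before calling MatCreateAIJ(); row_ptr, col_idx, row_start and row_end are placeholders for the CFD code's own data, and error checking is omitted:

  /* d_nnz[i]: nonzeros of local row i whose column lies in this rank's
     ownership range (the "diagonal" block); o_nnz[i]: all the others.  */
  PetscInt m = row_end - row_start;          /* number of local rows    */
  PetscInt *d_nnz, *o_nnz;
  Mat      A;

  PetscMalloc2(m, &d_nnz, m, &o_nnz);
  for (PetscInt i = 0; i < m; i++) {
    d_nnz[i] = o_nnz[i] = 0;
    for (PetscInt k = row_ptr[i]; k < row_ptr[i+1]; k++) {
      PetscInt col = col_idx[k];             /* global column index     */
      if (col >= row_start && col < row_end) d_nnz[i]++;
      else                                   o_nnz[i]++;
    }
  }
  MatCreateAIJ(PETSC_COMM_WORLD, m, m, PETSC_DETERMINE, PETSC_DETERMINE,
               0, d_nnz, 0, o_nnz, &A);
  PetscFree2(d_nnz, o_nnz);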

Re: [petsc-users] (no subject)

2022-02-17 Thread Mark Adams
On Thu, Feb 17, 2022 at 11:46 AM Bojan Niceno < bojan.niceno.scient...@gmail.com> wrote: > Dear all, > > > I am experiencing difficulties when using PETSc in parallel in an > unstructured CFD code. It uses CRS format to store its matrices. I use > the following sequence of PETSc calls in the

[petsc-users] (no subject)

2022-02-17 Thread Bojan Niceno
Dear all, I am experiencing difficulties when using PETSc in parallel in an unstructured CFD code. It uses the CRS format to store its matrices. I use the following sequence of PETSc calls in the hope of getting PETSc to solve my linear systems in parallel. Before I continue, I would just like to say
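For context, a minimal sketch of one possible call sequence for assembling and solving in parallel; nrows_local, row_start, d_nnz, o_nnz, ncols[], cols[] and vals[] are placeholders for the solver's own CRS data, and error checking is omitted:

  Mat A;  Vec x, b;  KSP ksp;

  MatCreateAIJ(PETSC_COMM_WORLD, nrows_local, nrows_local,
               PETSC_DETERMINE, PETSC_DETERMINE, 0, d_nnz, 0, o_nnz, &A);
  for (PetscInt i = 0; i < nrows_local; i++) {
    PetscInt grow = row_start + i;                    /* global row index */
    MatSetValues(A, 1, &grow, ncols[i], cols[i], vals[i], INSERT_VALUES);
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  MatCreateVecs(A, &x, &b);
  /* ... VecSetValues() on b, then VecAssemblyBegin/End() ... */

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPSetFromOptions(ksp);
  KSPSolve(ksp, b, x);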

Re: [petsc-users] Why is there MATMPIAIJ

2022-02-17 Thread Matthew Knepley
On Thu, Feb 17, 2022 at 7:01 AM Bojan Niceno < bojan.niceno.scient...@gmail.com> wrote: > Dear all, > > I am coupling my unstructured CFD solver with PETSc. At this moment, the > sequential version is working fine, but I obviously want to migrate to MPI > parallel. My code has been MPI parallel since

[petsc-users] Why is there MATMPIAIJ

2022-02-17 Thread Bojan Niceno
Dear all, I am coupling my unstructured CFD solver with PETSc. At this moment, the sequential version is working fine, but I obviously want to migrate to MPI parallel. My code has been MPI parallel for ages. Anyhow, as a part of the migration to parallel, I changed the matrix type from MATSEQAIJ to
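One common way to keep a single code path for both cases is the generic type MATAIJ, which resolves to MATSEQAIJ on one process and MATMPIAIJ on several; a minimal sketch with placeholder sizes and preallocation arrays, error checking omitted:

  Mat A;

  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, n_local, n_local, PETSC_DETERMINE, PETSC_DETERMINE);
  MatSetType(A, MATAIJ);   /* MATSEQAIJ on 1 rank, MATMPIAIJ otherwise */
  /* both preallocation calls are safe: the one that does not apply is ignored */
  MatSeqAIJSetPreallocation(A, 0, d_nnz);
  MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);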

[petsc-users] Reuse symbolic factorization with petsc - mumps

2022-02-17 Thread 459543524 via petsc-users
Thanks, sir. I have now modified my code as follows, and everything works well. - // stage 1: Vec x1, b2; Mat A, P, F; PC pc; // solve first system MatCreateAIJ(A, ...) MatSetValues(A, ...) MatAssemblyBegin(A, ...) MatAssemblyEnd(A, ...)

Re: [petsc-users] Reuse symbolic factorization with petsc - mumps

2022-02-17 Thread Jose E. Roman
Please always respond to the list. Yes, those lines are not needed every time, just the first one. Anyway, they do not imply a big overhead. Jose > El 17 feb 2022, a las 11:45, 459543524 <459543...@qq.com> escribió: > > Thanks for your reply sir. > > I now can reuse the sparsity pattern. >

Re: [petsc-users] Reuse symbolic factorization with petsc - mumps

2022-02-17 Thread Jose E. Roman
Since version 3.5, KSPSetOperators() will check whether the passed matrix has the same sparsity pattern as the previously set one, so you don't have to do anything. The list of changes in version 3.5 has this note: "KSPSetOperators() no longer has the MatStructure argument. The Mat objects now track
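In other words, something like the following should be enough; it is only a sketch in which the values of A change between solves but the sparsity pattern does not, the options shown are the usual way to select an exact MUMPS solve, and error checking is omitted:

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPSetFromOptions(ksp);   /* e.g. -ksp_type preonly -pc_type lu
                                    -pc_factor_mat_solver_type mumps */
  KSPSolve(ksp, b1, x1);    /* symbolic + numeric factorization      */

  /* ... refill A with MatSetValues() using the same pattern,
         then MatAssemblyBegin/End() ...                             */

  KSPSolve(ksp, b2, x2);    /* pattern unchanged: only the numeric
                               factorization is redone               */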

Re: [petsc-users] DMView and DMLoad

2022-02-17 Thread Berend van Wachem
Dear Koki, Many thanks for your help and sorry for the slow reply. I haven't been able to get it to work successfully. I have attached a small example that replicates the main features of our code. In this example a Box with one random field is generated, saved and loaded. The case works for
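For reference, a bare-bones save/load sketch with the HDF5 viewer, assuming PETSc is configured with HDF5; "box.h5" and the box options are placeholders, the attached field Vec would be handled analogously with VecView()/VecLoad(), and error checking is omitted:

  DM          dm, dmload;
  PetscViewer viewer;

  /* create and save: run with e.g. -dm_plex_dim 3 -dm_plex_box_faces 4,4,4 */
  DMCreate(PETSC_COMM_WORLD, &dm);
  DMSetType(dm, DMPLEX);
  DMSetFromOptions(dm);
  PetscViewerHDF5Open(PETSC_COMM_WORLD, "box.h5", FILE_MODE_WRITE, &viewer);
  DMView(dm, viewer);
  PetscViewerDestroy(&viewer);

  /* load into a fresh DM */
  DMCreate(PETSC_COMM_WORLD, &dmload);
  DMSetType(dmload, DMPLEX);
  PetscViewerHDF5Open(PETSC_COMM_WORLD, "box.h5", FILE_MODE_READ, &viewer);
  DMLoad(dmload, viewer);
  PetscViewerDestroy(&viewer);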

[petsc-users] Reuse symbolic factorization with petsc - mumps

2022-02-17 Thread 459543524 via petsc-users
Sir, I have a problem when using PETSc. I want to solve a series of linear equations: A1*x1=b1, A2*x2=b2, A3*x3=b3, ... The matrices A1, A2, A3 have the same sparsity pattern. I want to use MUMPS to solve the systems. In order to enhance performance, I want to reuse the symbolic factorization. Here