Re: [petsc-users] 'Inserting a new nonzero' issue on a reassembled matrix in parallel

2019-10-25 Thread Thibaut Appel via petsc-users
seeing an error. Am I not running it correctly? Thanks, Matt Thibaut On 22/10/2019 17:48, Matthew Knepley wrote: On Tue, Oct 22, 2019 at 12:43 PM Thibaut Appel via petsc-users wrote: Hi Hong, Thank you for having a look, I copied/pasted your code snippet into ex28.c and the error i

Re: [petsc-users] 'Inserting a new nonzero' issue on a reassembled matrix in parallel

2019-10-25 Thread Thibaut Appel via petsc-users
processes >>   type: seqaij >> row 0: (0, 1.) >> row 1: (1, 1.) >> row 2: (2, 1.) >> row 3: (3, 1.) >> row 4: (4, 1.) >> row 5: (5, 1.) >> row 6: (6, 1.) >> row 7: (7, 1.) >> row 8: (8, 1.) >> row 9: (9, 1.) >>  row:  0 col:  9 val: 

Re: [petsc-users] 'Inserting a new nonzero' issue on a reassembled matrix in parallel

2019-10-24 Thread Thibaut Appel via petsc-users
0E+00 I am not seeing an error. Am I not running it correctly? Thanks, Matt Thibaut On 22/10/2019 17:48, Matthew Knepley wrote: On Tue, Oct 22, 2019 at 12:43 PM Thibaut Appel via petsc-users <petsc-users@mcs.anl.gov> wrote: Hi Hong, Thank yo

Re: [petsc-users] 'Inserting a new nonzero' issue on a reassembled matrix in parallel

2019-10-22 Thread Thibaut Appel via petsc-users
Hi both, Please find attached a tiny example (in Fortran, sorry Matthew) that - I think - reproduces the problem we mentioned. Let me know. Thibaut On 22/10/2019 17:48, Matthew Knepley wrote: On Tue, Oct 22, 2019 at 12:43 PM Thibaut Appel via petsc-users <petsc-users@mcs.anl.gov>

Re: [petsc-users] 'Inserting a new nonzero' issue on a reassembled matrix in parallel

2019-10-22 Thread Thibaut Appel via petsc-users
Hi Hong, Thank you for having a look, I copied/pasted your code snippet into ex28.c and the error indeed appears if you change that col[0]. That's because you did not allow a new non-zero location in the matrix with the option MAT_NEW_NONZERO_LOCATION_ERR. I spent the day debugging the code
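For reference, a minimal sketch of the situation being described (illustrative only, not the actual ex28.c): a 10x10 matrix with one preallocated nonzero per row is assembled as a diagonal, new nonzero locations are then forbidden, and changing the column in a later MatSetValues call to one outside that pattern raises the error:

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat            A;
    PetscInt       i, row = 0, col, n = 10;
    PetscScalar    one = 1.0;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
    ierr = MatCreateSeqAIJ(PETSC_COMM_SELF, n, n, 1, NULL, &A);CHKERRQ(ierr); /* 1 nonzero/row preallocated */
    for (i = 0; i < n; i++) { ierr = MatSetValues(A, 1, &i, 1, &i, &one, INSERT_VALUES);CHKERRQ(ierr); }
    ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

    /* Freeze the assembled pattern: any new location is now a hard error */
    ierr = MatSetOption(A, MAT_NEW_NONZERO_LOCATION_ERR, PETSC_TRUE);CHKERRQ(ierr);

    /* (0,9) is outside the assembled diagonal, so this MatSetValues call
       fails with "Inserting a new nonzero at (0,9) in the matrix"        */
    col = n - 1;
    ierr = MatSetValues(A, 1, &row, 1, &col, &one, INSERT_VALUES);CHKERRQ(ierr);

    ierr = MatDestroy(&A);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }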

[petsc-users] 'Inserting a new nonzero' issue on a reassembled matrix in parallel

2019-10-21 Thread Thibaut Appel via petsc-users
Dear PETSc developers, I'm extending a validated matrix preallocation/assembly part of my code to solve multiple linear systems with MUMPS at each iteration of a main loop, following the example src/mat/examples/tests/ex28.c that Hong Zhang added a few weeks ago. The difference is that I'm

Re: [petsc-users] MAT_NEW_NONZERO_LOCATION_ERR

2019-10-10 Thread Thibaut Appel via petsc-users
Hi Hong, Thank you, that was unclear to me; now I understand its purpose! Thibaut On 08/10/2019 16:18, Zhang, Hong wrote: Thibaut: Sorry, I did not explain it clearly. You call MatSetOption(A,MAT_NEW_NONZERO_LOCATION_ERR,PETSC_TRUE); AFTER the matrix is assembled. Then no more new nonzero locations are allowed
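In other words, the intended call order is: preallocate and assemble first, and only then set the option. A sketch of that order (the helper name is illustrative):

  #include <petscmat.h>

  /* Sketch of the call order described above: assemble the matrix first,
     then forbid new nonzero locations so that later re-assemblies can only
     update values at locations already present in the matrix.             */
  static PetscErrorCode AssembleThenFreeze(Mat A)
  {
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    /* Set AFTER assembly: from here on, MatSetValues at a location not in
       the assembled nonzero structure fails with "Inserting a new nonzero" */
    ierr = MatSetOption(A, MAT_NEW_NONZERO_LOCATION_ERR, PETSC_TRUE);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }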

Re: [petsc-users] MAT_NEW_NONZERO_LOCATION_ERR

2019-10-08 Thread Thibaut Appel via petsc-users
Well, try and create a small SEQAIJ/MPIAIJ matrix and preallocate memory for the diagonal. When I try to call MatSetValues to fill the diagonal, on the first row I get: [0]PETSC ERROR: Argument out of range [0]PETSC ERROR: Inserting a new nonzero at (0,0) in the matrix, which is within my
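A guess at a minimal reproducer for that message, consistent with the explanation given above in this thread (the option only makes sense once the matrix has been assembled): if it is set before the first assembly, even insertions into the preallocated diagonal count as "new" nonzero locations:

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat            A;
    PetscInt       i, n = 10;
    PetscScalar    one = 1.0;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
    ierr = MatCreateSeqAIJ(PETSC_COMM_SELF, n, n, 1, NULL, &A);CHKERRQ(ierr); /* diagonal preallocated */

    /* Option set BEFORE anything is assembled: the very first insertion,
       at (0,0), is already reported as "Inserting a new nonzero at (0,0)" */
    ierr = MatSetOption(A, MAT_NEW_NONZERO_LOCATION_ERR, PETSC_TRUE);CHKERRQ(ierr);

    for (i = 0; i < n; i++) { ierr = MatSetValues(A, 1, &i, 1, &i, &one, INSERT_VALUES);CHKERRQ(ierr); }
    ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

    ierr = MatDestroy(&A);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }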

[petsc-users] MAT_NEW_NONZERO_LOCATION_ERR

2019-10-08 Thread Thibaut Appel via petsc-users
Hi, Just out of curiosity, I'm a bit confused by the option MAT_NEW_NONZERO_LOCATION_ERR. It triggers an error if you try to insert/add a value at a new location in the non-zero structure, regardless of the matrix preallocation status. In what case would such an option be useful? Thank you,

Re: [petsc-users] Solving a sequence of linear systems stored on disk with MUMPS

2019-08-02 Thread Thibaut Appel via petsc-users
and MatMumpsLoadFromDisk(Mat); they would just monkey with the DMUMPS_STRUC_C id; item. > > >    Barry > > >> On Jul 23, 2019, at 9:24 AM, Thibaut Appel via

Re: [petsc-users] Solving a sequence of linear systems stored on disk with MUMPS

2019-07-29 Thread Thibaut Appel via petsc-users
with the DMUMPS_STRUC_C id; item. > > >    Barry > > >> On Jul 23, 2019, at 9:24 AM, Thibaut Appel via petsc-users <petsc-users@mcs.anl.gov> wrote: >> >> Dear PETSc users,

Re: [petsc-users] Solving a sequence of linear systems stored on disk with MUMPS

2019-07-25 Thread Thibaut Appel via petsc-users
monkey with the DMUMPS_STRUC_C id; item. > > >    Barry > > >> On Jul 23, 2019, at 9:24 AM, Thibaut Appel via petsc-users <petsc-users@mcs.anl.gov> wrote: >> >> Dear PETSc users, >> >> I need to solve several linear sy

Re: [petsc-users] Solving a sequence of linear systems stored on disk with MUMPS

2019-07-23 Thread Thibaut Appel via petsc-users
in the DMUMPS_STRUC_C id; and then reload it when needed. The user-level API could be something like MatMumpsSaveToDisk(Mat) and MatMumpsLoadFromDisk(Mat); they would just monkey with the DMUMPS_STRUC_C id; item. Barry On Jul 23, 2019, at 9:24 AM, Thibaut Appel via petsc-users wrote: Dear PETSc

Re: [petsc-users] Solving a sequence of linear systems stored on disk with MUMPS

2019-07-23 Thread Thibaut Appel via petsc-users
Hi Hong, A_m would typically have a leading dimension between 6e5 and 1.5e6, with roughly 100 non-zero entries per row on average. Don't get me wrong: performing ONE LU factorization is fine for the memory. It's just that I need to keep track of, and store, M LU factorizations, which obviously
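For scale, a rough, illustrative estimate only (assuming complex double precision and 32-bit indices): with ~1e6 rows and ~100 nonzeros per row, A_m alone holds about 1e8 entries, i.e. roughly 1e8 x (16 + 4) bytes ≈ 2 GB; the LU factors typically carry several times that much fill-in, so storing M of them quickly reaches tens or hundreds of GB.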

[petsc-users] Solving a sequence of linear systems stored on disk with MUMPS

2019-07-23 Thread Thibaut Appel via petsc-users
Dear PETSc users, I need to solve several linear systems successively, with LU factorization, as part of an iterative process in my Fortran application code. The process would solve M systems (A_m)(x_m,i) = (b_m,i) for m=1,M at each iteration i, but computing the LU factorization of A_m
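For context, a minimal sketch of the standard PETSc setup for one such system in C (the Fortran interface is analogous); the helper name and the reuse pattern are illustrative, not the poster's code:

  #include <petscksp.h>

  /* Illustrative: factor A_m once with MUMPS and reuse the factorization
     for every right-hand side b_{m,i} of the outer iteration.            */
  static PetscErrorCode SolveWithMumps(Mat A, Vec b, Vec x, KSP *ksp)
  {
    PC             pc;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    if (!*ksp) {                                         /* first call: set up and factor */
      ierr = KSPCreate(PetscObjectComm((PetscObject)A), ksp);CHKERRQ(ierr);
      ierr = KSPSetOperators(*ksp, A, A);CHKERRQ(ierr);
      ierr = KSPSetType(*ksp, KSPPREONLY);CHKERRQ(ierr); /* pure direct solve */
      ierr = KSPGetPC(*ksp, &pc);CHKERRQ(ierr);
      ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);
      ierr = PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);CHKERRQ(ierr);
      ierr = KSPSetFromOptions(*ksp);CHKERRQ(ierr);
    }
    ierr = KSPSolve(*ksp, b, x);CHKERRQ(ierr);           /* reuses the LU factors */
    PetscFunctionReturn(0);
  }

Keeping one such KSP per system m keeps all M factorizations alive in memory, which is exactly the storage concern discussed elsewhere in the thread.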

Re: [petsc-users] About DMDA (and extracting its ordering)

2019-02-25 Thread Thibaut Appel via petsc-users
it is not associated with a target Thibaut On 22 Feb 2019, at 15:13, Matthew Knepley <knep...@gmail.com> wrote: On Fri, Feb 22, 2019 at 9:10 AM Thibaut Appel via petsc-users <petsc-users@mcs.anl.gov> wrote:

Re: [petsc-users] About DMDA (and extracting its ordering)

2019-02-22 Thread Thibaut Appel via petsc-users
On 21/02/2019 17:49, Matthew Knepley wrote: On Thu, Feb 21, 2019 at 11:16 AM Thibaut Appel via petsc-users <petsc-users@mcs.anl.gov> wrote: Dear PETSc developers/users, I’m solving linear PDEs on a regular grid with high-order finite differences, assembling an MPIAIJ matrix t

Re: [petsc-users] About DMDA (and extracting its ordering)

2019-02-21 Thread Thibaut Appel via petsc-users
ge cannot be called if the matrix hasn't been preallocated, and I need the global indices to preallocate. Thibaut On 21/02/2019 17:49, Matthew Knepley wrote: On Thu, Feb 21, 2019 at 11:16 AM Thibaut Appel via petsc-users <petsc-users@mcs.anl.gov> wrote: Dear PETSc develope

[petsc-users] About DMDA (and extracting its ordering)

2019-02-21 Thread Thibaut Appel via petsc-users
Dear PETSc developers/users, I’m solving linear PDEs on a regular grid with high-order finite differences, assembling an MPIAIJ matrix to solve linear systems or eigenvalue problems. I’ve been using vertex major, natural ordering for the parallelism with PetscSplitOwnership (yielding
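For reference, one possible sketch of the DMDA route in C (stencil width 1 and the helper name are placeholders; a high-order scheme needs its true stencil width here):

  #include <petscdmda.h>

  /* Illustrative: let a DMDA own the 2D grid decomposition. DMCreateMatrix
     returns a preallocated MPIAIJ matrix in the DMDA's global ordering, and
     the AO converts natural (vertex-major) indices to that ordering.       */
  static PetscErrorCode CreateGridMatrix(PetscInt nx, PetscInt ny, PetscInt dof,
                                         DM *da, Mat *A, AO *ao)
  {
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                        DMDA_STENCIL_STAR, nx, ny, PETSC_DECIDE, PETSC_DECIDE,
                        dof, 1, NULL, NULL, da);CHKERRQ(ierr);
    ierr = DMSetFromOptions(*da);CHKERRQ(ierr);
    ierr = DMSetUp(*da);CHKERRQ(ierr);
    ierr = DMCreateMatrix(*da, A);CHKERRQ(ierr); /* preallocated for the stencil   */
    ierr = DMDAGetAO(*da, ao);CHKERRQ(ierr);     /* natural <-> PETSc ordering map */
    PetscFunctionReturn(0);
  }

AOApplicationToPetsc(ao, nidx, indices) then maps row/column indices computed in the natural ordering into the global ordering that MatSetValues expects.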

Re: [petsc-users] DIVERGED_NANORING with PC GAMG

2018-11-05 Thread Thibaut Appel via petsc-users
Hi Mark, Yes, it doesn't seem to be usable. Unfortunately we're aiming to do 3D, so direct solvers are not a viable solution, PETSc's ILU is not parallel, and we can't use HYPRE (complex arithmetic). Thibaut On 01/11/2018 20:42, Mark Adams wrote: On Wed, Oct 31, 2018 at 8:11 PM Smith,

Re: [petsc-users] DIVERGED_NANORING with PC GAMG

2018-10-31 Thread Thibaut Appel via petsc-users
Hi Mark, Matthew, Thanks for taking the time. 1) You're not suggesting having -fieldsplit_X_ksp_type *f*gmres for each field, are you? 2) No, the matrix *has* pressure in one of the fields. Here it's a 2D problem (but we're also doing 3D); the unknowns are (p,u,v) and those are my 3
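For context, a (p,u,v) layout like this is usually driven by runtime options along these lines (illustrative only, not the poster's actual option set):

  -pc_type fieldsplit
  -pc_fieldsplit_block_size 3
  -pc_fieldsplit_0_fields 0          # p
  -pc_fieldsplit_1_fields 1,2        # u,v
  -fieldsplit_1_pc_type gamg
  -fieldsplit_1_mg_levels_ksp_type gmres
  -fieldsplit_1_mg_levels_ksp_max_it 4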

Re: [petsc-users] DIVERGED_NANORING with PC GAMG

2018-10-31 Thread Thibaut Appel via petsc-users
Hi Matthew, Which database option are you referring to? I tried to add -fieldsplit_mg_levels_ksp_type gmres (and -fieldsplit_mg_levels_ksp_max_it 4 for another run) to my options (cf. below), which starts the iterations, but it takes 1 hour for PETSc to do 13 of them, so something must be wrong.