Re: [petsc-users] Memory growth issue

2019-05-29 Thread Smith, Barry F. via petsc-users
This is indeed worrisome. Would it be possible to put PetscMemoryGetCurrentUsage() around each call to KSPSolve() and each call to your data exchange? See if at each step they increase? One thing to be aware of with "max resident set size" is that it measures the number of pages
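
A minimal C sketch of Barry's suggestion, assuming a KSP named ksp and vectors b and x (the thread's program is mainly Fortran, but the same PETSc calls exist there); the helper name CheckedSolve is illustrative only:

#include <petscksp.h>

/* Hypothetical helper: report how much the resident set size and the PETSc
   heap grow across one KSPSolve(), per process.  The same bracketing can be
   put around each data-exchange call. */
static PetscErrorCode CheckedSolve(KSP ksp, Vec b, Vec x, PetscInt step)
{
  PetscLogDouble rss0, rss1, mal0, mal1;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscMemoryGetCurrentUsage(&rss0);CHKERRQ(ierr);
  ierr = PetscMallocGetCurrentUsage(&mal0);CHKERRQ(ierr);

  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

  ierr = PetscMemoryGetCurrentUsage(&rss1);CHKERRQ(ierr);
  ierr = PetscMallocGetCurrentUsage(&mal1);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD,
                     "step %D: RSS grew %g bytes, PETSc malloc grew %g bytes\n",
                     step, rss1 - rss0, mal1 - mal0);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}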

[petsc-users] Memory growth issue

2019-05-29 Thread Sanjay Govindjee via petsc-users
I am trying to track down a memory issue with my code; apologies in advance for the longish message. I am solving a FEA problem with a number of load steps involving about 3000 right hand side and tangent assemblies and solves.  The program is mainly Fortran, with a C memory allocator. When I

Re: [petsc-users] Nonzero I-j locations

2019-05-29 Thread Jed Brown via petsc-users
"Smith, Barry F." writes: > Sorry, my mistake. I assumed that the naming would follow PETSc convention > and there would be MatGetLocalSubMatrix_something() as there is > MatGetLocalSubMatrix_IS() and MatGetLocalSubMatrix_Nest(). Instead > MatGetLocalSubMatrix() is hardwired to call MatCreat

Re: [petsc-users] Nonzero I-j locations

2019-05-29 Thread Smith, Barry F. via petsc-users
Sorry, my mistake. I assumed that the naming would follow PETSc convention and there would be MatGetLocalSubMatrix_something() as there is MatGetLocalSubMatrix_IS() and MatGetLocalSubMatrix_Nest(). Instead MatGetLocalSubMatrix() is hardwired to call MatCreateLocalRef() if the method is not

Re: [petsc-users] Nonzero I-j locations

2019-05-29 Thread Jed Brown via petsc-users
"Smith, Barry F. via petsc-users" writes: >This is an interesting idea, but unfortunately not directly compatible > with libMesh filling up the finite element part of the matrix. Plus it > appears MatGetLocalSubMatrix() is only implemented for IS and Nest matrices > :-( Maybe I'm missing

Re: [petsc-users] parallel dual porosity

2019-05-29 Thread Matthew Knepley via petsc-users
On Wed, May 29, 2019 at 10:54 PM Adrian Croucher wrote: > On 30/05/19 2:45 PM, Matthew Knepley wrote: > > > Hmm, I had not thought about that. It will not do that at all. We have > never rebalanced a simulation > using overlap cells. I would have to write the code that strips them out. > Not hard

Re: [petsc-users] parallel dual porosity

2019-05-29 Thread Adrian Croucher via petsc-users
On 30/05/19 2:45 PM, Matthew Knepley wrote: Hmm, I had not thought about that. It will not do that at all. We have never rebalanced a simulation using overlap cells. I would have to write the code that strips them out. Not hard, but more code. If you only plan on redistributing once, you can

Re: [petsc-users] Matrix free GMRES seems to ignore my initial guess

2019-05-29 Thread Jan Izak Cornelius Vermaak via petsc-users
Just some feedback. I found the problem. For reference, my solve was called as follows: KSPSolve(ksp,b,phi_new). Inside my matrix operation (the "Matrix-Action" or MAT_OP_MULT) I was using phi_new for a computation and that overwrote my initial guess every time. Looks like the solver still holds on t
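
A hedged C illustration of the pitfall (the thread's code is not shown, so every name below is a placeholder): the MATSHELL multiply routine gets its own input and output vectors and must write only to its output argument; reusing the vector passed to KSPSolve() as scratch clobbers the initial guess.

#include <petscksp.h>

/* The multiply routine writes ONLY to y.  The identity action here is a
   placeholder for the real matrix action. */
static PetscErrorCode UserMult(Mat A, Vec x, Vec y)
{
  PetscErrorCode ierr;
  PetscFunctionBegin;
  ierr = VecCopy(x, y);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

static PetscErrorCode SolveWithShell(MPI_Comm comm, PetscInt nlocal, Vec b, Vec phi_new)
{
  Mat            A;
  KSP            ksp;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MatCreateShell(comm, nlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE, NULL, &A);CHKERRQ(ierr);
  ierr = MatShellSetOperation(A, MATOP_MULT, (void (*)(void))UserMult);CHKERRQ(ierr);

  ierr = KSPCreate(comm, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetInitialGuessNonzero(ksp, PETSC_TRUE);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, phi_new);CHKERRQ(ierr);  /* phi_new holds the guess on entry */

  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}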

Re: [petsc-users] parallel dual porosity

2019-05-29 Thread Matthew Knepley via petsc-users
On Wed, May 29, 2019 at 10:38 PM Adrian Croucher wrote: > hi > On 28/05/19 11:32 AM, Matthew Knepley wrote: > > > I would not do that. It should be much easier, and better from a workflow > standpoint, > to just redistribute in parallel. We now have several test examples that > redistribute > in

Re: [petsc-users] parallel dual porosity

2019-05-29 Thread Adrian Croucher via petsc-users
hi On 28/05/19 11:32 AM, Matthew Knepley wrote: I would not do that. It should be much easier, and better from a workflow standpoint, to just redistribute in parallel. We now have several test examples that redistribute in parallel, for example https://bitbucket.org/petsc/petsc/src/cd762eb6
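
If the mesh in question is a DMPlex (an assumption; the thread does not say which DM is used), the standard parallel redistribution idiom from the PETSc examples looks roughly like this, with one layer of overlap cells:

#include <petscdmplex.h>

/* Minimal sketch: redistribute an existing DMPlex in parallel, requesting one
   layer of overlap cells; pass 0 for no overlap. */
static PetscErrorCode RedistributeWithOverlap(DM *dm)
{
  DM             dmDist = NULL;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = DMPlexDistribute(*dm, 1, NULL, &dmDist);CHKERRQ(ierr);
  if (dmDist) {               /* NULL is returned when run on a single process */
    ierr = DMDestroy(dm);CHKERRQ(ierr);
    *dm  = dmDist;
  }
  PetscFunctionReturn(0);
}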

Re: [petsc-users] Memory inquire functions

2019-05-29 Thread Smith, Barry F. via petsc-users
They are for the given process. > On May 29, 2019, at 7:10 PM, Sanjay Govindjee via petsc-users > wrote: > > (In Fortran) do the calls > > call PetscMallocGetCurrentUsage(val, ierr) > call PetscMemoryGetCurrentUsage(val, ierr) > > return the per process memory numbers? o
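
A small C sketch of what Barry's answer implies: both routines report the calling process only, so a global total has to be reduced explicitly; the reduction below is the caller's choice, not something PETSc does for you.

#include <petscsys.h>

static PetscErrorCode ReportMemory(MPI_Comm comm)
{
  PetscLogDouble mal, rss, malGlobal, rssGlobal;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscMallocGetCurrentUsage(&mal);CHKERRQ(ierr);  /* bytes PETSc has malloc'd, this rank */
  ierr = PetscMemoryGetCurrentUsage(&rss);CHKERRQ(ierr);  /* resident set size, this rank */

  /* PetscLogDouble is a double, so a plain MPI sum gives the global totals */
  ierr = MPI_Allreduce(&mal, &malGlobal, 1, MPI_DOUBLE, MPI_SUM, comm);CHKERRQ(ierr);
  ierr = MPI_Allreduce(&rss, &rssGlobal, 1, MPI_DOUBLE, MPI_SUM, comm);CHKERRQ(ierr);

  ierr = PetscPrintf(comm, "total malloc %g bytes, total RSS %g bytes\n",
                     malGlobal, rssGlobal);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}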

Re: [petsc-users] Nonzero I-j locations

2019-05-29 Thread Smith, Barry F. via petsc-users
This is an interesting idea, but unfortunately not directly compatible with libMesh filling up the finite element part of the matrix. Plus it appears MatGetLocalSubMatrix() is only implemented for IS and Nest matrices :-( You could create a MATNEST reusing exactly the matrix from lib me
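
A rough sketch of the MATNEST idea, assuming the already-assembled finite element block Afem, the coupling blocks B, C, D for the extra unknowns, and the index sets is0/is1 describing the split are all created elsewhere; the names are illustrative, not from the thread.

#include <petscmat.h>

static PetscErrorCode BuildNest(MPI_Comm comm, Mat Afem, Mat B, Mat C, Mat D,
                                IS is0, IS is1, Mat *Anest)
{
  Mat            blocks[4];
  IS             isrows[2], iscols[2];
  PetscErrorCode ierr;

  PetscFunctionBegin;
  blocks[0] = Afem; blocks[1] = B;   /* [ Afem  B ] */
  blocks[2] = C;    blocks[3] = D;   /* [ C     D ] */
  isrows[0] = is0;  isrows[1] = is1;
  iscols[0] = is0;  iscols[1] = is1;
  ierr = MatCreateNest(comm, 2, isrows, 2, iscols, blocks, Anest);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}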

[petsc-users] Memory inquire functions

2019-05-29 Thread Sanjay Govindjee via petsc-users
(In Fortran) do the calls call PetscMallocGetCurrentUsage(val, ierr) and call PetscMemoryGetCurrentUsage(val, ierr) return the per-process memory numbers? Or are the returned values summed across all processes? -sanjay

Re: [petsc-users] Nonzero I-j locations

2019-05-29 Thread Smith, Barry F. via petsc-users
Understood. Where are you putting the "few extra unknowns" in the vector and matrix? On the first process, on the last process, some places in the middle of the matrix? We don't have any trivial code for copying a big matrix into an even larger matrix directly because we frown on doing tha

Re: [petsc-users] Nonzero I-j locations

2019-05-29 Thread Smith, Barry F. via petsc-users
Manav, For parallel sparse matrices using the standard PETSc formats the matrix is stored in two parts on each process (see the details in MatCreateAIJ()) thus there is no inexpensive way to access directly the IJ locations as a single local matrix. What are you hoping to use the informat
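
For illustration, the two on-process parts Barry describes can be looked at directly with MatMPIAIJGetSeqAIJ(); a minimal sketch, assuming A is a standard MPIAIJ matrix:

#include <petscmat.h>

/* On each process an MPIAIJ matrix is held as a "diagonal" block Ad (columns
   owned by this process) plus an "off-diagonal" block Ao, with colmap mapping
   Ao's compressed column indices back to global columns. */
static PetscErrorCode InspectLocalParts(Mat A)
{
  Mat             Ad, Ao;
  const PetscInt *colmap;
  PetscInt        m, n;
  PetscErrorCode  ierr;

  PetscFunctionBegin;
  ierr = MatMPIAIJGetSeqAIJ(A, &Ad, &Ao, &colmap);CHKERRQ(ierr);
  ierr = MatGetSize(Ad, &m, &n);CHKERRQ(ierr);
  ierr = PetscSynchronizedPrintf(PETSC_COMM_WORLD,
                                 "diagonal block is %D x %D on this rank\n", m, n);CHKERRQ(ierr);
  ierr = PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}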

Re: [petsc-users] Nonzero I-j locations

2019-05-29 Thread Zhang, Junchao via petsc-users
Yes, see MatGetRow https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetRow.html --Junchao Zhang On Wed, May 29, 2019 at 2:28 PM Manav Bhatia via petsc-users wrote: Hi, Once a MPI-AIJ matrix has been assembled, is there a method to get the
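
A minimal C sketch of the MatGetRow() approach, assuming an assembled MPI-AIJ matrix A; each process can only query the rows it owns.

#include <petscmat.h>

static PetscErrorCode PrintNonzeroPattern(Mat A)
{
  PetscInt        rstart, rend, row, ncols, j;
  const PetscInt *cols;
  PetscErrorCode  ierr;

  PetscFunctionBegin;
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  for (row = rstart; row < rend; row++) {
    ierr = MatGetRow(A, row, &ncols, &cols, NULL);CHKERRQ(ierr);   /* NULL: values not needed */
    for (j = 0; j < ncols; j++) {
      ierr = PetscSynchronizedPrintf(PETSC_COMM_WORLD, "(%D, %D)\n", row, cols[j]);CHKERRQ(ierr);
    }
    ierr = MatRestoreRow(A, row, &ncols, &cols, NULL);CHKERRQ(ierr);
  }
  ierr = PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}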

Re: [petsc-users] How do I supply the compiler PIC flag via CFLAGS, CXXXFLAGS, and FCFLAGS

2019-05-29 Thread Jed Brown via petsc-users
Lisandro Dalcin writes: > On Tue, 28 May 2019 at 22:05, Jed Brown wrote: > >> >> Note that all of these compilers (including Sun C, which doesn't define >> the macro) recognize -fPIC. (Blue Gene xlc requires -qpic.) Do we >> still need to test the other alternatives? >> >> > Well, worst case,

Re: [petsc-users] Stop KSP if diverging

2019-05-29 Thread Smith, Barry F. via petsc-users
Hmm, in the latest couple of releases of PETSc the KSPSolve is supposed to end as soon as it hits a NaN or Infinity. Is that not happening for you? If you run with -ksp_monitor does it print multiple lines with NaN or Inf? If so please send us the -ksp_view output so we can track down whic
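
For reference, a small C sketch of checking the converged reason after a solve and optionally turning any divergence into a hard error; the KSP and vectors are assumed to exist already.

#include <petscksp.h>

static PetscErrorCode SolveAndCheck(KSP ksp, Vec b, Vec x)
{
  KSPConvergedReason reason;
  PetscErrorCode     ierr;

  PetscFunctionBegin;
  /* Alternative: KSPSetErrorIfNotConverged(ksp, PETSC_TRUE), or the
     -ksp_error_if_not_converged option, makes KSPSolve itself error out. */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPGetConvergedReason(ksp, &reason);CHKERRQ(ierr);
  if (reason < 0) {
    ierr = PetscPrintf(PETSC_COMM_WORLD, "KSP diverged (reason %d), stopping\n", (int)reason);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}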

Re: [petsc-users] Bad memory scaling with PETSc 3.10

2019-05-29 Thread Myriam Peyrounette via petsc-users
Oh sorry, I missed that. That's great! Thanks, Myriam On 05/29/19 at 16:55, Zhang, Hong wrote: > Myriam: > This branch is merged to master. > Thanks for your work and patience. It helps us a lot. The graphs are > very nice :-) > > We plan to re-organise the APIs of mat-mat opts, make them eas

Re: [petsc-users] Bad memory scaling with PETSc 3.10

2019-05-29 Thread Zhang, Hong via petsc-users
Myriam: This branch is merged to master. Thanks for your work and patience. It helps us a lot. The graphs are very nice :-) We plan to re-organise the APIs of mat-mat opts, make them easier for users. Hong Hi, Do you have any idea when Barry's fix (https://bitbucket.org/petsc/petsc/pull-reques

Re: [petsc-users] Stop KSP if diverging

2019-05-29 Thread Edoardo alinovi via petsc-users
Thanks Matthew, Yes, I will give it a try this evening. Thank you very much! On Wed, 29 May 2019, 11:32 Matthew Knepley, wrote: > On Wed, May 29, 2019 at 3:07 AM Edoardo alinovi via petsc-users < > petsc-users@mcs.anl.gov> wrote: > >> Dear PETSc friends, >> >> Hope you are doing all well. >> >>