This is indeed worrisome.
Would it be possible to put PetscMemoryGetCurrentUsage() around each call
to KSPSolve() and around each call to your data exchange, and see whether
the reported usage increases at each step?
One thing to be aware of with "max resident set size" is that it measures
the number of pages
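The bracketing suggested above might look like the following minimal C sketch (not from the thread; the variable names and the print format are illustrative, and the same pattern works from Fortran with the `call ...(val, ierr)` form shown later in the thread):

```c
/* Hedged sketch: bracket each solve with memory queries to see which
   step grows the resident set.  Requires petscksp.h; ksp, b, x are
   assumed to be set up elsewhere. */
PetscLogDouble mem_before, mem_after;
PetscErrorCode ierr;

ierr = PetscMemoryGetCurrentUsage(&mem_before);CHKERRQ(ierr);
ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
ierr = PetscMemoryGetCurrentUsage(&mem_after);CHKERRQ(ierr);
ierr = PetscPrintf(PETSC_COMM_WORLD, "KSPSolve grew usage by %g bytes\n",
                   (double)(mem_after - mem_before));CHKERRQ(ierr);
```

The same bracket around the data-exchange call would show whether the growth comes from the solver or from the exchange.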
I am trying to track down a memory issue with my code; apologies in
advance for the longish message.
I am solving an FEA problem with a number of load steps, involving about 3000
right-hand-side and tangent assemblies and solves. The program is
mainly Fortran, with a C memory allocator.
When I
Sorry, my mistake. I assumed that the naming would follow PETSc convention
and there would be MatGetLocalSubMatrix_something() as there is
MatGetLocalSubMatrix_IS() and MatGetLocalSubMatrix_Nest(). Instead
MatGetLocalSubMatrix() is hardwired to call MatCreateLocalRef() if the
method is not
"Smith, Barry F. via petsc-users" writes:
>This is an interesting idea, but unfortunately not directly compatible
> with libMesh filling up the finite element part of the matrix. Plus it
> appears MatGetLocalSubMatrix() is only implemented for IS and Nest matrices
> :-(
Maybe I'm missing
On 30/05/19 2:45 PM, Matthew Knepley wrote:
Hmm, I had not thought about that. It will not do that at all. We have
never rebalanced a simulation using overlap cells. I would have to write
the code that strips them out. Not hard, but more code.
If you only plan on redistributing once, you can
Just some feedback. I found the problem. For reference, my solve was called
as follows:
KSPSolve(ksp,b,phi_new)
Inside my matrix operation (the "Matrix-Action", or MAT_OP_MULT) I was using
phi_new for a computation, and that overwrote my initial guess every time.
Looks like the solver still holds on t
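The pitfall described above can be sketched as follows in C (hedged: `MyMult` and `ctx` are illustrative names, not from the thread; the key point is that a shell-matrix multiply must only read its input vector and write its output vector):

```c
/* Hedged sketch: a shell-matrix MatMult receives its own input (x)
   and output (y) work vectors from KSP.  Writing into the vector
   handed to KSPSolve() from inside the multiply corrupts the
   initial guess / current iterate. */
static PetscErrorCode MyMult(Mat A, Vec x, Vec y)
{
  void          *ctx;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatShellGetContext(A, &ctx);CHKERRQ(ierr);
  /* ... compute y = A*x using only x, y, and ctx; do NOT touch
     phi_new or any other vector passed to KSPSolve() ... */
  PetscFunctionReturn(0);
}

/* registration and solve, elsewhere in the setup code: */
/* MatShellSetOperation(A, MATOP_MULT, (void (*)(void))MyMult); */
/* KSPSolve(ksp, b, phi_new);  phi_new = initial guess on entry   */
```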
hi
On 28/05/19 11:32 AM, Matthew Knepley wrote:
I would not do that. It should be much easier, and better from a
workflow standpoint, to just redistribute in parallel. We now have
several test examples that redistribute in parallel, for example
https://bitbucket.org/petsc/petsc/src/cd762eb6
They are for the given process.
This is an interesting idea, but unfortunately not directly compatible with
libMesh filling up the finite element part of the matrix. Plus it appears
MatGetLocalSubMatrix() is only implemented for IS and Nest matrices :-(
You could create a MATNEST reusing exactly the matrix from lib me
(In Fortran) do the calls

call PetscMallocGetCurrentUsage(val, ierr)
call PetscMemoryGetCurrentUsage(val, ierr)

return the per-process memory numbers, or are the returned values summed
across all processes?
-sanjay
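As answered earlier in the thread, these routines report values for the given process. If a global total is wanted, it has to be reduced explicitly; a hedged C sketch (the same pattern applies from Fortran):

```c
/* Hedged sketch: both usage routines are per-process; sum across
   ranks by hand.  MPIU_PETSCLOGDOUBLE is PETSc's MPI datatype for
   PetscLogDouble. */
PetscLogDouble local_mem, global_mem;
PetscErrorCode ierr;

ierr = PetscMemoryGetCurrentUsage(&local_mem);CHKERRQ(ierr);
ierr = MPI_Allreduce(&local_mem, &global_mem, 1, MPIU_PETSCLOGDOUBLE,
                     MPI_SUM, PETSC_COMM_WORLD);CHKERRQ(ierr);
```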
Understood. Where are you putting the "few extra unknowns" in the vector and
matrix? On the first process, on the last process, some places in the middle of
the matrix?
We don't have any trivial code for copying a big matrix directly into an even
larger matrix because we frown on doing tha
Manav,
For parallel sparse matrices using the standard PETSc formats, the matrix is
stored in two parts on each process (see the details in MatCreateAIJ()), so
there is no inexpensive way to directly access the IJ locations as a single
local matrix. What are you hoping to use the informat
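The two-part storage mentioned above is visible in the creation call itself; a hedged C sketch (the sizes and preallocation counts are illustrative, not from the thread):

```c
/* Hedged sketch: each process stores a "diagonal" block (entries in
   locally owned columns) and an "off-diagonal" block (all other
   columns), so the local rows never exist as one contiguous IJ
   structure. */
Mat            A;
PetscErrorCode ierr;

ierr = MatCreateAIJ(PETSC_COMM_WORLD,
                    m, n,                            /* local rows/cols  */
                    PETSC_DETERMINE, PETSC_DETERMINE,/* global sizes     */
                    5, NULL,   /* nonzeros per row in the diagonal block */
                    2, NULL,   /* nonzeros per row in the off-diag block */
                    &A);CHKERRQ(ierr);
```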
Yes, see MatGetRow
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetRow.html
--Junchao Zhang
On Wed, May 29, 2019 at 2:28 PM Manav Bhatia via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Hi,
Once an MPI-AIJ matrix has been assembled, is there a method to get the
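The MatGetRow route suggested above might be used like this minimal C sketch (hedged: loop structure is illustrative; the arrays returned are read-only and must be restored before the next row):

```c
/* Hedged sketch: walk the locally owned rows of an assembled
   MPIAIJ matrix one row at a time. */
PetscInt           rstart, rend, row, ncols;
const PetscInt    *cols;
const PetscScalar *vals;
PetscErrorCode     ierr;

ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
for (row = rstart; row < rend; row++) {
  ierr = MatGetRow(A, row, &ncols, &cols, &vals);CHKERRQ(ierr);
  /* ... inspect cols[0..ncols-1] and vals[0..ncols-1] (read-only) ... */
  ierr = MatRestoreRow(A, row, &ncols, &cols, &vals);CHKERRQ(ierr);
}
```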
Lisandro Dalcin writes:
> On Tue, 28 May 2019 at 22:05, Jed Brown wrote:
>
>>
>> Note that all of these compilers (including Sun C, which doesn't define
>> the macro) recognize -fPIC. (Blue Gene xlc requires -qpic.) Do we
>> still need to test the other alternatives?
>>
>>
> Well, worst case,
Hmm, in the latest couple of releases of PETSc, KSPSolve is supposed to
end as soon as it hits a NaN or Infinity. Is that not happening for you? If you
run with -ksp_monitor, does it print multiple lines with NaN or Inf? If so,
please send us the -ksp_view output so we can track down whic
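The early-exit behavior described above can be checked programmatically after the solve; a hedged C sketch (names of `ksp`, `b`, `x` assumed from context):

```c
/* Hedged sketch: ask KSP why it stopped; recent PETSc reports a
   dedicated reason when NaN/Inf is detected during the solve. */
KSPConvergedReason reason;
PetscErrorCode     ierr;

ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
ierr = KSPGetConvergedReason(ksp, &reason);CHKERRQ(ierr);
if (reason == KSP_DIVERGED_NANORINF) {
  ierr = PetscPrintf(PETSC_COMM_WORLD,
                     "KSP stopped on NaN/Inf\n");CHKERRQ(ierr);
}
```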
Oh sorry, I missed that. That's great!
Thanks,
Myriam
On 05/29/19 at 16:55, Zhang, Hong wrote:
Myriam:
This branch is merged to master.
Thanks for your work and patience. It helps us a lot. The graphs are very nice
:-)
We plan to re-organise the APIs of the mat-mat ops, to make them easier for users.
Hong
Hi,
Do you have any idea when Barry's fix
(https://bitbucket.org/petsc/petsc/pull-reques
Thanks Matthew,
Yes, I will give it a try this evening.
Thank you very much!
On Wed, 29 May 2019, 11:32 Matthew Knepley, wrote:
> On Wed, May 29, 2019 at 3:07 AM Edoardo alinovi via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>> Dear PETSc friends,
>>
Hope you are all doing well.
>>
>>