In the PETSc.jl stuff I've worked on, I punted on the issue and only register a
finalizer when there is 1 MPI rank, so something like this when objects are
created:

    if MPI.Comm_size(comm) == 1
        finalizer(destroy, mat)
    end
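A minimal self-contained sketch of that pattern (the `Mat` type, `destroy`, and `create_mat` below are illustrative stand-ins, not the actual PETSc.jl code). The usual concern, as I understand it, is that GC-driven finalization happens at unpredictable times on each rank, so automatic destruction is only safe to register in the serial case:

```julia
# Illustrative sketch only -- dummy stand-ins for the real PETSc.jl types.
# Idea: register a GC finalizer only when running on a single rank, since
# finalization order/timing is not coordinated across MPI ranks.
const DESTROY_COUNT = Ref(0)

mutable struct Mat            # stand-in for a PETSc matrix wrapper
    comm_size::Int
end

destroy(m::Mat) = (DESTROY_COUNT[] += 1)   # stand-in for MatDestroy

function create_mat(comm_size::Int)
    m = Mat(comm_size)
    if comm_size == 1         # serial: safe to let the GC call destroy
        finalizer(destroy, m)
    end                       # parallel: the caller must destroy explicitly
    return m
end
```

`finalize(m)` forces any registered finalizers to run immediately, which makes the conditional registration observable without waiting on the GC.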
see:
> communicator will cause an attempt to destroy the attribute containing the
> inner PETSc communicator. I had always just assumed the user would not be
> deleting any MPI communicators they made and passed to PETSc until they were
> done with PETSc. It may work correctly b
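The behavior quoted above can be modeled in a few lines (a toy model, not MPI: names like `UserComm`, `set_attr!`, and `free!` are invented). MPI caches attributes on a communicator and invokes each attribute's delete callback when the communicator is freed, which is how freeing the user's communicator can trigger an attempt to destroy the cached inner PETSc communicator:

```julia
# Toy model of MPI attribute caching (invented names; real MPI uses
# MPI_Comm_set_attr / MPI_Comm_free, which invokes delete callbacks).
mutable struct UserComm
    attrs::Dict{Symbol,Any}
    on_delete::Dict{Symbol,Function}   # per-attribute delete callbacks
end
UserComm() = UserComm(Dict{Symbol,Any}(), Dict{Symbol,Function}())

function set_attr!(c::UserComm, key::Symbol, val, cb::Function)
    c.attrs[key] = val
    c.on_delete[key] = cb
end

function free!(c::UserComm)            # analogous to MPI_Comm_free
    for (key, val) in c.attrs
        c.on_delete[key](val)          # delete callback runs here
    end
    empty!(c.attrs); empty!(c.on_delete)
end
```

Freeing the user communicator runs the delete callback for the cached inner communicator, so the inner resource is released as a side effect of the user's free, exactly the coupling the quote describes.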
Sorry if this is clearly stated somewhere in the docs; I'm still getting
familiar with the PETSc codebase and was also unable to find the answer by
searching (nor could I determine where this would be done in the source).
Does PETSc duplicate MPI communicators? Or does the user's program need to
BTW:
> On Jul 2, 2021, at 9:45 AM, Barry Smith wrote:
>
>> On Jul 2, 2021, at 10:03 AM, Stefano Zampini
>> wrote:
>>
>> Patrick
>>
>> Should this be fixed in the PETSc build system?
>>
have pointed out.
> On Jul 2, 2021, at 9:03 AM, Satish Balay wrote:
>
> On Fri, 2 Jul 2021, Kozdon, Jeremy (CIV) wrote:
>
>> Thanks for all the feedback!
>>
>> Digging a bit deeper in the dependencies, it seems that the compilers
>> have been updated
Thanks for all the feedback!
Digging a bit deeper in the dependencies, it seems that the compilers have
been updated but MPICH has not been rebuilt since then. Wondering if this is
causing some of the issues. Going to try to manually rebuild MPICH to see if
that helps.
> On Jul 2, 2021, at
I have been talking with Boris Kaus and Patrick Sanan about trying to revive
the Julia PETSc interface wrappers. One of the first things to get going is to
use Julia's BinaryBuilder [1] to wrap more scalar, real, and int type builds
of the PETSc library; the current distribution is just Real,
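For reference, the variants being discussed map onto standard PETSc configure options; a sketch of the combinations a BinaryBuilder recipe would need to cover (the actual recipe flags may differ):

```shell
# Representative PETSc configure variants; one library build per combination.
# Sketch only -- BinaryBuilder recipe details may differ.
./configure --with-scalar-type=real    --with-precision=double                        # the current default
./configure --with-scalar-type=complex --with-precision=double                        # complex scalars
./configure --with-scalar-type=real    --with-precision=double --with-64-bit-indices  # Int64 indices
./configure --with-scalar-type=real    --with-precision=single                        # single precision
```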