In the PETSc.jl stuff I've worked on, I punted on the issue and only register a
finalizer when there is 1 MPI rank, so something like this when objects are
created:
if MPI.Comm_size(comm) == 1
    finalizer(destroy, mat)
end
see:
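The same guard can be sketched in pure Python, in the spirit of what petsc4py-level wrapper code might do. Everything below (`FakeComm`, `destroy`, `Mat`) is a placeholder for illustration, not petsc4py or mpi4py API:

```python
import weakref

destroyed = []  # records which handles the "destroy" ran for


def destroy(name):
    # Stand-in for a real PetscObjectDestroy on the underlying handle.
    destroyed.append(name)


class FakeComm:
    """Placeholder for an MPI communicator."""
    def __init__(self, size):
        self.size = size

    def Get_size(self):
        return self.size


class Mat:
    """Toy object: register a GC finalizer only on a 1-rank comm,
    where an eager destroy can never mismatch across ranks."""
    def __init__(self, comm, name):
        self.name = name
        if comm.Get_size() == 1:
            self._fin = weakref.finalize(self, destroy, name)


m = Mat(FakeComm(1), "A")
del m                # CPython's refcounting runs the finalizer promptly
print(destroyed)     # ['A']
```

On a multi-rank comm no finalizer is registered, so the object simply leaks unless it is destroyed explicitly at a collective point, which is exactly the trade-off described above.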
Cheers,
Jack
From: Lawrence Mitchell
Sent: 25 October 2021 12:34
To: Stefano Zampini
Cc: Barry Smith ; "Alberto F. Martín"
; PETSc users list ; Francesc
Verdugo ; Betteridge, Jack D
Subject: Re: [petsc-users] Why PetscDestroy global collective semantics?
Hi all,
(I cc Jack who is doing the implementation in the petsc4py setting)
> On 24 Oct 2021, at 06:51, Stefano Zampini wrote:
>
> Non-deterministic garbage collection is an issue from Python too, and
> firedrake folks are also working on that.
>
> We may consider deferring all calls to
I think Jeremy (cc'd) has also been thinking about this in the context of
PETSc.jl.
Stefano Zampini wrote on Sun 24 Oct 2021 at 07:52:
Non-deterministic garbage collection is an issue from Python too, and
firedrake folks are also working on that.
We may consider deferring all calls to MPI_Comm_free done on communicators
with 1 as ref count (i.e., the call will actually wipe out some internal
MPI data) in a collective call that
Ahh, this makes perfect sense.
The code for PetscObjectRegisterDestroy() and the actual destruction (called
in PetscFinalize()) is very simple and can be found in
src/sys/objects/destroy.c PetscObjectRegisterDestroy(),
PetscObjectRegisterDestroyAll().
You could easily maintain a new
Thanks all for your very insightful answers.
We are leveraging PETSc from Julia in a parallel distributed-memory
context (several MPI tasks, each running the Julia REPL).
Julia uses garbage collection (GC), and we would like to destroy the
PETSc objects automatically when the GC decides so.
Depending on the use case you may also find PetscObjectRegisterDestroy()
useful. If you can't guarantee your PetscObjectDestroy() calls are collective,
but have some other collective section, you may call it there to punt the
destruction of your object to PetscFinalize(), which is guaranteed to be
Junchao Zhang writes:
> On Fri, Oct 22, 2021 at 9:13 PM Barry Smith wrote:
>
>>
>> One technical reason is that PetscHeaderDestroy_Private() may call
>> PetscCommDestroy() which may call MPI_Comm_free() which is defined by the
>> standard to be collective. Though PETSc tries to limit its use
One technical reason is that PetscHeaderDestroy_Private() may call
PetscCommDestroy() which may call MPI_Comm_free() which is defined by the
standard to be collective. Though PETSc tries to limit its use of new MPI
communicators (for example generally many objects shared the same
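To see why collectivity matters for GC-driven destruction: if each rank's garbage collector happens to free communicators in a different order, the ranks' sequences of collective calls no longer line up. A toy illustration with no real MPI (the names are illustrative):

```python
def collectives_match(rank0_calls, rank1_calls):
    """All ranks must issue the same collective operations in the
    same order; otherwise a call like MPI_Comm_free can mismatch
    with a different collective on another rank and deadlock."""
    return rank0_calls == rank1_calls


# GC happens to destroy objects in a different order on each rank:
rank0 = ["free(commA)", "free(commB)"]
rank1 = ["free(commB)", "free(commA)"]
print(collectives_match(rank0, rank1))  # False -> potential deadlock
```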
Dear PETSc users,
What is the main reason that the PetscDestroy subroutines have
global collective semantics? Is this actually true for all PETSc
objects? Can it be relaxed/deactivated by, e.g., compilation
macros or configuration options?
Thanks in advance!
Best regards,
Alberto.