There is also the hedge of adding a parameter and an API function to control
which of these two behaviors is used: if the user tries to preallocate twice
without opting in, throw an error that tells them how to change the behavior,
noting that it will increase peak memory usage.
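
A minimal sketch of what that could look like, assuming a hypothetical
MatPreallocatorSetReuse() (no such function exists in PETSc today; the
name and signature are illustrative only):

    #include <petscmat.h>

    PetscErrorCode ExampleTwoPreallocations(Mat preallocator, Mat A, Mat B)
    {
      PetscErrorCode ierr;

      PetscFunctionBegin;
      /* Hypothetical opt-in: retain the hash after the first
         preallocation, at the cost of higher peak memory. */
      ierr = MatPreallocatorSetReuse(preallocator, PETSC_TRUE);CHKERRQ(ierr);
      ierr = MatPreallocatorPreallocate(preallocator, PETSC_TRUE, A);CHKERRQ(ierr);
      /* Without the opt-in above, this second call would raise an error
         explaining how to enable reuse, rather than crashing. */
      ierr = MatPreallocatorPreallocate(preallocator, PETSC_TRUE, B);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }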

On Tue, Feb 1, 2022 at 17:07 Jed Brown <j...@jedbrown.org> wrote:

> Stefano Zampini <stefano.zamp...@gmail.com> writes:
>
> > On Tue, Feb 1, 2022 at 18:34 Jed Brown <j...@jedbrown.org> wrote:
> >
> >> Patrick Sanan <patrick.sa...@gmail.com> writes:
> >>
> >> > On Tue, Feb 1, 2022 at 16:20 Jed Brown <j...@jedbrown.org> wrote:
> >> >
> >> >> Patrick Sanan <patrick.sa...@gmail.com> writes:
> >> >>
> >> >> > Sorry about the delay on this. I can reproduce.
> >> >> >
> >> >> > This regression appears to be a result of this optimization:
> >> >> > https://gitlab.com/petsc/petsc/-/merge_requests/4273
> >> >>
> >> >> Thanks for tracking this down. Is there a reason to prefer
> >> >> preallocating twice
> >> >>
> >> >>    ierr = MatPreallocatorPreallocate(preallocator,PETSC_TRUE,A);CHKERRQ(ierr);
> >> >>    ierr = MatPreallocatorPreallocate(preallocator,PETSC_TRUE,A_duplicate);CHKERRQ(ierr);
> >> >>
> >> >> versus using MatDuplicate() or MatConvert()?
> >> >>
> >>
> >
> > Jed
> >
> > this is not the point. Suppose you pass around only a preallocator, but
> > do not pass around the matrices. Reusing the preallocator should be
> > allowed.
>
> The current code is not okay (crashing is not okay), but we should decide
> whether to consume the preallocator or to retain the data structure. Peak
> memory use is the main reason hash-based allocation hasn't been default and
> wasn't adopted sooner. Retaining the hash until the preallocator is
> destroyed increases that peak.
>
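
For reference, a sketch of the MatDuplicate() alternative mentioned in the
quoted exchange: preallocate A once, then copy its nonzero structure into
A_duplicate instead of consuming the preallocator a second time (names taken
from the quoted snippet):

    #include <petscmat.h>

    PetscErrorCode PreallocateThenDuplicate(Mat preallocator, Mat A, Mat *A_duplicate)
    {
      PetscErrorCode ierr;

      PetscFunctionBegin;
      ierr = MatPreallocatorPreallocate(preallocator, PETSC_TRUE, A);CHKERRQ(ierr);
      /* Duplicate the nonzero structure of A without copying values. */
      ierr = MatDuplicate(A, MAT_DO_NOT_COPY_VALUES, A_duplicate);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }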
