> On Mar 12, 2021, at 6:58 PM, Jed Brown <j...@jedbrown.org> wrote:
>
> Barry Smith <bsm...@petsc.dev> writes:
>
>> I think we should start porting the PetscFE infrastructure, numerical
>> integrations, vector and matrix assembly to GPUs soon. It is dog slow on
>> CPUs and should be able to deliver higher performance on GPUs.
>
> IMO, this comes via interfaces to libCEED, not rolling yet another way to
> invoke quadrature routines on GPUs.
I am not talking about the matrix-free stuff; that definitely belongs in
libCEED, and there is no reason to rewrite it.
But does libCEED also support the traditional finite element construction
process where the matrices are built explicitly? Or does it provide some of the
pieces, such as quadrature points and integration formulas, that could be
shared and used as a starting point? If it includes all of these "traditional"
things then we should definitely get it all hooked into PetscFE/DMPLEX and go
to town. (Though in that case there is not so much need for the GPU hackathon,
since it would be more wiring than GPU code.) The way I have always heard
libCEED described was as a matrix-free engine, so I may have misunderstood. It
is definitely not my intention to start a project that reproduces functionality
we can just use.
We do need solid support for traditional finite element assembly on GPUs;
matrix-free finite elements alone are not enough.
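For concreteness, by "traditional assembly" I mean roughly the path below, a
minimal sketch using the existing CPU-side PETSc entry points (the helper name
AssembleJacobianTraditional is just for illustration); the question is which of
these pieces libCEED could supply on the GPU rather than us rewriting them:

  /* Sketch of the explicit-assembly path as it exists on the CPU today:
   * DMPlexSNESComputeJacobianFEM() loops over cells, tabulates the PetscFE
   * basis at quadrature points, calls the pointwise kernels registered with
   * PetscDSSetJacobian(), and inserts the element matrices with
   * DMPlexMatSetClosure(). */
  #include <petscdmplex.h>
  #include <petscsnes.h>

  static PetscErrorCode AssembleJacobianTraditional(DM dm, Vec X, Mat J)
  {
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = DMPlexSNESComputeJacobianFEM(dm, X, J, J, NULL);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }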
>
> DMPlex setup and distribution could conceivably be ported to GPUs, but it
> would be a monumental task and that stuff is usually done once and
> intermediate structures don't necessarily fit in device memory even when they
> fit comfortably in DRAM. Way too big of a task for one hackathon.
I am not talking about the setup and distribution; I don't see that as a
priority.