On Thu, Jan 5, 2023 at 10:32 PM Barry Smith <[email protected]> wrote:
> > On Jan 5, 2023, at 3:42 PM, Jed Brown <[email protected]> wrote:
> >
> > Mark Adams <[email protected]> writes:
> >
> > > Support of HIP and CUDA hardware together would be crazy,
> >
> > I don't think it's remotely crazy. libCEED supports both together, and
> > it's very convenient when testing on a development machine that has one
> > of each brand of GPU; it also simplifies binary distribution for us and
> > every package that uses us. Every day I wish PETSc could build with both
> > simultaneously, but everyone tells me it's silly.
>
> Not everyone at all; just a subset of everyone. Junchao is really the
> hold-out :-)

I am not; instead, I think we should try (I fully agree it can ease binary
distribution). But Satish needs to install such a machine first :)

There are issues out of our control if we want to mix GPUs in execution. For
example, how do we do VecAXPY on a CUDA vector and a HIP vector? Shall we do
it on the host? Also, there are no GPU-aware MPI implementations supporting
messages between CUDA memory and HIP memory.

> I just don't care about "binary packages" :-); I think they are an archaic
> and bad way of thinking about code distribution (yes, the alternatives need
> lots of work to make them flawless, but I think that is where the work
> should go in the packaging world).
>
> I go further and think one should be able to automatically use a CUDA
> vector on a HIP device as well. It is not hard in theory, but it requires
> thinking a little about how we handle classes and subclasses to make it
> straightforward; or perhaps Jacob has fixed that also?
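For reference, a minimal sketch of the host fallback floated above: if x
lives in HIP memory and y in CUDA memory, route the AXPY through host
arrays. MixedVecAXPY is a hypothetical helper, not an existing PETSc
function, and this assumes a build where both backends coexist; the
VecGetArray()/VecGetArrayRead() calls themselves already handle the
device-to-host migration.

#include <petscvec.h>

/* Hypothetical helper (not in PETSc): y = y + alpha*x, where x and y may
   sit in different device memories (e.g., HIP and CUDA), done on the host. */
static PetscErrorCode MixedVecAXPY(Vec y, PetscScalar alpha, Vec x)
{
  const PetscScalar *xa;
  PetscScalar       *ya;
  PetscInt           i, n;

  PetscFunctionBeginUser;
  PetscCall(VecGetLocalSize(y, &n));
  PetscCall(VecGetArrayRead(x, &xa)); /* copies x to the host if it is on a device */
  PetscCall(VecGetArray(y, &ya));     /* copies y to the host if it is on a device */
  for (i = 0; i < n; i++) ya[i] += alpha * xa[i];
  PetscCall(VecRestoreArrayRead(x, &xa));
  PetscCall(VecRestoreArray(y, &ya)); /* host copy is now current; y is re-uploaded lazily */
  PetscFunctionReturn(0);
}

The obvious cost is a download of x plus a round trip for y, which is
exactly the overhead a native mixed-backend kernel (peer copies or staging
buffers between the two runtimes) would try to avoid.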
