On Fri, Oct 07, 2022 at 04:26:58PM +0200, Tobias Burnus wrote:
> libgomp/nvptx: Prepare for reverse-offload callback handling
> 
> This patch adds a stub 'gomp_target_rev' in the host's target.c, which will
> later handle the reverse offload.
> For nvptx, it adds support for forwarding the offload gomp_target_ext call
> to the host by setting values in a struct on the device and querying it on
> the host - invoking gomp_target_rev on the result.
> 
> For host-device consistency guarantee reasons, reverse offload is currently
> limited to -march=sm_70 (for libgomp).
> 
> gcc/ChangeLog:
> 
>       * config/nvptx/mkoffload.cc (process): Warn if the linked-in libgomp.a
>       has not been compiled with sm_70 or higher and disable code gen then.
> 
> include/ChangeLog:
> 
>       * cuda/cuda.h (enum CUdevice_attribute): Add
>       CU_DEVICE_ATTRIBUTE_UNIFIED_ADDRESSING.
>       (CU_MEMHOSTALLOC_DEVICEMAP): Define.
>       (cuMemHostAlloc): Add prototype.
> 
> libgomp/ChangeLog:
> 
>       * config/nvptx/icv-device.c (GOMP_DEVICE_NUM_VAR): Remove
>       'static' for this variable.
>       * config/nvptx/libgomp-nvptx.h: New file.
>       * config/nvptx/target.c: Include it.
>       (GOMP_ADDITIONAL_ICVS): Declare extern var.
>       (GOMP_REV_OFFLOAD_VAR): Declare var.
>       (GOMP_target_ext): Handle reverse offload.
>       * libgomp-plugin.h (GOMP_PLUGIN_target_rev): New prototype.
>       * libgomp-plugin.c (GOMP_PLUGIN_target_rev): New, call ...
>       * target.c (gomp_target_rev): ... this new stub function.
>       * libgomp.h (gomp_target_rev): Declare.
>       * libgomp.map (GOMP_PLUGIN_1.4): New; add GOMP_PLUGIN_target_rev.
>       * plugin/cuda-lib.def (cuMemHostAlloc): Add.
>       * plugin/plugin-nvptx.c: Include libgomp-nvptx.h.
>       (struct ptx_device): Add rev_data member. 
>       (nvptx_open_device): #if 0 unused check; add
>       unified address assert check.
>       (GOMP_OFFLOAD_get_num_devices): Claim unified address
>       support.
>       (GOMP_OFFLOAD_load_image): Free rev_fn_table if no
>       offload functions exist. Make offload var available
>       on host and device.
>       (rev_off_dev_to_host_cpy, rev_off_host_to_dev_cpy): New.
>       (GOMP_OFFLOAD_run): Handle reverse offload.

So, does this mean one has to have gcc configured --with-arch=sm_70
or later to make reverse offloading work (and then on the other
side no support for older PTX arches at all)?
If yes, I was kind of hoping we could arrange for it to be more
user-friendly, build libgomp.a normally (sm_35 or what is the default),
build the single TU in libgomp that needs the sm_70 stuff with -march=sm_70
and arrange for mkoffload to link in the sm_70 stuff only if the user
wants reverse offload (or has requires reverse_offload?).  In that case
ignore sm_60 and older devices, if reverse offload isn't wanted, don't link
in the part that needs sm_70 and make stuff work on sm_35 and later.
Or perhaps have 2 versions of target.o, one sm_35 and one sm_70 and let
mkoffload choose among them.

> +      /* The code for nvptx for GOMP_target_ext in libgomp/config/nvptx/target.c
> +      for < sm_70 exists but is disabled here as it is unclear whether there
> +      is the required consistency between host and device.
> +      See https://gcc.gnu.org/pipermail/gcc-patches/2022-October/602715.html
> +      for details.  */
> +      warning_at (input_location, 0,
> +               "Disabling offload-code generation for this device type: "
> +               "%<omp requires reverse_offload%> can only be fulfilled "
> +               "for %<sm_70%> or higher");
> +      inform (UNKNOWN_LOCATION,
> +           "Reverse offload requires that GCC is configured with "
> +           "%<--with-arch=sm_70%> or higher and not overridden by a lower "
> +           "value for %<-foffload-options=nvptx-none=-march=%>");

Diagnostics (sure, Fortran FE is an exception) shouldn't start with capital
letters.

> @@ -519,10 +523,20 @@ nvptx_open_device (int n)
>                 CU_DEVICE_ATTRIBUTE_MAX_THREADS_PER_MULTIPROCESSOR, dev);
>    ptx_dev->max_threads_per_multiprocessor = pi;
>  
> +#if 0
> +  int async_engines;
>    r = CUDA_CALL_NOCHECK (cuDeviceGetAttribute, &async_engines,
>                        CU_DEVICE_ATTRIBUTE_ASYNC_ENGINE_COUNT, dev);
>    if (r != CUDA_SUCCESS)
>      async_engines = 1;
> +#endif

Please avoid #if 0 code.

> +
> +  /* Required below for reverse offload as implemented, but with compute
> +     capability >= 2.0 and 64bit device processes, this should universally be
> +     the case; hence, an assert.  */
> +  r = CUDA_CALL_NOCHECK (cuDeviceGetAttribute, &pi,
> +                      CU_DEVICE_ATTRIBUTE_UNIFIED_ADDRESSING, dev);
> +  assert (r == CUDA_SUCCESS && pi);
>  
>    for (int i = 0; i != GOMP_DIM_MAX; i++)
>      ptx_dev->default_dims[i] = 0;
> @@ -1179,8 +1193,10 @@ GOMP_OFFLOAD_get_num_devices (unsigned int omp_requires_mask)
>  {
>    int num_devices = nvptx_get_num_devices ();
>    /* Return -1 if no omp_requires_mask cannot be fulfilled but
> -     devices were present.  */
> -  if (num_devices > 0 && omp_requires_mask != 0)
> +     devices were present. Unified-shared address: see comment in

2 spaces after . rather than 1.

> --- a/libgomp/target.c
> +++ b/libgomp/target.c
> @@ -2925,6 +2925,25 @@ GOMP_target_ext (int device, void (*fn) (void *), size_t mapnum,
>      htab_free (refcount_set);
>  }
>  
> +/* Handle reverse offload. This is called by the device plugins for a
> +   reverse offload; it is not called if the outer target runs on the host.  */

Likewise.

        Jakub
