ABataev added a comment.

In D120353#3338770 <https://reviews.llvm.org/D120353#3338770>, @jhuber6 wrote:

> In D120353#3338718 <https://reviews.llvm.org/D120353#3338718>, @ABataev wrote:
>
>> In D120353#3338647 <https://reviews.llvm.org/D120353#3338647>, @jhuber6 
>> wrote:
>>
>>> But the main reason I made this patch is interoperability. Without it, if 
>>> you want to call a CUDA function from the OpenMP device, you need a 
>>> variant and a dummy implementation: without the dummy implementation you 
>>> get a linker error, and without the variant you override the CUDA 
>>> function.
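>>>
>>> Roughly, that workaround looks like this (a sketch only; `cuda_device` is 
>>> a placeholder for the real CUDA symbol):
>>>
>>>   // Real device function, defined in a CUDA file as
>>>   // `__device__ int cuda_device() { ... }`.
>>>   int cuda_device(void);
>>>
>>>   // Dummy host implementation so the host link succeeds. The variant
>>>   // redirects calls in device (nohost) contexts to the real CUDA
>>>   // function, so the dummy never overrides it.
>>>   #pragma omp declare variant(cuda_device) match(device = {kind(nohost)})
>>>   int cuda(void) { return 0; }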
>>
>> Ah, ok, I see. How is it supposed to be used? In CUDA code or in plain 
>> C/C++ code?
>
> I haven't finalized the implementation, but the basic support I've tested is 
> calling a `__device__` function, compiled from another file, from an OpenMP 
> target region. With this patch the source files would look like the example 
> below. I think the inverse would also be possible given some code on the 
> CUDA side. Calling CUDA kernels would take some extra work.
>
>   // CUDA translation unit
>   __device__ int cuda() { return 0; }
>
>   // OpenMP translation unit
>   int cuda(void);
>   #pragma omp declare target device_type(nohost) to(cuda)
>   
>   int main() {
>     int x = 1;
>   #pragma omp target map(from : x)
>     x = cuda();
>   
>     return x;
>   }
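>
> For the inverse, I'd imagine something like the following (untested sketch; 
> the names are placeholders):
>
>   // OpenMP translation unit: a device-only function.
>   int omp_fn() { return 0; }
>   #pragma omp declare target device_type(nohost) to(omp_fn)
>
>   // CUDA translation unit: declare the function and call it from device
>   // code. Both files are compiled as C++, so the mangled names match.
>   __device__ int omp_fn();
>   __global__ void kernel(int *x) { *x = omp_fn(); }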

What if we have `#pragma omp target if (...)` or `#pragma omp target 
device(...)`?
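
For example, if the `if` clause evaluates to false at runtime, the region 
falls back to host execution, but a `device_type(nohost)` function has no 
host version to call. A sketch of the problem, reusing the example above:

  int main() {
    int x = 1;
  #pragma omp target if(0) map(from : x)
    x = cuda(); // executed on the host, but cuda() is device-only
  
    return x;
  }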


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D120353/new/

https://reviews.llvm.org/D120353
