jdoerfert added inline comments.

================
Comment at: clang/lib/AST/Decl.cpp:3224
 
+  if (Context.getTargetInfo().getTriple().isAMDGCN() &&
+      Context.BuiltinInfo.isPredefinedLibFunction(BuiltinID) &&
----------------
sameerds wrote:
> arsenm wrote:
> > This is also identical to the cuda handling above, can you merge them
> It's not identical, because CUDA is filtering out host code and its check is
> target-independent.
> 
> But then, Saiyed, it seems that the new patch disallows library builtins in
> all languages that reach this point, in both host and device code. Is that
> the intention? Does this point in the flow preclude any side effects outside
> of "OpenMP on AMDGCN"?
Yes, wasn't there an idea to have isGPU()?
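
To make the isGPU() idea concrete, here is a standalone sketch of how the CUDA
path and the new OpenMP-on-AMDGCN path could share one "GPU device
compilation" predicate. Everything below models Clang's LangOptions/Triple
state with plain structs; the names (isGPU, suppressLibBuiltins, the *Model
types) are hypothetical, not existing Clang/LLVM API, and the sketch
deliberately omits the printf/malloc and attribute exceptions the real CUDA
check applies.

  // Standalone sketch (not Clang code): models how the CUDA case and the
  // OpenMP-on-AMDGCN case could share one "GPU device compilation" check.
  // All names here are illustrative stand-ins, not Clang/LLVM API.
  #include <cstdio>

  struct LangOptsModel {
    bool CUDA = false;
    bool CUDAIsDevice = false;
    bool OpenMPIsDevice = false;
  };

  struct TripleModel {
    bool IsNVPTX = false;
    bool IsAMDGCN = false;
    // Hypothetical helper corresponding to the isGPU() idea above.
    bool isGPU() const { return IsNVPTX || IsAMDGCN; }
  };

  // True if predefined library builtins should be suppressed for this
  // (language, target) combination. Deliberately omits the printf/malloc
  // and attribute exceptions that the real CUDA check applies.
  bool suppressLibBuiltins(const LangOptsModel &LO, const TripleModel &T,
                           bool IsPredefinedLibFunction) {
    if (!IsPredefinedLibFunction)
      return false;
    bool CUDADevice = LO.CUDA && LO.CUDAIsDevice;
    bool OpenMPGPUDevice = LO.OpenMPIsDevice && T.isGPU();
    return CUDADevice || OpenMPGPUDevice;
  }

  int main() {
    TripleModel AMDGCN;
    AMDGCN.IsAMDGCN = true;
    LangOptsModel OMPDevice;
    OMPDevice.OpenMPIsDevice = true;
    std::printf("%d\n", suppressLibBuiltins(OMPDevice, AMDGCN,
                                            /*IsPredefinedLibFunction=*/true));
  }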


================
Comment at: clang/lib/Frontend/CompilerInvocation.cpp:3162
+      Opts.OpenMPIsDevice && (T.isNVPTX() || T.isAMDGCN()) &&
       Args.hasArg(options::OPT_fopenmp_cuda_force_full_runtime);
 
----------------
Can we please not call it CUDA mode for non-CUDA targets? Or do you expect the 
compilation to "identify" as CUDA compilation?
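
For illustration only, a standalone sketch of the same gating written around a
target-neutral "is GPU" helper rather than a CUDA-specific name. ArgsModel and
computeForceFullRuntime are invented for the sketch, not Clang API; the
spelled-out flag is kept as -fopenmp-cuda-force-full-runtime because that is
the in-tree option name.

  // Standalone sketch of the gating above with a target-neutral helper.
  #include <algorithm>
  #include <string>
  #include <vector>

  struct ArgsModel {
    std::vector<std::string> Flags;
    bool hasArg(const std::string &F) const {
      return std::find(Flags.begin(), Flags.end(), F) != Flags.end();
    }
  };

  bool computeForceFullRuntime(bool OpenMPIsDevice, bool IsNVPTX,
                               bool IsAMDGCN, const ArgsModel &Args) {
    // Same shape as the quoted hunk: device compilation, a GPU triple, and
    // the user asked for the full runtime.
    bool IsGPU = IsNVPTX || IsAMDGCN;
    return OpenMPIsDevice && IsGPU &&
           Args.hasArg("-fopenmp-cuda-force-full-runtime");
  }

  int main() {
    ArgsModel Args{{"-fopenmp-cuda-force-full-runtime"}};
    return computeForceFullRuntime(/*OpenMPIsDevice=*/true, /*IsNVPTX=*/false,
                                   /*IsAMDGCN=*/true, Args) ? 0 : 1;
  }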


================
Comment at: llvm/include/llvm/ADT/Triple.h:700
     return getArch() == Triple::r600 || getArch() == Triple::amdgcn;
   }
 
----------------
What's the difference between an AMDGPU and AMDGCN?
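
For context: the quoted return is the body of isAMDGPU(), which also matches
the legacy r600 targets, while the isAMDGCN() used by this patch matches only
the amdgcn (GCN and newer) architecture. A minimal standalone sketch of the
distinction, using an invented ArchType enum rather than the real
llvm::Triple:

  // Standalone sketch of the AMDGPU vs. AMDGCN distinction.
  #include <cassert>

  enum class ArchType { r600, amdgcn, x86_64 };

  // isAMDGPU(): any AMD GPU target, including the legacy r600 family.
  bool isAMDGPU(ArchType A) {
    return A == ArchType::r600 || A == ArchType::amdgcn;
  }

  // isAMDGCN(): only the GCN-and-newer amdgcn architecture, which is what
  // the OpenMP offloading changes in this patch check for.
  bool isAMDGCN(ArchType A) { return A == ArchType::amdgcn; }

  int main() {
    assert(isAMDGPU(ArchType::r600) && !isAMDGCN(ArchType::r600));
    assert(isAMDGPU(ArchType::amdgcn) && isAMDGCN(ArchType::amdgcn));
    return 0;
  }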


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D79754/new/

https://reviews.llvm.org/D79754


