================
@@ -13067,6 +13067,14 @@ def warn_cuda_maxclusterrank_sm_90 : Warning<
   "maxclusterrank requires sm_90 or higher, CUDA arch provided: %0, ignoring "
   "%1 attribute">, InGroup<IgnoredAttributes>;
+def err_cuda_cluster_attr_not_supported : Error<
+  "%0 is not supported for this GPU architecture"
+>;
+
+def err_cuda_cluster_dims_too_large : Error<
+  "only a maximum of %0 thread blocks in a cluster is supported"
----------------
shiltian wrote:
I'd not add "CUDA" here, since this is for both CUDA and HIP. It also doesn't look good if we write "CUDA/HIP". The best option would be to select "CUDA" or "HIP" accordingly, but I don't see that it's really necessary.
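For the record, if a per-language spelling ever became worth doing, a minimal sketch using Clang's existing `%select` diagnostic modifier could look like the following. The diagnostic name, message wording, and `MaxClusterSize` are placeholders for illustration, not part of this patch:

```tablegen
// Sketch only, with hypothetical names: the %select modifier picks the
// language spelling at the emission site instead of hard-coding "CUDA"
// in the message text. The Sema caller streams the select index and the
// limit, e.g.:
//   Diag(Loc, diag::err_cluster_dims_too_large)
//       << (getLangOpts().HIP ? 1 : 0) << MaxClusterSize;
def err_cluster_dims_too_large : Error<
  "%select{CUDA|HIP}0 supports a maximum of %1 thread blocks in a cluster">;
```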
https://github.com/llvm/llvm-project/pull/156686

_______________________________________________
cfe-commits mailing list
[email protected]
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits