[PATCH] D40275: [CUDA] Report "unsupported VLA" errors only on device side.

2017-11-28 Thread John McCall via Phabricator via cfe-commits
rjmccall added inline comments.



Comment at: clang/lib/Sema/SemaType.cpp:2188
+               !Context.getTargetInfo().isVLASupported() &&
+               shouldDiagnoseTargetSupportFromOpenMP()) {
+      // Some targets don't support VLAs.

tra wrote:
> rjmccall wrote:
> > tra wrote:
> > > tra wrote:
> > > > rjmccall wrote:
> > > > > Please write this check so that it trips in an "ordinary" build on a 
> > > > > target that just happens to not support VLAs, something like:
> > > > > 
> > > > >   else if (!Context.getTargetInfo().isVLASupported() && 
> > > > > shouldDiagnoseTargetSupportFromOpenMP())
> > > > > 
> > > > > If you want to include the explicit OpenMP check there, it would need 
> > > > > to be:
> > > > > 
> > > > >   else if (!Context.getTargetInfo().isVLASupported() && 
> > > > > (!getLangOpts().OpenMP || shouldDiagnoseTargetSupportFromOpenMP()))
> > > > > 
> > > > > but I think the first looks better.
> > > > > 
> > > > > The CUDA and OpenMP paths here seem to be trying to achieve analogous 
> > > > > things; it's unfortunate that we can't find a way to unify their 
> > > > > approaches, even if we'd eventually want to use different diagnostic 
> > > > > text.  I imagine that the target-environment language restrictions 
> > > > > are basically the same, since they arise for the same fundamental 
> > > > > reasons, so all the places using CUDADiagIfDeviceCode are likely to 
> > > > > have a check for shouldDiagnoseTargetSupportFromOpenMP() and 
> > > > > vice-versa.
> > > > The problem is that in CUDA we can't just do 
> > > > ```
> > > > if (!Context.getTargetInfo().isVLASupported() && 
> > > > shouldDiagnoseTargetSupportFromOpenMP())
> > > >Diag(Loc, diag::err_vla_unsupported);
> > > > ```
> > > > 
> > > > In some situations diag messages will only be emitted if we attempt to 
> > > > generate unsupported code on device side.
> > > > Consider 
> > > > ```
> > > > __host__ __device__ void foo(int n) {
> > > >   int vla[n];
> > > > }
> > > > ```
> > > > 
> > > > When Sema sees this code during compilation, it cannot tell whether 
> > > > there is an error. Calling foo from host code is perfectly valid. 
> > > > Calling it from device code is not. `CUDADiagIfDeviceCode` creates 
> > > > 'postponed' diagnostics which only get emitted if we ever need to 
> > > > generate code for the function on device.
> > > > 
> > > > So, while CUDA and OpenMP do similar things, they are not quite the 
> > > > same.  If your goal is to generalize CUDA and OpenMP handling, then it 
> > > > would have to be folded into diagnostic emission itself and we'll need 
> > > > an analog of CUDADiagIfDeviceCode which can handle both OpenMP and 
> > > > CUDA. 
> > > > E.g. something like this:
> > > > ```
> > > > ??? DiagIfDeviceCode(???) {
> > > >if (OpenCL || (OpenMP && shouldDiagnoseTargetSupportFromOpenMP()))
> > > >Diag(...);
> > > >else if (CUDA)
> > > >CUDADiagIfDeviceCode()
> > > > } 
> > > > 
> > > > ...
> > > > 
> > > > if (!Context.getTargetInfo().isVLASupported()) 
> > > >DiagIfDeviceCode();
> > > > 
> > > > ```
> > > > 
> > > > Would that work for you?
> > > There's another issue with this approach -- the diagnostics themselves. 
> > > Each dialect has its own. Specifically, CUDA diags have details that are 
> > > relevant only to CUDA. I suspect OpenMP has something specific as well. 
> > > If we insist on emitting only one kind of error for a particular case 
> > > across all dialects, we'll have to stick to a bare-bones "feature X is 
> > > not supported" which will not have sufficient detail to explain why the 
> > > error was triggered in CUDA.
> > > 
> > > IMO dialect-specific handling of CUDA errors in this case is the lesser 
> > > evil. 
> > > 
> > > I'll update the patch to handle non-CUDA cases the way you suggested.
> > If there really is interesting language-specific information to provide in 
> > a diagnostic, I agree that it's hard to avoid having different code for 
> > different targets.  On the other hand, the CUDA-specific information in 
> > this specific diagnostic seems unnecessary — does the user really care 
> > about whether the function was '__device__' vs. '__host__ __device__'? In 
> > fact, isn't the latter a bit misleading? — and I feel like a generic 
> > "cannot use variable-length arrays when compiling for device 'nvptx64'" 
> > would be perfectly satisfactory in both CUDA and OpenMP, and that's 
> > probably true for almost all of these diagnostics.
> > 
> > On a completely different note, I do want to point out that the only thing 
> > you actually *need* to ban here is declaring a local variable of VLA type.  
> > There's no reason at all to ban VLA types in general; they just compile to 
> > extra arithmetic.
> Agreed. While host/device attributes are part of a function signature in 
> CUDA, in this case only the 'device' part makes the difference, and therefore 
> common-style reporting the way you suggest would be sufficient to indicate 
> the error in the device-side code.

[PATCH] D40275: [CUDA] Report "unsupported VLA" errors only on device side.

2017-11-28 Thread Artem Belevich via Phabricator via cfe-commits
This revision was automatically updated to reflect the committed changes.
Closed by commit rL319201: [CUDA] Report "unsupported VLA" errors only on 
device side. (authored by tra).

Changed prior to commit:
  https://reviews.llvm.org/D40275?vs=123831&id=124607#toc

Repository:
  rL LLVM

https://reviews.llvm.org/D40275

Files:
  cfe/trunk/lib/Sema/SemaType.cpp
  cfe/trunk/test/SemaCUDA/call-stack-for-deferred-err.cu
  cfe/trunk/test/SemaCUDA/no-call-stack-for-immediate-errs.cu
  cfe/trunk/test/SemaCUDA/vla.cu


Index: cfe/trunk/test/SemaCUDA/vla.cu
===
--- cfe/trunk/test/SemaCUDA/vla.cu
+++ cfe/trunk/test/SemaCUDA/vla.cu
@@ -1,5 +1,5 @@
-// RUN: %clang_cc1 -fcuda-is-device -fsyntax-only -verify %s
-// RUN: %clang_cc1 -fsyntax-only -verify -DHOST %s
+// RUN: %clang_cc1 -triple nvptx64-nvidia-cuda -fcuda-is-device -verify %s
+// RUN: %clang_cc1 -triple nvptx64-nvidia-cuda -verify -DHOST %s
 
 #include "Inputs/cuda.h"
 
Index: cfe/trunk/test/SemaCUDA/no-call-stack-for-immediate-errs.cu
===
--- cfe/trunk/test/SemaCUDA/no-call-stack-for-immediate-errs.cu
+++ cfe/trunk/test/SemaCUDA/no-call-stack-for-immediate-errs.cu
@@ -1,4 +1,4 @@
-// RUN: %clang_cc1 -fcuda-is-device -fsyntax-only -verify %s
+// RUN: %clang_cc1 -triple nvptx64-nvidia-cuda -fcuda-is-device -fsyntax-only -verify %s
 
 #include "Inputs/cuda.h"
 
Index: cfe/trunk/test/SemaCUDA/call-stack-for-deferred-err.cu
===
--- cfe/trunk/test/SemaCUDA/call-stack-for-deferred-err.cu
+++ cfe/trunk/test/SemaCUDA/call-stack-for-deferred-err.cu
@@ -1,4 +1,4 @@
-// RUN: %clang_cc1 -fcuda-is-device -fsyntax-only -verify %s
+// RUN: %clang_cc1 -triple nvptx64-nvidia-cuda -fcuda-is-device -fsyntax-only -verify %s
 
 #include "Inputs/cuda.h"
 
Index: cfe/trunk/lib/Sema/SemaType.cpp
===
--- cfe/trunk/lib/Sema/SemaType.cpp
+++ cfe/trunk/lib/Sema/SemaType.cpp
@@ -2180,14 +2180,17 @@
     Diag(Loc, diag::err_opencl_vla);
     return QualType();
   }
-  // CUDA device code doesn't support VLAs.
-  if (getLangOpts().CUDA && T->isVariableArrayType())
-    CUDADiagIfDeviceCode(Loc, diag::err_cuda_vla) << CurrentCUDATarget();
-  // Some targets don't support VLAs.
-  if (T->isVariableArrayType() && !Context.getTargetInfo().isVLASupported() &&
-      shouldDiagnoseTargetSupportFromOpenMP()) {
-    Diag(Loc, diag::err_vla_unsupported);
-    return QualType();
+
+  if (T->isVariableArrayType() && !Context.getTargetInfo().isVLASupported()) {
+    if (getLangOpts().CUDA) {
+      // CUDA device code doesn't support VLAs.
+      CUDADiagIfDeviceCode(Loc, diag::err_cuda_vla) << CurrentCUDATarget();
+    } else if (!getLangOpts().OpenMP ||
+               shouldDiagnoseTargetSupportFromOpenMP()) {
+      // Some targets don't support VLAs.
+      Diag(Loc, diag::err_vla_unsupported);
+      return QualType();
+    }
   }
 
   // If this is not C99, extwarn about VLA's and C99 array size modifiers.


[PATCH] D40275: [CUDA] Report "unsupported VLA" errors only on device side.

2017-11-28 Thread Artem Belevich via Phabricator via cfe-commits
tra added inline comments.



Comment at: clang/lib/Sema/SemaType.cpp:2188
+               !Context.getTargetInfo().isVLASupported() &&
+               shouldDiagnoseTargetSupportFromOpenMP()) {
+      // Some targets don't support VLAs.

rjmccall wrote:
> tra wrote:
> > tra wrote:
> > > rjmccall wrote:
> > > > Please write this check so that it trips in an "ordinary" build on a 
> > > > target that just happens to not support VLAs, something like:
> > > > 
> > > >   else if (!Context.getTargetInfo().isVLASupported() && 
> > > > shouldDiagnoseTargetSupportFromOpenMP())
> > > > 
> > > > If you want to include the explicit OpenMP check there, it would need 
> > > > to be:
> > > > 
> > > >   else if (!Context.getTargetInfo().isVLASupported() && 
> > > > (!getLangOpts().OpenMP || shouldDiagnoseTargetSupportFromOpenMP()))
> > > > 
> > > > but I think the first looks better.
> > > > 
> > > > The CUDA and OpenMP paths here seem to be trying to achieve analogous 
> > > > things; it's unfortunate that we can't find a way to unify their 
> > > > approaches, even if we'd eventually want to use different diagnostic 
> > > > text.  I imagine that the target-environment language restrictions are 
> > > > basically the same, since they arise for the same fundamental reasons, 
> > > > so all the places using CUDADiagIfDeviceCode are likely to have a check 
> > > > for shouldDiagnoseTargetSupportFromOpenMP() and vice-versa.
> > > The problem is that in CUDA we can't just do 
> > > ```
> > > if (!Context.getTargetInfo().isVLASupported() && 
> > > shouldDiagnoseTargetSupportFromOpenMP())
> > >Diag(Loc, diag::err_vla_unsupported);
> > > ```
> > > 
> > > In some situations diag messages will only be emitted if we attempt to 
> > > generate unsupported code on device side.
> > > Consider 
> > > ```
> > > __host__ __device__ void foo(int n) {
> > >   int vla[n];
> > > }
> > > ```
> > > 
> > > When Sema sees this code during compilation, it cannot tell whether 
> > > there is an error. Calling foo from host code is perfectly valid. 
> > > Calling it from device code is not. `CUDADiagIfDeviceCode` creates 
> > > 'postponed' diagnostics which only get emitted if we ever need to 
> > > generate code for the function on device.
> > > 
> > > So, while CUDA and OpenMP do similar things, they are not quite the same. 
> > >  If your goal is to generalize CUDA and OpenMP handling, then it would 
> > > have to be folded into diagnostic emission itself and we'll need an 
> > > analog of CUDADiagIfDeviceCode which can handle both OpenMP and CUDA. 
> > > E.g. something like this:
> > > ```
> > > ??? DiagIfDeviceCode(???) {
> > >if (OpenCL || (OpenMP && shouldDiagnoseTargetSupportFromOpenMP()))
> > >Diag(...);
> > >else if (CUDA)
> > >CUDADiagIfDeviceCode()
> > > } 
> > > 
> > > ...
> > > 
> > > if (!Context.getTargetInfo().isVLASupported()) 
> > >DiagIfDeviceCode();
> > > 
> > > ```
> > > 
> > > Would that work for you?
> > There's another issue with this approach -- the diagnostics themselves. 
> > Each dialect has its own. Specifically, CUDA diags have details that are 
> > relevant only to CUDA. I suspect OpenMP has something specific as well. If 
> > we insist on emitting only one kind of error for a particular case across 
> > all dialects, we'll have to stick to a bare-bones "feature X is not 
> > supported" which will not have sufficient detail to explain why the error 
> > was triggered in CUDA.
> > 
> > IMO dialect-specific handling of CUDA errors in this case is the lesser 
> > evil. 
> > 
> > I'll update the patch to handle non-CUDA cases the way you suggested.
> If there really is interesting language-specific information to provide in a 
> diagnostic, I agree that it's hard to avoid having different code for 
> different targets.  On the other hand, the CUDA-specific information in this 
> specific diagnostic seems unnecessary — does the user really care about 
> whether the function was '__device__' vs. '__host__ __device__'? In fact, 
> isn't the latter a bit misleading? — and I feel like a generic "cannot use 
> variable-length arrays when compiling for device 'nvptx64'" would be 
> perfectly satisfactory in both CUDA and OpenMP, and that's probably true for 
> almost all of these diagnostics.
> 
> On a completely different note, I do want to point out that the only thing 
> you actually *need* to ban here is declaring a local variable of VLA type.  
> There's no reason at all to ban VLA types in general; they just compile to 
> extra arithmetic.
Agreed. While host/device attributes are part of a function signature in CUDA, 
in this case only the 'device' part makes the difference, and therefore 
common-style reporting the way you suggest would be sufficient to indicate the 
error in the device-side code.

As for the VLA types, clang can currently compile code with VLA arguments: 
https://godbolt.org/g/43hVu9
Clang explicitly does not support GCC's VLA-in-structure extension. 
Are 

[PATCH] D40275: [CUDA] Report "unsupported VLA" errors only on device side.

2017-11-28 Thread John McCall via Phabricator via cfe-commits
rjmccall added inline comments.



Comment at: clang/lib/Sema/SemaType.cpp:2188
+               !Context.getTargetInfo().isVLASupported() &&
+               shouldDiagnoseTargetSupportFromOpenMP()) {
+      // Some targets don't support VLAs.

tra wrote:
> tra wrote:
> > rjmccall wrote:
> > > Please write this check so that it trips in an "ordinary" build on a 
> > > target that just happens to not support VLAs, something like:
> > > 
> > >   else if (!Context.getTargetInfo().isVLASupported() && 
> > > shouldDiagnoseTargetSupportFromOpenMP())
> > > 
> > > If you want to include the explicit OpenMP check there, it would need to 
> > > be:
> > > 
> > >   else if (!Context.getTargetInfo().isVLASupported() && 
> > > (!getLangOpts().OpenMP || shouldDiagnoseTargetSupportFromOpenMP()))
> > > 
> > > but I think the first looks better.
> > > 
> > > The CUDA and OpenMP paths here seem to be trying to achieve analogous 
> > > things; it's unfortunate that we can't find a way to unify their 
> > > approaches, even if we'd eventually want to use different diagnostic 
> > > text.  I imagine that the target-environment language restrictions are 
> > > basically the same, since they arise for the same fundamental reasons, so 
> > > all the places using CUDADiagIfDeviceCode are likely to have a check for 
> > > shouldDiagnoseTargetSupportFromOpenMP() and vice-versa.
> > The problem is that in CUDA we can't just do 
> > ```
> > if (!Context.getTargetInfo().isVLASupported() && 
> > shouldDiagnoseTargetSupportFromOpenMP())
> >Diag(Loc, diag::err_vla_unsupported);
> > ```
> > 
> > In some situations diag messages will only be emitted if we attempt to 
> > generate unsupported code on device side.
> > Consider 
> > ```
> > __host__ __device__ void foo(int n) {
> >   int vla[n];
> > }
> > ```
> > 
> > When Sema sees this code during compilation, it cannot tell whether there 
> > is an error. Calling foo from host code is perfectly valid. Calling it 
> > from device code is not. `CUDADiagIfDeviceCode` creates 'postponed' 
> > diagnostics which only get emitted if we ever need to generate code for 
> > the function on device.
> > 
> > So, while CUDA and OpenMP do similar things, they are not quite the same.  
> > If your goal is to generalize CUDA and OpenMP handling, then it would have 
> > to be folded into diagnostic emission itself and we'll need an analog of 
> > CUDADiagIfDeviceCode which can handle both OpenMP and CUDA. 
> > E.g. something like this:
> > ```
> > ??? DiagIfDeviceCode(???) {
> >if (OpenCL || (OpenMP && shouldDiagnoseTargetSupportFromOpenMP()))
> >Diag(...);
> >else if (CUDA)
> >CUDADiagIfDeviceCode()
> > } 
> > 
> > ...
> > 
> > if (!Context.getTargetInfo().isVLASupported()) 
> >DiagIfDeviceCode();
> > 
> > ```
> > 
> > Would that work for you?
> There's another issue with this approach -- the diagnostics themselves. Each 
> dialect has its own. Specifically, CUDA diags have details that are relevant 
> only to CUDA. I suspect OpenMP has something specific as well. If we insist 
> on emitting only one kind of error for a particular case across all dialects, 
> we'll have to stick to a bare-bones "feature X is not supported" which will 
> not have sufficient detail to explain why the error was triggered in CUDA.
> 
> IMO dialect-specific handling of CUDA errors in this case is the lesser evil. 
> 
> I'll update the patch to handle non-CUDA cases the way you suggested.
If there really is interesting language-specific information to provide in a 
diagnostic, I agree that it's hard to avoid having different code for different 
targets.  On the other hand, the CUDA-specific information in this specific 
diagnostic seems unnecessary — does the user really care about whether the 
function was '__device__' vs. '__host__ __device__'? In fact, isn't the latter 
a bit misleading? — and I feel like a generic "cannot use variable-length 
arrays when compiling for device 'nvptx64'" would be perfectly satisfactory in 
both CUDA and OpenMP, and that's probably true for almost all of these 
diagnostics.

On a completely different note, I do want to point out that the only thing you 
actually *need* to ban here is declaring a local variable of VLA type.  There's 
no reason at all to ban VLA types in general; they just compile to extra 
arithmetic.
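
To illustrate the distinction (a hypothetical C-style sketch, not from the 
review): only the local declaration needs dynamic stack allocation; a VLA 
*type* by itself just changes how indexing is computed:

```
// 'a' decays to int (*)[m]; a[i][j] computes the address base + (i*m + j)
// elements from 'a' -- just extra arithmetic, no allocation.
int sum(int n, int m, int a[n][m]) {
  int s = 0;
  for (int i = 0; i < n; ++i)
    for (int j = 0; j < m; ++j)
      s += a[i][j];
  return s;
}

void needs_ban(int n) {
  int buf[n]; // a local VLA: this requires dynamic stack allocation
  (void)buf;
}
```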


https://reviews.llvm.org/D40275





[PATCH] D40275: [CUDA] Report "unsupported VLA" errors only on device side.

2017-11-28 Thread John McCall via Phabricator via cfe-commits
rjmccall accepted this revision.
rjmccall added a comment.

In https://reviews.llvm.org/D40275#937010, @tra wrote:

> In https://reviews.llvm.org/D40275#933253, @tra wrote:
>
> > @rjmccall : are you OK with this approach? If VLA is not supported by the 
> > target, CUDA is handled as a special case so it can emit deferred diag, 
> > OpenMP reports an error only if shouldDiagnoseTargetSupportFromOpenMP() 
> > allows it, and everything else does so unconditionally.
>
>
> @rjmccall : ping.


Sorry for the delay; I took Thanksgiving week off.  Yes, I think this patch is 
fine now, thanks.


https://reviews.llvm.org/D40275





[PATCH] D40275: [CUDA] Report "unsupported VLA" errors only on device side.

2017-11-27 Thread Artem Belevich via Phabricator via cfe-commits
tra added a comment.

In https://reviews.llvm.org/D40275#933253, @tra wrote:

> @rjmccall : are you OK with this approach? If VLA is not supported by the 
> target, CUDA is handled as a special case so it can emit deferred diag, 
> OpenMP reports an error only if shouldDiagnoseTargetSupportFromOpenMP() 
> allows it, and everything else does so unconditionally.


@rjmccall : ping.


https://reviews.llvm.org/D40275





[PATCH] D40275: [CUDA] Report "unsupported VLA" errors only on device side.

2017-11-22 Thread Artem Belevich via Phabricator via cfe-commits
tra added a comment.

@rjmccall : are you OK with this approach? If VLA is not supported by the 
target, CUDA is handled as a special case so it can emit deferred diag, OpenMP 
reports an error only if shouldDiagnoseTargetSupportFromOpenMP() allows it, and 
everything else does so unconditionally.
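
In code, that dispatch looks like this (condensed from the 
clang/lib/Sema/SemaType.cpp hunk in the current diff):

```
if (T->isVariableArrayType() && !Context.getTargetInfo().isVLASupported()) {
  if (getLangOpts().CUDA) {
    // Deferred: only emitted if the function is ever codegen'd for the device.
    CUDADiagIfDeviceCode(Loc, diag::err_cuda_vla) << CurrentCUDATarget();
  } else if (!getLangOpts().OpenMP || shouldDiagnoseTargetSupportFromOpenMP()) {
    // OpenMP (when allowed) and everything else: immediate hard error.
    Diag(Loc, diag::err_vla_unsupported);
    return QualType();
  }
}
```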


https://reviews.llvm.org/D40275





[PATCH] D40275: [CUDA] Report "unsupported VLA" errors only on device side.

2017-11-21 Thread Artem Belevich via Phabricator via cfe-commits
tra updated this revision to Diff 123831.
tra added a comment.

Updated CUDA tests


https://reviews.llvm.org/D40275

Files:
  clang/lib/Sema/SemaType.cpp
  clang/test/SemaCUDA/call-stack-for-deferred-err.cu
  clang/test/SemaCUDA/no-call-stack-for-immediate-errs.cu
  clang/test/SemaCUDA/vla.cu


Index: clang/test/SemaCUDA/vla.cu
===
--- clang/test/SemaCUDA/vla.cu
+++ clang/test/SemaCUDA/vla.cu
@@ -1,5 +1,5 @@
-// RUN: %clang_cc1 -fcuda-is-device -fsyntax-only -verify %s
-// RUN: %clang_cc1 -fsyntax-only -verify -DHOST %s
+// RUN: %clang_cc1 -triple nvptx64-nvidia-cuda -fcuda-is-device -verify %s
+// RUN: %clang_cc1 -triple nvptx64-nvidia-cuda -verify -DHOST %s
 
 #include "Inputs/cuda.h"
 
Index: clang/test/SemaCUDA/no-call-stack-for-immediate-errs.cu
===
--- clang/test/SemaCUDA/no-call-stack-for-immediate-errs.cu
+++ clang/test/SemaCUDA/no-call-stack-for-immediate-errs.cu
@@ -1,4 +1,4 @@
-// RUN: %clang_cc1 -fcuda-is-device -fsyntax-only -verify %s
+// RUN: %clang_cc1 -triple nvptx64-nvidia-cuda -fcuda-is-device -fsyntax-only -verify %s
 
 #include "Inputs/cuda.h"
 
Index: clang/test/SemaCUDA/call-stack-for-deferred-err.cu
===
--- clang/test/SemaCUDA/call-stack-for-deferred-err.cu
+++ clang/test/SemaCUDA/call-stack-for-deferred-err.cu
@@ -1,4 +1,4 @@
-// RUN: %clang_cc1 -fcuda-is-device -fsyntax-only -verify %s
+// RUN: %clang_cc1 -triple nvptx64-nvidia-cuda -fcuda-is-device -fsyntax-only -verify %s
 
 #include "Inputs/cuda.h"
 
Index: clang/lib/Sema/SemaType.cpp
===
--- clang/lib/Sema/SemaType.cpp
+++ clang/lib/Sema/SemaType.cpp
@@ -2180,14 +2180,17 @@
     Diag(Loc, diag::err_opencl_vla);
     return QualType();
   }
-  // CUDA device code doesn't support VLAs.
-  if (getLangOpts().CUDA && T->isVariableArrayType())
-    CUDADiagIfDeviceCode(Loc, diag::err_cuda_vla) << CurrentCUDATarget();
-  // Some targets don't support VLAs.
-  if (T->isVariableArrayType() && !Context.getTargetInfo().isVLASupported() &&
-      shouldDiagnoseTargetSupportFromOpenMP()) {
-    Diag(Loc, diag::err_vla_unsupported);
-    return QualType();
+
+  if (T->isVariableArrayType() && !Context.getTargetInfo().isVLASupported()) {
+    if (getLangOpts().CUDA) {
+      // CUDA device code doesn't support VLAs.
+      CUDADiagIfDeviceCode(Loc, diag::err_cuda_vla) << CurrentCUDATarget();
+    } else if (!getLangOpts().OpenMP ||
+               shouldDiagnoseTargetSupportFromOpenMP()) {
+      // Some targets don't support VLAs.
+      Diag(Loc, diag::err_vla_unsupported);
+      return QualType();
+    }
   }
 
   // If this is not C99, extwarn about VLA's and C99 array size modifiers.



[PATCH] D40275: [CUDA] Report "unsupported VLA" errors only on device side.

2017-11-21 Thread Artem Belevich via Phabricator via cfe-commits
tra added a comment.

In https://reviews.llvm.org/D40275#931947, @hfinkel wrote:

> > When Sema sees this code during compilation, it can not tell whether there 
> > is an error. Calling foo from the host code is perfectly valid. Calling it 
> > from device code is not. CUDADiagIfDeviceCode creates 'postponed' 
> > diagnostics which only gets emitted if we ever need to generate code for 
> > the function on device.
>
> Interesting. I suspect that we'll end up dealing with this problem for OpenMP 
> as well (in the future - for OpenMP v5). In this next version (for which the 
> draft is available here: 
> http://www.openmp.org/wp-content/uploads/openmp-TR6.pdf), we'll have 
> "implicit declare target" functions (whereby we generate target code based on 
> the locally-defined subset of the transitive closure of the call graph 
> starting from target regions).


We've been contemplating treating all functions as `__host__ __device__` by 
default. After all, most of the source code is target-agnostic. Currently a lot 
of templated code must be `__host__ __device__` in order to be usable, and 
that's a major obstacle to making the standard library and other template 
libraries (somewhat) usable on device. Alas, making `__host__ __device__` the default 
would be a major departure from CUDA semantics.
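
A hypothetical illustration of the obstacle: completely target-agnostic 
template code is unusable from kernels today unless it is annotated:

```
// Without the explicit __host__ __device__ annotation, clamp() could not be
// called from device code, even though nothing in it is target-specific.
template <typename T>
__host__ __device__ T clamp(T v, T lo, T hi) {
  return v < lo ? lo : (hi < v ? hi : v);
}

__global__ void kernel(float *out) { out[0] = clamp(out[0], 0.f, 1.f); }
```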


https://reviews.llvm.org/D40275





[PATCH] D40275: [CUDA] Report "unsupported VLA" errors only on device side.

2017-11-21 Thread Artem Belevich via Phabricator via cfe-commits
tra added inline comments.



Comment at: clang/lib/Sema/SemaType.cpp:2188
+               !Context.getTargetInfo().isVLASupported() &&
+               shouldDiagnoseTargetSupportFromOpenMP()) {
+      // Some targets don't support VLAs.

tra wrote:
> rjmccall wrote:
> > Please write this check so that it trips in an "ordinary" build on a target 
> > that just happens to not support VLAs, something like:
> > 
> >   else if (!Context.getTargetInfo().isVLASupported() && 
> > shouldDiagnoseTargetSupportFromOpenMP())
> > 
> > If you want to include the explicit OpenMP check there, it would need to be:
> > 
> >   else if (!Context.getTargetInfo().isVLASupported() && 
> > (!getLangOpts().OpenMP || shouldDiagnoseTargetSupportFromOpenMP()))
> > 
> > but I think the first looks better.
> > 
> > The CUDA and OpenMP paths here seem to be trying to achieve analogous 
> > things; it's unfortunate that we can't find a way to unify their 
> > approaches, even if we'd eventually want to use different diagnostic text.  
> > I imagine that the target-environment language restrictions are basically 
> > the same, since they arise for the same fundamental reasons, so all the 
> > places using CUDADiagIfDeviceCode are likely to have a check for 
> > shouldDiagnoseTargetSupportFromOpenMP() and vice-versa.
> The problem is that in CUDA we can't just do 
> ```
> if (!Context.getTargetInfo().isVLASupported() && 
> shouldDiagnoseTargetSupportFromOpenMP())
>Diag(Loc, diag::err_vla_unsupported);
> ```
> 
> In some situations diag messages will only be emitted if we attempt to 
> generate unsupported code on device side.
> Consider 
> ```
> __host__ __device__ void foo(int n) {
>   int vla[n];
> }
> ```
> 
> When Sema sees this code during compilation, it cannot tell whether there is 
> an error. Calling foo from host code is perfectly valid. Calling it from 
> device code is not. `CUDADiagIfDeviceCode` creates 'postponed' diagnostics 
> which only get emitted if we ever need to generate code for the function on 
> device.
> 
> So, while CUDA and OpenMP do similar things, they are not quite the same.  If 
> your goal is to generalize CUDA and OpenMP handling, then it would have to be 
> folded into diagnostic emission itself and we'll need an analog of 
> CUDADiagIfDeviceCode which can handle both OpenMP and CUDA. 
> E.g. something like this:
> ```
> ??? DiagIfDeviceCode(???) {
>if (OpenCL || (OpenMP && shouldDiagnoseTargetSupportFromOpenMP()))
>Diag(...);
>else if (CUDA)
>CUDADiagIfDeviceCode()
> } 
> 
> ...
> 
> if (!Context.getTargetInfo().isVLASupported()) 
>DiagIfDeviceCode();
> 
> ```
> 
> Would that work for you?
There's another issue with this approach -- the diagnostics themselves. Each 
dialect has its own. Specifically, CUDA diags have details that are relevant 
only to CUDA. I suspect OpenMP has something specific as well. If we insist on 
emitting only one kind of error for a particular case across all dialects, 
we'll have to stick to a bare-bones "feature X is not supported" which will not 
have sufficient detail to explain why the error was triggered in CUDA.

IMO dialect-specific handling of CUDA errors in this case is the lesser evil. 

I'll update the patch to handle non-CUDA cases the way you suggested.


https://reviews.llvm.org/D40275





[PATCH] D40275: [CUDA] Report "unsupported VLA" errors only on device side.

2017-11-21 Thread Artem Belevich via Phabricator via cfe-commits
tra updated this revision to Diff 123823.
tra added a comment.

Updated to partially address rjmccall@ comments.


https://reviews.llvm.org/D40275

Files:
  clang/lib/Sema/SemaType.cpp
  clang/test/SemaCUDA/vla.cu


Index: clang/test/SemaCUDA/vla.cu
===
--- clang/test/SemaCUDA/vla.cu
+++ clang/test/SemaCUDA/vla.cu
@@ -1,5 +1,5 @@
-// RUN: %clang_cc1 -fcuda-is-device -fsyntax-only -verify %s
-// RUN: %clang_cc1 -fsyntax-only -verify -DHOST %s
+// RUN: %clang_cc1 -triple nvptx64-nvidia-cuda -fcuda-is-device -verify %s
+// RUN: %clang_cc1 -triple nvptx64-nvidia-cuda -verify -DHOST %s
 
 #include "Inputs/cuda.h"
 
Index: clang/lib/Sema/SemaType.cpp
===
--- clang/lib/Sema/SemaType.cpp
+++ clang/lib/Sema/SemaType.cpp
@@ -2175,19 +2175,16 @@
     T = Context.getConstantArrayType(T, ConstVal, ASM, Quals);
   }
 
-  // OpenCL v1.2 s6.9.d: variable length arrays are not supported.
-  if (getLangOpts().OpenCL && T->isVariableArrayType()) {
-    Diag(Loc, diag::err_opencl_vla);
-    return QualType();
-  }
-  // CUDA device code doesn't support VLAs.
-  if (getLangOpts().CUDA && T->isVariableArrayType())
-    CUDADiagIfDeviceCode(Loc, diag::err_cuda_vla) << CurrentCUDATarget();
-  // Some targets don't support VLAs.
-  if (T->isVariableArrayType() && !Context.getTargetInfo().isVLASupported() &&
-      shouldDiagnoseTargetSupportFromOpenMP()) {
-    Diag(Loc, diag::err_vla_unsupported);
-    return QualType();
+  if (T->isVariableArrayType() && !Context.getTargetInfo().isVLASupported()) {
+    if (getLangOpts().CUDA) {
+      // CUDA device code doesn't support VLAs.
+      CUDADiagIfDeviceCode(Loc, diag::err_cuda_vla) << CurrentCUDATarget();
+    } else if (!getLangOpts().OpenMP ||
+               shouldDiagnoseTargetSupportFromOpenMP()) {
+      // Some targets don't support VLAs.
+      Diag(Loc, diag::err_vla_unsupported);
+      return QualType();
+    }
   }
 
   // If this is not C99, extwarn about VLA's and C99 array size modifiers.




[PATCH] D40275: [CUDA] Report "unsupported VLA" errors only on device side.

2017-11-21 Thread Hal Finkel via Phabricator via cfe-commits
hfinkel added a comment.

> When Sema sees this code during compilation, it cannot tell whether there is 
> an error. Calling foo from host code is perfectly valid. Calling it from 
> device code is not. CUDADiagIfDeviceCode creates 'postponed' diagnostics 
> which only get emitted if we ever need to generate code for the function on 
> device.

Interesting. I suspect that we'll end up dealing with this problem for OpenMP 
as well (in the future - for OpenMP v5). In this next version (for which the 
draft is available here: 
http://www.openmp.org/wp-content/uploads/openmp-TR6.pdf), we'll have "implicit 
declare target" functions (whereby we generate target code based on the 
locally-defined subset of the transitive closure of the call graph starting 
from target regions).
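
A sketch of what that would look like (hypothetical example based on the TR6 
draft; 'helper' gets compiled for the device because it is reachable from a 
target region, without any explicit directive):

```
static int helper(int x) { return x * 2; }   // no 'declare target' directive

void run(int *out, int n) {
  #pragma omp target map(tofrom: out[0:n])
  for (int i = 0; i < n; ++i)
    out[i] = helper(out[i]);   // pulls helper into device codegen
}
```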


https://reviews.llvm.org/D40275





[PATCH] D40275: [CUDA] Report "unsupported VLA" errors only on device side.

2017-11-21 Thread Artem Belevich via Phabricator via cfe-commits
tra added inline comments.



Comment at: clang/lib/Sema/SemaType.cpp:2188
+               !Context.getTargetInfo().isVLASupported() &&
+               shouldDiagnoseTargetSupportFromOpenMP()) {
+      // Some targets don't support VLAs.

rjmccall wrote:
> Please write this check so that it trips in an "ordinary" build on a target 
> that just happens to not support VLAs, something like:
> 
>   else if (!Context.getTargetInfo().isVLASupported() && 
> shouldDiagnoseTargetSupportFromOpenMP())
> 
> If you want to include the explicit OpenMP check there, it would need to be:
> 
>   else if (!Context.getTargetInfo().isVLASupported() && 
> (!getLangOpts().OpenMP || shouldDiagnoseTargetSupportFromOpenMP()))
> 
> but I think the first looks better.
> 
> The CUDA and OpenMP paths here seem to be trying to achieve analogous things; 
> it's unfortunate that we can't find a way to unify their approaches, even if 
> we'd eventually want to use different diagnostic text.  I imagine that the 
> target-environment language restrictions are basically the same, since they 
> arise for the same fundamental reasons, so all the places using 
> CUDADiagIfDeviceCode are likely to have a check for 
> shouldDiagnoseTargetSupportFromOpenMP() and vice-versa.
The problem is that in CUDA we can't just do 
```
if (!Context.getTargetInfo().isVLASupported() && 
    shouldDiagnoseTargetSupportFromOpenMP())
   Diag(Loc, diag::err_vla_unsupported);
```

In some situations diag messages will only be emitted if we attempt to generate 
unsupported code on device side.
Consider 
```
__host__ __device__ void foo(int n) {
  int vla[n];
}
```

When Sema sees this code during compilation, it cannot tell whether there is 
an error. Calling foo from host code is perfectly valid. Calling it from 
device code is not. `CUDADiagIfDeviceCode` creates 'postponed' diagnostics 
which only get emitted if we ever need to generate code for the function on 
device.
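
For example, with hypothetical callers added for illustration:

```
__host__ __device__ void foo(int n) {
  int vla[n]; // err_cuda_vla is recorded here, but deferred
}

void host_user(int n) { foo(n); }              // OK: host codegen only
__global__ void device_user(int n) { foo(n); } // error: foo reaches device codegen
```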

So, while CUDA and OpenMP do similar things, they are not quite the same.  If 
your goal is to generalize CUDA and OpenMP handling, then it would have to be 
folded into diagnostic emission itself and we'll need an analog of 
CUDADiagIfDeviceCode which can handle both OpenMP and CUDA. 
E.g. something like this:
```
??? DiagIfDeviceCode(???) {
   if (OpenCL || (OpenMP && shouldDiagnoseTargetSupportFromOpenMP()))
      Diag(...);
   else if (CUDA)
      CUDADiagIfDeviceCode();
} 

...

if (!Context.getTargetInfo().isVLASupported()) 
   DiagIfDeviceCode();

```

Would that work for you?


https://reviews.llvm.org/D40275





[PATCH] D40275: [CUDA] Report "unsupported VLA" errors only on device side.

2017-11-20 Thread John McCall via Phabricator via cfe-commits
rjmccall added inline comments.



Comment at: clang/lib/Sema/SemaType.cpp:2188
+               !Context.getTargetInfo().isVLASupported() &&
+               shouldDiagnoseTargetSupportFromOpenMP()) {
+      // Some targets don't support VLAs.

Please write this check so that it trips in an "ordinary" build on a target 
that just happens to not support VLAs, something like:

  else if (!Context.getTargetInfo().isVLASupported() && 
shouldDiagnoseTargetSupportFromOpenMP())

If you want to include the explicit OpenMP check there, it would need to be:

  else if (!Context.getTargetInfo().isVLASupported() && (!getLangOpts().OpenMP 
|| shouldDiagnoseTargetSupportFromOpenMP()))

but I think the first looks better.

The CUDA and OpenMP paths here seem to be trying to achieve analogous things; 
it's unfortunate that we can't find a way to unify their approaches, even if 
we'd eventually want to use different diagnostic text.  I imagine that the 
target-environment language restrictions are basically the same, since they 
arise for the same fundamental reasons, so all the places using 
CUDADiagIfDeviceCode are likely to have a check for 
shouldDiagnoseTargetSupportFromOpenMP() and vice-versa.


https://reviews.llvm.org/D40275





[PATCH] D40275: [CUDA] Report "unsupported VLA" errors only on device side.

2017-11-20 Thread Artem Belevich via Phabricator via cfe-commits
tra updated this revision to Diff 123694.
tra added a comment.

Updates CUDA's VLA test to use nvptx triple.


https://reviews.llvm.org/D40275

Files:
  clang/lib/Sema/SemaType.cpp
  clang/test/SemaCUDA/vla.cu


Index: clang/test/SemaCUDA/vla.cu
===
--- clang/test/SemaCUDA/vla.cu
+++ clang/test/SemaCUDA/vla.cu
@@ -1,5 +1,5 @@
-// RUN: %clang_cc1 -fcuda-is-device -fsyntax-only -verify %s
-// RUN: %clang_cc1 -fsyntax-only -verify -DHOST %s
+// RUN: %clang_cc1 -triple nvptx64-nvidia-cuda -fcuda-is-device -verify %s
+// RUN: %clang_cc1 -triple nvptx64-nvidia-cuda -verify -DHOST %s
 
 #include "Inputs/cuda.h"
 
Index: clang/lib/Sema/SemaType.cpp
===
--- clang/lib/Sema/SemaType.cpp
+++ clang/lib/Sema/SemaType.cpp
@@ -2175,19 +2175,21 @@
     T = Context.getConstantArrayType(T, ConstVal, ASM, Quals);
   }
 
-  // OpenCL v1.2 s6.9.d: variable length arrays are not supported.
-  if (getLangOpts().OpenCL && T->isVariableArrayType()) {
-    Diag(Loc, diag::err_opencl_vla);
-    return QualType();
-  }
-  // CUDA device code doesn't support VLAs.
-  if (getLangOpts().CUDA && T->isVariableArrayType())
-    CUDADiagIfDeviceCode(Loc, diag::err_cuda_vla) << CurrentCUDATarget();
-  // Some targets don't support VLAs.
-  if (T->isVariableArrayType() && !Context.getTargetInfo().isVLASupported() &&
-      shouldDiagnoseTargetSupportFromOpenMP()) {
-    Diag(Loc, diag::err_vla_unsupported);
-    return QualType();
+  if (T->isVariableArrayType()) {
+    if (getLangOpts().OpenCL) {
+      // OpenCL v1.2 s6.9.d: variable length arrays are not supported.
+      Diag(Loc, diag::err_opencl_vla);
+      return QualType();
+    } else if (getLangOpts().CUDA) {
+      // CUDA device code doesn't support VLAs.
+      CUDADiagIfDeviceCode(Loc, diag::err_cuda_vla) << CurrentCUDATarget();
+    } else if (getLangOpts().OpenMP &&
+               !Context.getTargetInfo().isVLASupported() &&
+               shouldDiagnoseTargetSupportFromOpenMP()) {
+      // Some targets don't support VLAs.
+      Diag(Loc, diag::err_vla_unsupported);
+      return QualType();
+    }
   }
 
   // If this is not C99, extwarn about VLA's and C99 array size modifiers.




[PATCH] D40275: [CUDA] Report "unsupported VLA" errors only on device side.

2017-11-20 Thread Jonas Hahnfeld via Phabricator via cfe-commits
Hahnfeld added a comment.

In https://reviews.llvm.org/D40275#930981, @tra wrote:

> In https://reviews.llvm.org/D40275#930948, @Hahnfeld wrote:
>
> > In https://reviews.llvm.org/D39505 @rjmccall requested that the check 
> > should be made independent of the language. To preserve this, I think the 
> > CUDA-specific checks should be added to the generic case instead of 
> > restricting its evaluation.
>
>
> I'm not sure what exactly you or @rjmccall  have in mind. Specifically - what 
> is the 'generic case' CUDA checks should be added to? Could you give me an 
> example?


Not supporting VLAs is a property of the target we are compiling for, see newly 
added `Context.getTargetInfo().isVLASupported()`. So neither CUDA nor OpenMP 
are special cases in general, it's rather that the targeted architecture 
doesn't support that feature. What is a special case though is that both CUDA 
and OpenMP analyze the complete host code again and we need to suppress the 
diagnostic if the VLA is encountered in the host code that is never codegen'd 
for the device. For OpenMP, this special case is encoded in 
`shouldDiagnoseTargetSupportFromOpenMP` (a horrible name - suggestions 
welcome!) and I think you should add a similar check for CUDA.
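
A hypothetical OpenMP example of the re-analysis problem, assuming an offload 
target without VLA support (e.g. nvptx):

```
void host_only(int n) {
  int vla[n];   // seen again during device compilation, but must not be
  (void)vla;    // diagnosed: it is never codegen'd for the device
}

void offloaded(int n) {
  #pragma omp target
  {
    int vla[n]; // this one should be diagnosed
    (void)vla;
  }
}
```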


https://reviews.llvm.org/D40275





[PATCH] D40275: [CUDA] Report "unsupported VLA" errors only on device side.

2017-11-20 Thread Artem Belevich via Phabricator via cfe-commits
tra added a comment.

In https://reviews.llvm.org/D40275#930982, @Hahnfeld wrote:

> And please add a regression test which is apparently missing for the case 
> that a VLA is **NOT** diagnosed in CUDA mode.


Hmm. We do have test/SemaCUDA/vla.cu which should've triggered the error. Let 
me see why it didn't happen.


https://reviews.llvm.org/D40275





[PATCH] D40275: [CUDA] Report "unsupported VLA" errors only on device side.

2017-11-20 Thread Artem Belevich via Phabricator via cfe-commits
tra added a comment.

In https://reviews.llvm.org/D40275#930948, @Hahnfeld wrote:

> In https://reviews.llvm.org/D39505 @rjmccall requested that the check should 
> be made independent of the language. To preserve this, I think the 
> CUDA-specific checks should be added to the generic case instead of restricting 
> its evaluation.


I'm not sure what exactly you or @rjmccall  have in mind. Specifically - what 
is the 'generic case' CUDA checks should be added to? Could you give me an 
example?


https://reviews.llvm.org/D40275





[PATCH] D40275: [CUDA] Report "unsupported VLA" errors only on device side.

2017-11-20 Thread Jonas Hahnfeld via Phabricator via cfe-commits
Hahnfeld added a comment.

And please add a regression test which is apparently missing for the case that 
a VLA is **NOT** diagnosed in CUDA mode.


https://reviews.llvm.org/D40275





[PATCH] D40275: [CUDA] Report "unsupported VLA" errors only on device side.

2017-11-20 Thread Jonas Hahnfeld via Phabricator via cfe-commits
Hahnfeld requested changes to this revision.
Hahnfeld added a comment.
This revision now requires changes to proceed.

In https://reviews.llvm.org/D39505 @rjmccall requested that the check should be 
made independent of the language. To preserve this, I think the CUDA-specific 
checks should be added to the generic case instead of restricting its 
evaluation.


https://reviews.llvm.org/D40275





[PATCH] D40275: [CUDA] Report "unsupported VLA" errors only on device side.

2017-11-20 Thread Artem Belevich via Phabricator via cfe-commits
tra updated this revision to Diff 123690.
tra added a comment.

Folded OpenCL check under `if (T->isVariableArrayType())`


https://reviews.llvm.org/D40275

Files:
  clang/lib/Sema/SemaType.cpp


Index: clang/lib/Sema/SemaType.cpp
===
--- clang/lib/Sema/SemaType.cpp
+++ clang/lib/Sema/SemaType.cpp
@@ -2175,19 +2175,21 @@
     T = Context.getConstantArrayType(T, ConstVal, ASM, Quals);
   }
 
-  // OpenCL v1.2 s6.9.d: variable length arrays are not supported.
-  if (getLangOpts().OpenCL && T->isVariableArrayType()) {
-    Diag(Loc, diag::err_opencl_vla);
-    return QualType();
-  }
-  // CUDA device code doesn't support VLAs.
-  if (getLangOpts().CUDA && T->isVariableArrayType())
-    CUDADiagIfDeviceCode(Loc, diag::err_cuda_vla) << CurrentCUDATarget();
-  // Some targets don't support VLAs.
-  if (T->isVariableArrayType() && !Context.getTargetInfo().isVLASupported() &&
-      shouldDiagnoseTargetSupportFromOpenMP()) {
-    Diag(Loc, diag::err_vla_unsupported);
-    return QualType();
+  if (T->isVariableArrayType()) {
+    if (getLangOpts().OpenCL) {
+      // OpenCL v1.2 s6.9.d: variable length arrays are not supported.
+      Diag(Loc, diag::err_opencl_vla);
+      return QualType();
+    } else if (getLangOpts().CUDA) {
+      // CUDA device code doesn't support VLAs.
+      CUDADiagIfDeviceCode(Loc, diag::err_cuda_vla) << CurrentCUDATarget();
+    } else if (getLangOpts().OpenMP &&
+               !Context.getTargetInfo().isVLASupported() &&
+               shouldDiagnoseTargetSupportFromOpenMP()) {
+      // Some targets don't support VLAs.
+      Diag(Loc, diag::err_vla_unsupported);
+      return QualType();
+    }
   }
 
   // If this is not C99, extwarn about VLA's and C99 array size modifiers.




[PATCH] D40275: [CUDA] Report "unsupported VLA" errors only on device side.

2017-11-20 Thread Artem Belevich via Phabricator via cfe-commits
tra created this revision.
Herald added a subscriber: sanjoy.

This fixes erroneously reported CUDA compilation errors in host-side code 
during device-side compilation.

I've also restricted OpenMP-specific checks to trigger only if we're compiling 
with OpenMP enabled.
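
A minimal sketch of the false positive this fixes (hypothetical code):

```
// Previously, device-side compilation (-fcuda-is-device) rejected this VLA
// even though the function is host-only and device codegen never sees it.
void host_fn(int n) {
  int vla[n];
  (void)vla;
}
```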


https://reviews.llvm.org/D40275

Files:
  clang/lib/Sema/SemaType.cpp


Index: clang/lib/Sema/SemaType.cpp
===
--- clang/lib/Sema/SemaType.cpp
+++ clang/lib/Sema/SemaType.cpp
@@ -2180,14 +2180,17 @@
     Diag(Loc, diag::err_opencl_vla);
     return QualType();
   }
-  // CUDA device code doesn't support VLAs.
-  if (getLangOpts().CUDA && T->isVariableArrayType())
-    CUDADiagIfDeviceCode(Loc, diag::err_cuda_vla) << CurrentCUDATarget();
-  // Some targets don't support VLAs.
-  if (T->isVariableArrayType() && !Context.getTargetInfo().isVLASupported() &&
-      shouldDiagnoseTargetSupportFromOpenMP()) {
-    Diag(Loc, diag::err_vla_unsupported);
-    return QualType();
+  if (T->isVariableArrayType()) {
+    if (getLangOpts().CUDA) {
+      // CUDA device code doesn't support VLAs.
+      CUDADiagIfDeviceCode(Loc, diag::err_cuda_vla) << CurrentCUDATarget();
+    } else if (getLangOpts().OpenMP &&
+               !Context.getTargetInfo().isVLASupported() &&
+               shouldDiagnoseTargetSupportFromOpenMP()) {
+      // Some targets don't support VLAs.
+      Diag(Loc, diag::err_vla_unsupported);
+      return QualType();
+    }
   }
 
   // If this is not C99, extwarn about VLA's and C99 array size modifiers.

