[GitHub] [incubator-tvm] iotamudelta commented on a change in pull request #4342: Add workgroup size attribute to AMDGPU functions in codegen

2019-11-15 Thread GitBox
iotamudelta commented on a change in pull request #4342: Add workgroup size 
attribute to AMDGPU functions in codegen
URL: https://github.com/apache/incubator-tvm/pull/4342#discussion_r346923874
 
 

 ##
 File path: src/codegen/llvm/codegen_amdgpu.cc
 ##
 @@ -36,13 +36,39 @@
 namespace tvm {
 namespace codegen {
 
+namespace {
+
+// calls the device api to get the max threads per block
+static inline int DetectROCMmaxThreadsPerBlock() {
+  TVMContext tvm_ctx;
+  tvm_ctx.device_type = kDLROCM;
+  tvm_ctx.device_id = 0;
+  tvm::runtime::DeviceAPI* api = tvm::runtime::DeviceAPI::Get(tvm_ctx, true);
+  if (api != nullptr) {
+    TVMRetValue val;
+    api->GetAttr(tvm_ctx, tvm::runtime::kExist, &val);
+    if (val.operator int() == 1) {
+      tvm::runtime::DeviceAPI::Get(tvm_ctx)->
+        GetAttr(tvm_ctx, tvm::runtime::kMaxThreadsPerBlock, &val);
+      return val.operator int();
+    }
+  }
+  LOG(WARNING) << "Cannot get maximum number of threads for AMD codegen";
+  return 1024;
 
 Review comment:
  No. As I said, NV doesn't have this issue since they compile to PTX IR and any 
__launch_bounds__ annotation is simply a performance optimization. This is 
independent of HIP: if you want to use a work group size >256, you must tell 
LC about it. __launch_bounds__ is the way to do it for HIP source kernels, and 
there are equivalent mechanisms further up the stack to get that information 
to LC.
  
  There is nothing inherently unstable about our HW with work group sizes >256; 
you simply must use them correctly.
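  
  To make the annotation concrete, here is a minimal HIP sketch (kernel name and 
sizes are illustrative, not taken from the PR) showing how a source kernel tells 
LC its maximum work group size via __launch_bounds__:
  
#include <hip/hip_runtime.h>

// Hypothetical kernel: __launch_bounds__(1024) tells the AMDGPU compiler this
// kernel may be dispatched with up to 1024 threads per block, so it generates
// code for that size instead of assuming the default maximum of 256.
__global__ void __launch_bounds__(1024)
scale_kernel(float* data, float factor, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) {
    data[i] *= factor;
  }
}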




[GitHub] [incubator-tvm] iotamudelta commented on a change in pull request #4342: Add workgroup size attribute to AMDGPU functions in codegen

2019-11-15 Thread GitBox
iotamudelta commented on a change in pull request #4342: Add workgroup size 
attribute to AMDGPU functions in codegen
URL: https://github.com/apache/incubator-tvm/pull/4342#discussion_r346894323
 
 

 ##
 File path: src/codegen/llvm/codegen_amdgpu.cc
 ##
 @@ -36,13 +36,39 @@
 namespace tvm {
 namespace codegen {
 
+namespace {
+
+// calls the device api to get the max threads per block
+static inline int DetectROCMmaxThreadsPerBlock() {
+  TVMContext tvm_ctx;
+  tvm_ctx.device_type = kDLROCM;
+  tvm_ctx.device_id = 0;
+  tvm::runtime::DeviceAPI* api = tvm::runtime::DeviceAPI::Get(tvm_ctx, true);
+  if (api != nullptr) {
+    TVMRetValue val;
+    api->GetAttr(tvm_ctx, tvm::runtime::kExist, &val);
+    if (val.operator int() == 1) {
+      tvm::runtime::DeviceAPI::Get(tvm_ctx)->
+        GetAttr(tvm_ctx, tvm::runtime::kMaxThreadsPerBlock, &val);
+      return val.operator int();
+    }
+  }
+  LOG(WARNING) << "Cannot get maximum number of threads for AMD codegen";
+  return 1024;
 
 Review comment:
  Since @t-vi pinged me: this is not entirely correct. Let me explain. Our LC 
backend assumes, in the absence of explicit annotation, a max workgroup size 
of 256 and generates code for that. This affects us differently from CUDA 
because we finalize to ISA at compile time, as opposed to an IR that gets 
finalized at runtime. So indeed, if a kernel is dispatched with more than 256 
threads, it may fail in interesting ways at runtime. There is internal 
discussion going on to finally mitigate this behavior at the FE level. However, 
it is, as @t-vi correctly asserted, easy to fix: explicitly annotating with 
`__launch_bounds__()` and the max workgroup size resolves it. Hence, just 
dropping back to 256 is not the optimal solution; it is a workaround. The 
optimal solution is to figure out the best workgroup size for a given kernel 
and annotate it explicitly. I would hence recommend @t-vi use the threads per 
block he finds performance-optimal.
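  
  As a sketch of what explicit annotation could look like at the codegen level 
(an illustration of the idea, not the PR's actual change; the helper name is 
hypothetical, while amdgpu-flat-work-group-size is the LLVM function attribute 
the AMDGPU backend reads):
  
#include <sstream>
#include <llvm/IR/Function.h>

// Hypothetical helper: attach the flat work group size range to a generated
// kernel so the AMDGPU backend emits code for the real dispatch size instead
// of assuming the 256 default.
void AnnotateWorkGroupSize(llvm::Function* func, int max_threads_per_block) {
  std::ostringstream os;
  os << "1," << max_threads_per_block;  // attribute value is a "min,max" pair
  func->addFnAttr("amdgpu-flat-work-group-size", os.str());
}

A codegen backend would call this on each generated kernel with the workgroup 
size it actually intends to dispatch (or the detected device maximum).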

