[GitHub] [incubator-tvm] wpan11nv commented on a change in pull request #5600: [TOPI] Improve CUDA softmax scheduling

2020-06-04 Thread GitBox


wpan11nv commented on a change in pull request #5600:
URL: https://github.com/apache/incubator-tvm/pull/5600#discussion_r435383834



##
File path: src/target/llvm/codegen_llvm.cc
##
@@ -736,7 +736,40 @@ llvm::Function* CodeGenLLVM::GetIntrinsicDecl(llvm::Intrinsic::ID id, llvm::Type
 #endif  // TVM_LLVM_VERSION
 }
 
+// Check if this is a warp shuffle intrinsic call and match its
+// corresponding nvvm intrinsic. Return true if the match is successful.
+static bool GetWarpShuffleIntrinsic(const CallNode* op, llvm::Intrinsic::ID* 
id) {
+  // Only 32 bit data type is supported.
+  if (op->dtype.is_vector() || op->dtype.bits() != 32) {

Review comment:
   Sounds good. Thanks!
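   For context on the "only 32 bit" check quoted in this hunk: the PTX shfl.sync instruction, and the nvvm intrinsics it lowers to, move 32-bit registers, so wider scalar types would have to be emulated rather than matched directly. A minimal hand-written CUDA illustration of such an emulation (not code from this PR; the helper name is made up, while __shfl_sync, __double2hiint, __double2loint and __hiloint2double are standard CUDA device builtins):

    // Shuffle a 64-bit double by splitting it into two 32-bit halves,
    // shuffling each half, and reassembling the result.
    __device__ double shfl_double(double val, int src_lane) {
      int hi = __double2hiint(val);
      int lo = __double2loint(val);
      hi = __shfl_sync(0xffffffff, hi, src_lane);
      lo = __shfl_sync(0xffffffff, lo, src_lane);
      return __hiloint2double(hi, lo);
    }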






[GitHub] [incubator-tvm] wpan11nv commented on a change in pull request #5600: [TOPI] Improve CUDA softmax scheduling

2020-06-03 Thread GitBox


wpan11nv commented on a change in pull request #5600:
URL: https://github.com/apache/incubator-tvm/pull/5600#discussion_r434689864



##
File path: src/target/llvm/codegen_llvm.cc
##
@@ -736,7 +736,40 @@ llvm::Function* CodeGenLLVM::GetIntrinsicDecl(llvm::Intrinsic::ID id, llvm::Type
 #endif  // TVM_LLVM_VERSION
 }
 
+// Check if this is a warp shuffle intrinsic call and match its
+// corresponding nvvm intrinsic. Return true if the match is successful.
+static bool GetWarpShuffleIntrinsic(const CallNode* op, llvm::Intrinsic::ID* 
id) {
+  // Only 32 bit data type is supported.
+  if (op->dtype.is_vector() || op->dtype.bits() != 32) {

Review comment:
   Sure. I will have a look.









[GitHub] [incubator-tvm] wpan11nv commented on a change in pull request #5600: [TOPI] Improve CUDA softmax scheduling

2020-05-26 Thread GitBox


wpan11nv commented on a change in pull request #5600:
URL: https://github.com/apache/incubator-tvm/pull/5600#discussion_r429998819



##
File path: topi/python/topi/cuda/softmax.py
##
@@ -53,13 +54,62 @@ def schedule_softmax(outs):
         raise ValueError('Tag is expected to be softmax_output or log_softmax_output. \
                          Got {0}'.format(op_tag))
 
+    # The nvptx backend only supports 32-bits warp shuffle instructions.
+    #
+    # TODO(tvm-team) Fix nvptx codegen or deprecate nvptx backend.
+    def sched_warp_softmax():
+        if tgt.target_name == "nvptx":
+            return softmax.dtype == "float32" or softmax.dtype == "int32"
+        return True
+
     if len(softmax.shape) > 2:
         ops = [max_elem.op, expsum.op, softmax.op]
         if exp is not None:
             ops.append(exp.op)
 
         for op in ops:
             s = schedule_injective_from_existing(s, op.output(0))
+
+    elif sched_warp_softmax():
+        # A warp of 32 threads performs a row reduction.
+        num_thread = tgt.thread_warp_size
+        block_x = te.thread_axis("blockIdx.x")
+        thread_x = te.thread_axis((0, num_thread), "threadIdx.x")
+
+        # (4) softmax
+        xo, xi = s[softmax].split(softmax.op.axis[1], nparts=num_thread)
+        if tgt.target_name != "nvptx":
+            _, xii = s[softmax].split(xi, factor=4)
+            s[softmax].vectorize(xii)
+        s[softmax].bind(xo, thread_x)
+        s[softmax].bind(softmax.op.axis[0], block_x)
+
+        # (3) expsum
+        k = expsum.op.reduce_axis[0]
+        ko, _ = s[expsum].split(k, nparts=num_thread)
+        s[expsum].bind(ko, thread_x)
+        s[expsum].compute_at(s[softmax], xo)
+
+        # (2) exp
+        if exp is not None:
+            xo, xi = s[exp].split(exp.op.axis[1], nparts=num_thread)
+            _, xii = s[exp].split(xi, factor=4)
+            s[exp].vectorize(xii)

Review comment:
   Good point, I forgot why I added this nvptx check. Now removed.
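   As a rough illustration of what the warp schedule above is aiming for (a hand-written CUDA sketch, not TVM-generated code; the kernel name and row-major layout are assumptions): one row per block, one 32-thread warp per row, with the row reductions done through warp shuffles instead of shared memory.

    __global__ void softmax_warp_row(const float* x, float* y, int n) {
      const int row = blockIdx.x;
      const int lane = threadIdx.x;  // 32 threads cooperate on one row
      // Row maximum via a butterfly (xor) shuffle reduction.
      float maxv = -INFINITY;
      for (int j = lane; j < n; j += 32) maxv = fmaxf(maxv, x[row * n + j]);
      for (int o = 16; o > 0; o >>= 1) maxv = fmaxf(maxv, __shfl_xor_sync(0xffffffff, maxv, o));
      // Row sum of exp(x - max), again reduced with shuffles.
      float sum = 0.f;
      for (int j = lane; j < n; j += 32) sum += __expf(x[row * n + j] - maxv);
      for (int o = 16; o > 0; o >>= 1) sum += __shfl_xor_sync(0xffffffff, sum, o);
      // Normalize.
      for (int j = lane; j < n; j += 32) y[row * n + j] = __expf(x[row * n + j] - maxv) / sum;
    }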









[GitHub] [incubator-tvm] wpan11nv commented on a change in pull request #5600: [TOPI] Improve CUDA softmax scheduling

2020-05-18 Thread GitBox


wpan11nv commented on a change in pull request #5600:
URL: https://github.com/apache/incubator-tvm/pull/5600#discussion_r426735612



##
File path: src/tir/transforms/lower_warp_memory.cc
##
@@ -213,9 +213,13 @@ class WarpAccessRewriter : protected StmtExprMutator {
 alloc_size *= op->dtype.lanes();
 std::tie(warp_index_, width_) = WarpIndexFinder(warp_size_).Find(op->body);
 warp_coeff_ = WarpStoreCoeffFinder(buffer_, warp_index_, analyzer_).Find(op->body);
-CHECK_EQ(alloc_size % (width_ * warp_coeff_), 0)
-<< "Warp memory must be multiple of the extent of threadIdx.x";
-warp_group_ = alloc_size / (width_ * warp_coeff_);
+
+// Align the local memory size. The number of elements may not
+// be a multiple of width_ * warp_coeff_; round it up.
+int factor = width_ * warp_coeff_;
+warp_group_ = (alloc_size + (factor - 1)) / factor;
+alloc_size = warp_group_ * factor;
+

Review comment:
   Yes, it is not sub-warp specific. Thanks! 









[GitHub] [incubator-tvm] wpan11nv commented on a change in pull request #5600: [TOPI] Improve CUDA softmax scheduling

2020-05-17 Thread GitBox


wpan11nv commented on a change in pull request #5600:
URL: https://github.com/apache/incubator-tvm/pull/5600#discussion_r426370564



##
File path: src/tir/transforms/lower_warp_memory.cc
##
@@ -213,9 +213,13 @@ class WarpAccessRewriter : protected StmtExprMutator {
 alloc_size *= op->dtype.lanes();
 std::tie(warp_index_, width_) = WarpIndexFinder(warp_size_).Find(op->body);
 warp_coeff_ = WarpStoreCoeffFinder(buffer_, warp_index_, analyzer_).Find(op->body);
-CHECK_EQ(alloc_size % (width_ * warp_coeff_), 0)
-<< "Warp memory must be multiple of the extent of threadIdx.x";
-warp_group_ = alloc_size / (width_ * warp_coeff_);
+
+// Align the local memory size. The number of elements may not
+// be a multiple of width_ * warp_coeff_; round it up.
+int factor = width_ * warp_coeff_;
+warp_group_ = (alloc_size + (factor - 1)) / factor;
+alloc_size = warp_group_ * factor;
+

Review comment:
   Looks like this is sub-warp related, and I am not following it closely. CUDA __shfl_sync does not support a non-power-of-2 width, so your example may not be supported. It would be helpful to have a legal example that would be broken by this patch. Thanks!
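   To make the width constraint concrete, here is a minimal full-warp sum reduction with __shfl_down_sync (illustrative only, not code from this PR). When the optional width parameter of the shuffle builtins is given explicitly, it must be a power of two no larger than warpSize, so a width like 24 is not a legal sub-warp shuffle.

    // Tree reduction over the 32 lanes of a warp; lane 0 ends up with the sum.
    __device__ float warp_reduce_sum(float val) {
      for (int offset = 16; offset > 0; offset >>= 1) {
        val += __shfl_down_sync(0xffffffff, val, offset);
      }
      return val;
    }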









[GitHub] [incubator-tvm] wpan11nv commented on a change in pull request #5600: [TOPI] Improve CUDA softmax scheduling

2020-05-17 Thread GitBox


wpan11nv commented on a change in pull request #5600:
URL: https://github.com/apache/incubator-tvm/pull/5600#discussion_r426350654



##
File path: src/tir/transforms/lower_warp_memory.cc
##
@@ -213,9 +213,13 @@ class WarpAccessRewriter : protected StmtExprMutator {
 alloc_size *= op->dtype.lanes();
 std::tie(warp_index_, width_) = WarpIndexFinder(warp_size_).Find(op->body);
 warp_coeff_ = WarpStoreCoeffFinder(buffer_, warp_index_, analyzer_).Find(op->body);
-CHECK_EQ(alloc_size % (width_ * warp_coeff_), 0)
-<< "Warp memory must be multiple of the extent of threadIdx.x";
-warp_group_ = alloc_size / (width_ * warp_coeff_);
+
+// Align the local memory size. The number of elements may not
+// be a multiple of width_ * warp_coeff_; round it up.
+int factor = width_ * warp_coeff_;
+warp_group_ = (alloc_size + (factor - 1)) / factor;
+alloc_size = warp_group_ * factor;
+

Review comment:
   What's the extent of threadIdx.x? I think the test case in this PR clearly shows that rounding up to the warp size is needed. It is the same as the softmax failures I have seen. As there is no warp-level allocation of size n, we allocate n/32 elements in each thread. If n is not a multiple of 32, we need to "over-allocate" slightly and predicate the access.
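   A worked example of the rounding in the hunk above, with made-up numbers (plain host-side arithmetic mirroring the patch):

    // Suppose the warp-memory buffer needs 40 elements, with width_ == 32
    // (the extent of threadIdx.x) and warp_coeff_ == 1.
    int alloc_size = 40;
    int factor = 32 * 1;                                    // width_ * warp_coeff_
    int warp_group = (alloc_size + (factor - 1)) / factor;  // ceil(40 / 32) == 2
    alloc_size = warp_group * factor;                        // padded up to 64
    // Each thread now holds 2 local elements; the 24 padding elements are
    // never meaningful, so any access that would touch them must be predicated.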









[GitHub] [incubator-tvm] wpan11nv commented on a change in pull request #5600: [TOPI] Improve CUDA softmax scheduling

2020-05-17 Thread GitBox


wpan11nv commented on a change in pull request #5600:
URL: https://github.com/apache/incubator-tvm/pull/5600#discussion_r426314749



##
File path: src/tir/transforms/lower_warp_memory.cc
##
@@ -213,9 +213,13 @@ class WarpAccessRewriter : protected StmtExprMutator {
 alloc_size *= op->dtype.lanes();
 std::tie(warp_index_, width_) = WarpIndexFinder(warp_size_).Find(op->body);
 warp_coeff_ = WarpStoreCoeffFinder(buffer_, warp_index_, analyzer_).Find(op->body);
-CHECK_EQ(alloc_size % (width_ * warp_coeff_), 0)
-<< "Warp memory must be multiple of the extent of threadIdx.x";
-warp_group_ = alloc_size / (width_ * warp_coeff_);
+
+// Align the local memory size. The number of elements may not
+// be a multiple of width_ * warp_coeff_; round it up.
+int factor = width_ * warp_coeff_;
+warp_group_ = (alloc_size + (factor - 1)) / factor;
+alloc_size = warp_group_ * factor;
+

Review comment:
   Added tests. Without this fix, the new test will fail the assertion.




