tkonolige commented on a change in pull request #6580:
URL: https://github.com/apache/incubator-tvm/pull/6580#discussion_r499018751
##########
File path: python/tvm/topi/cuda/sparse.py
##########
@@ -97,3 +94,287 @@ def _callback(op):
     traverse_inline(s, outs[0].op, _callback)
     return s
+
+
+def schedule_cuda_transpose(s, out):
+ """Schedule for transpose on the gpu.
+
+ Roughly follows this:
+ https://developer.nvidia.com/blog/efficient-matrix-transpose-cuda-cc/, but
+ without the padding for shared memory. For better performance, we could
+ rewrite it in tir to add the padding.
+ """
+
+ def _callback(op):
+ # pylint: disable=invalid-name
+ m, n = s[op].op.axis
+ warp_size =
int(tvm.target.Target.current(allow_none=False).thread_warp_size)
Review comment:
Yes, 4 warps per block is slightly faster than 1 warp per block. I've
added a comment to this effect.
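For anyone reading along, here is a minimal self-contained sketch of the tiling pattern under discussion. This is not the PR's exact code: the hard-coded `warp_size = 32`, the placeholder shapes, and the shared-memory staging are illustrative assumptions based on the NVIDIA post linked in the docstring.

```python
import tvm
from tvm import te

# Hypothetical sizes and a hard-coded warp size, purely for illustration.
M, N = 1024, 1024
warp_size = 32

A = te.placeholder((M, N), name="A")
T = te.compute((N, M), lambda i, j: A[j, i], name="T")

s = te.create_schedule(T.op)
op = T.op
m, n = s[op].op.axis

# Tile the output into warp_size x warp_size tiles, one thread block per tile.
no, ni = s[op].split(n, factor=warp_size)
mo, mi = s[op].split(m, factor=warp_size)
s[op].reorder(mo, no, mi, ni)
s[op].bind(mo, te.thread_axis("blockIdx.x"))
s[op].bind(no, te.thread_axis("blockIdx.y"))

# Stage the input tile through shared memory so both the global read
# and the global write are coalesced.
c = s.cache_read(A, "shared", [op])
s[c].compute_at(s[op], no)

thread_x = te.thread_axis("threadIdx.x")
thread_y = te.thread_axis("threadIdx.y")
s[op].bind(ni, thread_x)

# 4 warps per block: each warp handles a quarter of the tile's rows.
mio, _ = s[op].split(mi, nparts=4)
s[op].bind(mio, thread_y)

# Bind the shared-memory copy with the same thread layout. The
# factor=1 split just yields an axis the scheduler will let us bind.
a, _ = s[c].split(s[c].op.axis[1], factor=1)
s[c].bind(a, thread_x)
ao, _ = s[c].split(s[c].op.axis[0], nparts=4)
s[c].bind(ao, thread_y)

# Inspect the lowered IR; no GPU is needed for this step.
print(tvm.lower(s, [A, T], simple_mode=True))
```

With `warp_size = 32` this gives 32x4 = 128 threads per block, and each thread covers 8 elements of its tile, which is where the small win over a single warp per block comes from.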