tkonolige commented on a change in pull request #8030:
URL: https://github.com/apache/tvm/pull/8030#discussion_r633766017
##########
File path: python/tvm/topi/cuda/sparse.py
##########
@@ -105,13 +105,22 @@ def _callback(op):
return s
-def schedule_cuda_transpose(s, out):
+def schedule_transpose(outs):
Review comment:
moved to transform.py
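
To illustrate the kind of schedule that now lives in transform.py (a sketch only, assuming a 2-D transpose and a fixed 32-wide tile; the function name is made up and the PR's code may differ in details): the output is tiled into warp-sized blocks and the input tile is staged through shared memory so both the global load and the transposed store stay coalesced.

```python
# Sketch only: a warp-tiled GPU transpose in TE, assuming a 2-D input and a
# fixed 32-wide tile. Not the code moved into transform.py, just the technique.
import tvm
from tvm import te, topi


def transpose_schedule_sketch(data, tile=32):
    """data: a 2-D te placeholder; returns (schedule, transposed output)."""
    out = topi.transpose(data)
    s = te.create_schedule(out.op)

    # Stage each input tile in shared memory so the global load is coalesced;
    # the transposed reads then only touch shared memory.
    shared = s.cache_read(data, "shared", [out])

    m, n = s[out].op.axis
    mo, mi = s[out].split(m, factor=tile)
    no, ni = s[out].split(n, factor=tile)
    s[out].reorder(mo, no, mi, ni)
    s[out].bind(mo, te.thread_axis("blockIdx.x"))
    s[out].bind(no, te.thread_axis("blockIdx.y"))
    s[out].bind(mi, te.thread_axis("threadIdx.y"))
    s[out].bind(ni, te.thread_axis("threadIdx.x"))

    # Cooperatively fetch one (tile x tile) block of the input per thread block.
    s[shared].compute_at(s[out], no)
    a, b = s[shared].op.axis
    s[shared].bind(a, te.thread_axis("threadIdx.y"))
    s[shared].bind(b, te.thread_axis("threadIdx.x"))
    return s, out
```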
##########
File path: python/tvm/relay/op/strategy/cuda.py
##########
@@ -1068,3 +1070,23 @@ def unique_strategy_cuda(attrs, inputs, out_type, target):
name="unique.cuda",
)
return strategy
+
+
+@schedule_transpose.register(["cuda", "gpu", "rocm"])
+def schedule_transpose_cuda(attrs, outs, target):
+ """
+ Transpose cuda strategy
+ Dispatches to an optimized schedule if the transpose is standalone (not fused).
+ """
+ warp_size = int(Target.current(allow_none=False).thread_warp_size)
+ if (
Review comment:
As far as I can tell, there is not a better way to do this. There is a
way to add implementations based on input sizes, but these are not on a
per-target basis. If you know a better way, let me know.
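
For reference, one plausible shape of the guard (illustrative only; the function name and the exact checks are assumptions, not the PR's condition): take the specialized path only for a standalone 2-D transpose whose static dimensions are at least one warp wide, and fall back to the generic injective schedule otherwise.

```python
# Illustrative sketch -- not the PR's exact condition.
import tvm
from tvm import te, topi
from tvm.target import Target


def schedule_transpose_cuda_sketch(attrs, outs, target):
    warp_size = int(Target.current(allow_none=False).thread_warp_size)
    out = outs[0]
    # "standalone": the transpose reads a placeholder, not a fused producer.
    standalone = isinstance(out.op.input_tensors[0].op, te.PlaceholderOp)
    # Static shapes, with at least one full warp in each dimension.
    big_enough = all(
        isinstance(d, tvm.tir.IntImm) and int(d) >= warp_size for d in out.shape
    )
    if len(out.shape) == 2 and standalone and big_enough:
        # The schedule that was moved into topi/cuda/transform.py.
        return topi.cuda.schedule_transpose(outs)
    return topi.cuda.schedule_injective(outs)
```

Keeping the check inside the registered schedule function means the fused/injective path is untouched and only clearly-profitable cases are redirected.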
##########
File path: tests/python/topi/python/test_topi_transform.py
##########
@@ -870,6 +871,30 @@ def test_transpose():
verify_transpose((3, 10), None)
+@tvm.testing.parametrize_targets
+def test_transpose_schedule(target, dev):
+ shape = (100, 34)
+ x = relay.var("x", relay.TensorType(shape, "float32"))
+ f = relay.transpose(x)
+ ex = relay.create_executor(
+ kind="graph", mod=tvm.IRModule.from_expr(relay.Function([x], f)),
device=dev, target=target
+ )
+ r = np.random.rand(*shape)
+ tvm.testing.assert_allclose(ex.evaluate()(r).asnumpy(), np.transpose(r))
+
+ # make sure schedule does not fire here
Review comment:
It is more like a wish. Ideally we would be able to know which schedules
were used, but there is no way to introspect on what was used. I've updated the
comment to reflect this.
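
As a sketch of what the fused case might look like (the helper name and the added constant are illustrative assumptions, not the test as written): compose the transpose with an elementwise add so the standalone schedule should not be selected, and only check numerical correctness.

```python
# Hedged sketch of the fused case; not the actual test code.
import numpy as np
import tvm
import tvm.testing
from tvm import relay


def check_fused_transpose(target, dev, shape=(100, 34)):
    x = relay.var("x", relay.TensorType(shape, "float32"))
    # Fusing the transpose with an add should keep it on the generic schedule.
    f = relay.add(relay.transpose(x), relay.const(1.0, "float32"))
    ex = relay.create_executor(
        kind="graph", mod=tvm.IRModule.from_expr(relay.Function([x], f)), device=dev, target=target
    )
    r = np.random.rand(*shape).astype("float32")
    tvm.testing.assert_allclose(ex.evaluate()(r).asnumpy(), np.transpose(r) + 1.0)
```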