vinx13 commented on code in PR #14766:
URL: https://github.com/apache/tvm/pull/14766#discussion_r1190791916
##########
python/tvm/tir/schedule/schedule.py:
##########
@@ -2691,13 +2691,15 @@ def after_set_dtype(
########## Schedule: Blockize & Tensorize ##########
@type_checked
-    def blockize(self, loop: LoopRV, preserve_unit_iters: bool = True) -> BlockRV:
+    def blockize(
+        self, target: Union[LoopRV, List[BlockRV]], preserve_unit_iters: bool = True
+    ) -> BlockRV:
"""Convert the subtree rooted at a specific loop into a block.
Review Comment:
need to update doc here
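The diff widens `blockize`'s first parameter from a single `LoopRV` to `Union[LoopRV, List[BlockRV]]`. As a minimal sketch of how such a union parameter is typically dispatched at runtime, the toy below uses stand-in classes (`LoopRV`, `BlockRV`, and the return strings are illustrative only, not TVM's actual implementation):

```python
from typing import List, Union


class LoopRV:
    """Stand-in for a loop random variable (hypothetical, not TVM's class)."""


class BlockRV:
    """Stand-in for a block random variable (hypothetical, not TVM's class)."""


def blockize(target: Union[LoopRV, List[BlockRV]]) -> str:
    # Dispatch on the runtime type of `target`, mirroring the widened
    # signature in the diff: a single loop, or a list of blocks.
    if isinstance(target, LoopRV):
        return "blockize-loop"
    if isinstance(target, list) and all(isinstance(b, BlockRV) for b in target):
        return "blockize-blocks"
    raise TypeError("target must be a LoopRV or a List[BlockRV]")
```

The docstring update the reviewer asks for would need to document both accepted forms of `target`.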
##########
src/tir/schedule/primitive/blockize_tensorize.cc:
##########
@@ -152,7 +152,8 @@ Array<Array<arith::IterMark>> SubspaceDivide(const BlockRealize& realize,
                                              const StmtSRef& block_sref,  //
Review Comment:
update function doc for the new param
##########
tests/python/unittest/test_meta_schedule_schedule_rule_mlt_tc.py:
##########
@@ -988,15 +988,15 @@ def conv2d_1x1_0(inputs: T.Buffer((1, 16, 16, 64), "float16"), weight: T.Buffer(
                 v2, v3 = T.axis.remap("SS", [ax2_1, ax3])
                 v4_o = T.axis.spatial(1, 0)
                 v5_o = T.axis.spatial(1, 0)
-                T.reads(conv2d_nhwc_reindex_shared_wmma_accumulator[v0, v1, v2, v3, 0:16, 0:16])
-                T.writes(conv2d_nhwc_reindex_shared[v0, v1, v2, v3, 0:16, 0:16])
+                T.reads(conv2d_nhwc_reindex_shared_wmma_accumulator[v0, v1, 0, 0, 0:16, 0:16])
Review Comment:
   I verified these workloads and confirmed this PR doesn't break auto
tensorization. Keeping these unit iters preserves more of the original IR's
semantics, but I don't have a strong opinion.
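The reviewer's point is that axes like `v4_o = T.axis.spatial(1, 0)` are unit iters: their domain has extent 1, so they are always 0, and replacing them with the literal 0 in a buffer index (as the diff does) accesses exactly the same elements. A toy check, with a nested list standing in for a buffer region (names and values illustrative only):

```python
# Buffer of shape (1, 2, 2): the first axis has extent 1, like a unit iter's
# domain, so any valid index into it must be 0.
buffer = [[[10, 11], [12, 13]]]

v_unit = 0  # the only value a unit iter can take

# Indexing via the unit iter variable and via the constant 0 are equivalent.
assert buffer[v_unit][1][1] == buffer[0][1][1]
```

Keeping the variable form retains the mapping back to the original iteration space; substituting 0 simplifies the printed IR. Either is semantically valid, which matches the "no strong opinion" conclusion.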
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]