jcf94 commented on a change in pull request #8636:
URL: https://github.com/apache/tvm/pull/8636#discussion_r683083725
##########
File path: python/tvm/topi/cuda/conv2d_nhwc.py
##########
@@ -85,8 +87,10 @@ def schedule_conv2d_nhwc_direct(cfg, s, Conv):
thread_yz = te.thread_axis((0, vthread_n), "vthread", name="vy")
# Schedule for output
- ni, hi, wi, fi = s[output].op.axis
- bz = s[output].fuse(hi, wi)
+ ni, _, wi, fi = s[output].op.axis
+ bz = wi
+ fi, vec = s[output].split(fi, factor=vec_factor)
+ s[output].vectorize(vec)
Review comment:
Does this actually work? In my experience it's better to apply the `vectorize` after the `reorder`, with the new axis kept innermost.
Also, at L96 you didn't add the new vector axis to the reorder list — was that intentional?
##########
File path: python/tvm/topi/cuda/conv2d_nhwc.py
##########
@@ -54,6 +54,7 @@ def schedule_conv2d_nhwc_direct(cfg, s, Conv):
cfg.define_knob("vthread_n", [1] if dynamic_batch else [1, 2])
cfg.define_knob("vthread_c", [1, 2])
cfg.define_knob("step", [16, 3, 32, 64])
+ cfg.define_knob("vectorize", [4, 2, 8, 16])
Review comment:
Better to also include 1, to cover the no-vectorization case.
```suggestion
cfg.define_knob("vectorize", [1, 2, 4, 8, 16])
```
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]