csullivan commented on code in PR #12537:
URL: https://github.com/apache/tvm/pull/12537#discussion_r958643753


##########
python/tvm/topi/adreno/conv2d_nhwc.py:
##########
@@ -270,37 +264,27 @@ def schedule_conv2d_NHWC(cfg, s, output):
     ##### space definition end #####
 
     pad_data, kernel = s[conv].op.input_tensors
-    if (
-        isinstance(kernel.op, tvm.te.ComputeOp) and "filter_pack" in kernel.op.tag
-    ):  # len(latest.op.axis) == 4:
-        # manage scheduling of datacopy
-        pad_data, kernel = s[conv].op.input_tensors
+    if autotvm.GLOBAL_SCOPE.in_tuning or input_pack_rt:
         if "pad_temp" in pad_data.op.name:
+            s[pad_data].compute_inline()
             pack_data = pad_data.op.input_tensors[0]
             bind_data_copy(s[pack_data])
         else:
             bind_data_copy(s[pad_data])
-        bind_data_copy(s[kernel])
-
-    pad_data, kernel = s[conv].op.input_tensors
 
-    if (
-        autotvm.GLOBAL_SCOPE.in_tuning
-        or isinstance(kernel.op, tvm.te.ComputeOp)
-        and "filter_pack" in kernel.op.tag
-    ):
-        if "pad_temp" in pad_data.op.name:
-            s[pad_data].compute_inline()
         AT = s.cache_read(pad_data, get_texture_storage(pad_data.shape), [conv])
         bind_data_copy(s[AT])
-        WT = s.cache_read(kernel, get_texture_storage(kernel.shape), [conv])
-        bind_data_copy(s[WT])
     elif "pad_temp" in pad_data.op.name:
         s[pad_data].compute_inline()
         # create cache stage
         AT = s.cache_read(pad_data, get_texture_storage(pad_data.shape), [conv])
         bind_data_copy(s[AT])
 
+    if autotvm.GLOBAL_SCOPE.in_tuning or filter_pack_rt:
+        bind_data_copy(s[kernel])
+        WT = s.cache_read(kernel, get_texture_storage(kernel.shape), [conv])

Review Comment:
   nit: if this is an issue with the filter, we should be able to constant fold the filter into a 5d layout whose output-channel dimension is divisible by a factor of 4. Let me know your thoughts.
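To make the suggestion concrete, the idea of constant-folding the filter into a texture-friendly 5-d layout could be sketched as below. This is a hypothetical illustration in plain numpy, not TVM code from this PR; the function name `pack_filter_5d` and the HWIO starting layout are assumptions for the example, and the innermost axis of length 4 stands in for the RGBA vector lane of a texture.

```python
import numpy as np

def pack_filter_5d(filt):
    """Repack an HWIO filter into a 5-d (H, W, I, O//4, 4) layout.

    Hypothetical helper: splits the output-channel axis O into
    (O//4, 4) so the innermost 4-wide axis can map onto a texture's
    RGBA channels. Requires O to be divisible by 4.
    """
    h, w, i, o = filt.shape
    assert o % 4 == 0, "output channels must be divisible by 4"
    # O is the last axis, so a plain reshape performs the blocking:
    # output channel o maps to block o // 4, lane o % 4.
    return filt.reshape(h, w, i, o // 4, 4)

# Example: a 3x3 filter with 8 input and 16 output channels.
filt = np.arange(3 * 3 * 8 * 16, dtype="float32").reshape(3, 3, 8, 16)
packed = pack_filter_5d(filt)
print(packed.shape)  # (3, 3, 8, 4, 4)
```

Because this repacking is a pure function of constant filter data, it could in principle be folded at compile time rather than scheduled as a data copy at run time, which is the trade-off the comment raises.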



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
