ashutosh-arm commented on code in PR #14003:
URL: https://github.com/apache/tvm/pull/14003#discussion_r1112218588


##########
python/tvm/topi/arm_cpu/conv2d_spatial_pack.py:
##########
@@ -402,20 +421,18 @@ def schedule_conv2d_spatial_pack_nhwc(cfg, s, op, output):
     oho, ohi = cfg["tile_oh"].apply(s, output, oh)
     owo, owi = cfg["tile_ow"].apply(s, output, ow)
     s[output].reorder(n, oho, owo, oco, ohi, owi, oci)
-    cfg["ann_spatial"].apply(
-        s, output, [ohi, owi, oci], axis_lens=[OHI, OWI, OCI], max_unroll=16, cfg=cfg
-    )
-    cfg.define_knob("compat", [0, 1, 2])
-    if cfg["compat"].val < 2:
-        compat_axis = [owo, oco][cfg["compat"].val]  # pylint: disable=R1706
-        s[conv].compute_at(s[output], compat_axis)
+    cfg["ann_spatial"].apply(s, output, [owi, oci], axis_lens=[OWI, OCI], max_unroll=16, cfg=cfg)

Review Comment:
   This is a great finding :star:  Can we generalize this to other schedules 
where the split doubles (maybe only some of) the axes and unrolling higher up 
degrades performance? It sounds reasonable in theory. To be clear, I'm not 
asking for any modifications :smile_cat: , but this is something that needs 
attention while writing CPU schedules.
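
   To illustrate the pattern under discussion, here is a hypothetical sketch in 
plain Python (not the TVM API): after a split, unrolling only the inner axis 
keeps the number of duplicated loop bodies bounded by the tile size, while 
unrolling across the outer axis as well multiplies the copies up to the full 
original extent.

   ```python
   # Hypothetical sketch (plain Python, not TVM) of why unrolling past the
   # inner axis of a split can bloat the generated code.

   def split(extent, tile):
       """Split a loop of `extent` iterations into (outer, inner) extents."""
       assert extent % tile == 0, "sketch assumes the tile divides the extent"
       return extent // tile, tile

   outer, inner = split(16, 4)

   # Body copies produced if only the inner axis is unrolled vs. both axes:
   inner_only_copies = inner          # bounded by the tile size (4)
   both_axes_copies = outer * inner   # the full original extent (16)
   ```

   The 4x difference in duplicated bodies is what `max_unroll=16` is meant to 
cap, and it grows with the outer extent, which matches the observation that 
unrolling higher up degrades performance.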



##########
python/tvm/topi/arm_cpu/conv2d_spatial_pack.py:
##########
@@ -424,33 +441,22 @@ def schedule_conv2d_spatial_pack_nhwc(cfg, s, op, output):
         max_unroll=16,
         cfg=cfg,
     )
-    cfg["ann_spatial"].apply(
-        s, conv, [ohi, owi, oci], axis_lens=[OHI, OWI, OCI], max_unroll=16, cfg=cfg
-    )
-    if cfg["compat"].val < 2:
-        compat_axis = [owo, oco][cfg["compat"].val]  # pylint: disable=R1706
-        s[kernel_vec].compute_at(s[conv], compat_axis)
-        s[data_vec].compute_at(s[conv], compat_axis)
-
-    if not autotvm.GLOBAL_SCOPE.in_tuning:
-        # schedule kernel pack
-        oco, kh, kw, ic, oci = kernel_vec.op.axis
-        s[kernel_vec].vectorize(oci)
-        s[kernel_vec].unroll(ic)
-        if cfg["compat"].val == 2:
-            s[kernel_vec].parallel(oco)
-
-    # schedule data pack
+    cfg["ann_spatial"].apply(s, conv, [owi, oci], axis_lens=[OWI, OCI], max_unroll=16, cfg=cfg)
+
+    # schedule data_vec, data_pad and kernel_vec
+    compat_axis = [owo, oco][cfg["compat"].val]  # pylint: disable=R1706
+    s[kernel_vec].compute_at(s[conv], compat_axis)
+    s[data_vec].compute_at(s[conv], compat_axis)
+
+    # Inlining kernel vec brings a performance improvement, but the tuner seems to not
+    # like it, so inline only when we are using the fallback config
+    if cfg.is_fallback:
+        s[kernel_vec].compute_inline()

Review Comment:
   Out of curiosity, what does it mean to inline the schedule of a constant?



##########
python/tvm/topi/arm_cpu/conv2d_spatial_pack.py:
##########
@@ -302,9 +303,27 @@ def conv2d_spatial_pack_nhwc(cfg, data, kernel, strides, padding, dilation, out_
     )
 
     cfg.define_annotate("ann_reduce", [kh, kw], policy="try_unroll")
-    cfg.define_annotate("ann_spatial", [ohi, owi, oci], policy="try_unroll_vec")
+    cfg.define_annotate("ann_spatial", [owi, oci], policy="try_unroll_vec")
     # ====================================================================
 
+    # If there are no tuning records, use this config
+    if cfg.is_fallback:
+
+        def _tile_size(axis, candidates):
+            for candidate in candidates:
+                tiles_divisible_by_candidate = axis % candidate == 0
+                if tiles_divisible_by_candidate:
+                    return candidate
+            return 1
+
+        cfg["tile_oh"] = SplitEntity([-1, 1])
+        cfg["tile_ow"] = SplitEntity([-1, _tile_size(OW, [8, 4])])
+        cfg["tile_co"] = SplitEntity([-1, _tile_size(OC, [8, 4])])

Review Comment:
   Question: why only 4 and 8? Do 8x2 tile sizes not help some dims :thinking: ?
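
   For reference, the fallback heuristic quoted in the diff can be exercised 
standalone. This is the same logic, renamed `tile_size` here for illustration: 
it returns the first candidate that evenly divides the axis extent, falling 
back to 1 when none does.

   ```python
   # Standalone copy of the fallback tile-size heuristic from the diff above:
   # pick the first candidate that evenly divides the axis extent, else 1.

   def tile_size(axis, candidates):
       for candidate in candidates:
           if axis % candidate == 0:
               return candidate
       return 1

   # With candidates [8, 4]: an extent of 24 tiles by 8, 12 tiles by 4,
   # and a prime extent such as 7 falls back to an untiled size of 1.
   ```

   Note that because candidates are tried in order, 8 always wins over 4 when 
both divide the extent, so the list encodes a preference for the larger tile.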



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
