masahi commented on code in PR #13195:
URL: https://github.com/apache/tvm/pull/13195#discussion_r1018952166


##########
src/meta_schedule/space_generator/post_order_apply.cc:
##########
@@ -150,33 +141,13 @@ class PostOrderApplyNode : public SpaceGeneratorNode {
           stack.emplace_back(sch, blocks);
           continue;
         }
-
-        Optional<String> ann = tir::GetAnn<String>(sch->GetSRef(block_rv), "schedule_rule");
-        const runtime::PackedFunc* custom_schedule_fn =
-            ann.defined() ? runtime::Registry::Get(ann.value()) : nullptr;
-        const bool has_schedule_rule = custom_schedule_fn != nullptr;
-
-        if (ann.defined() && ann.value() != "None" && !has_schedule_rule) {
-          LOG(WARNING) << "Custom schedule rule not found, ignoring schedule_rule annotation: "
-                       << ann.value();
-        }
-
-        if ((has_schedule_rule && sch_rule.defined()) ||
-            (!has_schedule_rule && !sch_rule.defined()) ||
-            (ann.defined() && ann.value() == "None")) {
-          stack.emplace_back(sch, blocks);
-          continue;
-        }
-
-        Array<tir::Schedule> applied{nullptr};
-        if (sch_rule.defined()) {
-          applied = sch_rule.value()->Apply(sch, /*block=*/block_rv);
-        } else {
-          ICHECK(custom_schedule_fn)
-              << "ValueError: Custom schedule rule not found: " << ann.value();
-          applied = (*custom_schedule_fn)(sch, block_rv);
+        if (!ScheduleRule::IsApplyCustomRule(sch_rule)) {
+          if (tir::GetAnn<String>(sch->GetSRef(block_rv), "schedule_rule").defined()) {
+            stack.emplace_back(sch, blocks);

Review Comment:
   It seems this change broke auto tensorization for VNNI and Hexagon `vrmpy`. They target a TE compute that happens to be annotated with `schedule_rule` (specifically, NCHWc int8 conv2d, https://github.com/apache/tvm/blob/main/python/tvm/topi/nn/conv2d.py#L534-L552). After this change, `MultiLevelTilingWithIntrin` will never be applied to the block corresponding to this TE compute.
   
   So for auto tensorization, we just want to ignore this annotation. 
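   To make the regression concrete, here is a minimal, self-contained C++ sketch of the skip decision introduced by the diff above. The function and annotation names are illustrative only, not the actual TVM API:

   ```cpp
   #include <cassert>
   #include <optional>
   #include <string>

   // Hypothetical model of the new skip logic in PostOrderApplyNode: when the
   // current rule is not ApplyCustomRule, any block carrying a "schedule_rule"
   // annotation is pushed back on the stack and the rule is never applied.
   // This is why MultiLevelTilingWithIntrin no longer reaches the annotated
   // NCHWc int8 conv2d block during auto tensorization.
   bool SkipsBlock(bool is_apply_custom_rule,
                   const std::optional<std::string>& schedule_rule_ann) {
     if (!is_apply_custom_rule && schedule_rule_ann.has_value()) {
       return true;  // block is skipped; the non-custom rule never fires
     }
     return false;
   }

   int main() {
     // The annotated conv2d compute is now skipped by regular rules.
     assert(SkipsBlock(/*is_apply_custom_rule=*/false,
                       std::string("meta_schedule.conv2d_NCHWc_int8")));
     // Unannotated blocks are still visited by regular rules.
     assert(!SkipsBlock(/*is_apply_custom_rule=*/false, std::nullopt));
     // ApplyCustomRule itself still handles annotated blocks.
     assert(!SkipsBlock(/*is_apply_custom_rule=*/true,
                        std::string("meta_schedule.conv2d_NCHWc_int8")));
     return 0;
   }
   ```

   Under this model, the fix suggested above amounts to making the annotation check conditional, so that auto-tensorization rules treat annotated blocks as if they were unannotated.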
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
