zxybazh commented on code in PR #13084:
URL: https://github.com/apache/tvm/pull/13084#discussion_r996145902


##########
src/meta_schedule/utils.h:
##########
@@ -318,14 +318,16 @@ struct ThreadedTraceApply {
 
     for (int i = 0; i < n_; ++i) {
       Item& item = items_[i];
+      bool applied = false;
       try {
-        if (!item.postproc->Apply(sch)) {
-          ++item.fail_counter;
-          return NullOpt;
+        if (item.postproc->Apply(sch)) {
+          applied = true;
         }
       } catch (const std::exception& e) {
-        // Used in multi-thread, only output to screen but failure summary sent to logging
-        LOG(WARNING) << "ThreadedTraceApply::Apply failed with error " << e.what();
+        // left blank intentionally

Review Comment:
   This issue is not caused by a single candidate TIR running multiple times, but by thousands of candidates generated by the sampler or mutator. They could all be different from each other, so logging only a limited number of exceptions would not be representative. On the other hand, this code runs in a multi-threaded context, so maintaining a bounded logging counter could impact tuning speed.
   
   Even if we don't log the exceptions here, we still keep track of the failure count, and it's reflected in the log. If the failure count doesn't make sense, that's easy to spot from the log, and we can easily reproduce the issue by running the postprocessor separately. We should also be able to see failed tuning of this workload, or degraded performance, as a result.
   
   One thing we can do to further improve visibility is, as I said, to add an exception counter field to the output, or simply add a check that warns the user when the counter shows unexpected postproc failures. Let me know which one you think is better.
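   A minimal sketch of the exception-counter idea, with hypothetical names (`PostprocStats`, `ApplyCounted`, `Summarize` are illustrative, not the actual TVM API): worker threads only bump lock-free atomic counters on the hot path, and a single summary pass after tuning emits the warning.
   
   ```cpp
   #include <atomic>
   #include <exception>
   #include <iostream>
   #include <sstream>
   #include <string>
   #include <vector>
   
   // Hypothetical per-postproc stats: one counter for soft failures
   // (Apply returned false) and one for unexpected exceptions.
   struct PostprocStats {
     std::string name;
     std::atomic<int> fail_counter{0};       // postproc returned false
     std::atomic<int> exception_counter{0};  // postproc threw
   };
   
   // Called from worker threads; counting is lock-free and there is no
   // logging on the hot path, so tuning speed is unaffected.
   template <typename Fn>
   bool ApplyCounted(PostprocStats& stats, Fn&& apply) {
     try {
       if (apply()) return true;
       stats.fail_counter.fetch_add(1, std::memory_order_relaxed);
     } catch (const std::exception&) {
       stats.exception_counter.fetch_add(1, std::memory_order_relaxed);
     }
     return false;
   }
   
   // Called once after tuning: surfaces unexpected exceptions in the summary
   // so they are easy to spot even though individual errors were not logged.
   std::string Summarize(const std::vector<PostprocStats*>& all) {
     std::ostringstream os;
     for (const PostprocStats* s : all) {
       os << s->name << ": fail=" << s->fail_counter
          << " exception=" << s->exception_counter << "\n";
       if (s->exception_counter > 0) {
         os << "WARNING: unexpected postproc exceptions in " << s->name << "\n";
       }
     }
     return os.str();
   }
   
   int main() {
     PostprocStats stats{"RewriteLayout"};
     ApplyCounted(stats, [] { return true; });    // success
     ApplyCounted(stats, [] { return false; });   // soft failure, counted
     ApplyCounted(stats, []() -> bool {           // exception, counted
       throw std::runtime_error("boom");
     });
     std::cout << Summarize({&stats});
   }
   ```
   
   The warning-check variant is just the `if (s->exception_counter > 0)` branch above; the counter-field variant would additionally serialize `exception_counter` into the tuning record.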



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
