zxybazh commented on code in PR #13084:
URL: https://github.com/apache/tvm/pull/13084#discussion_r996145902
##########
src/meta_schedule/utils.h:
##########
@@ -318,14 +318,16 @@ struct ThreadedTraceApply {
for (int i = 0; i < n_; ++i) {
Item& item = items_[i];
+ bool applied = false;
try {
- if (!item.postproc->Apply(sch)) {
- ++item.fail_counter;
- return NullOpt;
+ if (item.postproc->Apply(sch)) {
+ applied = true;
}
} catch (const std::exception& e) {
-      // Used in multi-thread, only output to screen but failure summary sent to logging
-      LOG(WARNING) << "ThreadedTraceApply::Apply failed with error " << e.what();
+      // left blank intentionally
Review Comment:
Thanks for pointing that out, Andrew!
This issue is not caused by a single candidate TIR running multiple times,
but by thousands of TIR candidate workloads generated by search-space samples
or mutators, and they can all differ from each other. Logging only a bounded
number of exceptions may therefore not be representative. On the other hand,
this code runs in a multi-threaded context, so keeping track of a bounded
logging count would require synchronization and may slightly slow down tuning.
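To make the trade-off concrete, here is a minimal sketch of what bounded, thread-safe logging would look like. The names `kMaxLoggedExceptions` and `LogBounded` are illustrative, not part of TVM; the point is that every failing candidate pays for an atomic read-modify-write on a shared counter:

```cpp
#include <atomic>
#include <cstdio>

// Hypothetical bounded-logging helper shared across worker threads.
constexpr int kMaxLoggedExceptions = 10;
std::atomic<int> logged_count{0};

void LogBounded(const char* what) {
  // fetch_add is an atomic read-modify-write on a cache line shared by all
  // workers; this is the per-failure cost mentioned above.
  int n = logged_count.fetch_add(1, std::memory_order_relaxed);
  if (n < kMaxLoggedExceptions) {
    std::fprintf(stderr, "ThreadedTraceApply::Apply failed: %s\n", what);
  }
}
```

With thousands of failing candidates across threads, only the first `kMaxLoggedExceptions` messages would be printed, which is exactly why they may not be representative of the full failure distribution.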
Even though we don't log the exceptions here, we still keep track of the
failure count, and it is still reflected in the log. If the failure count
doesn't make sense, that is easy to spot from the log, and we can easily
reproduce the issue by running the postprocessor separately. Moreover, any
failed tuning of this workload, or its impact on performance, should show up
in the performance table.
One thing we can do to further improve visibility is to add an exception
counter field to the output, as I mentioned above, or simply add a check that
warns the user when the counter shows unexpected postprocessor failures.
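A rough sketch of such a check, assuming a per-postprocessor failure counter like the existing `fail_counter`. The struct, function name, and 50% threshold are all illustrative, not TVM APIs:

```cpp
#include <atomic>
#include <cstdio>

// Hypothetical per-postprocessor stats, mirroring Item::fail_counter.
struct PostprocStats {
  const char* name;
  std::atomic<int> fail_counter{0};
};

// Warn (and return true) when more than `threshold` of all candidates
// failed this postprocessor; the threshold value is an assumption.
bool WarnOnUnexpectedFailures(const PostprocStats& stats, int total,
                              double threshold = 0.5) {
  int fails = stats.fail_counter.load();
  if (total > 0 && static_cast<double>(fails) / total > threshold) {
    std::fprintf(stderr, "Postproc %s failed on %d/%d candidates\n",
                 stats.name, fails, total);
    return true;
  }
  return false;
}
```

Such a check would run once after tuning rather than per-candidate, so it avoids the per-failure synchronization cost discussed above.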
Would like to hear everyone's thoughts on how to proceed here.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]