driazati commented on code in PR #11663:
URL: https://github.com/apache/tvm/pull/11663#discussion_r894765517
##########
tests/python/integration/test_tuning.py:
##########
@@ -174,7 +173,14 @@ def runner(target, dev):
assert len(results) == 20
-    successful_results = [r for r in results if r.error_no == autotvm.MeasureErrorNo.NO_ERROR]
+    successful_results = [
+        r
+        for r in results
+        if r.error_no == autotvm.MeasureErrorNo.NO_ERROR
+        # Autotvm can filter some records before building if we know they won't work ahead of time.
+        # We can't guarantee we sample at least one good record so we count these as success too
+        or r.error_no == autotvm.MeasureErrorNo.INSTANTIATION_ERROR
Review Comment:
   No idea what I'm talking about here, but wouldn't this just hide the
   flakiness the same way the `xfail` does (i.e. if this branch triggers, the
   test is bogus)? Is there a way to make it deterministic with a known-good
   RNG seed?
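
   As a rough illustration of the seeding idea (not autotvm's actual API --
   `sample_configs` below is a hypothetical stand-in for the tuner's random
   config sampling), fixing the RNG seed makes the sampled configs
   reproducible, so the test could assert against a sample known to contain
   at least one buildable record instead of counting `INSTANTIATION_ERROR`
   as success:

   ```python
   import random


   def sample_configs(n, seed=None):
       # Hypothetical stand-in for a tuner's random config sampling.
       # A dedicated Random instance avoids touching global RNG state.
       rng = random.Random(seed)
       return [rng.randrange(1000) for _ in range(n)]


   # With a fixed seed the sample is identical across runs, so a test can
   # pin a seed that is known to yield at least one good config.
   assert sample_configs(20, seed=42) == sample_configs(20, seed=42)
   ```

   The trade-off is that a pinned seed can silently go stale if the search
   space changes, so it would need a comment explaining why that seed was
   chosen.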
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]