AndrewZhaoLuo commented on code in PR #11663:
URL: https://github.com/apache/tvm/pull/11663#discussion_r894777445


##########
tests/python/integration/test_tuning.py:
##########
@@ -174,7 +173,14 @@ def runner(target, dev):
 
         assert len(results) == 20
 
-        successful_results = [r for r in results if r.error_no == autotvm.MeasureErrorNo.NO_ERROR]
+        successful_results = [
+            r
+            for r in results
+            if r.error_no == autotvm.MeasureErrorNo.NO_ERROR
+            # Autotvm can filter some records before building if we know they won't work ahead of time.
+            # We can't guarantee we sample at least one good record, so we count these as successes too.
+            or r.error_no == autotvm.MeasureErrorNo.INSTANTIATION_ERROR

Review Comment:
   Nah, it's expected to sometimes fail during the tuning process, so we would expect each result to have either no error or an instantiation error (which indicates we caught it and didn't build). I don't know enough about how to make things deterministic; perhaps we could replace the workload with something simpler that can never fail.
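The acceptance logic being discussed can be sketched without a TVM installation. The enum values below mirror `autotvm.MeasureErrorNo` as I understand it (`NO_ERROR = 0`, `INSTANTIATION_ERROR = 1`), but the `MeasureErrorNo` and `MeasureResult` classes here are simplified stand-ins, not the real TVM types:

```python
from enum import IntEnum


class MeasureErrorNo(IntEnum):
    """Simplified stand-in for tvm.autotvm's MeasureErrorNo enum."""

    NO_ERROR = 0             # measurement ran and succeeded
    INSTANTIATION_ERROR = 1  # config rejected before building (caught early)
    COMPILE_HOST = 2         # host-side compilation failed


class MeasureResult:
    """Minimal stand-in for an autotvm measurement record."""

    def __init__(self, error_no):
        self.error_no = error_no


def successful_results(results):
    """Keep records that either succeeded or were filtered pre-build.

    INSTANTIATION_ERROR means the tuner detected the config could never
    work and skipped the build, so the test counts it as acceptable too.
    """
    return [
        r
        for r in results
        if r.error_no == MeasureErrorNo.NO_ERROR
        or r.error_no == MeasureErrorNo.INSTANTIATION_ERROR
    ]


results = [
    MeasureResult(MeasureErrorNo.NO_ERROR),
    MeasureResult(MeasureErrorNo.INSTANTIATION_ERROR),
    MeasureResult(MeasureErrorNo.COMPILE_HOST),
]
print(len(successful_results(results)))  # 2: the genuine failure is excluded
```

The nondeterminism the comment describes is why the test asserts on the combined count rather than on `NO_ERROR` alone: which configs get filtered pre-build depends on the sampled schedules.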



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]