tqchen commented on a change in pull request #10686:
URL: https://github.com/apache/tvm/pull/10686#discussion_r831328067
##########
File path: tests/python/unittest/test_auto_scheduler_feature.py
##########
@@ -220,12 +220,15 @@ def tir_matmul(
def test_primfunc():
- features = auto_scheduler.feature.named_features_from_primfunc(tir_matmul)
Review comment:
I see, an explicitly constructed script would help clarify the
regression that the change intends to cover.
To rephrase what you are saying (correct me if that was not what you meant):
- In the future, if CSE changes its behavior to generate other cases, a
feature-extraction test built from an explicit tvm-script may not capture those
changes (since the new CSE will generate new IRs that may have other
properties). The test won't fail, but I take it you mean it would fail to catch
a possible problem introduced by the CSE changes.
This is indeed a fair point, although running things end to end is a bit like
an integration test (one that tests the combined effect of CSE, lowering, and
feature extraction).
The case of tvmscript inputs is closer to a unit test of the intended behavior
of the specific pass. Such unit tests usually have a clear intent and are
easier to debug when they catch a regression, although their coverage won't
evolve the way integration tests' does.
I would still suggest keeping `test_primfunc`, given that the original intended
behavior of the pass was limited. It does not hurt to have a case covering the
originally intended behavior.
Adding another `test_primfunc_lowered_with_let` would give quite clear
context and provide a safety net when we start to explore using the extractor
together with CSE passes.
As for the unit-test case, it is merely a recommendation (per the discussion
above on unit tests vs. integration tests).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]