gromero commented on code in PR #10963:
URL: https://github.com/apache/tvm/pull/10963#discussion_r850015627
##########
python/tvm/autotvm/measure/measure_methods.py:
##########
@@ -496,7 +497,8 @@ def set_task(self, task):
 def _build_func_common(measure_input, runtime=None, check_gpu=None, build_option=None):
     """Common part for building a configuration"""
     target, task, config = measure_input
-    target, task.target_host = Target.check_and_update_host_consist(target, task.target_host)
+    assert not isinstance(target, (Map, dict)), "It's expected that 'target' is a string here."
Review Comment:
> > I think we could but what would be the achievement exactly with a unit
test for that internal function (_build_func_common)?
>
> @gromero it's generally good practice to have comprehensive test coverage,
including input validation and error handling (which is essentially what these
`assert`s do). One reason is to figure out whether or not we can actually
trigger such a situation from a more public interface (i.e. test `LocalBuilder`
in some way to trigger this), another is to document the expected interface of
the function so it's well understood how we expect it to behave.
I think my question was not clear; maybe I should have said "[...] unit test
for that **assert** in that internal function [...]". I agree with the need for
test coverage and also with Andrew's suggestion to add the assert, and with
everything else about the assert. My question was specifically about testing
(with a unit test) the assert in that particular function.
> > More broadly, what has been our policy about testing asserts in TVM code?
Should we test all the asserts? I recall there was a discussion some time ago
about it but could not find the PR; IIRC you were discussing that with other
Arm folks.
>
> There was a discussion a while back on [#9331
(comment)](https://github.com/apache/tvm/pull/9331#issuecomment-966241622),
which was resolved by @manupa-arm merging the PR without the tests added; no
follow-up conversation has happened since. Different people follow the [Code
Review
Guidelines](https://github.com/apache/tvm/blob/main/docs/contribute/code_review.rst#factors-to-consider-about-code-quality)
differently, and there are contributions with no testing at all, so I think
the policy is very much down to the Committer rather than strictly defined.
Thanks, that was exactly the PR I had in mind. OK, I've read it again
carefully and I'm 100% with Manupa, Ashutosh, and Elen that we should not test
internal asserts (or, as Elen says, "developer facing asserts"). I totally see
the necessity of having good test coverage and adding tests like those in
https://github.com/apache/tvm/pull/10865, but for this case what is the value?
Catching, on a code refactor, the case where the assert gets messed up or
removed from `_build_func_common` (a regression in the assert itself)? Besides
that, the other tests for more public interfaces should already be exercising
the assert properly (and would give a more meaningful full traceback in case
the assert fires).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]