mdw-octoml commented on a change in pull request #8807:
URL: https://github.com/apache/tvm/pull/8807#discussion_r694180020
##########
File path: python/tvm/runtime/module.py
##########
@@ -230,8 +288,7 @@ def evaluator(*args):
blob = feval(*args)
fmt = "@" + ("d" * repeat)
results = struct.unpack(fmt, blob)
- mean = sum(results) / float(repeat)
- return ProfileResult(mean=mean, results=results)
+ return BenchmarkResult(results)
Review comment:
Well, OK, but the existing tests for time_evaluator seem kind of weak.
Here we're adding a new class (BenchmarkResult) that calculates a number of
new statistics, but there are no tests for it, meaning a regression in this
behavior (whether from a change to time_evaluator or to BenchmarkResult)
would not be caught by CI.
At a minimum I'd recommend a unit test for BenchmarkResult itself, which is
easy to write (rough sketch below). Not trying to be too nitpicky, but I do
feel like we should be constantly improving our test coverage, and adding a
new class without corresponding tests tends to be a red flag for me :-)
WDYT?