cyb70289 commented on a change in pull request #9843:
URL: https://github.com/apache/arrow/pull/9843#discussion_r603794401
##########
File path: dev/archery/archery/tests/test_benchmarks.py
##########
@@ -94,10 +94,16 @@ def test_static_runner_from_json():
archery_result['suites'][0]['benchmarks'][0]['values'][0] *= 2
baseline = StaticBenchmarkRunner.from_json(json.dumps(archery_result))
- artificial_reg, normal = RunnerComparator(contender, baseline).comparisons
+ comparisons = list(RunnerComparator(contender, baseline).comparisons)
+
+ # can't assume return order
+ artificial, unchanged = comparisons[0], comparisons[1]
+ if comparisons[0].name == "FloatParsing<FloatType>":
+ artificial, unchanged = comparisons[1], comparisons[0]
Review comment:
This code is a bit strange; it looks like a workaround for a bug.
I would recommend changing the test suite to contain only one benchmark.
First test the non-regression case, then introduce an artificial regression
and test the regression case, something like the sketch below.
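A rough sketch of what I have in mind (assuming the import paths used elsewhere in this test file, a fixture dict shaped like the existing `archery_result`, and that the comparison object exposes a boolean `regression` attribute; the benchmark fields and values here are placeholders, not the exact fix):

```python
# Sketch only: single-benchmark suite, so there is exactly one comparison
# and no ordering question. Field names/values are illustrative and should
# mirror the fixture already built in test_static_runner_from_json.
import json
from copy import deepcopy

from archery.benchmark.compare import RunnerComparator
from archery.benchmark.runner import StaticBenchmarkRunner


def test_static_runner_single_benchmark_regression():
    archery_result = {
        "suites": [
            {
                "name": "arrow-value-parsing-benchmark",
                "benchmarks": [
                    {
                        # Name taken from the diff above; other fields assumed.
                        "name": "FloatParsing<FloatType>",
                        "unit": "items_per_second",
                        "less_is_better": False,
                        "values": [109941112.87],
                        "time_unit": "ns",
                        "times": [9095.8],
                    },
                ],
            }
        ]
    }

    contender = StaticBenchmarkRunner.from_json(json.dumps(archery_result))

    # 1. Non-regression case: baseline identical to contender.
    baseline = StaticBenchmarkRunner.from_json(json.dumps(archery_result))
    [comparison] = RunnerComparator(contender, baseline).comparisons
    assert not comparison.regression

    # 2. Regression case: double the baseline value (as in the diff above)
    #    so the contender looks slower by comparison.
    regressed = deepcopy(archery_result)
    regressed['suites'][0]['benchmarks'][0]['values'][0] *= 2
    baseline = StaticBenchmarkRunner.from_json(json.dumps(regressed))
    [comparison] = RunnerComparator(contender, baseline).comparisons
    assert comparison.regression
```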
##########
File path: dev/archery/archery/tests/test_benchmarks.py
##########
@@ -94,10 +94,16 @@ def test_static_runner_from_json():
archery_result['suites'][0]['benchmarks'][0]['values'][0] *= 2
baseline = StaticBenchmarkRunner.from_json(json.dumps(archery_result))
- artificial_reg, normal = RunnerComparator(contender, baseline).comparisons
+ comparisons = list(RunnerComparator(contender, baseline).comparisons)
+
+ # can't assume return order
Review comment:
The return order is not deterministic because, internally, the benchmark list is
converted to a Python dict, which is unordered:
https://github.com/apache/arrow/blob/master/dev/archery/archery/benchmark/compare.py#L140-L145
The result of the first benchmark may come back after the result of the second
benchmark.
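If the test does keep two benchmarks, one order-independent option is to key the comparisons by name instead of unpacking them positionally. A sketch only: `"FloatParsing<FloatType>"` is the name from the diff above, the other benchmark name is hypothetical, and the `regression` attribute is assumed.

```python
# Index comparisons by benchmark name so the internal dict iteration
# order no longer matters.
comparisons = {
    c.name: c for c in RunnerComparator(contender, baseline).comparisons
}

artificial = comparisons["SomeOtherBenchmark"]      # hypothetical name
unchanged = comparisons["FloatParsing<FloatType>"]  # name from the diff above

assert artificial.regression
assert not unchanged.regression
```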
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]