pitrou commented on pull request #11876:
URL: https://github.com/apache/arrow/pull/11876#issuecomment-994564539


   > it may be we want a reduced "for-automation" set which is the default and a more complete "for-investigation" set
   
   Agreed that we should think more about what we're expecting from this. Does the fine-grained selection of benchmark parameters really help when diving into performance issues, or is the coverage simply excessive?
   
   We could take pretty much the same fine-grained approach for many other benchmarks (I gave the sorting example above, which would invite a similar combinatorial explosion in the number of benchmark cases), but it would multiply the total time for running benchmarks by a non-trivial factor.
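
   To illustrate the point, here is a minimal sketch (assuming the Google Benchmark framework the Arrow C++ benchmarks are written against; the benchmark name, parameter axes and values are hypothetical) of how a full "for-investigation" parameter grid multiplies the number of cases, with a reduced "for-automation" default shown in a comment:

   ```cpp
   #include <benchmark/benchmark.h>

   #include <algorithm>
   #include <cstdint>
   #include <random>
   #include <vector>

   // Hypothetical sorting benchmark, used only to illustrate how parameter
   // axes multiply the number of benchmark cases.
   static void BM_SortInt64(benchmark::State& state) {
     const int64_t length = state.range(0);
     const int64_t null_percent = state.range(1);  // second axis, unused in this sketch
     std::vector<int64_t> values(length);
     std::mt19937_64 rng(42);
     std::generate(values.begin(), values.end(),
                   [&rng] { return static_cast<int64_t>(rng()); });
     for (auto _ : state) {
       auto copy = values;
       std::sort(copy.begin(), copy.end());
       benchmark::DoNotOptimize(copy.data());
     }
     state.SetItemsProcessed(state.iterations() * length);
     (void)null_percent;
   }

   // "For-investigation" grid: 5 lengths x 4 null percentages = 20 cases for a
   // single benchmark; a couple more axes like this and total runtime explodes.
   BENCHMARK(BM_SortInt64)->ArgsProduct({
       {1 << 10, 1 << 13, 1 << 16, 1 << 19, 1 << 22},
       {0, 1, 10, 50}});

   // A reduced "for-automation" default could instead pin a few representative
   // points, e.g.:
   // BENCHMARK(BM_SortInt64)->Args({1 << 16, 0})->Args({1 << 16, 10});

   BENCHMARK_MAIN();
   ```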
   
   Continuous benchmarking aside, I'll point out that interactive work with benchmarks becomes less pleasant and more tedious when individual benchmark suites take too long to run (again, that's my experience with the current sorting benchmarks, even though they are about 8x faster than this one).

