epompeii commented on issue #5504:
URL: 
https://github.com/apache/arrow-datafusion/issues/5504#issuecomment-2027604140

   > track benchmarks over time, through a similar job triggered by merge 
commits to main (fwiw, I now prefer Bencher to conbench, as it seems simpler to 
setup/maintain)
   
   @gruuya let me know if you run into anything or have any questions getting 
set up with Bencher.
   I would be more than happy to answer any questions or to help with parts of 
the integration work here.
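   Not part of the original comment, but to make the setup concrete: a merge-triggered benchmarking job might look roughly like the sketch below. The project slug, secret name, and benchmark command are all placeholders, not anything agreed on in this thread, and the exact flags should be checked against the current Bencher docs.
   
   ```yaml
   # Hypothetical workflow sketch: run benchmarks on merges to main and
   # upload the results to Bencher. All names here are placeholders.
   name: benchmarks
   on:
     push:
       branches: [main]
   jobs:
     bench:
       runs-on: ubuntu-latest
       steps:
         - uses: actions/checkout@v4
         # Installs the bencher CLI
         - uses: bencherdev/bencher@main
         - run: |
             bencher run \
               --project my-project-slug \
               --token "${{ secrets.BENCHER_API_TOKEN }}" \
               --branch main \
               "cargo bench"
   ```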
   
   > (optional) re-base benchmarks on criterion.rs
   
   If you do move to Criterion, Bencher has a [built-in adapter for 
Criterion](https://bencher.dev/docs/explanation/adapters/#-rust-criterion) 
which should make things pretty simple.
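   For reference, a minimal Criterion benchmark looks something like the following sketch (the measured function is just an illustrative placeholder, not a DataFusion workload); the adapter parses the timing output that `cargo bench` prints for each `bench_function`:
   
   ```rust
   // Minimal Criterion benchmark sketch. Assumes criterion is declared as a
   // dev-dependency and the file is registered as a [[bench]] target with
   // harness = false in Cargo.toml.
   use criterion::{black_box, criterion_group, criterion_main, Criterion};
   
   fn bench_sum(c: &mut Criterion) {
       // Placeholder workload; a real benchmark would exercise query code.
       c.bench_function("sum_1k", |b| {
           b.iter(|| (0..1_000u64).map(black_box).sum::<u64>())
       });
   }
   
   criterion_group!(benches, bench_sum);
   criterion_main!(benches);
   ```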
   
   > The more long term solution, and the first next step that makes sense to 
me would be to run these on a dedicated runner.
   
   @alamb if you all want to build things out yourselves, the [Rustls bench 
runner](https://github.com/rustls/rustls-bench-app) may be a good starting 
point. I recently wrote [a case study of the Rustls continuous benchmarking 
setup](https://bencher.dev/learn/case-study/rustls/) if that is of interest.
   
   Another possibility: I am working on Bencher Cloud Bare Metal, a fleet of 
identical machines on which benchmarks run directly on bare metal servers. I 
can go into more detail if that is something you all want to explore.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]