Hello Arrow community,

We (first at Ursa Computing, now Voltron Data) have been orchestrating
Apache Arrow C++, Python, R, Java, and JavaScript benchmarks and logging
results into Conbench [1] since May 2021 (huge shout out and thank you to
Diana Clarke who was the driving force behind this with me). You might have
already interacted with Arrow Benchmarks CI via `@ursabot please benchmark`
comments or noticed ursabot's comments with benchmark results (like [2])
on Apache Arrow pull requests. This data and interface have already proven
invaluable in reviewing performance-related pull requests.

To make it easy for other Arrow developers to provide machines to
run benchmarks on, we have open-sourced the orchestration part of this
project [3]. In particular, there is now a process for adding new benchmark
machines [4] and public Buildkite pipelines to
https://buildkite.com/apache-arrow.

If you would like to add a new machine for running new benchmarks (I know
there was interest in doing this from the Rust folks a few months ago), you
are welcome to follow the process above. Feel free to reach out to me
directly (via Issues and Pull Requests on [3]) for any help you might need.

-Elena

[1]: https://conbench.ursa.dev/

[2]: https://github.com/apache/arrow/pull/12085#issuecomment-1008899897

[3]: https://github.com/ursacomputing/arrow-benchmarks-ci

[4]:
https://github.com/ursacomputing/arrow-benchmarks-ci/blob/main/docs/how-to-add-new-benchmark-machine.md
