2010YOUY01 opened a new pull request, #13090:
URL: https://github.com/apache/datafusion/pull/13090
## Which issue does this PR close?

Closes https://github.com/apache/datafusion/issues/7571

## Rationale for this change

This PR adds a benchmark for external (memory-limited) aggregation. The benchmark queries are simple aggregations on the TPCH `lineitem` table, for instance `SELECT DISTINCT l_orderkey FROM lineitem`.

The `lineitem` table is well suited for benchmarking memory-limited aggregation because:

1. It is easy to choose different aggregation cardinalities (see the cardinality figure in the [DuckDB external aggregation paper](https://hannes.muehleisen.org/publications/icde2024-out-of-core-kuiper-boncz-muehleisen.pdf)). We also already have a TPCH data generator, so the scale factor can be changed later.
2. Memory pressure can be adjusted by adding more aggregates to the `SELECT` clause (e.g. `SELECT max(c1), max(c2), ..., max(c100)` has higher memory pressure because it has to store large intermediate state for many aggregation columns).

This PR only sets up the benchmark framework and adds two simple queries:

- Q1: `SELECT DISTINCT l_orderkey FROM lineitem;`
- Q2: `SELECT DISTINCT l_orderkey, l_suppkey FROM lineitem;`

Each query runs under a set of pre-defined memory limits. For example, Q1 requires about 36 MiB of memory, so the benchmark runs it with 64, 32, and 16 MiB limits.

For now it is not possible to select more aggregation columns or to set smaller memory limits (the query fails) due to a known issue: https://github.com/apache/datafusion/issues/13089. Once that is fixed, we can update the benchmark to run more diverse memory-limited aggregation workloads.

## What changes are included in this PR?

1. A new binary program to run the benchmark (a rough sketch of what it does is shown below, after the TODO list).
2. An update to the benchmark script `bench.sh` to support it.

TODO:
- [ ] Update benchmark README
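For context, the core of what the new benchmark binary does, running an aggregation query under a fixed memory limit, is sketched below. This is a minimal illustration rather than the actual `external_aggr` code; it assumes DataFusion's `FairSpillPool` memory pool with the `RuntimeConfig`/`RuntimeEnv` builders, whose exact names may vary between DataFusion versions:

```rust
use std::sync::Arc;

use datafusion::error::Result;
use datafusion::execution::memory_pool::FairSpillPool;
use datafusion::execution::runtime_env::{RuntimeConfig, RuntimeEnv};
use datafusion::prelude::{ParquetReadOptions, SessionConfig, SessionContext};

#[tokio::main]
async fn main() -> Result<()> {
    // Cap query memory at 32 MiB. A FairSpillPool lets spillable operators
    // (such as the grouped aggregation here) spill to disk instead of failing.
    let memory_limit = 32 * 1024 * 1024;
    let runtime_config =
        RuntimeConfig::new().with_memory_pool(Arc::new(FairSpillPool::new(memory_limit)));
    let runtime = RuntimeEnv::new(runtime_config)?;

    let ctx = SessionContext::new_with_config_rt(
        SessionConfig::new().with_target_partitions(4),
        Arc::new(runtime),
    );

    // Illustrative path: the benchmark reads the TPCH lineitem table in Parquet format.
    ctx.register_parquet(
        "lineitem",
        "data/tpch_sf1/lineitem",
        ParquetReadOptions::default(),
    )
    .await?;

    let batches = ctx
        .sql("SELECT DISTINCT l_orderkey FROM lineitem")
        .await?
        .collect()
        .await?;
    println!("returned {} batches", batches.len());
    Ok(())
}
```

With a fair spill pool, the grouped aggregation can spill intermediate state to disk once it hits the limit instead of erroring out, which is exactly the code path this benchmark is meant to stress.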
## Are these changes tested?

I tested locally:

1. Run the benchmark binary for all benchmark queries and memory limits (under `arrow-datafusion/benchmarks`); the per-iteration timing format is illustrated by the sketch at the end of this description:

   ```sh
   cargo run --bin external_aggr -- benchmark -n 4 --iterations 5 -p '/Users/yongting/Desktop/code/my_datafusion/arrow-datafusion/benchmarks/data/tpch_sf1' -o '/tmp/aggr.json'
   ```

   <details>
   <summary>Benchmark Result</summary>

   ```
   Q1(64.0 MB) iteration 0 took 1288.4 ms and returned 1 rows
   Q1(64.0 MB) iteration 1 took 1193.3 ms and returned 1 rows
   Q1(64.0 MB) iteration 2 took 1176.4 ms and returned 1 rows
   Q1(64.0 MB) avg time: 1219.37 ms
   Q1(32.0 MB) iteration 0 took 2166.6 ms and returned 1 rows
   Q1(32.0 MB) iteration 1 took 2145.1 ms and returned 1 rows
   Q1(32.0 MB) iteration 2 took 2129.6 ms and returned 1 rows
   Q1(32.0 MB) avg time: 2147.09 ms
   Q1(16.0 MB) iteration 0 took 2024.5 ms and returned 1 rows
   Q1(16.0 MB) iteration 1 took 1952.5 ms and returned 1 rows
   Q1(16.0 MB) iteration 2 took 2069.7 ms and returned 1 rows
   Q1(16.0 MB) avg time: 2015.55 ms
   Q2(512.0 MB) iteration 0 took 2453.9 ms and returned 1 rows
   Q2(512.0 MB) iteration 1 took 2506.9 ms and returned 1 rows
   Q2(512.0 MB) iteration 2 took 2507.0 ms and returned 1 rows
   Q2(512.0 MB) avg time: 2489.25 ms
   ......
   ```

   </details>

2. Run the benchmark binary for a single query with a given memory limit:

   ```
   Q1(30.0 MB) iteration 0 took 2034.1 ms and returned 1 rows
   Q1(30.0 MB) iteration 1 took 1722.9 ms and returned 1 rows
   Q1(30.0 MB) iteration 2 took 1875.0 ms and returned 1 rows
   Q1(30.0 MB) avg time: 1877.37 ms
   ```

3. Run everything with the `bench.sh` script:

   ```sh
   # under 'arrow-datafusion/benchmarks'
   ./bench.sh data tpch    # This benchmark uses the TPCH lineitem table (Parquet format only)
   ./bench.sh run external_aggr
   ./bench.sh compare main main
   ```

   Result:

   ```
   --------------------
   Benchmark external_aggr.json
   --------------------
   ┏━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━┓
   ┃ Query        ┃     main ┃     main ┃    Change ┃
   ┡━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━┩
   │ Q1(64.0 MB)  │ 127.07ms │ 127.07ms │ no change │
   │ Q1(32.0 MB)  │ 159.59ms │ 159.59ms │ no change │
   │ Q1(16.0 MB)  │ 131.80ms │ 131.80ms │ no change │
   │ Q2(512.0 MB) │ 308.05ms │ 308.05ms │ no change │
   │ Q2(256.0 MB) │ 671.76ms │ 671.76ms │ no change │
   │ Q2(128.0 MB) │ 655.00ms │ 655.00ms │ no change │
   │ Q2(64.0 MB)  │ 608.16ms │ 608.16ms │ no change │
   │ Q2(32.0 MB)  │ 535.54ms │ 535.54ms │ no change │
   └──────────────┴──────────┴──────────┴───────────┘
   ```

## Are there any user-facing changes?

No
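As a small addendum on the output format above: each query's reported time is the average over its iterations. A minimal, hypothetical timing loop (not the code in this PR) that produces output of that shape could look like this:

```rust
use std::time::Instant;

// Placeholder for executing one benchmark query; returns the number of result rows.
// In the real benchmark this would run the SQL query under the configured memory limit.
fn run_query() -> usize {
    1
}

fn main() {
    let iterations = 3;
    let mut elapsed_ms = Vec::with_capacity(iterations);
    for i in 0..iterations {
        let start = Instant::now();
        let rows = run_query();
        let ms = start.elapsed().as_secs_f64() * 1000.0;
        println!("Q1(32.0 MB) iteration {i} took {ms:.1} ms and returned {rows} rows");
        elapsed_ms.push(ms);
    }
    let avg = elapsed_ms.iter().sum::<f64>() / elapsed_ms.len() as f64;
    println!("Q1(32.0 MB) avg time: {avg:.2} ms");
}
```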