alamb commented on code in PR #6172:
URL: https://github.com/apache/arrow-datafusion/pull/6172#discussion_r1184002980
##########
benchmarks/bench.sh:
##########
@@ -247,6 +257,22 @@ run_tpch_mem() {
$CARGO_COMMAND --bin tpch -- benchmark datafusion --iterations 5 --path "${DATA_DIR}" -m --format parquet -o ${RESULTS_FILE}
}
+# Runs the parquet filter benchmark
+run_parquet() {
+ RESULTS_FILE="${RESULTS_DIR}/parquet.json"
+ echo "RESULTS_FILE: ${RESULTS_FILE}"
+ echo "Running parquet filter benchmark..."
+ $CARGO_COMMAND --bin parquet -- filter --path "${DATA_DIR}" --scale-factor 1.0 --iterations 5 -o ${RESULTS_FILE}
Review Comment:
This is even more confusing -- the scale factor for tpch is also applied, but it is applied when we create the data. Specifically
https://github.com/apache/arrow-datafusion/blob/2787e7a36a6be83d91201df20827d3695f933300/benchmarks/bench.sh#L202
The "parquet" benchmark actually doesn't use the tpch dataset at all, and
instead generates its own data. However, it overloads the "scale factor"
terminology to describe the relative size of the data it generates.
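To illustrate the distinction, here is a rough sketch (not the actual bench.sh contents; the `data_tpch` body and the `dbgen` flags are assumptions) of how the two "scale factor" uses differ:

```shell
# Sketch only -- illustrative, not the real bench.sh.
# For tpch, the scale factor is consumed once, at data-generation time
# (roughly what the linked line in bench.sh does):
data_tpch() {
    SCALE_FACTOR=$1   # e.g. 1 or 10
    # hypothetical dbgen invocation; the real command may differ
    dbgen -vf -s "${SCALE_FACTOR}"
}

# The parquet filter benchmark, by contrast, passes --scale-factor straight
# to the binary, which generates its own (non-tpch) data whose size is
# relative to that number:
$CARGO_COMMAND --bin parquet -- filter --path "${DATA_DIR}" --scale-factor 1.0 --iterations 5 -o ${RESULTS_FILE}
```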
##########
benchmarks/bench.sh:
##########
@@ -247,6 +257,22 @@ run_tpch_mem() {
$CARGO_COMMAND --bin tpch -- benchmark datafusion --iterations 5 --path "${DATA_DIR}" -m --format parquet -o ${RESULTS_FILE}
}
+# Runs the parquet filter benchmark
+run_parquet() {
+ RESULTS_FILE="${RESULTS_DIR}/parquet.json"
+ echo "RESULTS_FILE: ${RESULTS_FILE}"
+ echo "Running parquet filter benchmark..."
+ $CARGO_COMMAND --bin parquet -- filter --path "${DATA_DIR}" --scale-factor 1.0 --iterations 5 -o ${RESULTS_FILE}
Review Comment:
It is somewhat ugly to have to run a second command (`parquet`) for
different benchmarks -- I plan to combine them into a single benchmark runner
over time.
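One possible shape for that consolidation (purely hypothetical, not part of this PR; the benchmark names and dispatch style are assumptions):

```shell
# Hypothetical unified dispatch in bench.sh: select the benchmark by name in
# one place so the parquet filter run does not need a separate entry point.
case "$BENCHMARK" in
    tpch)
        run_tpch
        ;;
    tpch_mem)
        run_tpch_mem
        ;;
    parquet_filter)
        run_parquet
        ;;
    *)
        echo "Unknown benchmark: $BENCHMARK"
        exit 1
        ;;
esac
```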