wgtmac commented on code in PR #34323:
URL: https://github.com/apache/arrow/pull/34323#discussion_r1119487673
##########
cpp/src/parquet/encoding_benchmark.cc:
##########
@@ -569,6 +569,126 @@
BENCHMARK(BM_DeltaBitPackingDecode_Int64_Narrow)->Range(MIN_RANGE, MAX_RANGE);
BENCHMARK(BM_DeltaBitPackingDecode_Int32_Wide)->Range(MIN_RANGE, MAX_RANGE);
BENCHMARK(BM_DeltaBitPackingDecode_Int64_Wide)->Range(MIN_RANGE, MAX_RANGE);
+void EncodingByteArrayBenchmark(benchmark::State& state, Encoding::type encoding) {
+  ::arrow::random::RandomArrayGenerator rag(0);
+  // Use the Arrow random generator to produce the input data.
+  int32_t max_length = state.range(0);
+  auto array =
+      rag.String(/* size */ 1024, /* min_length */ 0, /* max_length */ max_length,
Review Comment:
Sometimes we need to adaptively determine the best batch size in the compute
engine, so benchmarking with different batch sizes may give us better
visibility into the encoding side. I suspect it will demonstrate linear
behavior, since it has barriers such as the block size and the encoding
pattern. @rok
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]