This is an automated email from the ASF dual-hosted git repository.
dheres pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/arrow-rs.git
The following commit(s) were added to refs/heads/main by this push:
new ba21a8378d [Parquet] Optimize appending max level comparison in DefinitionLevelDecoder (#9217)
ba21a8378d is described below
commit ba21a8378db9f622cdb8464cfaa5727520894bb4
Author: Jörn Horstmann <[email protected]>
AuthorDate: Mon Jan 19 13:47:33 2026 +0100
[Parquet] Optimize appending max level comparison in DefinitionLevelDecoder (#9217)
# Which issue does this PR close?
- Closes #9216.
# Rationale for this change
Profiling showed this loop to be a clear performance hotspot. Thanks to the new `BooleanBufferBuilder::extend_trusted_len` method introduced in #9137, there is a very simple improvement.
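For context, here is a minimal standalone sketch of the pattern this PR adopts. The `i16` level values are made up, and the `unsafe fn extend_trusted_len(&mut self, iter: impl Iterator<Item = bool>)` signature is assumed from #9137:
```rust
use arrow_buffer::builder::BooleanBufferBuilder;

fn main() {
    // Hypothetical definition levels; max_level marks a non-null value.
    let def_levels: Vec<i16> = vec![1, 0, 1, 1, 0];
    let max_level = 1i16;

    let mut builder = BooleanBufferBuilder::new(def_levels.len());
    // Safety (assumed contract from #9137): a slice iterator reports an
    // exact length, so the builder can reserve once and write bit-packed
    // words instead of appending one bit at a time.
    unsafe {
        builder.extend_trusted_len(def_levels.iter().map(|&level| level == max_level));
    }
    assert_eq!(builder.finish().count_set_bits(), 3);
}
```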
Benchmark results for optional structs after the fix:
```
arrow_array_reader/struct/Int32Array/plain encoded, optional struct, optional data, no NULLs
time:   [69.873 µs 69.917 µs 69.970 µs]
change: [−60.075% −60.046% −60.018%] (p = 0.00 < 0.05)
Performance has improved.

arrow_array_reader/struct/Int32Array/plain encoded, optional struct, optional data, half NULLs
time:   [136.62 µs 136.66 µs 136.72 µs]
change: [−67.663% −67.536% −67.416%] (p = 0.00 < 0.05)
Performance has improved.
```
This is a big improvement, but reading is still significantly slower than for non-nested data. The main hotspot is now `extend_trusted_len` itself; manual SIMD code that compares chunks of levels and appends 64 bits at a time could potentially speed it up even more, but that would require either architecture-specific intrinsics or unstable features.
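As a rough sketch of that direction (not code from this PR; portable scalar Rust rather than real SIMD, with a hypothetical `pack_levels` helper):
```rust
/// Hypothetical helper illustrating the chunked idea: compare 64 levels
/// per iteration and pack the results into one u64 that a builder could
/// append in a single call. Real speedups would likely need intrinsics.
fn pack_levels(levels: &[i16], max_level: i16) -> Vec<u64> {
    levels
        .chunks(64)
        .map(|chunk| {
            let mut word = 0u64;
            for (i, &level) in chunk.iter().enumerate() {
                // Set bit i when this level denotes a present value.
                word |= ((level == max_level) as u64) << i;
            }
            word
        })
        .collect()
}
```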
# What changes are included in this PR?
Replaces the per-element `append` loops in `StructArrayReader` and `DefinitionLevelBufferDecoder` with a single `extend_trusted_len` call (see the diff below).
# Are these changes tested?
Tested by existing tests.
# Are there any user-facing changes?
No.
---
parquet/src/arrow/array_reader/struct_array.rs | 9 +++++++--
parquet/src/arrow/record_reader/definition_levels.rs | 7 ++++---
2 files changed, 11 insertions(+), 5 deletions(-)
diff --git a/parquet/src/arrow/array_reader/struct_array.rs b/parquet/src/arrow/array_reader/struct_array.rs
index 3c5a5f836b..63b20eb0d6 100644
--- a/parquet/src/arrow/array_reader/struct_array.rs
+++ b/parquet/src/arrow/array_reader/struct_array.rs
@@ -158,8 +158,13 @@ impl ArrayReader for StructArrayReader {
                     }
                 }
                 None => {
-                    for def_level in def_levels {
-                        bitmap_builder.append(*def_level >= self.struct_def_level)
+                    // Safety: slice iterator has a trusted length
+                    unsafe {
+                        bitmap_builder.extend_trusted_len(
+                            def_levels
+                                .iter()
+                                .map(|level| *level >= self.struct_def_level),
+                        )
                     }
                 }
             }
diff --git a/parquet/src/arrow/record_reader/definition_levels.rs b/parquet/src/arrow/record_reader/definition_levels.rs
index 8fe26a9b52..f51dee5c5c 100644
--- a/parquet/src/arrow/record_reader/definition_levels.rs
+++ b/parquet/src/arrow/record_reader/definition_levels.rs
@@ -160,9 +160,10 @@ impl DefinitionLevelDecoder for DefinitionLevelBufferDecoder {
         let start = levels.len();
         let (values_read, levels_read) = decoder.read_def_levels(levels, num_levels)?;
-        nulls.reserve(levels_read);
-        for i in &levels[start..] {
-            nulls.append(i == max_level);
+        // Safety: slice iterator has a trusted length
+        unsafe {
+            nulls
+                .extend_trusted_len(levels[start..].iter().map(|level| level == max_level));
         }
         Ok((values_read, levels_read))