alamb commented on code in PR #8607:
URL: https://github.com/apache/arrow-rs/pull/8607#discussion_r2444646741
##########
parquet/src/column/reader/decoder.rs:
##########
@@ -136,14 +134,22 @@ pub trait ColumnValueDecoder {
fn skip_values(&mut self, num_values: usize) -> Result<usize>;
}
+/// Bucket-based storage for decoder instances keyed by `Encoding`.
+///
+/// This replaces `HashMap` lookups with direct indexing to avoid hashing overhead in the
+/// hot decoding paths.
+const ENCODING_SLOTS: usize = Encoding::BYTE_STREAM_SPLIT as usize + 1;
Review Comment:
I think we typically try to keep the dependencies to a minimum.
> * Add a UT in this file by counting the enum manually so the contributor will get a failure here if a new encoding is introduced.
This sounds like a nice middle ground to me.
Definitely let's do it as a follow-on PR.
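For illustration, a minimal sketch of what such a test could look like (the `Encoding` variant list here is written from memory and may not match the current enum exactly; the exhaustive `match` is the part that forces a compile failure when a new variant is added):
```rust
#[cfg(test)]
mod tests {
    use super::*;
    use crate::basic::Encoding;

    /// Exhaustive match over `Encoding`: adding a new variant makes this
    /// match non-exhaustive, so compilation fails here until the
    /// contributor also revisits `ENCODING_SLOTS`.
    fn slot(e: Encoding) -> usize {
        match e {
            Encoding::PLAIN
            | Encoding::PLAIN_DICTIONARY
            | Encoding::RLE
            | Encoding::BIT_PACKED
            | Encoding::DELTA_BINARY_PACKED
            | Encoding::DELTA_LENGTH_BYTE_ARRAY
            | Encoding::DELTA_BYTE_ARRAY
            | Encoding::RLE_DICTIONARY
            | Encoding::BYTE_STREAM_SPLIT => e as usize,
        }
    }

    #[test]
    fn encoding_slots_covers_all_variants() {
        // Every variant's discriminant must index into the bucket array.
        for e in [
            Encoding::PLAIN,
            Encoding::PLAIN_DICTIONARY,
            Encoding::RLE,
            Encoding::BIT_PACKED,
            Encoding::DELTA_BINARY_PACKED,
            Encoding::DELTA_LENGTH_BYTE_ARRAY,
            Encoding::DELTA_BYTE_ARRAY,
            Encoding::RLE_DICTIONARY,
            Encoding::BYTE_STREAM_SPLIT,
        ] {
            assert!(slot(e) < ENCODING_SLOTS);
        }
    }
}
```
The exhaustive match does the real work here: a purely data-driven variant list would silently go stale when a new encoding is added, whereas the non-exhaustive match becomes a compile error.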
##########
parquet/src/column/reader/decoder.rs:
##########
@@ -136,14 +134,22 @@ pub trait ColumnValueDecoder {
fn skip_values(&mut self, num_values: usize) -> Result<usize>;
}
+/// Bucket-based storage for decoder instances keyed by `Encoding`.
+///
+/// This replaces `HashMap` lookups with direct indexing to avoid hashing overhead in the
+/// hot decoding paths.
+const ENCODING_SLOTS: usize = Encoding::BYTE_STREAM_SPLIT as usize + 1;
Review Comment:
Whoops -- yes, sorry, I copy/pasted the wrong thing 🤦
I meant:
> Add a UT in this file by counting the enum manually so the contributor will get a failure here if a new encoding is introduced.
(updated above too)