hhhizzz commented on code in PR #8607:
URL: https://github.com/apache/arrow-rs/pull/8607#discussion_r2443854284
##########
parquet/src/column/reader/decoder.rs:
##########
@@ -136,14 +134,22 @@ pub trait ColumnValueDecoder {
fn skip_values(&mut self, num_values: usize) -> Result<usize>;
}
+/// Bucket-based storage for decoder instances keyed by `Encoding`.
+///
+/// This replaces `HashMap` lookups with direct indexing to avoid hashing overhead in the
+/// hot decoding paths.
+const ENCODING_SLOTS: usize = Encoding::BYTE_STREAM_SPLIT as usize + 1;
Review Comment:
Yes, that's a good point. I think there are a few ways to avoid that:
- Use [strum](https://docs.rs/strum/latest/strum/) to get the number of
encodings in the enum.
- Add a unit test in this file that counts the enum variants manually, so a
contributor gets a failure here if a new encoding is introduced.
- Define a const named `ENCODING_COUNT` in `basic.rs`.
I prefer `strum`: it makes the code more readable and requires no further
changes when new encodings are added.
Anyway, I can do this in a follow-up PR, since new encodings won't appear
immediately.
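As a rough sketch of the manual-count unit-test idea (the second bullet), using a hypothetical stand-in enum rather than the real `Encoding` in `basic.rs`; the `strum` alternative is noted in a comment:

```rust
// Hypothetical stand-in for parquet's `Encoding` enum in `basic.rs`.
#[allow(dead_code)]
#[derive(Copy, Clone)]
enum Encoding {
    Plain = 0,
    Rle = 1,
    ByteStreamSplit = 2,
}

// Mirrors the `ENCODING_SLOTS` pattern in the diff: last discriminant + 1.
const ENCODING_SLOTS: usize = Encoding::ByteStreamSplit as usize + 1;

// With strum, `#[derive(EnumCount)]` would instead expose `Encoding::COUNT`
// and no list needs to be maintained by hand.

fn main() {
    // Manually-counted list of all variants; adding a new encoding without
    // updating this list (and ENCODING_SLOTS) makes the assertion fail,
    // alerting the contributor.
    let all = [Encoding::Plain, Encoding::Rle, Encoding::ByteStreamSplit];
    assert_eq!(all.len(), ENCODING_SLOTS);
    println!("ENCODING_SLOTS = {}", ENCODING_SLOTS);
}
```

In a real unit test this would be a `#[test]` function next to the decoder code rather than `main`.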
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]