tustvold commented on code in PR #2111:
URL: https://github.com/apache/arrow-rs/pull/2111#discussion_r928243863


##########
parquet/src/arrow/record_reader/definition_levels.rs:
##########
@@ -248,7 +254,9 @@ struct PackedDecoder {
 
 impl PackedDecoder {
     fn next_rle_block(&mut self) -> Result<()> {
-        let indicator_value = self.decode_header()?;
+        let indicator_value = self
+            .decode_header()
+            .expect("decode_header fail in PackedDecoder");

Review Comment:
   Why this change?
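
   For reference, a minimal standalone contrast of the two forms the diff swaps between (toy types only, not the parquet crate's actual code): `?` hands the error back to the caller as a `Result`, whereas `.expect` turns a corrupt or truncated page into a panic.

    // Standalone illustration; `decode_header` here is a stand-in for the real method.
    fn decode_header() -> Result<i64, String> {
        Err("truncated RLE header".to_string())
    }

    // Propagating: the caller receives Err and can recover or surface an error.
    fn next_rle_block_propagate() -> Result<(), String> {
        let _indicator_value = decode_header()?;
        Ok(())
    }

    // Panicking: bad input data aborts the thread instead of returning an error.
    fn next_rle_block_panic() {
        let _indicator_value = decode_header().expect("decode_header fail in PackedDecoder");
    }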



##########
parquet/src/column/reader/decoder.rs:
##########
@@ -318,10 +318,41 @@ impl ColumnLevelDecoder for ColumnLevelDecoderImpl {
 impl DefinitionLevelDecoder for ColumnLevelDecoderImpl {
     fn skip_def_levels(
         &mut self,
-        _num_levels: usize,
-        _max_def_level: i16,
+        num_levels: usize,
+        max_def_level: i16,
     ) -> Result<(usize, usize)> {
-        Err(nyi_err!("https://github.com/apache/arrow-rs/issues/1792"))
+        let mut level_skip = 0;
+        let mut value_skip = 0;
+        match self.decoder.as_mut().unwrap() {
+            LevelDecoderInner::Packed(reader, bit_width) => {
+                for _ in 0..num_levels {

Review Comment:
   It might be faster to decode to a temporary buffer to allow vectorized unpacking, but that is definitely something that can be done as a follow-up.
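
   A rough, self-contained sketch of that follow-up idea, using a hypothetical `LevelBatchDecoder::get_batch` in place of the real packed decoder and an assumed buffer size: levels are decoded chunk-wise into a temporary buffer, and the non-null count is taken over the whole slice so the bit-unpacking and counting can be auto-vectorized rather than done one level at a time.

    // Hypothetical sketch only: `LevelBatchDecoder` and SKIP_BUFFER_SIZE are
    // assumptions, not part of the PR or the parquet crate.
    const SKIP_BUFFER_SIZE: usize = 1024;

    /// Stand-in for the packed level decoder; `get_batch` is assumed to decode
    /// up to `out.len()` levels and return how many were actually decoded.
    trait LevelBatchDecoder {
        fn get_batch(&mut self, out: &mut [i16]) -> usize;
    }

    fn skip_def_levels<D: LevelBatchDecoder>(
        decoder: &mut D,
        num_levels: usize,
        max_def_level: i16,
    ) -> (usize, usize) {
        let mut buf = [0_i16; SKIP_BUFFER_SIZE];
        let mut level_skip = 0;
        let mut value_skip = 0;
        while level_skip < num_levels {
            let to_read = (num_levels - level_skip).min(SKIP_BUFFER_SIZE);
            let read = decoder.get_batch(&mut buf[..to_read]);
            if read == 0 {
                break; // decoder exhausted
            }
            // Counting over a contiguous slice keeps the branchy work out of
            // the per-bit decode loop and lets the compiler vectorize it.
            value_skip += buf[..read].iter().filter(|&&l| l == max_def_level).count();
            level_skip += read;
        }
        (level_skip, value_skip)
    }

   Compared with the per-level loop in the diff above, this trades a small stack buffer for chunked decoding.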


