ggershinsky commented on pull request #925: URL: https://github.com/apache/parquet-mr/pull/925#issuecomment-909119450
> FYI, maybe we can make use of this information:
>
> `RowGroup[n].file_offset = RowGroup[n-1].file_offset + RowGroup[n-1].total_compressed_size`
>
> `total_compressed_size` always holds the truth, while `file_offset` doesn't. `total_compressed_size` was also introduced for the encryption feature.

Yep, exactly! If there are no hidden surprises and this works as expected, it would certainly be the optimal solution. While at it, maybe you can also add a check at the [write side](https://github.com/apache/parquet-mr/blob/apache-parquet-1.12.0/parquet-hadoop/src/main/java/org/apache/parquet/format/converter/ParquetMetadataConverter.java#L580) to verify the RG offset values in a similar manner (each must equal the first RG offset plus the sum of the previous RG sizes; this also runs in a loop). Thanks @loudongfeng!
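
For illustration, here is a minimal, self-contained sketch of the loop-based offset check described above: each row group's `file_offset` should equal the first row group's offset plus the sum of the previous row groups' `total_compressed_size`. The `RowGroupMeta` holder and `verifyRowGroupOffsets` method are hypothetical stand-ins for this comment, not the actual parquet-mr code or the Thrift-generated `RowGroup` class:

```java
import java.util.Arrays;
import java.util.List;

public class RowGroupOffsetCheck {

  /** Hypothetical stand-in for the row-group metadata fields the check needs. */
  static final class RowGroupMeta {
    final long fileOffset;            // corresponds to RowGroup.file_offset
    final long totalCompressedSize;   // corresponds to RowGroup.total_compressed_size

    RowGroupMeta(long fileOffset, long totalCompressedSize) {
      this.fileOffset = fileOffset;
      this.totalCompressedSize = totalCompressedSize;
    }
  }

  /**
   * Verifies that each row group's file_offset equals the first row group's
   * file_offset plus the sum of all previous row groups' total_compressed_size.
   */
  static void verifyRowGroupOffsets(List<RowGroupMeta> rowGroups) {
    if (rowGroups.isEmpty()) {
      return;
    }
    long expectedOffset = rowGroups.get(0).fileOffset;
    for (int i = 0; i < rowGroups.size(); i++) {
      RowGroupMeta rg = rowGroups.get(i);
      if (rg.fileOffset != expectedOffset) {
        throw new IllegalStateException(
            "Row group " + i + " has file_offset " + rg.fileOffset
                + " but expected " + expectedOffset);
      }
      // The next row group is expected to start right after this one's compressed data.
      expectedOffset += rg.totalCompressedSize;
    }
  }

  public static void main(String[] args) {
    // Offsets 4, 104, 304 are consistent with compressed sizes 100, 200, 50.
    verifyRowGroupOffsets(Arrays.asList(
        new RowGroupMeta(4, 100),
        new RowGroupMeta(104, 200),
        new RowGroupMeta(304, 50)));
    System.out.println("offsets are consistent");
  }
}
```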
