etseidl commented on code in PR #197:
URL: https://github.com/apache/parquet-format/pull/197#discussion_r1302095236
##########
src/main/thrift/parquet.thrift:
##########
@@ -974,6 +1050,13 @@ struct ColumnIndex {
/** A list containing the number of null values for each page **/
5: optional list<i64> null_counts
+ /**
+ * Repetition and definition level histograms for the pages.
+ *
+ * This contains some redundancy with null_counts, however, to accommodate the
+ * widest range of readers both should be populated.
+ **/
+ 6: optional list<RepetitionDefinitionLevelHistogram> repetition_definition_level_histograms;
Review Comment:
> I think `SizeEstimationStatistics` is already in `ColumnChunk`, would a page level `unencoded_variable_width_stored_bytes` helps pruning? Since here the `repetition_definition_level_histograms` can help pruning like `struct.list_child == null`?
My use case for this information isn't necessarily pruning (in the filtering sense), but rather to figure out how large a batch I can read from a set of files while still staying within a memory budget.
> If we want this it's even more better to put `unencoded_variable_width_stored_bytes` into `OffsetIndex`. It's much more like `compressed_page_size` there.
This would be ideal; it's what I originally wanted to propose.
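For concreteness, a rough sketch of what that could look like in `parquet.thrift` (illustrative only; the field id, placement, and exact semantics here are my assumptions, not anything settled in this PR):

```thrift
struct OffsetIndex {
  /** PageLocations, ordered by increasing PageLocation.offset **/
  1: required list<PageLocation> page_locations

  /**
   * Hypothetical addition: unencoded size in bytes of the variable-width
   * data in each page, parallel to page_locations. Like compressed_page_size
   * this is per page, but it reflects the in-memory size after decoding,
   * which is what matters when sizing read batches against a memory budget.
   **/
  2: optional list<i64> unencoded_variable_width_stored_bytes
}
```

A reader planning batch sizes could then sum these per-page values across the columns in its projection to bound how many rows fit within the budget.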
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]