tustvold commented on code in PR #197:
URL: https://github.com/apache/parquet-format/pull/197#discussion_r1318314149
##########
src/main/thrift/parquet.thrift:
##########

@@ -529,7 +596,15 @@ struct DataPageHeader {
   /** Encoding used for repetition levels **/
   4: required Encoding repetition_level_encoding;

-  /** Optional statistics for the data in this page**/
+  /**
+   * Optional statistics for the data in this page.
+   *
+   * For filter use-cases populating data in the page index is generally a superior
+   * solution because it allows readers to avoid IO, however not all readers make use
+   * of the page index. For best compatibility both should be populated. If the writer

Review Comment:
This appears to contradict the docs for the page index:

> Readers that support ColumnIndex should not also use page statistics. The only reason to write page-level statistics when writing ColumnIndex structs is to support older readers (not recommended).

##########
src/main/thrift/parquet.thrift:
##########

@@ -191,6 +191,73 @@ enum FieldRepetitionType {
   REPEATED = 2;
 }

+/**
+ * A histogram of repetition and definition levels for either a page or column
+ * chunk.
+ *
+ * This is useful for:
+ * 1. Estimating the size of the data when materialized in memory
+ *
+ * 2. For filter push-down on nulls at various levels of nested
+ *    structures and list lengths.
+ */
+struct RepetitionDefinitionLevelHistogram {
+  /**
+   * When present, there is expected to be one element corresponding to each
+   * repetition (i.e. size=max repetition_level+1) where each element
+   * represents the number of times the repetition level was observed in the
+   * data.
+   *
+   * This field may be omitted if max_repetition_level is 0.
+   **/
+  1: optional list<i64> repetition_level_histogram;
+  /**
+   * Same as repetition_level_histogram except for definition levels.
+   *
+   * This field may be omitted if max_definition_level is 0 or 1.
+   **/
+  2: optional list<i64> definition_level_histogram;
+ }
+
+/**
+ * A structure for capturing metadata for estimating the unencoded,
+ * uncompressed size of data written. This is useful for readers to estimate
+ * how much memory is needed to reconstruct data in their memory model and for
+ * fine grained filter pushdown on nested structures (the histogram contained
+ * in this structure can help determine the number of nulls at a particular
+ * nesting level).
+ *
+ * Writers should populate all fields in this struct except for the exceptions
+ * listed per field.
+ */
+struct SizeStatistics {
+  /**
+   * The number of physical bytes stored for BYTE_ARRAY data values assuming
+   * no encoding. This is exclusive of the bytes needed to store the length of
+   * each byte array. In other words, this field is equivalent to the `(size
+   * of PLAIN-ENCODING the byte array values) - (4 bytes * number of values
+   * written)`. To determine unencoded sizes of other types readers can use
+   * schema information multiplied by the number of non-null and null values.
+   * The number of null/non-null values can be inferred from the histograms
+   * below.
+   *
+   * For example, if a column chunk is dictionary-encoded with dictionary
+   * ["a", "bc", "cde"], and a data page contains the indices [0, 0, 1, 2],
+   * then this value for that data page should be 7 (1 + 1 + 2 + 3).
+   *
+   * This field should only be set for types that use BYTE_ARRAY as their
+   * physical type.
+   */
+  1: optional i64 unencoded_byte_array_data_bytes;

Review Comment:
I don't really understand this field. Most readers, I'd hope, are not decoding a column chunk or even a page at a time, but are rather reading a number of rows at a time. This avoids potentially exploding memory on highly compressed data. It is unclear how this could be used by such a reader?

It also seems off to me that it assumes readers can't preserve dictionaries, something both arrow-cpp and arrow-rs are able to do...

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: dev-unsubscr...@parquet.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org