pitrou commented on code in PR #196:
URL: https://github.com/apache/parquet-format/pull/196#discussion_r1243836277
##########
src/main/thrift/parquet.thrift:
##########
@@ -966,6 +985,23 @@ struct ColumnIndex {
/** A list containing the number of null values for each page **/
5: optional list<i64> null_counts
+
+ /**
+ * A list of Boolean values to determine pages that contain only NaNs. Only
+ * present for columns of type FLOAT and DOUBLE. If true, all non-null
+ * values in a page are NaN. Writers are suggested to set the corresponding
+ * entries in min_values and max_values to NaN, so that all lists have the
+ * same length and contain valid values. If false, then either all values in the
+ * page are null or there is at least one non-null non-NaN value in the page.
+ * As readers are supposed to ignore all NaN values in bounds, legacy readers
+ * who do not consider nan_pages yet are still able to use the column index
+ * but are not able to skip only-NaN pages.
+ */
+ 6: optional list<bool> nan_pages
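
As a rough illustration of the semantics described above, here is a sketch (in Python, with hypothetical names not taken from parquet-format) of how a reader might combine `nan_pages` with the existing min/max bounds to prune pages for an ordered predicate such as `x > lower_bound` on a DOUBLE column. Since NaN never satisfies an ordered comparison, a page flagged in `nan_pages` can be skipped outright:

```python
def page_can_match(min_val, max_val, nan_page, lower_bound):
    """Hypothetical pruning check: could this page contain a value > lower_bound?

    nan_page  -- the corresponding entry of the proposed nan_pages list
    min_val, max_val -- the page's entries in min_values/max_values
    """
    if nan_page:
        # All non-null values in the page are NaN; NaN never satisfies an
        # ordered comparison, so the page cannot match and can be skipped.
        return False
    # Readers ignore NaNs in bounds, so min/max are usable as usual.
    return max_val is not None and max_val > lower_bound
```

A legacy reader that ignores `nan_pages` simply skips the first branch and sees the (writer-suggested) NaN entries in `min_values`/`max_values`, which it is already required to ignore, so it degrades to not pruning only-NaN pages.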
Review Comment:
> I've brought up boundary order because that was our original answer to the problems of these ordering issues.
Hmm, how is it an answer? It only seems to be a redundant piece of
information about `min_values` and `max_values`.
> E.g. how should we order internationalized UTF-8 strings?
Byte-wise (i.e. codeunit-wise) lexicographic ordering and character-wise (i.e.
codepoint-wise) lexicographic ordering should give identical results AFAIR.
They are also technically "natural".
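
(A small demonstration of this property, not part of the review: UTF-8 is designed so that comparing the encoded bytes lexicographically yields the same order as comparing the Unicode codepoints, because larger codepoints encode to byte sequences that also compare larger byte-wise.)

```python
# Demo: byte-wise ordering of UTF-8 encodings agrees with codepoint-wise
# ordering of the underlying strings. Sample strings are arbitrary.
strings = ["abc", "zebra", "Ähre", "éclair", "日本語", "\U0001F600"]

by_bytes = sorted(strings, key=lambda s: s.encode("utf-8"))
by_codepoints = sorted(strings, key=lambda s: [ord(c) for c in s])

assert by_bytes == by_codepoints
```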
If a query system needs a more sophisticated ordering, then it should
certainly synthesize its own index.
I also don't understand what that has to do with the presence or absence of
`boundary_order`?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]