JFinis commented on code in PR #196:
URL: https://github.com/apache/parquet-format/pull/196#discussion_r1243496427


##########
src/main/thrift/parquet.thrift:
##########
@@ -966,6 +985,23 @@ struct ColumnIndex {
 
   /** A list containing the number of null values for each page **/
   5: optional list<i64> null_counts
+
+  /**
+   * A list of Boolean values indicating which pages contain only NaNs. Only
+   * present for columns of type FLOAT and DOUBLE. If true, all non-null
+   * values in the page are NaN. Writers are suggested to set the
+   * corresponding entries in min_values and max_values to NaN, so that all
+   * lists have the same length and contain valid values. If false, either
+   * all values in the page are null or the page contains at least one
+   * non-null, non-NaN value. As readers are supposed to ignore NaN values
+   * in bounds, legacy readers that do not yet consider nan_pages can still
+   * use the column index; they just cannot skip only-NaN pages.
+   */
+  6: optional list<bool> nan_pages

Review Comment:
   Oh, actually there is yet another option
   
   d) Stick with nan_pages (or value_counts) (i.e., alternatives (2) or (3))
and write min=-Infinity and max=+Infinity into the bounds in the column index
for only-NaN pages. This way, new readers could use nan_pages (or
value_counts) to detect only-NaN pages, while legacy readers would simply
never filter such a page due to the maximally wide bounds. My heart is
bleeding a bit while writing this, as it is obviously a patch solution that
feels wrong (the bounds are just not correct) and exists only to reverse-patch
old implementations by bending the spec, but it would fulfill the requirements
and allow backward compatibility while enabling support for filtering
only-NaN pages.
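
   To make the trade-off concrete, here is a minimal Python sketch of how the
two reader generations would evaluate a `value > threshold` predicate against
a page written per option (d). The dict layout and helper names are
hypothetical illustrations, not the parquet-mr or PyArrow API:

   ```python
   import math

   # Hypothetical column-index entry for an only-NaN page written per
   # option (d): maximally wide bounds, plus the proposed nan_pages flag.
   page = {"min": -math.inf, "max": math.inf, "nan_page": True}

   def legacy_can_skip(page, threshold):
       # Legacy reader: prunes the page only if "value > threshold" cannot
       # match given [min, max]. With max = +inf this is never the case.
       return page["max"] <= threshold

   def nan_aware_can_skip(page, threshold):
       # New reader: an only-NaN page can never satisfy "value > threshold"
       # (NaN comparisons are always false), so it may be skipped outright.
       if page["nan_page"]:
           return True
       return page["max"] <= threshold

   legacy_can_skip(page, 100.0)     # False: legacy readers never filter it
   nan_aware_can_skip(page, 100.0)  # True: new readers skip only-NaN pages
   ```

   So correctness is preserved for legacy readers (they scan the page
unnecessarily), while nan_pages-aware readers regain the pruning opportunity.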



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
