ASF GitHub Bot commented on PARQUET-2249:

JFinis commented on code in PR #196:
URL: https://github.com/apache/parquet-format/pull/196#discussion_r1145999358

@@ -952,6 +961,9 @@ struct ColumnIndex {
    * Such more compact values must still be valid values within the column's
    * logical type. Readers must make sure that list entries are populated 
    * using them by inspecting null_pages.
+   * For columns of type FLOAT and DOUBLE, NaN values are not to be included

Review Comment:
   Don't we then have the same problem already for the NaN values stored in the 
actual columns? We already serialize NaN to binary values in the columns 
themselves, and there we also do not mandate a specific bit pattern. The spec 
does define FLOAT/DOUBLE to be IEEE-compliant:
      * FLOAT - 4 bytes per value.  IEEE. Stored as little-endian.
      * DOUBLE - 8 bytes per value.  IEEE. Stored as little-endian.
   So if I see it correctly, any conforming implementation has to be able to 
handle all NaN bit patterns that IEEE allows; otherwise it could not read the 
actual data in the columns.
   As you mention Java: Java has a defined way of reading IEEE bits into Java 
floats: Float.intBitsToFloat (and the respective method for double). Here it is 
guaranteed that all valid NaN bit patterns produce a Java NaN. From the Javadoc:
   > If the argument is any value in the range 0x7f800001 through 0x7fffffff or 
in the range 0xff800001 through 0xffffffff, the result is a NaN.
   This method is used by parquet-mr, so we should be fine here.
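   The guarantee above is easy to check directly. A minimal sketch (the class name is mine, not part of any Parquet code) exercising Float.intBitsToFloat against a few bit patterns from the quoted ranges:

```java
public class NaNBitPatternDemo {
    public static void main(String[] args) {
        // IEEE 754 single precision: any pattern with all exponent bits set and a
        // non-zero mantissa is a NaN, regardless of sign bit or payload bits.
        int[] patterns = {0x7fc00000, 0x7f800001, 0x7fffffff, 0xff800001, 0xffffffff};
        for (int bits : patterns) {
            float f = Float.intBitsToFloat(bits);
            System.out.printf("0x%08x -> isNaN = %b%n", bits, Float.isNaN(f));
        }
        // By contrast, an all-set exponent with a zero mantissa is infinity, not NaN.
        System.out.println(Float.isNaN(Float.intBitsToFloat(0x7f800000)));
    }
}
```

   Every pattern in the loop prints isNaN = true; the final line prints false, since 0x7f800000 is +Infinity.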
   So, to generalize, as I see it, the following holds:
   * Parquet defines FLOAT/DOUBLE to be IEEE without further mandating any bit 
patterns.
     * If a reader cannot handle all NaN bit patterns, it is not conforming to 
the spec.
     * Also, such a reader would already malfunction today, as there can 
already be NaNs in columns.
   * All prominent programming languages (C++, Java, Python, Go, ...) have 
IEEE-compliant binary-to-float conversions, so this also sounds like a rather 
theoretical problem.
   I can also change my suggestion to not write NaNs, but that comes with its 
own challenges, as laid out in my commit message. But first I want to reach a 
common understanding of why NaN bit patterns in bounds are worse than NaN bit 
patterns in the columns themselves. Maybe I'm missing something here.

> Parquet spec (parquet.thrift) is inconsistent w.r.t. ColumnIndex + NaNs
> -----------------------------------------------------------------------
>                 Key: PARQUET-2249
>                 URL: https://issues.apache.org/jira/browse/PARQUET-2249
>             Project: Parquet
>          Issue Type: Bug
>          Components: parquet-format
>            Reporter: Jan Finis
>            Priority: Major
> Currently, the specification of {{ColumnIndex}} in {{parquet.thrift}} is 
> inconsistent, leading to cases where it is impossible to create a parquet 
> file that conforms to the spec.
> The problem is with double/float columns if a page contains only NaN values. 
> The spec mentions that NaN values should not be included in min/max bounds, 
> so a page consisting of only NaN values has no defined min/max bound. To 
> quote the spec:
> {noformat}
>    *     When writing statistics the following rules should be followed:
>    *     - NaNs should not be written to min or max statistics 
> fields.{noformat}
> However, the comments in the ColumnIndex on the null_pages member states the 
> following:
> {noformat}
> struct ColumnIndex {
>   /**
>    * A list of Boolean values to determine the validity of the corresponding
>    * min and max values. If true, a page contains only null values, and 
> writers
>    * have to set the corresponding entries in min_values and max_values to
>    * byte[0], so that all lists have the same length. If false, the
>    * corresponding entries in min_values and max_values must be valid.
>    */
>   1: required list<bool> null_pages{noformat}
> For a page with only NaNs, we now have a problem. The page definitely does 
> *not* contain only null values, so {{null_pages}} should be {{false}} for 
> this page. However, in this case the spec requires valid min/max values in 
> {{min_values}} and {{max_values}} for this page. As the only value in the 
> page is NaN, the only valid min/max value we could enter here is NaN, but as 
> mentioned before, NaNs should never be written to min/max values.
> Thus, no writer can currently create a parquet file that conforms to this 
> specification as soon as there is an only-NaN column and column indexes are 
> to be written.
> I see three possible solutions:
> 1. A page consisting only of NaNs (or a mixture of NaNs and nulls) has its 
> null_pages entry set to {*}true{*}.
> 2. A page consisting of only NaNs (or a mixture of NaNs and nulls) has 
> {{byte[0]}} as min/max, even though the null_pages entry is set to 
> {*}false{*}.
> 3. A page consisting of only NaNs (or a mixture of NaNs and nulls) does have 
> NaN as min & max in the column index.
> None of the solutions is perfect, but I guess solution 3 is the best of 
> them. It gives us valid min/max bounds, makes null_pages compatible with 
> this, and gives us a way to determine only-NaN pages (min=max=NaN).
> As a general note: I would say that it is a shortcoming that Parquet doesn't 
> track NaN counts. E.g., Iceberg does track NaN counts and therefore doesn't 
> have this inconsistency. In a future version, NaN counts could be introduced, 
> but that doesn't help for backward compatibility, so we do need a solution 
> for now.
> Any of the solutions is better than the current situation, where engines 
> writing such a page cannot write a conforming parquet file and will 
> arbitrarily pick one of the solutions.
> Thus, my suggestion would be to update parquet.thrift to use solution 3. 
> I.e., rewrite the comments saying that NaNs shouldn't be included in min/max 
> bounds by adding a clause stating that "if a page contains only NaNs or a 
> mixture of NaNs and NULLs, then NaN should be written as min & max".
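For illustration, a minimal sketch of what solution 3 could look like on both the writer and reader side. All names here (class, methods) are hypothetical, not part of parquet-mr or any other implementation:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;

public class NaNColumnIndexSketch {

    /** Writer side (solution 3): ignore NaNs when computing bounds; if the page
     *  contains only NaNs (or NaNs and nulls), use NaN as both min and max. */
    static double[] pageBounds(double[] nonNullValues) {
        double min = Double.POSITIVE_INFINITY;
        double max = Double.NEGATIVE_INFINITY;
        boolean sawNonNaN = false;
        for (double v : nonNullValues) {
            if (Double.isNaN(v)) continue;
            sawNonNaN = true;
            min = Math.min(min, v);
            max = Math.max(max, v);
        }
        return sawNonNaN ? new double[] {min, max}
                         : new double[] {Double.NaN, Double.NaN};
    }

    /** 8-byte little-endian encoding, as the spec mandates for DOUBLE. */
    static byte[] encode(double v) {
        return ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN).putDouble(v).array();
    }

    /** Reader side: under solution 3, min = max = NaN identifies an only-NaN page. */
    static boolean isOnlyNaNPage(byte[] minBytes, byte[] maxBytes) {
        double min = ByteBuffer.wrap(minBytes).order(ByteOrder.LITTLE_ENDIAN).getDouble();
        double max = ByteBuffer.wrap(maxBytes).order(ByteOrder.LITTLE_ENDIAN).getDouble();
        return Double.isNaN(min) && Double.isNaN(max);
    }

    public static void main(String[] args) {
        // Mixed page: NaN is skipped, real bounds survive.
        System.out.println(Arrays.toString(pageBounds(new double[] {1.0, Double.NaN, 3.0})));
        // Only-NaN page: falls back to min = max = NaN, which the reader can detect.
        double[] b = pageBounds(new double[] {Double.NaN, Double.NaN});
        System.out.println(isOnlyNaNPage(encode(b[0]), encode(b[1])));
    }
}
```

Note that the reader-side check works with any NaN bit pattern the writer happens to emit, which is the point of the argument above: decoding plus Double.isNaN is all a conforming reader needs.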

This message was sent by Atlassian Jira
