[
https://issues.apache.org/jira/browse/PARQUET-2244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689522#comment-17689522
]
ASF GitHub Bot commented on PARQUET-2244:
-----------------------------------------
wgtmac commented on PR #1028:
URL: https://github.com/apache/parquet-mr/pull/1028#issuecomment-1432527608
> I did a quick test using Spark
>
> ```
> Seq("A", "A", null).toDF("column").repartition(1).write.mode("overwrite").parquet("t")
> spark.read.parquet("t").where("NOT (column <=> 'A')").show   // this returns null
> spark.read.parquet("t").where("NOT (column = 'A')").show     // this returns empty
> spark.read.parquet("t").where("column IN ('A')").show        // this returns A A
> spark.read.parquet("t").where("column NOT IN ('A')").show    // this returns empty
> ```
>
> If we only have `A` and `null` for `column` and the predicate is `column NOT IN ('A')`, should we return empty instead of null?
IIUC:
- `col IN (A, B)` is equivalent to `col = A OR col = B`
- `col NOT IN (A, B)` is equivalent to `col <> A AND col <> B`, where `col <> A` means `col IS NOT NULL AND col != A`

So my answer to your question above is empty. @huaxingao
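The equivalence above can be sketched with SQL's three-valued logic, modeling NULL as Python's `None` (a minimal illustration, not Spark's actual evaluator; the helper name is hypothetical):

```python
def sql_not_in(value, in_list):
    """Evaluate `value NOT IN (in_list)` under SQL three-valued logic.

    Returns True, False, or None (SQL UNKNOWN).
    """
    if value is None:
        return None  # NULL NOT IN (...) is UNKNOWN
    unknown = False
    for x in in_list:
        if x is None:
            unknown = True  # comparison against a NULL list element is UNKNOWN
        elif value == x:
            return False  # value matches: NOT IN is definitely False
    return None if unknown else True

# Column holds ['A', 'A', NULL]; a WHERE clause keeps only rows where the
# predicate is True, so `column NOT IN ('A')` returns an empty result.
rows = ["A", "A", None]
kept = [v for v in rows if sql_not_in(v, ["A"]) is True]
print(kept)  # -> []
```

The two `'A'` rows evaluate to False and the NULL row evaluates to UNKNOWN; neither is True, so the result is empty.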
It seems that we lose the chance to skip the row group with this fix.
@gszadovszky
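The trade-off can be sketched as follows (a hypothetical illustration, not parquet-mr's actual `DictionaryFilter` API): the dictionary only lists non-null values, so under parquet-mr filter semantics, where a null row does satisfy `notIn`, a row group whose dictionary is fully covered by the `notIn` set may still contain matching null rows and cannot be dropped.

```python
def can_drop_row_group(dictionary, not_in_values, may_contain_nulls):
    """Return True only when no row in the group can match `col NOT IN (...)`.

    `dictionary` holds the distinct non-null values of the column chunk.
    """
    if may_contain_nulls:
        # A null row matches notIn under parquet-mr filter semantics,
        # and nulls are invisible to the dictionary, so skipping is unsafe.
        return False
    # Safe to drop only if every dictionary value is rejected by the predicate.
    return all(v in not_in_values for v in dictionary)

# Column c1 has values ['foo', null]; predicate: c1 NOT IN ('foo', 'bar').
print(can_drop_row_group({"foo"}, {"foo", "bar"}, may_contain_nulls=True))   # False
print(can_drop_row_group({"foo"}, {"foo", "bar"}, may_contain_nulls=False))  # True
```

This is why the fix gives up the skip whenever the column may contain nulls, even though an engine applying SQL semantics (where NULL never satisfies `NOT IN`) would have been free to skip the row group.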
> Dictionary filter may skip row-groups incorrectly when evaluating notIn
> -----------------------------------------------------------------------
>
> Key: PARQUET-2244
> URL: https://issues.apache.org/jira/browse/PARQUET-2244
> Project: Parquet
> Issue Type: Bug
> Components: parquet-mr
> Affects Versions: 1.12.2
> Reporter: Yujiang Zhong
> Assignee: Yujiang Zhong
> Priority: Major
>
> Dictionary filter may skip row groups incorrectly when evaluating `notIn` on
> optional columns with null values. Here is an example:
> Say there is an optional column `c1` with all pages dictionary-encoded, and
> `c1` has exactly two distinct values: ['foo', null]. The predicate is
> `c1 NOT IN ('foo', 'bar')`.
> The dictionary filter may now skip this row group, which should not be
> skipped because there are nulls in the column.
>
> This is a bug similar to #1510.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)