[
https://issues.apache.org/jira/browse/SPARK-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363768#comment-15363768
]
Hyukjin Kwon commented on SPARK-16371:
--------------------------------------
Sorry for the noise; here is the Scala version:
{code}
case class Parent(a: Child)
case class Child(a: Long)
spark.range(10).map(num => Parent(Child(num))).write.mode("overwrite").parquet("/tmp/test")
spark.read.parquet("/tmp/test").where("a is not null").count() // 0
{code}
It seems the Parquet filter is not applied correctly when the inner column name
and the outer column name are the same.
I will look into this more deeply.
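For comparison, a sketch of the same repro but with a distinct inner field name (run in the same spark-shell session; the path {{/tmp/test2}} is just an example). If the diagnosis above is right, the filter should count all rows here:
{code}
// Inner field is named "b", so it no longer shadows the outer column "a".
case class Inner(b: Long)
case class Outer(a: Inner)
spark.range(10).map(num => Outer(Inner(num))).write.mode("overwrite").parquet("/tmp/test2")
// Expected to return 10 if only the name collision triggers the bug.
spark.read.parquet("/tmp/test2").where("a is not null").count()
{code}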
> IS NOT NULL clause gives false for nested not empty column
> ----------------------------------------------------------
>
> Key: SPARK-16371
> URL: https://issues.apache.org/jira/browse/SPARK-16371
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.0.0
> Reporter: Maciej Bryński
> Priority: Critical
>
> I have a df where column1 is a struct type and there are 1M rows.
> (sample data from https://issues.apache.org/jira/browse/SPARK-16320)
> {code}
> df.where("column1 is not null").count()
> {code}
> gives:
> 1M in Spark 1.6
> *0* in Spark 2.0
> Is there a change in IS NOT NULL behaviour in Spark 2.0?
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)