[
https://issues.apache.org/jira/browse/SPARK-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363757#comment-15363757
]
Hyukjin Kwon commented on SPARK-16371:
--------------------------------------
Here is a shorter reproducer (run in the PySpark shell, where {{sc}} and {{spark}} are predefined):
{code}
from pyspark.sql.functions import struct
from pyspark.sql import Row

path = '/tmp/test'
rdd = sc.parallelize(range(10))
data = rdd.map(lambda r: Row(column0=r))
child_df = spark.createDataFrame(data)
# Wrap the column in a struct so the written Parquet file has a nested schema.
parent_df = child_df.select(struct("column0").alias("column0"))
parent_df.write.mode('overwrite').parquet(path)
parent_df = spark.read.parquet(path)
# Every struct value is non-null, so this should count all 10 rows.
parent_df.where("column0 is not null").count()
{code}
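To make the expected result explicit without a Spark cluster, here is a plain-Python analogue of the same filter (an illustration only, not the Spark execution path): every row carries a non-null struct value, so an IS NOT NULL filter on the struct column should keep all rows.

```python
# Plain-Python analogue of the reproducer above.
# Each row holds a non-null nested record {"column0": i}, mirroring
# the struct column written to Parquet in the Spark snippet.
rows = [{"column0": {"column0": i}} for i in range(10)]

# Equivalent of: where("column0 is not null")
not_null = [r for r in rows if r["column0"] is not None]

print(len(not_null))  # 10 -- the count Spark 1.6 returns; Spark 2.0 returns 0
```

This is what the bug report means by the count dropping from 1M to 0: none of the structs are actually null, so the filter should be a no-op.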
> IS NOT NULL clause gives false for nested not empty column
> ----------------------------------------------------------
>
> Key: SPARK-16371
> URL: https://issues.apache.org/jira/browse/SPARK-16371
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.0.0
> Reporter: Maciej Bryński
> Priority: Critical
>
> I have a DataFrame where column1 is a struct type and there are 1M rows.
> (sample data from https://issues.apache.org/jira/browse/SPARK-16320)
> {code}
> df.where("column1 is not null").count()
> {code}
> gives:
> 1M in Spark 1.6
> *0* in Spark 2.0
> Is there a change in IS NOT NULL behaviour in Spark 2.0?
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)