Github user maropu commented on the issue:
https://github.com/apache/spark/pull/18576
yea, I also think `nullability` has good effects in many places as you
suggested, so we'd better propagate this info as accurately as possible. But
the plan nodes in the current implementation don't respect the `nullability`
that constraints imply, for example;
```
scala> val df = Seq(Some(1), None).toDF("a")
df: org.apache.spark.sql.DataFrame = [a: int]
scala> df.printSchema
root
|-- a: integer (nullable = true)
scala> df.where("a != 1").explain
== Physical Plan ==
*Project [value#66 AS a#68]
+- *Filter (isnotnull(value#66) && NOT (value#66 = 1))
+- LocalTableScan [value#66]
scala> df.where("a != 1").queryExecution.sparkPlan.output(0).nullable
res13: Boolean = true
```
This example loses the `nullable = false` that the `Filter` proves: the
condition includes `isnotnull(value#66)`, yet the `Project` output still
reports `nullable = true`. Thanks for the valuable advice! I'll look for
other approaches using traits or something!
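For reference, a minimal sketch of the propagation I have in mind. This is a toy model, not Catalyst's real classes (`Attr`, `Scan`, `Filter`, and `Project` here are made up for illustration): a filter whose condition proves `isnotnull` tightens the nullability of its output attributes, and the nodes above it inherit the tightened attributes.
```
// Toy model: a Filter that records which attributes its condition
// proves non-null, and exposes them as nullable = false upstream.
case class Attr(name: String, nullable: Boolean)

sealed trait Plan { def output: Seq[Attr] }

case class Scan(output: Seq[Attr]) extends Plan

case class Filter(notNullNames: Set[String], child: Plan) extends Plan {
  // Tighten nullability for attributes the condition proves non-null.
  def output: Seq[Attr] = child.output.map { a =>
    if (notNullNames.contains(a.name)) a.copy(nullable = false) else a
  }
}

case class Project(names: Seq[String], child: Plan) extends Plan {
  // Project simply forwards the child's (possibly tightened) attributes.
  def output: Seq[Attr] = child.output.filter(a => names.contains(a.name))
}

object Demo extends App {
  val scan = Scan(Seq(Attr("value", nullable = true)))
  // `value != 1` implies `isnotnull(value)`, so the Filter records it.
  val plan = Project(Seq("value"), Filter(Set("value"), scan))
  println(plan.output.head.nullable) // false, unlike the real plan above
}
```
In the real Catalyst plan from the transcript above, the analogous step would be the `Project` picking up a non-nullable `value#66` from the `Filter` instead of the scan's original nullable attribute.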