Github user henryr commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20687#discussion_r173268999
  
    --- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/complexTypesSuite.scala ---
    @@ -331,4 +330,31 @@ class ComplexTypesSuite extends PlanTest with ExpressionEvalHelper {
           .analyze
         comparePlans(Optimizer execute rel, expected)
       }
    +
    +  test("SPARK-23500: Simplify complex ops that aren't at the plan root") {
    +    val structRel = relation
    +      .select(GetStructField(CreateNamedStruct(Seq("att1", 'nullable_id)), 0, None) as "foo")
    +      .groupBy($"foo")("1").analyze
    +    val structExpected = relation
    +      .select('nullable_id as "foo")
    +      .groupBy($"foo")("1").analyze
    +    comparePlans(Optimizer execute structRel, structExpected)
    +
    +    // If nullable attributes aren't used in the 'expected' plans, the array and map test
    +    // cases fail because array and map indexing can return null so the output attribute
    --- End diff --
    
    @cloud-fan I looked at this again briefly this morning. The issue is that the `AttributeReference` in the top-level `Aggregate`'s `groupingExpressions` is the one with the inconsistent nullability.
    
    That `AttributeReference` was created with `nullable=true` before optimization, so its nullability is effectively fixed from that point on unless the optimizer dereferences the attribute reference and realises that its target is no longer nullable.
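
    To make that concrete, here's a rough sketch of the mismatch (hand-written for illustration, not code from the patch; `id` and `foo` are made-up names, and it relies on `GetArrayItem` always reporting nullable because the index can be out of bounds):
    
        import org.apache.spark.sql.catalyst.expressions._
        import org.apache.spark.sql.types._
    
        // A non-nullable input attribute...
        val id = AttributeReference("id", LongType, nullable = false)()
    
        // ...wrapped in an array-index extract. GetArrayItem reports
        // nullable = true regardless of its child, since the index may
        // be out of bounds.
        val item = GetArrayItem(CreateArray(Seq(id)), Literal(0))
        assert(item.nullable)
    
        // Aliasing snapshots that nullability into the output attribute,
        // which is what the parent Aggregate's groupingExpressions hold.
        val foo = Alias(item, "foo")()
        val fooAttr = foo.toAttribute   // nullable = true, fixed at creation
    
        // Once the optimizer simplifies the projection to `id as "foo"`, the
        // underlying expression is non-nullable, but fooAttr still reports
        // nullable = true, so comparePlans fails unless the expected plan
        // also uses a nullable attribute.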


