Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20637#discussion_r212960763
  
    --- Diff: 
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ExpressionEvalHelper.scala
 ---
    @@ -223,8 +223,9 @@ trait ExpressionEvalHelper extends 
GeneratorDrivenPropertyChecks with PlanTestBa
               }
             } else {
               val lit = InternalRow(expected, expected)
    +          val dtAsNullable = expression.dataType.asNullable
    --- End diff ---
    
    I didn't go through the entire thread, but my opinion is that the data type's nullability should match the real data.
    
    BTW, will this reduce test coverage? It seems the optimization for non-nullable fields is not tested if we always assume the expression is nullable.
    
    > we use the default value for the type, and create a wrong result instead of throwing a NPE.
    
    This is expected, and I think it's a common symptom of nullability-mismatch problems. Why can't our test expose it?
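    
    To illustrate the symptom described above, here is a minimal, self-contained Scala sketch (not Spark's actual codegen; all names are hypothetical). The idea is that when a schema claims a field is non-nullable, the generated reader can skip the null check entirely, so a null in the real data surfaces as the type's default value (a wrong result) rather than an NPE:
    
    ```scala
    // Hypothetical toy model of a nullability mismatch, not Spark code.
    object NullableMismatchDemo {
      // A toy row: boxed values, where null means "missing".
      final case class Row(values: Array[Any]) {
        def isNullAt(i: Int): Boolean = values(i) == null
        def getInt(i: Int): Int =
          if (values(i) == null) 0 // default value for Int, like an unset primitive slot
          else values(i).asInstanceOf[Int]
      }
    
      // If the type is declared nullable, we emit a null check;
      // if declared non-nullable, the check is "optimized" away.
      def readInt(row: Row, i: Int, nullable: Boolean): Option[Int] =
        if (nullable) {
          if (row.isNullAt(i)) None else Some(row.getInt(i))
        } else {
          Some(row.getInt(i)) // no check: a null silently becomes the default, 0
        }
    
      def main(args: Array[String]): Unit = {
        val row = Row(Array(null)) // real data IS null, but the schema may claim otherwise
        println(readInt(row, 0, nullable = true))  // null is detected: None
        println(readInt(row, 0, nullable = false)) // wrong result, no NPE: Some(0)
      }
    }
    ```
    
    A test that always marks the type nullable would only ever exercise the first branch, which is the coverage concern raised above.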


---
