Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22701#discussion_r224547656
  
    --- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/AnalysisSuite.scala ---
    @@ -351,8 +351,8 @@ class AnalysisSuite extends AnalysisTest with Matchers {
       test("SPARK-24891 Fix HandleNullInputsForUDF rule") {
         val a = testRelation.output(0)
         val func = (x: Int, y: Int) => x + y
    -    val udf1 = ScalaUDF(func, IntegerType, a :: a :: Nil)
    -    val udf2 = ScalaUDF(func, IntegerType, a :: udf1 :: Nil)
    +    val udf1 = ScalaUDF(func, IntegerType, a :: a :: Nil, nullableTypes = false :: false :: Nil)
    --- End diff ---
    
    So clearly we should make both of these changes. This change fixes the test 
here. But is the change above, involving `KnownNotNull`, important for 
correctness? That is, can some user code trigger the infinite loop you mention 
in SPARK-24891? I'm trying to figure out whether the change here is strictly 
required for 2.4, or an important change that could go into 2.4.1.


---
