cloud-fan commented on a change in pull request #28070: [SPARK-31010][SQL] Add Java UDF suggestion in error message of untyped Scala UDF
URL: https://github.com/apache/spark/pull/28070#discussion_r400082499
 
 

 ##########
 File path: sql/core/src/main/scala/org/apache/spark/sql/functions.scala
 ##########
 @@ -4864,9 +4864,10 @@ object functions {
         "information. Spark may blindly pass null to the Scala closure with primitive-type " +
         "argument, and the closure will see the default value of the Java type for the null " +
         "argument, e.g. `udf((x: Int) => x, IntegerType)`, the result is 0 for null input. " +
-        "You could use typed Scala UDF APIs (e.g. `udf((x: Int) => x)`) to avoid this problem, " +
-        s"or set ${SQLConf.LEGACY_ALLOW_UNTYPED_SCALA_UDF.key} to true and use this API with " +
-        s"caution."
+        "You could use typed Scala UDF APIs (e.g. `udf((x: Int) => x)`) or Java UDF (e.g. " +
+        "`udf(new UDF1[String, Int] { override def call(s: String): Int = s.length() }, " +
+        "IntegerType)`) if input types are all non-primitive to avoid this problem, or set " +
+        s"${SQLConf.LEGACY_ALLOW_UNTYPED_SCALA_UDF.key} to true and use this API with caution."
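The pitfall the message warns about can be reproduced without Spark at all: an untyped UDF only sees the closure through its erased `AnyRef => AnyRef` bridge, so a null argument is unboxed to the primitive's default value before the closure body runs. A minimal sketch of that mechanism (plain Scala; the object and method names here are illustrative, not Spark APIs):

```scala
// Sketch of the null-unboxing pitfall described in the error message.
object NullUnboxing {
  // The typed closure. With the typed `udf((x: Int) => x)` API, Spark keeps
  // this signature and can guard against null inputs before calling it.
  val f: Int => Int = x => x

  // What an untyped UDF effectively invokes: the erased apply(Object): Object
  // bridge, which unboxes its argument via BoxesRunTime.unboxToInt. For a
  // null argument, unboxToInt yields 0, the Java default for int.
  def callErased(arg: AnyRef): AnyRef = f.asInstanceOf[AnyRef => AnyRef](arg)

  def main(args: Array[String]): Unit =
    println(callErased(null)) // prints 0, not null
}
```

This is why the message singles out primitive-typed arguments: for non-primitive inputs (e.g. `String`), the closure receives the null unchanged instead of a silently substituted default.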
 
 Review comment:
   now there are 3 workarounds, can we list them nicely?
   ```
   "To get rid of this error, you could:\n" + 
     "1. use typed Scala UDF, e.g. ...\n" +
     "2. use Java UDF, e.g. ..., if input types... \n" +
     "3. set .... "
   ```
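One way the reviewer's numbered-list sketch could be filled in (illustrative only; the exact wording is for the PR to settle, and the config-key string is hardcoded here so the snippet is self-contained, whereas `functions.scala` would read it from `SQLConf.LEGACY_ALLOW_UNTYPED_SCALA_UDF.key`):

```scala
// Illustrative rendering of the three workarounds as a numbered list.
object UntypedUdfMessage {
  // Assumed key name, standing in for SQLConf.LEGACY_ALLOW_UNTYPED_SCALA_UDF.key.
  val legacyConfKey = "spark.sql.legacy.allowUntypedScalaUDF"

  val message: String =
    "To get rid of this error, you could:\n" +
      "1. use typed Scala UDF APIs, e.g. `udf((x: Int) => x)`\n" +
      "2. use Java UDF APIs, e.g. `udf(new UDF1[String, Integer] { " +
      "override def call(s: String): Integer = s.length() }, IntegerType)`, " +
      "if input types are all non-primitive\n" +
      s"3. set $legacyConfKey to true and use this API with caution"

  def main(args: Array[String]): Unit = println(message)
}
```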

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
