cloud-fan commented on a change in pull request #27488:
[SPARK-26580][SQL][ML][FOLLOW-UP] Throw exception when use untyped UDF by default
URL: https://github.com/apache/spark/pull/27488#discussion_r376306898
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/functions.scala
##########
@@ -4732,6 +4733,15 @@ object functions {
* @since 2.0.0
*/
def udf(f: AnyRef, dataType: DataType): UserDefinedFunction = {
+    if (!SQLConf.get.getConf(SQLConf.LEGACY_USE_UNTYPED_UDF)) {
+      val errorMsg = "You're using untyped udf, which does not have the input type information. " +
+        "So, Spark may blindly pass null to the Scala closure with primitive-type argument, " +
+        "and the closure will see the default value of the Java type for the null argument, " +
+        "e.g. `udf((x: Int) => x, IntegerType)`, the result is 0 for null input. You could use " +
+        "other typed udf APIs to avoid this problem, or set " +
+        "spark.sql.legacy.useUnTypedUdf.enabled to true to insistently use this."
Review comment:
let's not hardcode config names.
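
   For context, the null-to-default behavior the error message warns about comes from Scala unboxing `null` to the primitive's default value, and can be reproduced without Spark. A minimal sketch (the object and method names here are hypothetical, chosen only for illustration):

   ```scala
   object UntypedUdfNullSketch {
     // Mirrors a closure registered via the untyped API, e.g.
     // `udf((x: Int) => x, IntegerType)` -- the closure takes a primitive Int.
     val identityClosure: Int => Int = x => x

     // Spark has no input type information for the untyped UDF, so a null
     // column value reaches the closure as a null box. Unboxing null to a
     // primitive Int yields 0, the Java default for int.
     def applyWithPossiblyNull(v: java.lang.Integer): Int =
       identityClosure(v.asInstanceOf[Int])

     def main(args: Array[String]): Unit = {
       println(applyWithPossiblyNull(42))   // 42, as expected
       println(applyWithPossiblyNull(null)) // 0, not null -- the silent pitfall
     }
   }
   ```

   The typed `udf` APIs avoid this because Spark knows the input type and can keep the row null instead of invoking the closure with a defaulted argument.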