robreeves commented on code in PR #41599:
URL: https://github.com/apache/spark/pull/41599#discussion_r1235593276


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala:
##########
@@ -207,12 +207,12 @@ private[sql] object QueryExecutionErrors extends QueryErrorsBase {
       messageParameters = Map("typeName" -> (dataType + failure)))
   }
 
-  def failedExecuteUserDefinedFunctionError(funcCls: String, inputTypes: String,
+  def failedExecuteUserDefinedFunctionError(functionName: String, inputTypes: String,
       outputType: String, e: Throwable): Throwable = {
     new SparkException(
       errorClass = "FAILED_EXECUTE_UDF",
       messageParameters = Map(
-        "functionName" -> funcCls,
+        "functionName" -> functionName,

Review Comment:
   I added it to be consistent with other usages, but it does look a little weird to me in this context, particularly for the Hive UDF changes, where it adds backticks inside the class name.
   
   `QueryErrorsBase.toSQLId` passes its input to `UnresolvedAttribute.parseAttributeName`. The `functionName` here is not an attribute name, so it doesn't seem like `toSQLId` should be used here. The main purpose of `toSQLId`, as I see it, is to strip the `__auto_generated_subquery_name` prefix from an attribute name and to add backticks around each period-delimited part. Since this input is a UDF name and its class location, that doesn't feel applicable to me.
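   To illustrate the quoting behavior I mean, here is a simplified sketch (an assumption, not Spark's actual `toSQLId` implementation; the helper name and class name are made up) of backtick-quoting each period-delimited segment, which looks odd when applied to a fully qualified class name:

   ```scala
   // Simplified approximation of toSQLId-style quoting: wrap each
   // period-delimited segment of an identifier in backticks.
   // (The real helper also handles the __auto_generated_subquery_name prefix.)
   def toSQLIdSketch(name: String): String =
     name.split('.').map(part => s"`$part`").mkString(".")

   // Reasonable for an attribute name:
   toSQLIdSketch("t.col")                   // `t`.`col`

   // But for a Hive UDF class name it quotes every package segment:
   toSQLIdSketch("org.example.MyHiveUDF")   // `org`.`example`.`MyHiveUDF`
   ```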



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
