cloud-fan commented on a change in pull request #32764:
URL: https://github.com/apache/spark/pull/32764#discussion_r645316170
##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
##########
@@ -2169,12 +2169,29 @@ class Analyzer(override val catalogManager: CatalogManager)
              unbound, arguments, unsupported)
          }
+          if (bound.inputTypes().length != arguments.length) {
+            throw QueryCompilationErrors.v2FunctionInvalidInputTypeLengthError(
+              bound, arguments)
+          }
+
+          val castedArguments = arguments.zip(bound.inputTypes()).map { case (arg, ty) =>
+            if (arg.dataType != ty) {
+              if (Cast.canCast(arg.dataType, ty)) {
+                Cast(arg, ty)
+              } else {
+                throw QueryCompilationErrors.v2FunctionCastError(bound, arg, ty)
+              }
+            } else {
+              arg
+            }
+          }
+
Review comment:
It seems to me that a more natural approach is: the v2 function tells
Spark all the overloads of its function (e.g. `substring(string)` and
`substring(binary)`), then Spark decides which overload to call and applies
the necessary type coercions.
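
For illustration, here is a minimal sketch of what that Spark-side resolution could look like. The `overloads` parameter is a hypothetical extension point (the current `UnboundFunction` API only exposes a single `bind`); `Cast.canCast`, `Cast`, and `BoundFunction` are the existing Catalyst/DSv2 pieces:

```scala
import org.apache.spark.sql.catalyst.expressions.{Cast, Expression}
import org.apache.spark.sql.connector.catalog.functions.BoundFunction

// Hypothetical resolver: try each advertised overload in order and pick the
// first one whose arity matches and whose input types the arguments either
// already have or can be cast to.
def resolveOverload(
    overloads: Seq[BoundFunction],
    arguments: Seq[Expression]): Option[(BoundFunction, Seq[Expression])] = {
  overloads.collectFirst {
    case bound if bound.inputTypes().length == arguments.length &&
        arguments.zip(bound.inputTypes()).forall {
          case (arg, ty) => arg.dataType == ty || Cast.canCast(arg.dataType, ty)
        } =>
      // Insert casts only where the argument type differs from the expected one.
      val casted = arguments.zip(bound.inputTypes()).map {
        case (arg, ty) if arg.dataType != ty => Cast(arg, ty)
        case (arg, _) => arg
      }
      (bound, casted)
  }
}
```

A resolver along these lines would keep the overload-selection policy (e.g. trying the most specific signature first) on the Spark side, instead of requiring every connector to reimplement it.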