ulysses-you commented on a change in pull request #29749:
URL: https://github.com/apache/spark/pull/29749#discussion_r488513059
##########
File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
##########
@@ -69,6 +70,23 @@ private[hive] case class HiveSimpleUDF(
     udfType != null && udfType.deterministic() && !udfType.stateful()
   }
+  override def inputTypes: Seq[AbstractDataType] = {
+    val inTypes = children.map(_.dataType)
+    if (!inTypes.exists(_.existsRecursively(_.isInstanceOf[DecimalType]))) {
Review comment:
It's a compatibility issue. Normally the data types are converted by the Hive
ObjectInspector at runtime, but Hive does not support a decimal input when the
UDF method requires a double. Unfortunately, the default type of the literal
`1.1` differs between Spark and Hive: Spark treats it as decimal while Hive
treats it as double, which is what causes this issue.
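
To make the mismatch concrete, here is a minimal sketch (not from this PR; the `PlusOne` class, the `plus_one` function name, and the session setup are illustrative assumptions):

```scala
import org.apache.hadoop.hive.ql.exec.UDF
import org.apache.spark.sql.SparkSession

// Hypothetical Hive UDF whose evaluate method requires a double.
// Hive resolves the method by its Java signature, so it expects a
// double-typed argument, not a decimal.
class PlusOne extends UDF {
  def evaluate(x: Double): Double = x + 1.0
}

object DecimalMismatchDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-udf-decimal-mismatch")
      .master("local[1]")
      .enableHiveSupport()
      .getOrCreate()

    spark.sql("CREATE TEMPORARY FUNCTION plus_one AS 'PlusOne'")

    // Spark parses the literal 1.1 as DecimalType(2, 1), while Hive parses
    // the same literal as double. Without inputTypes on HiveSimpleUDF, the
    // decimal value reaches Hive's ObjectInspector conversion unchanged and
    // cannot be fed into a method that requires double; once inputTypes is
    // declared, Spark's analyzer can cast the decimal to double first.
    spark.sql("SELECT plus_one(1.1)").show()

    spark.stop()
  }
}
```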