Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/13368#discussion_r64981737
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -290,11 +290,6 @@ object TypeCoercion {
// Skip nodes who's children have not been resolved yet.
case e if !e.childrenResolved => e
-    case a @ BinaryArithmetic(left @ StringType(), right @ DecimalType.Expression(_, _)) =>
-      a.makeCopy(Array(Cast(left, DecimalType.SYSTEM_DEFAULT), right))
-    case a @ BinaryArithmetic(left @ DecimalType.Expression(_, _), right @ StringType()) =>
-      a.makeCopy(Array(left, Cast(right, DecimalType.SYSTEM_DEFAULT)))
-
--- End diff --
Hi, @dongjoon-hyun
Thanks! You are right, we are multiplying two decimals with
SYSTEM_DEFAULT. I looked at the Hive and Impala code, and the rule for
multiplying two decimals d1 and d2 is pretty standard: the resulting decimal
gets a precision of p1 + p2 + 1 and a scale of s1 + s2.
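That precision/scale rule can be sketched as a small Scala helper (a minimal illustration only, not Spark's actual implementation; `DecimalTypeInfo` here is a hypothetical stand-in for Catalyst's `DecimalType`):

```scala
// Hypothetical stand-in for a decimal type with precision p and scale s.
case class DecimalTypeInfo(precision: Int, scale: Int)

// Result type of DECIMAL(p1, s1) * DECIMAL(p2, s2) under the standard
// rule described above: precision p1 + p2 + 1, scale s1 + s2.
def multiplyResultType(d1: DecimalTypeInfo, d2: DecimalTypeInfo): DecimalTypeInfo =
  DecimalTypeInfo(d1.precision + d2.precision + 1, d1.scale + d2.scale)

// Example: DECIMAL(10, 2) * DECIMAL(5, 3) -> DECIMAL(16, 5)
```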
I also looked at how Hive promotes a string in an expression involving a
decimal: it uses double, and so does MySQL.
Please let me know your thoughts.
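The string-to-double promotion that Hive and MySQL apply can be illustrated with a small, self-contained Scala sketch (an assumption-laden toy, not the Catalyst rule: the string operand is parsed as a double and the arithmetic is then done entirely in double precision):

```scala
// Toy model of string-vs-decimal arithmetic under double promotion:
// the string operand is parsed as a double, the decimal operand is
// widened to double, and the product is a double.
def promoteAndMultiply(s: String, d: BigDecimal): Double =
  s.toDouble * d.toDouble
```

For example, `promoteAndMultiply("1.5", BigDecimal("2.0"))` evaluates the product in double precision rather than with SYSTEM_DEFAULT decimal precision.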