Github user gatorsmile commented on the issue:

    https://github.com/apache/spark/pull/20023
  
    Thanks for your careful summary! 
    
We do not have a SQLCA, so it is hard for us to send a warning message back like [what DB2 does](https://www.ibm.com/support/knowledgecenter/SSEPEK_10.0.0/sqlref/src/tpc/db2z_decimalmultiplication.html). Silently losing precision looks scary to me. Oracle sounds like it follows the rule: ["If a value exceeds the precision, then Oracle returns an error. If a value exceeds the scale, then Oracle rounds it."](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/sqlrf/Data-Types.html#GUID-9401BC04-81C4-4CD5-99E7-C5E25C83F608)
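    
    To make the Oracle policy concrete, here is a minimal sketch (my own illustrative helper, not anything in our codebase) on top of `java.math.BigDecimal`: round when the scale overflows, error when the precision does.
    
    ```scala
    import java.math.{BigDecimal => JBigDecimal, RoundingMode}
    
    // Illustrative only: fit a value into DECIMAL(precision, scale) the Oracle way.
    def fitDecimal(v: JBigDecimal, precision: Int, scale: Int): JBigDecimal = {
      // Scale overflow: round to the target scale (HALF_UP is just one choice).
      val rounded = v.setScale(scale, RoundingMode.HALF_UP)
      // Precision overflow: more integer digits than the type allows -> error.
      if (rounded.precision() - rounded.scale() > precision - scale) {
        throw new ArithmeticException(s"$v does not fit DECIMAL($precision, $scale)")
      }
      rounded
    }
    
    // fitDecimal(new JBigDecimal("123.456"), 5, 2)  => 123.46 (scale rounded)
    // fitDecimal(new JBigDecimal("12345.6"), 5, 2)  => ArithmeticException
    ```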
    
    The ANSI SQL 2011 standard does not document many of these details. For example, [the result type of DB2's division](https://www.ibm.com/support/knowledgecenter/en/SSEPEK_10.0.0/sqlref/src/tpc/db2z_decimaldivision.html) differs from both our existing rule and the rule you changed to. The DB2 rule you mentioned above only covers multiplication.
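    
    To show the divergence, a quick sketch. The first formula is what our `DecimalPrecision` rule currently does for division; the second is my reading of the linked DB2 page (the DEC31 case), so treat that one as an assumption and double-check it against the doc:
    
    ```scala
    // Result type of DECIMAL(p1, s1) / DECIMAL(p2, s2).
    
    // Spark's existing rule (see DecimalPrecision), capped by DecimalType.bounded at 38.
    def sparkDivide(p1: Int, s1: Int, p2: Int, s2: Int): (Int, Int) = {
      val scale = math.max(6, s1 + p2 + 1)
      val precision = p1 - s1 + s2 + scale
      (math.min(precision, 38), math.min(scale, 38))
    }
    
    // DB2 z/OS rule as I read db2z_decimaldivision (assumption; DEC31 in effect):
    // precision is 31 and scale is 31 - p1 + s1 - s2.
    def db2Divide(p1: Int, s1: Int, p2: Int, s2: Int): (Int, Int) =
      (31, 31 - p1 + s1 - s2)
    
    // DECIMAL(10,2) / DECIMAL(5,0):
    //   sparkDivide(10, 2, 5, 0) == (16, 8)
    //   db2Divide(10, 2, 5, 0)   == (31, 23)
    ```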
    
    I am not sure whether we can finalize our default type coercion rule `DecimalPrecision` now. However, for Hive compliance, we can add a new rule after we introduce the new conf `spark.sql.typeCoercion.mode`; see the PR https://github.com/apache/spark/pull/18853 for details. The new behavior will take effect if and only if `spark.sql.typeCoercion.mode` is set to `hive`.
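    
    Under that design, opting in would look roughly like this (assuming the conf lands with the name and values proposed in that PR; neither is final):
    
    ```scala
    // In spark-shell; `spark` is the SparkSession.
    spark.conf.set("spark.sql.typeCoercion.mode", "hive")  // Hive-compatible coercion
    // Any other value (the default) keeps the current DecimalPrecision behavior.
    ```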
    
    Could you first improve the test cases added in https://github.com/apache/spark/pull/20008?
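    
    For example (just a sketch of the kind of coverage I have in mind, not the exact queries), the suite could pin down operations whose exact result does not fit the result type:
    
    ```scala
    // In spark-shell; `spark` is the SparkSession.
    // Division whose exact result is non-terminating: the result type here is
    // DECIMAL(38,38), so the value must be rounded.
    spark.sql("SELECT CAST(1 AS DECIMAL(38,18)) / CAST(3 AS DECIMAL(38,18))").show()
    // Multiplication needing 3 integer digits while the result type DECIMAL(38,36)
    // keeps only 2, i.e. a genuine precision overflow.
    spark.sql("SELECT CAST(123.45 AS DECIMAL(38,18)) * CAST(6.789 AS DECIMAL(38,18))").show()
    ```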

