andygrove opened a new pull request, #19628:
URL: https://github.com/apache/datafusion/pull/19628

   ## Which issue does this PR close?
   
   - Part of https://github.com/apache/datafusion/issues/15914
   
   ## Rationale for this change
   
   This PR adds Spark-compatible decimal division functions to `datafusion-spark`. These functions implement Spark's decimal division semantics, which differ from standard SQL (see the sketch after this list):
   
   - Legacy mode behavior: division by zero returns 0 (not an error)
   - Spark rounding: results are rounded half away from zero
   - BigInt fallback: when the required precision exceeds Decimal128 limits (more than 38 digits), the implementation falls back to BigInt arithmetic
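   
   To make the rounding and divide-by-zero behavior concrete, here is a minimal, self-contained Rust sketch of these semantics applied to unscaled `i128` decimal values. This is an illustration only, not the implementation in this PR:
   
   ```rust
   /// Divide `a` by `b`, rounding half away from zero.
   /// Division by zero returns 0 (Spark legacy-mode behavior) instead of erroring.
   fn div_half_away_from_zero(a: i128, b: i128) -> i128 {
       if b == 0 {
           return 0; // legacy mode: divide-by-zero yields 0, not an error
       }
       let q = a / b; // Rust integer division truncates toward zero
       let r = a - q * b; // remainder, carries the sign of `a`
       // If twice the remainder's magnitude reaches the divisor's magnitude,
       // the true quotient's fractional part is >= 0.5, so round away from zero.
       if r.abs() * 2 >= b.abs() {
           if (a < 0) != (b < 0) { q - 1 } else { q + 1 }
       } else {
           q
       }
   }
   
   fn main() {
       assert_eq!(div_half_away_from_zero(5, 2), 3); // 2.5 rounds to 3
       assert_eq!(div_half_away_from_zero(-5, 2), -3); // -2.5 rounds to -3, away from zero
       assert_eq!(div_half_away_from_zero(7, 0), 0); // legacy divide-by-zero
   }
   ```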
   
   
   ## What changes are included in this PR?
   
   - New file `datafusion/spark/src/function/math/decimal_div.rs` containing:
     - `spark_decimal_div()` - regular decimal division with Spark semantics
     - `spark_decimal_integral_div()` - integer division that truncates toward zero
     - `SparkDecimalDiv` and `SparkDecimalIntegralDiv` UDF structs for use by query planners
   - Added the `num` crate as a dependency of `datafusion-spark`
   - Six unit tests covering basic division, rounding, division by zero, integral division, negative values, and NULL handling
   
   Note: these are internal functions intended for use by query planners when Spark-compatible decimal division semantics are needed. The precision and scale of the result type are determined at query planning time from the input types, similar to how Comet uses these functions. A hedged usage sketch follows.
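   
   For illustration, here is a sketch of how a planner might wrap one of these structs. The module path and the `u8`/`i8` parameter types are assumptions (mirroring Arrow's `Decimal128` precision/scale conventions), not confirmed signatures from this PR:
   
   ```rust
   use datafusion_expr::{Expr, ScalarUDF};
   // Module path assumed from the file location listed above.
   use datafusion_spark::function::math::decimal_div::SparkDecimalDiv;
   
   /// Hypothetical planner helper: wrap two decimal expressions in
   /// Spark-compatible division. The result precision and scale are assumed
   /// to have been derived from the input types at planning time.
   fn spark_div_expr(lhs: Expr, rhs: Expr, result_precision: u8, result_scale: i8) -> Expr {
       let udf = ScalarUDF::new_from_impl(SparkDecimalDiv::new(result_precision, result_scale));
       udf.call(vec![lhs, rhs])
   }
   ```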
   
   ## Are these changes tested?
   
   Yes. Six unit tests in `decimal_div.rs` cover basic division, rounding, division by zero, integral division, negative values, and NULL handling.
   
   ## Are there any user-facing changes?
   
   New public APIs in `datafusion-spark`:
   - `SparkDecimalDiv::new(result_precision, result_scale)`
   - `SparkDecimalIntegralDiv::new(result_precision, result_scale)`
   - `spark_decimal_div()` and `spark_decimal_integral_div()` functions (the integral variant's truncation behavior is sketched below)
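   
   As an illustration of the integral variant's truncate-toward-zero semantics (again on unscaled `i128` values, and again not this PR's code):
   
   ```rust
   /// Integral division sketch: truncates toward zero, with the same
   /// legacy divide-by-zero behavior (returns 0 rather than erroring).
   fn integral_div(a: i128, b: i128) -> i128 {
       if b == 0 {
           0 // legacy mode: divide-by-zero yields 0
       } else {
           a / b // Rust's integer `/` already truncates toward zero
       }
   }
   
   fn main() {
       assert_eq!(integral_div(7, 2), 3); // 3.5 truncates to 3
       assert_eq!(integral_div(-7, 2), -3); // -3.5 truncates to -3, toward zero
       assert_eq!(integral_div(7, 0), 0); // legacy divide-by-zero
   }
   ```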
   