0lai0 opened a new pull request, #3804:
URL: https://github.com/apache/datafusion-comet/pull/3804

   ## Which issue does this PR close?
   
   <!--
   We generally require a GitHub issue to be filed for all bug fixes and 
enhancements and this helps us generate change logs for our releases. You can 
link an issue to this PR using the GitHub syntax. For example `Closes #123` 
indicates that this PR will close issue #123.
   -->
   
   Closes #3125 
   
   ## Rationale for this change
   
   <!--
    Why are you proposing this change? If this is already explained clearly in 
the issue then this section is not needed.
    Explaining clearly why changes are proposed helps reviewers understand your 
changes and offer better suggestions for fixes.
   -->
   Comet previously did not support the Spark `hours` expression (a V2 
partition transform). 
   Queries using the `hours` function for partitioning would fall back to 
Spark's JVM execution instead of running natively on DataFusion. By adding 
native support for this expression, we allow more Spark workloads (especially 
those partitioned by hourly intervals) to benefit from Comet's native 
acceleration.
   
   ## What changes are included in this PR?
   
   <!--
   There is no need to duplicate the description in the issue here but it is 
sometimes worth providing a summary of the individual changes in this PR.
   -->
   
   This change adds end-to-end native support for the `hours` partition 
transform. Since `Hours` is a `PartitionTransformExpression` (and not a 
`TimeZoneAwareExpression`), the timezone is injected from the session 
configuration during the planning phase. 
   The native implementation uses Arrow's `unary` and `try_unary` kernels for 
efficient vectorized computation, and correctly handles pre-epoch (negative) 
timestamps via Euclidean floor division (`div_euclid`). `TimestampType` and 
`TimestampNTZType` are handled as separate cases: the former applies the 
session timezone offset, while the latter uses direct wall-clock computation.
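
   The pre-epoch handling can be illustrated with a small standalone sketch. This is not the actual `SparkHoursTransform` kernel; the function name is hypothetical, and it assumes the transform produces "hours since the Unix epoch" (as in Iceberg's hour transform) from epoch-microsecond timestamps:

   ```rust
   // Hypothetical sketch of the epoch-hour arithmetic described above --
   // not the real Comet kernel. Assumes epoch-microsecond inputs and an
   // "hours since the Unix epoch" output, as in Iceberg's hour transform.
   const MICROS_PER_HOUR: i64 = 3_600_000_000;

   /// Hours since 1970-01-01 00:00:00 for a timestamp in epoch microseconds.
   /// `offset_micros` models the session-timezone offset applied for
   /// `TimestampType`; pass 0 for `TimestampNTZType` (wall-clock input).
   fn hours_since_epoch(ts_micros: i64, offset_micros: i64) -> i64 {
       // Plain `/` truncates toward zero, which would put -1us into hour 0.
       // `div_euclid` floors toward negative infinity, so every pre-epoch
       // microsecond lands in the correct (negative) hour bucket.
       (ts_micros + offset_micros).div_euclid(MICROS_PER_HOUR)
   }

   fn main() {
       assert_eq!(hours_since_epoch(0, 0), 0);               // the epoch itself
       assert_eq!(hours_since_epoch(3_600_000_000, 0), 1);   // 01:00:00
       assert_eq!(hours_since_epoch(-1, 0), -1);             // 1us before epoch
       assert_eq!(hours_since_epoch(-3_600_000_001, 0), -2); // just past -1h
       // UTC 20:00 viewed at UTC+05:00 is 01:00 the next day -> hour 25
       assert_eq!(
           hours_since_epoch(20 * MICROS_PER_HOUR, 5 * MICROS_PER_HOUR),
           25
       );
   }
   ```

   With truncating division, `-1` and `0` would collapse into the same partition; `div_euclid` keeps each hour bucket exactly one hour wide on both sides of the epoch.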
   
   - `expr.proto`: Added `HoursTransform` message definition to pass the child 
expression and session timezone.
   - `datetime.scala`: Added `CometHours` serde handler to intercept the Spark 
`Hours` expression and read the timezone from `SQLConf`.
   - `QueryPlanSerde.scala`: Registered the `CometHours` handler in the 
temporal expressions map.
   - `hours.rs`: Added `SparkHoursTransform` UDF using efficient Arrow kernels.
   - `temporal.rs` & `expression_registry.rs`: Registered the native Builder 
for the new expression.
   
   ## How are these changes tested?
   
   Added test coverage in both Rust and Scala:
   1. Rust unit tests: added unit tests in `hours.rs` covering:
      - Positive and negative (pre-epoch) timestamp handling
      - The epoch boundary (zero)
      - Timezone offset handling
      - Null propagation
      - Proper isolation of `TimestampNTZType` (ensuring it ignores timezone 
offsets)
      ```bash
      cargo test -p datafusion-comet-spark-expr -- datetime_funcs::hours
      ```
   2. Scala integration tests: verified end-to-end execution in 
`CometTemporalExpressionSuite`.
      ```bash
      ./mvnw test -pl spark -Dsuites='org.apache.comet.CometTemporalExpressionSuite'
      ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

