andygrove opened a new issue, #3116:
URL: https://github.com/apache/datafusion-comet/issues/3116

   ## What is the problem the feature request solves?
   
   > **Note:** This issue was generated with AI assistance. The specification 
details have been extracted from Spark documentation and may need verification.
   
   Comet does not currently support the Spark `timestamp_diff` function (the `TimestampDiff` expression), so queries that use it fall back to Spark's JVM execution instead of running natively on DataFusion.
   
   TimestampDiff calculates the difference between two timestamp values in a 
specified unit of time. It returns a Long value representing the number of 
complete time units between the start and end timestamps, taking timezone 
information into account for proper temporal calculations.
   
   Supporting this expression would allow more Spark workloads to benefit from 
Comet's native acceleration.
   
   ## Describe the potential solution
   
   ### Spark Specification
   
   **Syntax:**
   ```sql
   TIMESTAMPDIFF(unit, start_timestamp, end_timestamp)
   ```
   
   ```scala
   // DataFrame API usage via a SQL expression string (a dedicated helper in
   // org.apache.spark.sql.functions may not be available, depending on the Spark version)
   df.select(expr("timestampdiff(SECOND, start_time, end_time)"))
   ```
   
   **Arguments:**
   | Argument | Type | Description |
   |----------|------|-------------|
   | unit | String | Time unit for the difference calculation (e.g., "SECOND", 
"MINUTE", "HOUR", "DAY") |
   | startTimestamp | Expression | Starting timestamp value |
   | endTimestamp | Expression | Ending timestamp value |
   | timeZoneId | Option[String] | Optional timezone identifier for 
timezone-aware calculations |
   
   **Return Type:** Long - representing the number of complete time units 
between the timestamps.
   
   **Supported Data Types:**
   - Input types: TimestampType for both start and end timestamp arguments
   - The unit parameter must be a valid time unit string
   - Supports timezone-aware timestamp calculations
   
   **Edge Cases:**
   - Null handling: Returns null if either start or end timestamp is null 
(nullIntolerant = true)
   - Negative results: When the start timestamp is later than the end timestamp, the result is negative (see the sketch after this list)
   - Timezone handling: Uses the expression's timezone context or falls back to 
session timezone
   - Precision: Calculations are performed at microsecond precision internally
   - Unit validation: Invalid time units will cause runtime errors during 
evaluation
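
   To make the null-handling and negative-result cases above concrete, the sketch below exercises Spark's existing `timestampdiff` SQL function from a `spark-shell`. The column names and sample values are illustrative only.

   ```scala
   // Quick sanity check of the null and negative-result behavior using Spark's
   // built-in timestampdiff (spark-shell; `spark` and its implicits are in scope).
   import org.apache.spark.sql.functions.expr
   import spark.implicits._

   // Illustrative sample data: column names and values are made up for this sketch.
   val rows: Seq[(Option[String], Option[String])] = Seq(
     (Some("2023-01-01 10:05:30"), Some("2023-01-01 10:00:00")), // start after end -> -330
     (None, Some("2023-01-01 10:00:00"))                         // null start -> null
   )

   rows.toDF("start_ts", "end_ts")
     .select(expr(
       "timestampdiff(SECOND, CAST(start_ts AS TIMESTAMP), CAST(end_ts AS TIMESTAMP))"
     ).as("diff_seconds"))
     .show()
   ```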
   
   **Examples:**
   ```sql
   -- Note: in Spark SQL the unit is an unquoted keyword, not a string literal.

   -- Calculate the difference in seconds
   SELECT TIMESTAMPDIFF(SECOND, TIMESTAMP'2023-01-01 10:00:00', TIMESTAMP'2023-01-01 10:05:30');
   -- Returns: 330

   -- Calculate the difference in days
   SELECT TIMESTAMPDIFF(DAY, TIMESTAMP'2023-01-01', TIMESTAMP'2023-01-15');
   -- Returns: 14
   ```
   
   ```scala
   // DataFrame API usage via a SQL expression string
   import org.apache.spark.sql.functions._

   df.select(expr("timestampdiff(HOUR, start_time, end_time)"))

   // Casting string columns to timestamps first (the cast uses the session time zone)
   df.select(expr(
     "timestampdiff(MINUTE, CAST(start_timestamp AS TIMESTAMP), CAST(end_timestamp AS TIMESTAMP))"))
   ```
   
   ### Implementation Approach
   
   See the [Comet guide on adding new 
expressions](https://datafusion.apache.org/comet/contributor-guide/adding_a_new_expression.html)
 for detailed instructions.
   
   1. **Scala Serde**: Add an expression handler in `spark/src/main/scala/org/apache/comet/serde/` (see the sketch after this list)
   2. **Register**: Add to appropriate map in `QueryPlanSerde.scala`
   3. **Protobuf**: Add message type in `native/proto/src/proto/expr.proto` if 
needed
   4. **Rust**: Implement in `native/spark-expr/src/` (check if DataFusion has 
built-in support first)
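
   As a starting point for steps 1 and 2, here is a hedged sketch of what a serde handler might look like. The trait and helper names used below (`CometExpressionSerde`, `exprToProtoInternal`, `scalarFunctionExprToProto`) and the native function name `timestamp_diff` are assumptions based on the pattern in the contributor guide; check the existing handlers under `spark/src/main/scala/org/apache/comet/serde/` for the current API before reusing any of it.

   ```scala
   // Hypothetical sketch only -- not the actual Comet implementation. The trait,
   // helper functions and protobuf types referenced here are assumed names and
   // must be checked against the current serde code base.
   import org.apache.spark.sql.catalyst.expressions.{Attribute, Literal, TimestampDiff}

   object CometTimestampDiff extends CometExpressionSerde[TimestampDiff] {

     override def convert(
         expr: TimestampDiff,
         inputs: Seq[Attribute],
         binding: Boolean): Option[ExprOuterClass.Expr] = {
       // Serialize the two timestamp children.
       val start = exprToProtoInternal(expr.startTimestamp, inputs, binding)
       val end = exprToProtoInternal(expr.endTimestamp, inputs, binding)
       // expr.unit is a plain String on the Catalyst expression; carry it as a
       // literal argument (or as a dedicated protobuf field, if one is added in step 3).
       val unit = exprToProtoInternal(Literal(expr.unit), inputs, binding)
       // Map to a native scalar function; "timestamp_diff" is an illustrative name.
       scalarFunctionExprToProto("timestamp_diff", unit, start, end)
     }
   }
   ```

   Registration (step 2) would then likely be a single entry mapping `classOf[TimestampDiff]` to this handler in the expression map in `QueryPlanSerde.scala`.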
   
   
   ## Additional context
   
   **Difficulty:** Medium
   **Spark Expression Class:** 
`org.apache.spark.sql.catalyst.expressions.TimestampDiff`
   
   **Related:**
   - DateDiff - for date-only difference calculations
   - TimestampAdd - for adding time intervals to timestamps  
   - Extract - for extracting specific time components from timestamps
   - TimeZoneAwareExpression - base interface for timezone-aware temporal 
operations
   
   ---
   *This issue was auto-generated from Spark reference documentation.*
   

