andygrove opened a new issue, #3137:
URL: https://github.com/apache/datafusion-comet/issues/3137

   ## What is the problem the feature request solves?
   
   > **Note:** This issue was generated with AI assistance. The specification 
details have been extracted from Spark documentation and may need verification.
   
   Comet does not currently support the Spark `time_window` function, causing 
queries using this function to fall back to Spark's JVM execution instead of 
running natively on DataFusion.
   
   TimeWindow is a Spark Catalyst expression that creates time-based windows 
for streaming and batch data processing. It transforms timestamp columns into 
window structures with start and end times, enabling time-based grouping and 
aggregation operations.
   
   Supporting this expression would allow more Spark workloads to benefit from 
Comet's native acceleration.
   
   ## Describe the potential solution
   
   ### Spark Specification
   
   **Syntax:**
   ```sql
   window(timeColumn, windowDuration, slideDuration, startTime)
   window(timeColumn, windowDuration, slideDuration)
   window(timeColumn, windowDuration)
   ```
   
   ```scala
   // DataFrame API
   df.select(window($"timestamp", "10 minutes", "5 minutes"))
   ```
   
   **Arguments:**
   | Argument | Type | Description |
   |----------|------|-------------|
   | timeColumn | Expression | The timestamp column to create windows from |
   | windowDuration | Long/Expression | Length of each window in microseconds |
   | slideDuration | Long/Expression | Slide interval between windows in microseconds (defaults to windowDuration) |
   | startTime | Long/Expression | Offset for window start time in microseconds (defaults to 0) |
   
   Note: at the SQL/DataFrame level the durations are written as interval strings (e.g. `'10 minutes'`); the analyzed Catalyst expression stores them as microsecond `Long` values.
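   
   A quick sanity check of that microsecond representation in plain Scala (independent of Spark's interval parser):
   
   ```scala
   // "10 minutes" expressed in microseconds, matching the Long value the
   // analyzed Catalyst expression stores for windowDuration.
   val tenMinutesMicros: Long = 10L * 60 * 1000 * 1000
   assert(tenMinutesMicros == 600000000L)
   ```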
   
   **Return Type:** a StructType with two fields:
   
   - `start`: TimestampType or TimestampNTZType (matches input column type)
   - `end`: TimestampType or TimestampNTZType (matches input column type)
   
   **Supported Data Types:**
   The time column accepts:
   
   - TimestampType
   - TimestampNTZType  
   - StructType with start/end timestamp fields (for nested window operations)
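   
   A minimal, self-contained schema check (the data and column names here are illustrative):
   
   ```scala
   import org.apache.spark.sql.SparkSession
   import org.apache.spark.sql.functions._
   
   val spark = SparkSession.builder().master("local[*]").getOrCreate()
   import spark.implicits._
   
   // Illustrative sample data; to_timestamp makes "ts" a TimestampType column
   val df = Seq("2024-01-01 10:03:00", "2024-01-01 10:12:00")
     .toDF("raw")
     .withColumn("ts", to_timestamp($"raw"))
   
   // window() returns struct<start: timestamp, end: timestamp>
   df.select(window($"ts", "10 minutes").as("w")).printSchema()
   ```
   
   With a TimestampNTZType input column, the start and end fields are TimestampNTZType instead.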
   
   **Edge Cases:**
   - Null timestamp values produce null window structures
   - windowDuration must be positive (> 0); otherwise a DataTypeMismatch error is thrown
   - slideDuration must be positive (> 0); otherwise a DataTypeMismatch error is thrown
   - slideDuration cannot exceed windowDuration; otherwise a constraint violation is thrown
   - The absolute value of startTime must be less than slideDuration; otherwise a constraint violation is thrown
   - The expression remains unresolved until the analyzer's TimeWindowing rule replaces it
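   
   To make the constraints above concrete, here is a hedged sketch of the window-assignment arithmetic (all values in microseconds), modeled on Spark's TimeWindowing analyzer rule; it is illustrative, not a drop-in implementation:
   
   ```scala
   // Sketch: which windows contain a given timestamp (microseconds).
   // Modeled on the TimeWindowing rule; names and structure are illustrative.
   def windowsFor(
       ts: Long,
       windowDuration: Long,
       slideDuration: Long,
       startTime: Long): Seq[(Long, Long)] = {
     require(windowDuration > 0, "windowDuration must be positive")
     require(slideDuration > 0, "slideDuration must be positive")
     require(slideDuration <= windowDuration, "slideDuration cannot exceed windowDuration")
     require(math.abs(startTime) < slideDuration, "|startTime| must be less than slideDuration")
     // Start of the latest window containing ts; floorMod keeps pre-epoch
     // (negative) timestamps in the correct window
     val lastStart = ts - java.lang.Math.floorMod(ts - startTime, slideDuration)
     val maxOverlapping = math.ceil(windowDuration.toDouble / slideDuration).toInt
     (0 until maxOverlapping)
       .map(i => lastStart - i * slideDuration)
       .map(start => (start, start + windowDuration))
       .filter { case (start, end) => start <= ts && ts < end }
   }
   ```
   
   For tumbling windows (slideDuration equal to windowDuration) this yields exactly one window per non-null timestamp.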
   
   **Examples:**
   ```sql
   -- Tumbling 10-minute windows
   SELECT window(timestamp, '10 minutes'), count(*)
   FROM events
   GROUP BY window(timestamp, '10 minutes')
   
   -- Sliding 10-minute windows every 5 minutes  
   SELECT window(timestamp, '10 minutes', '5 minutes'), avg(value)
   FROM metrics
   GROUP BY window(timestamp, '10 minutes', '5 minutes')
   ```
   
   ```scala
   // DataFrame API - tumbling windows
   df.groupBy(window($"timestamp", "10 minutes"))
     .count()
   
   // DataFrame API - sliding windows with start offset
   df.select(window($"timestamp", "1 hour", "30 minutes", "15 minutes"))
     .groupBy($"window")
     .avg("value")
   ```
   
   ### Implementation Approach
   
   See the [Comet guide on adding new expressions](https://datafusion.apache.org/comet/contributor-guide/adding_a_new_expression.html) for detailed instructions.
   
   1. **Scala Serde**: Add an expression handler in `spark/src/main/scala/org/apache/comet/serde/` (a hedged sketch follows this list)
   2. **Register**: Add to appropriate map in `QueryPlanSerde.scala`
   3. **Protobuf**: Add message type in `native/proto/src/proto/expr.proto` if 
needed
   4. **Rust**: Implement in `native/spark-expr/src/` (check if DataFusion has 
built-in support first)
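   
   To make step 1 concrete, here is a hypothetical shape for the serde handler. The trait (`CometExpressionSerde`), helper (`exprToProtoInternal`), and the `TimeWindow` protobuf message are assumptions based on the contributor guide's pattern; the actual names and signatures in the code base may differ:
   
   ```scala
   // Hypothetical sketch only: trait, helper, and proto names follow the
   // contributor guide's pattern and may not match the code base exactly.
   // ExprOuterClass.TimeWindow would be the new protobuf message from step 3.
   import org.apache.spark.sql.catalyst.expressions.{Attribute, TimeWindow}
   
   object CometTimeWindow extends CometExpressionSerde[TimeWindow] {
     override def convert(
         expr: TimeWindow,
         inputs: Seq[Attribute],
         binding: Boolean): Option[ExprOuterClass.Expr] = {
       // After analysis, windowDuration/slideDuration/startTime are microsecond Longs
       exprToProtoInternal(expr.timeColumn, inputs, binding).map { child =>
         ExprOuterClass.Expr
           .newBuilder()
           .setTimeWindow(
             ExprOuterClass.TimeWindow
               .newBuilder()
               .setChild(child)
               .setWindowDuration(expr.windowDuration)
               .setSlideDuration(expr.slideDuration)
               .setStartTime(expr.startTime))
           .build()
       }
     }
   }
   ```
   
   On the Rust side (step 4), DataFusion's built-in `date_bin` function already handles tumbling-window bucketing and may be a useful starting point, though sliding windows and the struct return type would still need custom handling.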
   
   
   ## Additional context
   
   **Difficulty:** Medium
   **Spark Expression Class:** 
`org.apache.spark.sql.catalyst.expressions.TimeWindow`
   
   **Related:**
   - SessionWindow - for session-based windowing
   - UnaryExpression - parent expression type
   - ImplicitCastInputTypes - for automatic type casting behavior
   
   ---
   *This issue was auto-generated from Spark reference documentation.*
   

