andygrove opened a new issue, #3125:
URL: https://github.com/apache/datafusion-comet/issues/3125

   ## What is the problem the feature request solves?
   
   > **Note:** This issue was generated with AI assistance. The specification 
details have been extracted from Spark documentation and may need verification.
   
   Comet does not currently support Spark's `hours` partition transform 
expression, so queries that use it fall back to Spark's JVM execution instead 
of running natively on DataFusion.
   
   The `Hours` expression is a DataSource V2 partition transform that buckets 
rows into hourly intervals for partitioning purposes, mapping timestamp values 
to integer hour values. Spark itself never evaluates the transform (it is 
`Unevaluable`); its semantics are defined by the data source, e.g. Iceberg's 
`hours` transform yields the number of whole hours since the Unix epoch.
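   
   For context, the Catalyst class behind the transform is essentially a 
marker. The sketch below is a simplified paraphrase of its shape, not the 
actual Spark source; the real definitions in 
`org.apache.spark.sql.catalyst.expressions` carry additional overrides.
   
   ```scala
   import org.apache.spark.sql.catalyst.expressions.{Expression, Unevaluable}
   import org.apache.spark.sql.catalyst.trees.UnaryLike
   import org.apache.spark.sql.types.{DataType, IntegerType}
   
   // Simplified paraphrase: a v2 partition transform is Unevaluable, so Spark
   // never computes it row-by-row; the data source defines its semantics.
   abstract class PartitionTransformExpression extends Expression with Unevaluable
       with UnaryLike[Expression] {
     override def nullable: Boolean = true
   }
   
   case class Hours(child: Expression) extends PartitionTransformExpression {
     override def dataType: DataType = IntegerType
     override protected def withNewChildInternal(newChild: Expression): Hours =
       copy(child = newChild)
   }
   ```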
   
   Supporting this expression would allow more Spark workloads to benefit from 
Comet's native acceleration.
   
   ## Describe the potential solution
   
   ### Spark Specification
   
   **Syntax:**
   ```sql
   hours(timestamp_column)
   ```
   
   ```scala
   // DataFrame API usage
   import org.apache.spark.sql.functions._
   hours(col("timestamp_column"))
   ```
   
   **Arguments:**
   | Argument | Type | Description |
   |----------|------|-------------|
   | child | Expression | The input expression, typically a timestamp column |
   
   **Return Type:** `IntegerType` - Returns an integer hour value that serves 
as the partition value.
   
   **Supported Data Types:**
   - TimestampType
   - TimestampNTZType (Timestamp without timezone)
   
   **Edge Cases:**
   - Null input values: returns null for null timestamp inputs
   - Invalid input types: non-timestamp arguments are rejected when the transform is resolved
   - Timezone handling: depends on the timestamp type (`TimestampType` is session-timezone-aware; `TimestampNTZType` is not)
   - Value range: under data-source transform semantics (e.g. Iceberg), the result is the count of whole hours since the Unix epoch, not an hour-of-day in 0-23
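   
   To make the value-range point concrete, here is a minimal reference 
implementation of the assumed hours-since-epoch semantics; `hoursSinceEpoch` 
is a hypothetical helper, not part of Spark or Comet.
   
   ```scala
   // Hypothetical reference semantics for the hours transform, assuming
   // Iceberg-compatible hours-since-epoch behavior on microsecond timestamps.
   object HoursTransformSemantics {
     val MicrosPerHour: Long = 3600L * 1000L * 1000L
   
     // None in, None out; floorDiv rounds pre-epoch timestamps downward
     def hoursSinceEpoch(micros: Option[Long]): Option[Int] =
       micros.map(m => Math.floorDiv(m, MicrosPerHour).toInt)
   }
   
   // 2024-01-01T14:30:00Z = 1704119400000000 micros
   // => Some(473366), i.e. whole hours since the epoch, well outside 0-23
   // HoursTransformSemantics.hoursSinceEpoch(Some(1704119400000000L))
   ```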
   
   **Examples:**
   ```sql
   -- Partition a table by hour using the v2 transform
   -- (requires a v2 source that supports it, such as Iceberg)
   CREATE TABLE events_hourly
   USING iceberg
   PARTITIONED BY (hours(event_timestamp))
   AS SELECT * FROM events;
   
   -- Filters on the timestamp column can then prune hourly partitions
   SELECT * FROM events_hourly
   WHERE event_timestamp >= TIMESTAMP '2024-06-01 14:00:00'
     AND event_timestamp <  TIMESTAMP '2024-06-01 15:00:00';
   ```
   
   ```scala
   // DataFrame API usage for partitioning (DataSource V2 writer);
   // the hours() transform is only valid inside partitionedBy
   import org.apache.spark.sql.functions._
   
   // Create a table partitioned by the hours transform
   df.writeTo("catalog.db.events_hourly")
     .partitionedBy(hours(col("event_timestamp")))
     .create()
   
   // Row-level filtering uses the scalar hour() function instead,
   // since the hours() transform is not evaluable in a filter
   val afternoonData = df.filter(hour(col("event_timestamp")) === 14)
   ```
   
   ### Implementation Approach
   
   See the [Comet guide on adding new 
expressions](https://datafusion.apache.org/comet/contributor-guide/adding_a_new_expression.html)
 for detailed instructions.
   
   1. **Scala Serde**: Add an expression handler in 
`spark/src/main/scala/org/apache/comet/serde/` (a hedged sketch follows this list)
   2. **Register**: Add to appropriate map in `QueryPlanSerde.scala`
   3. **Protobuf**: Add message type in `native/proto/src/proto/expr.proto` if 
needed
   4. **Rust**: Implement in `native/spark-expr/src/` (check if DataFusion has 
built-in support first)
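   
   As a starting point for step 1, the sketch below shows roughly what a serde 
handler could look like. It is illustrative only: the `CometExpressionSerde` 
trait, the `convert` signature, and the `ExprOuterClass.Hours` builder are 
assumptions modeled on the contributor guide and existing handlers, and the 
real APIs may differ.
   
   ```scala
   // Hypothetical sketch only: trait name, signature, and proto builder
   // calls are assumptions; check existing handlers in org.apache.comet.serde.
   object CometHours extends CometExpressionSerde[Hours] {
     override def convert(
         expr: Hours,
         inputs: Seq[Attribute],
         binding: Boolean): Option[ExprOuterClass.Expr] = {
       // Serialize the child, then wrap it in an (assumed) Hours message
       exprToProtoInternal(expr.child, inputs, binding).map { childProto =>
         ExprOuterClass.Expr
           .newBuilder()
           .setHours(ExprOuterClass.Hours.newBuilder().setChild(childProto))
           .build()
       }
     }
   }
   ```
   
   Registration (step 2) would then map `classOf[Hours]` to this handler in 
the expression map in `QueryPlanSerde.scala`.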
   
   
   ## Additional context
   
   **Difficulty:** Medium
   **Spark Expression Class:** `org.apache.spark.sql.catalyst.expressions.Hours`
   
   **Related:**
   - `Days` - Partition transform for daily intervals
   - `Months` - Partition transform for monthly intervals  
   - `Years` - Partition transform for yearly intervals
   - `Bucket` - Hash-based partition transform
   - `PartitionTransformExpression` - Base class for partition transforms
   
   ---
   *This issue was auto-generated from Spark reference documentation.*
   

