andygrove opened a new issue, #3128:
URL: https://github.com/apache/datafusion-comet/issues/3128

   ## What is the problem the feature request solves?
   
   > **Note:** This issue was generated with AI assistance. The specification 
details have been extracted from Spark documentation and may need verification.
   
   Comet does not currently support the Spark `months` function, causing 
queries using this function to fall back to Spark's JVM execution instead of 
running natively on DataFusion.
   
   The `Months` expression is a DataSource v2 partition transform that derives 
a month-granularity partition value from date or timestamp data. It is used in 
Spark's DataSource v2 API to create month-based partitions, enabling efficient 
time-based data organization and partition pruning.
   
   Supporting this expression would allow more Spark workloads to benefit from 
Comet's native acceleration.
   
   ## Describe the potential solution
   
   ### Spark Specification
   
   **Syntax:**
   ```sql
   months(column_name)
   ```
   
   ```scala
   // DataFrame API usage in partition transforms (DataFrameWriterV2)
   import org.apache.spark.sql.functions.{col, months}
   months(col("date_column"))
   ```
   
   **Arguments:**
   | Argument | Type | Description |
   |----------|------|-------------|
   | child | Expression | The input expression, typically a date or timestamp 
column |
   
   **Return Type:** `IntegerType`. For Iceberg-style v2 sources, the value is 
typically the number of whole months elapsed since the Unix epoch (1970-01) 
rather than a 1-12 month component; verify against the target data source.
   
   **Supported Data Types:**
   - DateType (date columns)
   - TimestampType (timestamp columns) 
   - TimestampNTZType (timestamp without timezone columns)
   
   **Edge Cases:**
   - Null input values follow Spark's standard null propagation rules (null 
in, null out)
   - Invalid date/timestamp values in the child expression propagate errors
   - For `TimestampType` inputs, the result depends on the session timezone 
used to resolve the local month; `TimestampNTZType` is timezone-independent
   - Leap years are handled by the underlying calendar arithmetic
   
   **Examples:**
   ```sql
   -- Example SQL usage in table creation
   -- (requires a DataSource v2 catalog that supports partition transforms,
   -- e.g. an Iceberg catalog)
   CREATE TABLE events_table (
     id INT,
     event_date DATE,
     data STRING
   )
   PARTITIONED BY (months(event_date))
   ```
   
   ```scala
   // Example DataFrame API usage with DataFrameWriterV2
   import org.apache.spark.sql.functions.{col, months}

   df.writeTo("events_table")
     .partitionedBy(months(col("event_date")))
     .create()

   // Equivalent transform in a DataSource v2 partition specification
   import org.apache.spark.sql.connector.expressions.{Expressions, Transform}

   val partitioning: Array[Transform] = Array(Expressions.months("event_date"))
   ```
   
   ### Implementation Approach
   
   See the [Comet guide on adding new 
expressions](https://datafusion.apache.org/comet/contributor-guide/adding_a_new_expression.html)
 for detailed instructions.
   
   1. **Scala Serde**: Add expression handler in 
`spark/src/main/scala/org/apache/comet/serde/`
   2. **Register**: Add to appropriate map in `QueryPlanSerde.scala`
   3. **Protobuf**: Add message type in `native/proto/src/proto/expr.proto` if 
needed
   4. **Rust**: Implement in `native/spark-expr/src/` (check if DataFusion has 
built-in support first)
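   The native kernel in step 4 needs an exact definition of the transform's 
output. Spark's Catalyst `Months` transform is unevaluable on the JVM side, and 
Iceberg-style v2 sources define `months` as the number of whole months elapsed 
since the Unix epoch (1970-01). Below is a minimal, std-only Rust sketch of 
that interpretation, operating on Spark's `DateType` representation (days since 
the epoch); the function names are illustrative, not actual Comet or DataFusion 
APIs:

   ```rust
   /// Convert days-since-epoch to (year, month) using the civil-from-days
   /// algorithm (Howard Hinnant's date algorithms).
   fn civil_from_days(days: i32) -> (i32, u32) {
       let z = days as i64 + 719_468;
       let era = (if z >= 0 { z } else { z - 146_096 }) / 146_097;
       let doe = z - era * 146_097;                        // day of era [0, 146096]
       let yoe = (doe - doe / 1_460 + doe / 36_524 - doe / 146_096) / 365;
       let y = yoe + era * 400;
       let doy = doe - (365 * yoe + yoe / 4 - yoe / 100);  // day of year [0, 365]
       let mp = (5 * doy + 2) / 153;                       // month index, March = 0
       let month = if mp < 10 { mp + 3 } else { mp - 9 };  // calendar month [1, 12]
       let year = if month <= 2 { y + 1 } else { y };
       (year as i32, month as u32)
   }

   /// Months since the Unix epoch for a Spark DateType value (days since epoch).
   fn months_since_epoch(days: i32) -> i32 {
       let (year, month) = civil_from_days(days);
       (year - 1970) * 12 + (month as i32 - 1)
   }

   fn main() {
       assert_eq!(months_since_epoch(0), 0);        // 1970-01-01
       assert_eq!(months_since_epoch(31), 1);       // 1970-02-01
       assert_eq!(months_since_epoch(19_792), 650); // 2024-03-10 -> (2024-1970)*12 + 2
       println!("ok");
   }
   ```

   A real implementation would likely vectorize this over an Arrow 
`Date32Array` and reuse existing DataFusion date-part kernels where possible.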
   
   
   ## Additional context
   
   **Difficulty:** Medium
   **Spark Expression Class:** 
`org.apache.spark.sql.catalyst.expressions.Months`
   
   **Related:**
   - `Years` - Year-based partition transform
   - `Days` - Day-based partition transform  
   - `Hours` - Hour-based partition transform
   - `PartitionTransformExpression` - Base class for partition transforms
   - DataSource v2 partitioning documentation
   
   ---
   *This issue was auto-generated from Spark reference documentation.*
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

