andygrove opened a new issue, #3167:
URL: https://github.com/apache/datafusion-comet/issues/3167

   ## What is the problem the feature request solves?
   
   > **Note:** This issue was generated with AI assistance. The specification 
details have been extracted from Spark documentation and may need verification.
   
   Comet does not currently support the Spark `create_map` function, causing 
queries using this function to fall back to Spark's JVM execution instead of 
running natively on DataFusion.
   
   The CreateMap expression creates a map (key-value pairs) from a sequence of 
alternating key and value expressions. Keys and values are paired sequentially 
from the input expressions, where odd-positioned expressions become keys and 
even-positioned expressions become values.
   
   Supporting this expression would allow more Spark workloads to benefit from 
Comet's native acceleration.
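
   One way to confirm the fallback today is to enable Comet's fallback logging (this assumes the `spark.comet.explainFallback.enabled` setting; check the Comet configuration docs for the exact key in your release) and run a query that builds a map:

   ```scala
   // Assumed config key: spark.comet.explainFallback.enabled (verify against CometConf).
   spark.conf.set("spark.comet.explainFallback.enabled", "true")

   // map(...) compiles to CreateMap; Comet should report the unsupported expression
   // and let this plan fall back to Spark's JVM execution.
   spark.range(5)
     .selectExpr("map('id', CAST(id AS STRING)) AS m")
     .collect()
   ```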
   
   ## Describe the potential solution
   
   ### Spark Specification
   
   **Syntax:**
   ```sql
   map(key1, value1, key2, value2, ...)
   ```
   
   ```scala
   // DataFrame API
   map(col("key1"), col("value1"), col("key2"), col("value2"))
   ```
   
   **Arguments:**
   | Argument | Type | Description |
   |----------|------|-------------|
   | children | Seq[Expression] | Sequence of expressions where odd positions are keys and even positions are values |
   | useStringTypeWhenEmpty | Boolean | When true, creates a map with string type for both keys and values when no children are provided |
   
   **Return Type:** Returns a MapType where the key type is inferred from the 
odd-positioned expressions and the value type is inferred from the 
even-positioned expressions. When empty and useStringTypeWhenEmpty is true, 
returns MapType(StringType, StringType).
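
   For illustration, the inferred type can be inspected directly from the DataFrame API (a minimal sketch; the exact nullability flags depend on the input expressions):

   ```scala
   import org.apache.spark.sql.functions.{lit, map}

   // Integer keys and string values infer MapType(IntegerType, StringType)
   val df = spark.range(1).select(map(lit(1), lit("a"), lit(2), lit("b")).as("m"))
   df.printSchema()
   // root
   //  |-- m: map (nullable = false)
   //  |    |-- key: integer
   //  |    |-- value: string (valueContainsNull = false)
   ```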
   
   **Supported Data Types:**
   Values may be of any Spark SQL data type. Keys may be of any type except MapType (map keys cannot be or contain a map, since they must be comparable). Common key/value types include:
   
   - Numeric types (IntegerType, LongType, DoubleType, etc.)
   - StringType
   - DateType
   - TimestampType
   - BooleanType
   
   **Edge Cases:**
   - **Null keys**: Null is not allowed as a map key; if any key expression evaluates to null, Spark raises a runtime error (`Cannot use null as map key`)
   - **Null values**: Null values are allowed and preserved in the resulting map
   - **Empty input**: When no expressions are provided and 
useStringTypeWhenEmpty is true, creates an empty map of type Map[String, String]
   - **Type coercion**: All key expressions must be promotable to a common 
type, same for value expressions
   - **Duplicate keys**: Handling is controlled by `spark.sql.mapKeyDedupPolicy`: the default `EXCEPTION` policy fails the query, while `LAST_WINS` keeps the value from the later key-value pair (see the sketch after this list)
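
   A short spark-shell sketch of the null-key and duplicate-key behavior described above (assuming Spark 3.x defaults; exact error classes and messages vary between versions):

   ```scala
   import scala.util.Try

   // Null key: raises a runtime error ("Cannot use null as map key" / NULL_MAP_KEY)
   println(Try(spark.sql("SELECT map(1, 'a', NULL, 'b')").collect()))

   // Duplicate keys: fail under the default spark.sql.mapKeyDedupPolicy=EXCEPTION;
   // under LAST_WINS the later value overwrites the earlier one, giving {1 -> "b"}.
   spark.conf.set("spark.sql.mapKeyDedupPolicy", "LAST_WINS")
   spark.sql("SELECT map(1, 'a', 1, 'b')").collect()
   ```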
   
   **Examples:**
   ```sql
   -- Create a simple map
   SELECT map(1.0, '2', 3.0, '4');
   -- Result: {1.0:"2", 3.0:"4"}
   
   -- Create a map with column values
   SELECT map('name', first_name, 'age', age) FROM users;
   
   -- Empty map
   SELECT map();
   -- Result: {}
   ```
   
   ```scala
   // DataFrame API usage
   import org.apache.spark.sql.functions.{col, lit, map}
   
   df.select(map(lit("key1"), col("value1"), lit("key2"), col("value2")))
   
   // Create map from multiple columns
   df.select(map(
     lit("name"), col("first_name"),
     lit("age"), col("age"),
     lit("city"), col("city")
   ))
   ```
   
   ### Implementation Approach
   
   See the [Comet guide on adding new 
expressions](https://datafusion.apache.org/comet/contributor-guide/adding_a_new_expression.html)
 for detailed instructions.
   
   1. **Scala Serde**: Add an expression handler in `spark/src/main/scala/org/apache/comet/serde/` (a rough sketch of the key/value handling follows this list)
   2. **Register**: Add it to the appropriate expression map in `QueryPlanSerde.scala`
   3. **Protobuf**: Add message type in `native/proto/src/proto/expr.proto` if 
needed
   4. **Rust**: Implement in `native/spark-expr/src/` (check if DataFusion has 
built-in support first)
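
   As a starting point for step 1, the Catalyst expression already exposes the alternating children as separate key and value sequences, which is what a serde handler would need to serialize. A minimal, Spark-only sketch (the Comet-side trait and protobuf builders are intentionally omitted; take their exact shape from the contributor guide and the existing handlers):

   ```scala
   import org.apache.spark.sql.catalyst.expressions.{CreateMap, Literal}

   // CreateMap splits its children into odd-positioned keys and even-positioned values.
   val cm = CreateMap(Seq(Literal("name"), Literal("alice"), Literal("age"), Literal("42")))

   cm.keys      // the key expressions:   Literal("name"), Literal("age")
   cm.values    // the value expressions: Literal("alice"), Literal("42")
   cm.dataType  // MapType(StringType, StringType, valueContainsNull = false)
   ```

   For step 4, DataFusion's `datafusion-functions-nested` crate may already provide a usable `map` scalar function, so the native side might be able to delegate to it rather than implementing a new kernel; this should be verified against the DataFusion version Comet pins.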
   
   
   ## Additional context
   
   **Difficulty:** Medium
   **Spark Expression Class:** 
`org.apache.spark.sql.catalyst.expressions.CreateMap`
   
   **Related:**
   - **map_keys**: Extract keys from a map
   - **map_values**: Extract values from a map  
   - **CreateArray**: Create arrays from expressions
   - **CreateStruct**: Create struct/row objects from expressions
   
   ---
   *This issue was auto-generated from Spark reference documentation.*
   

