andygrove opened a new issue, #3166:
URL: https://github.com/apache/datafusion-comet/issues/3166

   ## What is the problem the feature request solves?
   
   > **Note:** This issue was generated with AI assistance. The specification 
details have been extracted from Spark documentation and may need verification.
   
   Comet does not currently support the Spark `map_concat` function, causing 
queries using this function to fall back to Spark's JVM execution instead of 
running natively on DataFusion.
   
   MapConcat is a Catalyst expression that concatenates multiple maps into a 
single map. It takes a sequence of map expressions as input and merges them 
together; handling of duplicate keys is governed by 
`spark.sql.mapKeyDedupPolicy` (the default `EXCEPTION` policy raises an error, 
while `LAST_WIN` lets later maps override earlier ones).
   
   Supporting this expression would allow more Spark workloads to benefit from 
Comet's native acceleration.
   
   ## Describe the potential solution
   
   ### Spark Specification
   
   **Syntax:**
   ```sql
   map_concat(map1, map2, ...)
   ```
   
   **Arguments:**
   | Argument | Type | Description |
   |----------|------|-------------|
   | children | Seq[Expression] | Variable number of map expressions to be 
concatenated |
   
   **Return Type:** Returns a MapType with the same key and value types as the 
input maps.
   
   **Supported Data Types:**
   Supports MapType expressions where:
   
   - All input maps must have compatible key types
   - All input maps must have compatible value types
   - Keys can be any data type except MapType (Spark disallows map-typed keys)
   - Values can be any Spark SQL data type
   
   **Edge Cases:**
   - **Null handling**: If any input map is `NULL`, the result is `NULL`
   - **Empty maps**: Empty maps contribute no entries to the result
   - **Duplicate keys**: Behavior follows `spark.sql.mapKeyDedupPolicy`: with 
the default `EXCEPTION` policy Spark raises an error on duplicate keys; with 
`LAST_WIN`, values from maps appearing later in the argument list take 
precedence
   - **Type compatibility**: All input maps must have compatible key and value 
types, otherwise analysis fails
   - **Single argument**: If only one map is provided, it is returned unchanged
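
   To make the merge semantics concrete, here is a minimal plain-Scala model of 
the `LAST_WIN` behavior. This is not Comet or Spark code; ordinary Scala `Map`s 
stand in for Spark MapType values:

   ```scala
   // Plain-Scala model of map_concat under LAST_WIN: later maps override
   // earlier ones on duplicate keys.
   val m1 = Map(1 -> "old")
   val m2 = Map(1 -> "new", 2 -> "b")

   // `++` keeps the right-hand operand's value on key collisions,
   // matching the "later values win" rule.
   val merged = m1 ++ m2
   println(merged) // Map(1 -> new, 2 -> b)
   ```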
   
   **Examples:**
   ```sql
   -- Basic map concatenation
   SELECT map_concat(map(1, 'a', 2, 'b'), map(3, 'c'));
   -- Result: {1:"a", 2:"b", 3:"c"}
   
   -- Duplicate keys (assumes spark.sql.mapKeyDedupPolicy=LAST_WIN;
   -- the default EXCEPTION policy raises an error instead)
   SELECT map_concat(map(1, 'old'), map(1, 'new', 2, 'b'));
   -- Result: {1:"new", 2:"b"}
   
   -- Concatenating multiple maps
   SELECT map_concat(map(1, 'a'), map(2, 'b'), map(3, 'c'));
   -- Result: {1:"a", 2:"b", 3:"c"}
   ```
   
   ```scala
   // DataFrame API usage
   import org.apache.spark.sql.functions._
   
   df.select(map_concat(
     map(lit(1), lit("a"), lit(2), lit("b")),
     map(lit(3), lit("c"))
   ))
   
   // Using column references
   df.select(map_concat(col("map1"), col("map2")))
   ```
   
   ### Implementation Approach
   
   See the [Comet guide on adding new 
expressions](https://datafusion.apache.org/comet/contributor-guide/adding_a_new_expression.html)
 for detailed instructions.
   
   1. **Scala Serde**: Add expression handler in 
`spark/src/main/scala/org/apache/comet/serde/`
   2. **Register**: Add to appropriate map in `QueryPlanSerde.scala`
   3. **Protobuf**: Add message type in `native/proto/src/proto/expr.proto` if 
needed
   4. **Rust**: Implement in `native/spark-expr/src/` (check if DataFusion has 
built-in support first)
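
   Before wiring anything up, the semantics a native kernel must replicate can 
be sketched in plain Scala. This is a conceptual model only, not Comet's serde 
or kernel code; the policy names below mirror the two values of 
`spark.sql.mapKeyDedupPolicy`, and ordinary Scala collections stand in for 
Arrow arrays:

   ```scala
   // Conceptual model of map_concat semantics, including the duplicate-key
   // policy. Not Comet code.
   sealed trait DedupPolicy
   case object ExceptionPolicy extends DedupPolicy // Spark default: error on duplicates
   case object LastWinPolicy extends DedupPolicy   // later values win

   def mapConcat[K, V](policy: DedupPolicy, maps: Seq[Map[K, V]]): Map[K, V] =
     maps.foldLeft(Map.empty[K, V]) { (acc, m) =>
       policy match {
         case ExceptionPolicy =>
           // fail fast on the first key already seen in an earlier map
           m.keys.find(acc.contains).foreach { k =>
             throw new RuntimeException(s"Duplicate map key $k was found")
           }
           acc ++ m
         case LastWinPolicy =>
           acc ++ m // right-hand side overrides on collisions
       }
     }
   ```

   A real implementation would additionally need to propagate `NULL` inputs to 
a `NULL` result and operate on Arrow map buffers rather than Scala maps.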
   
   
   ## Additional context
   
   **Difficulty:** Medium
   **Spark Expression Class:** 
`org.apache.spark.sql.catalyst.expressions.MapConcat`
   
   **Related:**
   - `map()` - Creates a map from key-value pairs
   - `map_keys()` - Extracts keys from a map
   - `map_values()` - Extracts values from a map
   - `map_entries()` - Converts a map to an array of key-value structs
   
   ---
   *This issue was auto-generated from Spark reference documentation.*
   

