andygrove opened a new issue, #3170:
URL: https://github.com/apache/datafusion-comet/issues/3170

   ## What is the problem the feature request solves?
   
   > **Note:** This issue was generated with AI assistance. The specification 
details have been extracted from Spark documentation and may need verification.
   
   Comet does not currently support the Spark `map_zip_with` function, so queries that use it fall back to Spark's JVM execution instead of running natively on DataFusion.
   
   MapZipWith is a higher-order function that merges two maps by applying a lambda function to corresponding key-value pairs. It produces a new map whose keys are the union of the keys of both inputs; for each key, the lambda function receives the key and the value from each map (or null where the key is absent from one of them) and computes the resulting value.
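
   For concreteness, here is a minimal spark-shell sketch of that union-of-keys behavior (illustrative, not taken from the Spark docs); the `struct(v1, v2)` lambda makes the null passed for a missing key directly visible, with the expected output shown as a comment:

   ```scala
   // Keys 'a' and 'c' each appear in only one map, so the lambda receives
   // null for the missing side; 'b' appears in both.
   spark.sql("""
     SELECT map_zip_with(
       map('a', 1, 'b', 2),
       map('b', 10, 'c', 20),
       (k, v1, v2) -> struct(v1, v2)
     ) AS zipped
   """).show(truncate = false)
   // {a -> {1, null}, b -> {2, 10}, c -> {null, 20}}
   ```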
   
   Supporting this expression would allow more Spark workloads to benefit from 
Comet's native acceleration.
   
   ## Describe the potential solution
   
   ### Spark Specification
   
   **Syntax:**
   ```sql
   map_zip_with(map1, map2, lambda_function)
   ```
   
   ```scala
   // DataFrame API usage: via expr()/selectExpr(), or (since Spark 3.0) the
   // typed functions.map_zip_with shown in the examples below
   df.selectExpr("map_zip_with(map_col1, map_col2, (k, v1, v2) -> v1 + v2)")
   ```
   
   **Arguments:**
   | Argument | Type | Description |
   |----------|------|-------------|
   | left | Map | The first input map |
   | right | Map | The second input map |
   | function | Lambda | A three-parameter lambda function: (key, value1, value2) -> result |
   
   **Return Type:** Returns a MapType with the same key type as the input maps and a value type determined by the lambda function's return type.
   
   **Supported Data Types:**
   - Input maps must have the same key type
   - Input maps can have different value types (see the example below)
   - The lambda function can return any supported Spark data type
   - Keys must be of a type that supports equality comparison
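
   As an illustration of the two middle points (snippet assumed, not from the Spark docs): the result's value type is whatever the lambda returns. Here the inputs are `MAP<STRING, INT>` and `MAP<STRING, STRING>`, and the lambda produces strings:

   ```scala
   spark.sql("""
     SELECT map_zip_with(
       map('a', 1),
       map('a', 'one'),
       (k, v1, v2) -> concat(CAST(v1 AS STRING), ':', v2)
     ) AS combined
   """).show(truncate = false)
   // {a -> 1:one}
   ```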
   
   **Edge Cases:**
   - If a key exists in only one map, the lambda function receives null for the missing value
   - If either input map is null, the result is null (demonstrated below)
   - Empty maps are handled gracefully: the result contains only the keys of the non-empty map
   - The lambda function should handle nulls explicitly (e.g., with `coalesce()`) when a key may be missing from one map
   - Each distinct key is processed exactly once, because the function iterates over the union of keys rather than over both maps separately
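
   A spark-shell sketch of the null-map edge case (expected output shown as a comment):

   ```scala
   // A null map on either side makes the whole result null.
   spark.sql("""
     SELECT map_zip_with(
       CAST(NULL AS MAP<STRING, INT>),
       map('a', 1),
       (k, v1, v2) -> coalesce(v1, v2)
     ) AS result
   """).show()
   // result: NULL
   ```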
   
   **Examples:**
   ```sql
   -- Combine two maps by adding values, treating missing keys as 0
   SELECT map_zip_with(
     map('a', 1, 'b', 2), 
     map('b', 3, 'c', 4), 
     (k, v1, v2) -> coalesce(v1, 0) + coalesce(v2, 0)
   );
   -- Result: {"a":1,"b":5,"c":4}
   
   -- Combine maps with string concatenation
   SELECT map_zip_with(
     map('x', 'hello', 'y', 'world'), 
     map('y', '!', 'z', 'new'), 
     (k, v1, v2) -> concat(coalesce(v1, ''), coalesce(v2, ''))
   );
   -- Result: {"x":"hello","y":"world!","z":"new"}
   ```
   
   ```scala
   // DataFrame API usage
   import org.apache.spark.sql.functions._

   // Via a SQL expression string:
   df.selectExpr("""
     map_zip_with(
       map_col1, 
       map_col2, 
       (k, v1, v2) -> coalesce(v1, 0) + coalesce(v2, 0)
     ) as combined_map
   """)

   // Or, since Spark 3.0, via the typed Column API:
   df.select(
     map_zip_with(col("map_col1"), col("map_col2"),
       (k, v1, v2) => coalesce(v1, lit(0)) + coalesce(v2, lit(0))).as("combined_map"))
   ```
   
   ### Implementation Approach
   
   See the [Comet guide on adding new expressions](https://datafusion.apache.org/comet/contributor-guide/adding_a_new_expression.html) for detailed instructions.
   
   1. **Scala Serde**: Add an expression handler in `spark/src/main/scala/org/apache/comet/serde/` (see the sketch after this list)
   2. **Register**: Add it to the appropriate map in `QueryPlanSerde.scala`
   3. **Protobuf**: Add a message type in `native/proto/src/proto/expr.proto` if needed
   4. **Rust**: Implement in `native/spark-expr/src/` (check whether DataFusion has built-in support first)
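
   As a starting point for step 1, a rough sketch of a serde handler follows. It is illustrative only: `CometExpressionSerde`, `exprToProtoInternal`, and the `MapZipWith` protobuf builder are assumed names modeled on existing Comet handlers and must be verified against the current codebase (serializing the lambda in particular may need dedicated handling):

   ```scala
   // Hypothetical sketch: CometExpressionSerde, exprToProtoInternal, and the
   // MapZipWith proto message are assumptions, not confirmed Comet APIs.
   import org.apache.spark.sql.catalyst.expressions.{Attribute, Expression, MapZipWith}

   object CometMapZipWith extends CometExpressionSerde {
     override def convert(
         expr: Expression,
         inputs: Seq[Attribute],
         binding: Boolean): Option[ExprOuterClass.Expr] = {
       val mzw = expr.asInstanceOf[MapZipWith]
       for {
         // Serialize both maps and the lambda; yielding None here causes
         // Comet to fall back to Spark for this expression.
         left <- exprToProtoInternal(mzw.left, inputs, binding)
         right <- exprToProtoInternal(mzw.right, inputs, binding)
         func <- exprToProtoInternal(mzw.function, inputs, binding)
       } yield ExprOuterClass.Expr
         .newBuilder()
         .setMapZipWith(
           ExprOuterClass.MapZipWith
             .newBuilder()
             .setLeft(left)
             .setRight(right)
             .setFunction(func))
         .build()
     }
   }
   ```

   Registration (step 2) would then be a one-line mapping from `classOf[MapZipWith]` to this handler in `QueryPlanSerde.scala`, following the pattern of the existing expressions.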
   
   
   ## Additional context
   
   **Difficulty:** Large
   **Spark Expression Class:** 
`org.apache.spark.sql.catalyst.expressions.MapZipWith`
   
   **Related:**
   - `map_from_arrays` - Create maps from key and value arrays
   - `map_concat` - Concatenate multiple maps
   - `transform` - Apply lambda functions to arrays
   - `map_filter` - Filter map entries using predicates
   
   ---
   *This issue was auto-generated from Spark reference documentation.*
   

