andygrove opened a new issue, #3163:
URL: https://github.com/apache/datafusion-comet/issues/3163

   ## What is the problem the feature request solves?
   
   > **Note:** This issue was generated with AI assistance. The specification 
details have been extracted from Spark documentation and may need verification.
   
   Comet does not currently support the Spark `schema_of_json` function, 
causing queries using this function to fall back to Spark's JVM execution 
instead of running natively on DataFusion.
   
   The `SchemaOfJson` expression analyzes a JSON string and returns the 
inferred schema as a data type string. It parses the JSON structure and 
determines the appropriate Spark SQL data types for all fields, including 
support for complex nested structures like arrays and structs.
   
   Supporting this expression would allow more Spark workloads to benefit from 
Comet's native acceleration.
   
   ## Describe the potential solution
   
   ### Spark Specification
   
   **Syntax:**
   ```sql
   SELECT schema_of_json(json_string [, options_map])
   ```
   
   ```scala
   // DataFrame API (the json argument must be a foldable string literal)
   import org.apache.spark.sql.functions._
   import scala.collection.JavaConverters._

   df.select(schema_of_json(lit("""{"name":"John","age":30}""")))
   df.select(schema_of_json(lit("""{"col":01}"""),
     Map("allowNumericLeadingZeros" -> "true").asJava))
   ```
   
   **Arguments:**
   | Argument | Type | Description |
   |----------|------|-------------|
   | json_string | STRING | The JSON string to analyze for schema inference |
   | options_map | MAP<STRING, STRING> | Optional parsing options like `allowNumericLeadingZeros`, `allowBackslashEscapingAnyCharacter`, etc. |
   
   **Return Type:** Returns a `STRING` representing the inferred Spark SQL data type schema in DDL format (e.g., "STRUCT<age: BIGINT, name: STRING>"; inferred struct fields are sorted by name).
   
   **Supported Data Types:**
   - Input: STRING (JSON formatted)
   - Inferred types: All Spark SQL data types including BOOLEAN, BIGINT, 
DOUBLE, STRING, ARRAY, STRUCT, MAP
   
   **Edge Cases:**
   - Null input returns null result
   - Invalid JSON strings may throw parsing exceptions
   - Empty JSON objects return "STRUCT<>" 
   - Empty JSON arrays return "ARRAY<STRING>" (default array element type)
   - Mixed type arrays are inferred as the most general common type
   - Numeric values with leading zeros require `allowNumericLeadingZeros` 
option to parse correctly
   - Very deeply nested JSON may hit recursion limits
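   
   The widening and canonicalization rules above can be sketched in Rust (the language of Comet's native side). Everything below is illustrative only: the `JsonValue` enum and the `infer_type`/`widen` functions are hypothetical names, not Comet or DataFusion APIs, and a real implementation would operate on parsed JSON rather than a hand-rolled enum.
   
   ```rust
   // Illustrative sketch (not Comet code): mapping parsed JSON values to
   // Spark SQL type strings the way schema_of_json does.
   #[derive(Clone, Debug)]
   enum JsonValue {
       Null,
       Bool(bool),
       Int(i64),
       Float(f64),
       Str(String),
       Array(Vec<JsonValue>),
       Object(Vec<(String, JsonValue)>),
   }

   fn infer_type(v: &JsonValue) -> String {
       match v {
           JsonValue::Null => "STRING".to_string(), // nulls default to STRING
           JsonValue::Bool(_) => "BOOLEAN".to_string(),
           JsonValue::Int(_) => "BIGINT".to_string(),
           JsonValue::Float(_) => "DOUBLE".to_string(),
           JsonValue::Str(_) => "STRING".to_string(),
           JsonValue::Array(items) => {
               // Empty arrays default to ARRAY<STRING>; mixed element
               // types widen to the most general common type.
               let elem = items
                   .iter()
                   .map(infer_type)
                   .reduce(|a, b| widen(&a, &b))
                   .unwrap_or_else(|| "STRING".to_string());
               format!("ARRAY<{elem}>")
           }
           JsonValue::Object(fields) => {
               // Inferred structs are canonicalized by sorting field names.
               let mut fs: Vec<String> = fields
                   .iter()
                   .map(|(k, v)| format!("{k}: {}", infer_type(v)))
                   .collect();
               fs.sort();
               format!("STRUCT<{}>", fs.join(", "))
           }
       }
   }

   fn widen(a: &str, b: &str) -> String {
       if a == b {
           a.to_string()
       } else if (a == "BIGINT" && b == "DOUBLE") || (a == "DOUBLE" && b == "BIGINT") {
           "DOUBLE".to_string() // integral widens to floating point
       } else {
           "STRING".to_string() // fall back to the most general type
       }
   }

   fn main() {
       let obj = JsonValue::Object(vec![
           ("name".into(), JsonValue::Str("John".into())),
           ("age".into(), JsonValue::Int(30)),
       ]);
       println!("{}", infer_type(&obj)); // STRUCT<age: BIGINT, name: STRING>
       println!("{}", infer_type(&JsonValue::Array(vec![]))); // ARRAY<STRING>
   }
   ```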
   
   **Examples:**
   ```sql
   -- Basic schema inference
   SELECT schema_of_json('{"name":"John", "age":30}');
   -- Result: STRUCT<age: BIGINT, name: STRING>
   
   -- Array schema inference  
   SELECT schema_of_json('[{"col":01}]', map('allowNumericLeadingZeros', 'true'));
   -- Result: ARRAY<STRUCT<col: BIGINT>>
   
   -- Complex nested structure
   SELECT schema_of_json('{"users":[{"name":"John","scores":[95,87]}]}');
   -- Result: STRUCT<users: ARRAY<STRUCT<name: STRING, scores: ARRAY<BIGINT>>>>
   ```
   
   ```scala
   // DataFrame API usage
   import org.apache.spark.sql.functions._
   import scala.collection.JavaConverters._

   // The json argument must be a foldable string literal, not a column reference
   val df = spark.range(1)
   df.select(schema_of_json(lit("""{"name":"Alice","age":25}"""))).show(false)

   // With options (this overload takes a java.util.Map)
   val options = Map("allowNumericLeadingZeros" -> "true")
   df.select(schema_of_json(lit("""{"col":01}"""), options.asJava)).show(false)
   ```
   
   ### Implementation Approach
   
   See the [Comet guide on adding new expressions](https://datafusion.apache.org/comet/contributor-guide/adding_a_new_expression.html) for detailed instructions.
   
   1. **Scala Serde**: Add expression handler in 
`spark/src/main/scala/org/apache/comet/serde/`
   2. **Register**: Add to appropriate map in `QueryPlanSerde.scala`
   3. **Protobuf**: Add message type in `native/proto/src/proto/expr.proto` if 
needed
   4. **Rust**: Implement in `native/spark-expr/src/` (check if DataFusion has 
built-in support first)
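   
   For step 3, a rough sketch of what a dedicated message could look like (the message shape and field names here are hypothetical; the real definition should mirror the neighboring expression messages in `expr.proto`):
   
   ```proto
   // Hypothetical message shape for SchemaOfJson; follow the existing
   // conventions in expr.proto for the actual definition.
   message SchemaOfJson {
     Expr child = 1;
     map<string, string> options = 2;
   }
   ```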
   
   
   ## Additional context
   
   **Difficulty:** Large
   **Spark Expression Class:** 
`org.apache.spark.sql.catalyst.expressions.SchemaOfJson`
   
   **Related:**
   - `from_json` - Parse JSON strings into structured data using a schema
   - `to_json` - Convert structured data to JSON strings  
   - `json_tuple` - Extract multiple fields from JSON strings
   - `get_json_object` - Extract single values from JSON using JSONPath
   
   ---
   *This issue was auto-generated from Spark reference documentation.*
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

