prashantwason opened a new pull request, #17917:
URL: https://github.com/apache/hudi/pull/17917

   ### Describe the issue this Pull Request addresses
   
   The test `TestSparkSchemaUtils.testConvertBasicTypes()` used 
`SparkSqlParser.parseTableSchema()` with a SQL string containing the 
`timestamp_ntz` data type. However, the SQL parser only supports 
`timestamp_ntz` starting in Spark 3.4, so the test failed with "DataType 
timestamp_ntz is not supported" when run against Spark 3.3.
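   
   For illustration, a minimal sketch of the failing pattern (the column name and 
SQL string here are made up, not the exact ones in the test):
   
   ```java
   import org.apache.spark.sql.execution.SparkSqlParser;
   import org.apache.spark.sql.types.StructType;
   
   public class ParseNtzSketch {
     public static void main(String[] args) {
       // Hypothetical column list; the point is the timestamp_ntz token.
       // On Spark 3.3 this throws a ParseException ("DataType timestamp_ntz is not supported");
       // on Spark 3.4+ it returns the expected StructType.
       StructType schema = new SparkSqlParser().parseTableSchema("ts timestamp_ntz");
       System.out.println(schema);
     }
   }
   ```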
   
   ### Summary and Changelog
   
   Fixed by constructing the expected StructType schema programmatically with 
Spark's data type API (e.g., `TimestampNTZType$.MODULE$`) instead of parsing a 
SQL string. This works across all supported Spark versions because 
`TimestampNTZType` already exists as a class in Spark 3.3; only the SQL parser 
syntax was missing (see the sketch after the change list below).
   
   **Changes:**
   - Replaced SQL string parsing with programmatic schema construction in the 
`testConvertBasicTypes()` test
   - Added necessary imports for Spark data type classes (`BinaryType$`, 
`BooleanType$`, `DateType$`, `DoubleType$`, `FloatType$`, `LongType$`, 
`TimestampNTZType$`, `TimestampType$`)
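   
   A minimal sketch of the programmatic construction (field names here are 
illustrative, not the exact ones used in the test):
   
   ```java
   import org.apache.spark.sql.types.LongType$;
   import org.apache.spark.sql.types.Metadata;
   import org.apache.spark.sql.types.StructField;
   import org.apache.spark.sql.types.StructType;
   import org.apache.spark.sql.types.TimestampNTZType$;
   
   public class ExpectedSchemaSketch {
     // Builds the expected schema with the data type API; TimestampNTZType is a
     // regular class in Spark 3.3, so this works even where the parser rejects
     // the timestamp_ntz SQL syntax.
     static StructType expectedSchema() {
       return new StructType(new StructField[] {
           new StructField("id", LongType$.MODULE$, true, Metadata.empty()),
           new StructField("ts_ntz", TimestampNTZType$.MODULE$, true, Metadata.empty())
       });
     }
   }
   ```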
   
   ### Impact
   
   Test-only change. No impact on production code or public APIs.
   
   ### Risk Level
   
   none
   
   ### Documentation Update
   
   none
   
   ### Contributor's checklist
   
   - [x] Read through [contributor's 
guide](https://hudi.apache.org/contribute/how-to-contribute)
   - [x] Enough context is provided in the sections above
   - [x] Adequate tests were added if applicable

