dongjoon-hyun commented on code in PR #45658:
URL: https://github.com/apache/spark/pull/45658#discussion_r1535017016
##########
docs/sql-migration-guide.md:
##########
@@ -31,6 +31,7 @@ license: |
 - Since Spark 3.5, `spark.sql.optimizer.canChangeCachedPlanOutputPartitioning` is enabled by default. To restore the previous behavior, set `spark.sql.optimizer.canChangeCachedPlanOutputPartitioning` to `false`.
 - Since Spark 3.5, the `array_insert` function is 1-based for negative indexes. It inserts new element at the end of input arrays for the index -1. To restore the previous behavior, set `spark.sql.legacy.negativeIndexInArrayInsert` to `true`.
 - Since Spark 3.5, the Avro will throw `AnalysisException` when reading Interval types as Date or Timestamp types, or reading Decimal types with lower precision. To restore the legacy behavior, set `spark.sql.legacy.avro.allowIncompatibleSchema` to `true`
+- Since Spark 3.5, MySQL JDBC datasource will read TINYINT(n > 1) as ByteType, TINYINT UNSIGNED is read as ShortType, while in Spark 3.4 and below, they were read as IntegerType. To restore the previous behavior, you can cast the column to the old type. Note that for 3.5.0 and 3.5.1, TINYINT UNSIGNED is wrongly read as ByteType, and it is fixed in 3.5.2.

Review Comment:
   According to the context, `Since Spark 3.5.2` instead of `Since Spark 3.5`?
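For context, a minimal sketch of the workaround mentioned in the added migration note, i.e. casting a MySQL TINYINT column back to its pre-3.5 type after a JDBC read. The connection URL, table name `t`, and column name `flag` are hypothetical placeholders for illustration, not taken from the PR:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.IntegerType

val spark = SparkSession.builder().appName("tinyint-cast").getOrCreate()

// Read a MySQL table over JDBC; since Spark 3.5 a TINYINT(n > 1) column
// arrives as ByteType and TINYINT UNSIGNED as ShortType (3.5.2+).
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://localhost:3306/testdb") // placeholder URL
  .option("dbtable", "t")                              // placeholder table
  .option("user", "user")
  .option("password", "password")
  .load()

// Restore the Spark 3.4 behavior by casting the column back to IntegerType.
val restored = df.withColumn("flag", df("flag").cast(IntegerType))
```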
