yaooqinn commented on code in PR #45039:
URL: https://github.com/apache/spark/pull/45039#discussion_r1857648791
##########
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala:
##########
@@ -1056,11 +1056,22 @@ private[hive] object HiveClientImpl extends Logging {
/** Get the Spark SQL native DataType from Hive's FieldSchema. */
private def getSparkSQLDataType(hc: FieldSchema): DataType = {
Review Comment:
> it causes troubles if a user creates a table using Spark 4.0, and tries
to read it with older Spark versions.
Is this similar to how we handled other new types added in Spark 4.0 that
older Spark versions cannot read? These tables won't affect users' existing
data pipelines, so this shouldn't introduce any breaking changes.
That said, I'm open to adding a new parameter here.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]