yaooqinn commented on PR #47181:
URL: https://github.com/apache/spark/pull/47181#issuecomment-2242591010

   Version-numbered configurations in Spark tend to be less friendly to users 
than legacy configurations. Spark already implements various version policies 
to ensure backward compatibility, such as Thrift Server versions, API 
versions, and JDBC spec versions, but the resulting system is hard for users 
to navigate. It also makes it hard for us to write clear documentation like 
https://spark.apache.org/docs/4.0.0-preview1/sql-data-sources-jdbc.html#data-type-mapping.
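   
   To make the contrast concrete, here is a rough sketch (the config names 
below are hypothetical, purely for illustration):
   
   ```scala
   // Hypothetical config names, for illustration only.
   // A legacy-style flag is self-describing: it names the behavior it restores.
   spark.conf.set("spark.sql.legacy.someDb.timestampMapping.enabled", "true")

   // A version-numbered knob instead forces users to remember which release
   // introduced which behavior before they can pick the right value.
   spark.conf.set("spark.sql.jdbc.someDb.dialectVersion", "3.5")
   ```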
   
   The rules for data type mapping cannot be determined by dialect versions 
alone. Factors such as the upstream JDBC driver artifact, the database 
server's mode, and user-specified connection properties can all influence the 
rules. I believe it's challenging for users to guarantee consistent behavior 
simply by pinning the dialect version they previously used.
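   
   A sketch of why, assuming a hypothetical dialect (the `SomeDialect` name 
and the specific cases are made up, but `JdbcDialect.getCatalystType` is the 
real extension point):
   
   ```scala
   import java.sql.Types
   import org.apache.spark.sql.jdbc.JdbcDialect
   import org.apache.spark.sql.types._

   // Hypothetical dialect, for illustration only.
   case object SomeDialect extends JdbcDialect {
     override def canHandle(url: String): Boolean = url.startsWith("jdbc:somedb")

     // The same JDBC type code can require different Catalyst types depending
     // on what the driver reports at runtime (type name, precision), which in
     // turn depends on the driver artifact, the server mode, and connection
     // properties. None of this is captured by a Spark-side "dialect version".
     override def getCatalystType(
         sqlType: Int, typeName: String, size: Int,
         md: MetadataBuilder): Option[DataType] =
       sqlType match {
         case Types.BIT if size > 1 => Some(BinaryType)
         case Types.TIMESTAMP if typeName.contains("WITH LOCAL TIME ZONE") =>
           Some(TimestampType)
         case _ => None // fall back to the default mapping
       }
   }
   ```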
   
   Given that the data type mapping rules are publicly documented (see 
https://spark.apache.org/docs/4.0.0-preview1/sql-data-sources-jdbc.html#data-type-mapping),
 I'm not convinced that future changes will be radical enough to require 
version-based controls.
   
   

