HyukjinKwon commented on a change in pull request #34042:
URL: https://github.com/apache/spark/pull/34042#discussion_r711838027



##########
File path: docs/sql-data-sources-jdbc.md
##########
@@ -29,7 +29,9 @@ as a DataFrame and they can easily be processed in Spark SQL or joined with other data sources.
 The JDBC data source is also easier to use from Java or Python as it does not require the user to
 provide a ClassTag.
 (Note that this is different than the Spark SQL JDBC server, which allows other applications to
-run queries using Spark SQL).
+run queries using Spark SQL).
+
+All columns are automatically converted to be nullable for compatibility reasons.

Review comment:
       I actually think this is something we should fix, but we couldn't because it would be too much of a breaking change. This is not only for JDBC but also for other file-based sources.
   
   It would be better to have an option or configuration to set the nullability correctly, and keep it disabled by default.
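
   As an illustration of the behavior the added doc line describes, here is a minimal sketch of a JDBC read (assuming an existing SparkSession named `spark`; the URL, table, and credentials are placeholders). The point is only that the resulting schema reports every field as nullable, even for columns declared NOT NULL in the database:

   ```scala
   // Minimal sketch of the documented behavior: a JDBC read returns a schema in
   // which every field is nullable, regardless of NOT NULL constraints in the
   // source table. URL, table, and credentials below are placeholders.
   val jdbcDF = spark.read
     .format("jdbc")
     .option("url", "jdbc:postgresql://localhost:5432/testdb")  // placeholder URL
     .option("dbtable", "public.users")                         // e.g. id BIGINT NOT NULL, name TEXT
     .option("user", "username")
     .option("password", "password")
     .load()

   jdbcDF.printSchema()
   // root
   //  |-- id: long (nullable = true)      <-- NOT NULL in the database, still reported as nullable
   //  |-- name: string (nullable = true)
   ```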
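
   Separately, a purely hypothetical sketch of the kind of opt-in the comment proposes; the option name `preserveNullability` does not exist in Spark and is shown only to illustrate "an option or configuration ... disabled by default":

   ```scala
   // HYPOTHETICAL: neither this option name nor the behavior exists in Spark today.
   // It only illustrates the suggestion of an opt-in that is disabled by default.
   val strictDF = spark.read
     .format("jdbc")
     .option("url", "jdbc:postgresql://localhost:5432/testdb")
     .option("dbtable", "public.users")
     .option("preserveNullability", "true")   // hypothetical option; default would be false
     .option("user", "username")
     .option("password", "password")
     .load()
   // With such an option enabled, NOT NULL columns would be reported as nullable = false.
   ```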




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


