Darren Price created SPARK-36758:
------------------------------------

             Summary: SQL column nullable setting not retained as part of spark read
                 Key: SPARK-36758
                 URL: https://issues.apache.org/jira/browse/SPARK-36758
             Project: Spark
          Issue Type: Improvement
          Components: PySpark
    Affects Versions: 3.1.1
         Environment: Databricks:

Runtime 8.2

Spark 3.1.1
            Reporter: Darren Price


When reading a column defined as NOT NULL in the source table, the constraint is 
not retained by spark.read: every column in the resulting DataFrame is reported 
as nullable = true.

Is there a way to change this behaviour so that the nullability from the source 
is retained?

See here for more info:

https://github.com/microsoft/sql-spark-connector/issues/121

 

Example code from databricks:

tableName = "dbo.MyTable"

# Chained calls must be wrapped in parentheses (or joined with trailing
# backslashes) to form a single Python statement.
df = (spark.read
    .format("com.microsoft.sqlserver.jdbc.spark")
    .option("url", myJdbcUrl)
    .option("accessToken", accessToken)
    .option("dbTable", tableName)
    .load())

df.printSchema()

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
