[ 
https://issues.apache.org/jira/browse/SPARK-19950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15934719#comment-15934719
 ] 

Jason White commented on SPARK-19950:
-------------------------------------

Without something that allows us to read using the nullability as it exists on-disk, 
we end up doing:
{code}
df = spark.read.parquet(path)
return spark.createDataFrame(df.rdd, schema)
{code}

This is obviously not desirable, since it forces a round-trip through the RDD API. 
We would much rather rely on the schema as defined by the file format (Parquet in 
our case), or on a user-supplied schema. Preferably both.

> nullable ignored when df.load() is executed for file-based data source
> ----------------------------------------------------------------------
>
>                 Key: SPARK-19950
>                 URL: https://issues.apache.org/jira/browse/SPARK-19950
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.2.0
>            Reporter: Kazuaki Ishizaki
>
> This problem is reported in [Databricks 
> forum|https://forums.databricks.com/questions/7123/nullable-seemingly-ignored-when-reading-parquet.html].
> When we execute the following code, a schema for "id" in {{dfRead}} has 
> {{nullable = true}}. It should be {{nullable = false}}.
> {code:java}
> val field = "id"
> val df = spark.range(0, 5, 1, 1).toDF(field)
> val fmt = "parquet"
> val path = "/tmp/parquet"
> val schema = StructType(Seq(StructField(field, LongType, false)))
> df.write.format(fmt).mode("overwrite").save(path)
> val dfRead = spark.read.format(fmt).schema(schema).load(path)
> dfRead.printSchema
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]