[
https://issues.apache.org/jira/browse/SPARK-13101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15126976#comment-15126976
]
Deenar Toraskar commented on SPARK-13101:
-----------------------------------------
[~lian cheng] Thanks for the explanation, it makes sense now. But I would like to
warn you that this is going to cause a lot of issues for people who are migrating
from DataFrames to Datasets, given that Parquet is the most widely used format
with Spark SQL. A better option would be to change the behaviour of the Parquet
writer as well. I would hate to use Java primitives every time I want a
non-nullable field in my model classes.
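For context, here is a minimal sketch (my understanding of the default 1.6 mapping, not authoritative) of how the field types of a case class drive nullability in the inferred schema:
{code}
// Sketch only: how field types map to nullability when Spark infers a schema
// from a case class (Spark 1.6). The comments reflect my understanding of the defaults.
import org.apache.spark.sql.Encoders

case class Model(required: Double,          // Scala primitive => nullable = false
                 optional: Option[Double],  // Option          => nullable = true
                 boxed: java.lang.Double)   // boxed Java type => nullable = true

Encoders.product[Model].schema.printTreeString()
{code}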
I guess the root cause is the decision in the Parquet writer to convert all
non-nullable fields to nullable fields. I know there have been discussions
about this before, but in many cases the nullability of a field has a functional
impact.
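To make the mismatch concrete, a quick sketch (assuming the {{valuations}} table from the example below has already been written out):
{code}
// Sketch: compare the schema the encoder expects with the schema read back from
// the saved table. After the Parquet write the array element has containsNull = true.
import org.apache.spark.sql.Encoders

val expected = Encoders.product[Valuation].schema     // valuations: array<double>, containsNull = false
val stored   = sqlContext.table("valuations").schema  // valuations: array<double>, containsNull = true
println(expected.treeString)
println(stored.treeString)
{code}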
>> Another tricky thing here is about Parquet. When writing Parquet files, all
>> non-nullable fields are converted to nullable fields intentionally. This
>> behavior is for better interoperability with Hive.
I think you should do what is correct.
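In the meantime, a possible workaround (just a sketch, untested, and it assumes the stored arrays contain no actual nulls) would be to re-apply the encoder's schema before converting:
{code}
// Untested sketch: rebuild the DataFrame against the encoder's (non-nullable) schema
// so that .as[Valuation] no longer needs the disallowed nullable -> non-nullable cast.
import org.apache.spark.sql.Encoders
import sqlContext.implicits._

val stored = sqlContext.table("valuations")
val schema = Encoders.product[Valuation].schema   // element containsNull = false
val ds     = sqlContext.createDataFrame(stored.rdd, schema).as[Valuation]
{code}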
> Dataset complex types mapping to DataFrame (element nullability) mismatch
> --------------------------------------------------------------------------
>
> Key: SPARK-13101
> URL: https://issues.apache.org/jira/browse/SPARK-13101
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 1.6.1
> Reporter: Deenar Toraskar
> Priority: Blocker
>
> There seems to be a regression between 1.6.0 and 1.6.1 (snapshot build). By
> default a Scala {{Seq\[Double\]}} is mapped by Spark as an ArrayType with
> nullable elements:
> {noformat}
> |-- valuations: array (nullable = true)
> | |-- element: double (containsNull = true)
> {noformat}
> This could be read back as a Dataset in Spark 1.6.0:
> {code}
> val df = sqlContext.table("valuations").as[Valuation]
> {code}
> But with Spark 1.6.1 the same fails with
> {code}
> val df = sqlContext.table("valuations").as[Valuation]
> org.apache.spark.sql.AnalysisException: cannot resolve 'cast(valuations as
> array<double>)' due to data type mismatch: cannot cast
> ArrayType(DoubleType,true) to ArrayType(DoubleType,false);
> {code}
> Here are the classes I am using:
> {code}
> case class Valuation(tradeId: String,
>                      counterparty: String,
>                      nettingAgreement: String,
>                      wrongWay: Boolean,
>                      valuations: Seq[Double],  /* one per scenario */
>                      timeInterval: Int,
>                      jobId: String)            /* used for hdfs partitioning */
> val vals: Seq[Valuation] = Seq()
> val valsDF = sqlContext.sparkContext.parallelize(vals).toDF
> valsDF.write.partitionBy("jobId").mode(SaveMode.Overwrite).saveAsTable("valuations")
> {code}
> Even the following gives the same result:
> {code}
> val valsDF = vals.toDS.toDF
> {code}