[
https://issues.apache.org/jira/browse/SPARK-17608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15969328#comment-15969328
]
Apache Spark commented on SPARK-17608:
--------------------------------------
User 'wangmiao1981' has created a pull request for this issue:
https://github.com/apache/spark/pull/17640
> Long type has incorrect serialization/deserialization
> -----------------------------------------------------
>
> Key: SPARK-17608
> URL: https://issues.apache.org/jira/browse/SPARK-17608
> Project: Spark
> Issue Type: Bug
> Components: SparkR
> Affects Versions: 2.0.0
> Reporter: Thomas Powell
>
> I am hitting issues when using {{dapply}} on a data frame that contains a
> {{bigint}} in its schema. When data are pulled into a SparkR data frame, a
> {{bigint}} column is converted to the R {{numeric}} type:
> https://github.com/apache/spark/blob/master/R/pkg/R/types.R#L25.
> However, on the way back, the R {{numeric}} type is converted to
> {{org.apache.spark.sql.types.DoubleType}}:
> https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/api/r/SQLUtils.scala#L97.
> The two directions are therefore incompatible. If I reuse the same schema with
> {{dapply}} (and just an identity function), I get type collisions because the
> output type is a double while the schema expects a bigint (a minimal repro
> sketch follows below).
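A minimal reproduction sketch of the round trip described above, assuming a
single-column bigint table created via {{sql()}}; the column name {{x}} and the
identity function are illustrative only:
{code:r}
# Hypothetical reproduction sketch (column name and data are illustrative).
library(SparkR)
sparkR.session()

# A one-column table whose declared type is bigint.
df <- sql("SELECT CAST(1 AS BIGINT) AS x")
printSchema(df)   # x is reported as bigint/long

# dapply deserializes each partition into an R data.frame; per types.R the
# bigint column arrives as R numeric. The identity function returns it
# unchanged, so on the return path SQLUtils.scala serializes it as DoubleType.
out <- dapply(df, function(part) { part }, schema(df))

# The declared output schema still says bigint, so collecting the result
# surfaces the double-vs-bigint mismatch described in this issue.
collect(out)
{code}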