[ https://issues.apache.org/jira/browse/SPARK-18484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15673338#comment-15673338 ]
Sean Owen commented on SPARK-18484:
-----------------------------------
What do you have in mind? The case class mechanism is already a bit of a
convenience. I'm afraid this would just duplicate several existing mechanisms
for little gain.
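
For reference, a minimal sketch of one such existing mechanism (the schema and the input path below are made up for illustration): an explicit schema can be supplied when reading a DataFrame, which fixes the decimal precision and scale up front.

{code}
import org.apache.spark.sql.types.{DecimalType, StringType, StructField, StructType}

// Hypothetical example: declare decimal(10,2) at read time via an explicit schema.
val explicitSchema = StructType(Seq(
  StructField("id", StringType),
  StructField("money", DecimalType(10, 2))
))

// The input path is hypothetical.
val df = spark.read
  .schema(explicitSchema)
  .csv("/path/to/input.csv")
{code}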
> case class datasets - ability to specify decimal precision and scale
> --------------------------------------------------------------------
>
> Key: SPARK-18484
> URL: https://issues.apache.org/jira/browse/SPARK-18484
> Project: Spark
> Issue Type: Improvement
> Affects Versions: 2.0.0, 2.0.1
> Reporter: Damian Momot
>
> Currently, when using a decimal type (BigDecimal in a Scala case class), there is
> no way to enforce precision and scale. This is quite critical when saving data,
> both for space usage and for compatibility with external systems (for example a
> Hive table), because Spark saves the data as Decimal(38,18).
> {code}
> import spark.implicits._
>
> case class TestClass(id: String, money: BigDecimal)
>
> val testDs = spark.createDataset(Seq(
>   TestClass("1", BigDecimal("22.50")),
>   TestClass("2", BigDecimal("500.66"))
> ))
> testDs.printSchema()
> {code}
> {code}
> root
>  |-- id: string (nullable = true)
>  |-- money: decimal(38,18) (nullable = true)
> {code}
> A workaround is to convert the Dataset to a DataFrame before saving and manually
> cast the column to the desired decimal precision and scale (a write sketch follows
> the schema output below):
> {code}
> import org.apache.spark.sql.types.DecimalType
>
> val testDf = testDs.toDF()
>
> testDf
>   .withColumn("money", testDf("money").cast(DecimalType(10, 2)))
>   .printSchema()
> {code}
> {code}
> root
>  |-- id: string (nullable = true)
>  |-- money: decimal(10,2) (nullable = true)
> {code}
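> To complete the picture, a minimal follow-on sketch (not part of the original
> report; the output path is made up) of saving the cast DataFrame so that
> downstream systems such as Hive see decimal(10,2) instead of decimal(38,18):
> {code}
> import org.apache.spark.sql.types.DecimalType
> import org.apache.spark.sql.functions.col
>
> // Hypothetical continuation of the workaround above: cast to decimal(10,2)
> // and write the result out as Parquet (the output path is illustrative only).
> testDf
>   .withColumn("money", col("money").cast(DecimalType(10, 2)))
>   .write
>   .mode("overwrite")
>   .parquet("/tmp/test_decimal_output")
> {code}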