[ https://issues.apache.org/jira/browse/SPARK-9119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653312#comment-14653312 ]
Apache Spark commented on SPARK-9119:
-------------------------------------

User 'davies' has created a pull request for this issue:
https://github.com/apache/spark/pull/7925

> In some cases, we may save wrong decimal values to parquet
> ----------------------------------------------------------
>
>                 Key: SPARK-9119
>                 URL: https://issues.apache.org/jira/browse/SPARK-9119
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>            Reporter: Yin Huai
>            Assignee: Davies Liu
>            Priority: Blocker
>
> {code}
> import org.apache.spark.sql.Row
> import org.apache.spark.sql.types.{StructType, StructField, StringType, DecimalType}
> import org.apache.spark.sql.types.Decimal
>
> val schema = StructType(Array(StructField("name", DecimalType(10, 5), false)))
> val rowRDD = sc.parallelize(Array(Row(Decimal("67123.45"))))
> val df = sqlContext.createDataFrame(rowRDD, schema)
> df.registerTempTable("test")
> df.show()
> // +--------+
> // |    name|
> // +--------+
> // |67123.45|
> // +--------+
>
> sqlContext.sql("create table testDecimal as select * from test")
> sqlContext.table("testDecimal").show()
> // +--------+
> // |    name|
> // +--------+
> // |67.12345|
> // +--------+
> {code}
>
> The problem is that when we do these conversions, we do not use the precision/scale info from the schema.
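To see why the value comes back a thousand times smaller, here is a minimal, Spark-free sketch of the suspected mechanism; it is an illustration under the assumption that the writer persists the Decimal's raw unscaled value without first rescaling it to the schema's DecimalType(10, 5). The object name DecimalScaleMismatch is made up for this example and is not Spark code.

{code}
import java.math.{BigDecimal => JBigDecimal}

object DecimalScaleMismatch {
  def main(args: Array[String]): Unit = {
    // "67123.45" is parsed at scale 2, so its unscaled value is 6712345.
    val parsed = new JBigDecimal("67123.45")
    val unscaled = parsed.unscaledValue() // 6712345

    // Buggy path: store the raw unscaled value, then read it back assuming
    // the schema scale (5). 6712345 * 10^-5 = 67.12345 -- the corrupted value.
    val wrong = new JBigDecimal(unscaled, 5)
    println(s"read back without rescaling: $wrong") // 67.12345

    // Correct path: rescale to the schema's scale *before* storing, so the
    // unscaled value becomes 6712345000, which reads back as 67123.45000.
    val rescaled = parsed.setScale(5)
    val right = new JBigDecimal(rescaled.unscaledValue(), 5)
    println(s"read back after rescaling:   $right") // 67123.45000
  }
}
{code}

The arithmetic matches the repro above: 6712345 reinterpreted at scale 5 is exactly the 67.12345 that comes back from the saved table.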