[ https://issues.apache.org/jira/browse/SPARK-7196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14522373#comment-14522373 ]
Ken Geis commented on SPARK-7196:
---------------------------------
This is now correct:
{noformat}
scala> amounts.schema(0).dataType.asInstanceOf[org.apache.spark.sql.types.DecimalType].precisionInfo
res4: Option[org.apache.spark.sql.types.PrecisionInfo] = Some(PrecisionInfo(12,2))
{noformat}
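(For reference, a minimal sketch of how a DataFrame like this can be loaded and checked; the JDBC URL and the "amounts" table name below are placeholders, not the actual setup:)
{noformat}
// Minimal sketch, assuming a Spark 1.3/1.4 SQLContext in the shell.
// The JDBC URL and table name are placeholders.
import org.apache.spark.sql.types.DecimalType

val amounts = sqlContext.jdbc("jdbc:postgresql://localhost/test", "amounts")

amounts.schema(0).dataType match {
  case dt: DecimalType => println(dt.precisionInfo) // expect Some(PrecisionInfo(12,2))
  case other           => println(other)
}
{noformat}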
but now this happens (might be a separate bug):
{noformat}
scala> amounts.saveAsTable("amounts")
...
15/04/30 15:05:31 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.ClassCastException: java.math.BigDecimal cannot be cast to org.apache.spark.sql.types.Decimal
    at org.apache.spark.sql.parquet.MutableRowWriteSupport.consumeType(ParquetTableSupport.scala:365)
    at org.apache.spark.sql.parquet.MutableRowWriteSupport.write(ParquetTableSupport.scala:335)
    at org.apache.spark.sql.parquet.MutableRowWriteSupport.write(ParquetTableSupport.scala:321)
    at parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:120)
    at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:81)
    at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:37)
    at org.apache.spark.sql.parquet.ParquetRelation2.org$apache$spark$sql$parquet$ParquetRelation2$$writeShard$1(newParquet.scala:699)
    at org.apache.spark.sql.parquet.ParquetRelation2$$anonfun$insert$2.apply(newParquet.scala:717)
    at org.apache.spark.sql.parquet.ParquetRelation2$$anonfun$insert$2.apply(newParquet.scala:717)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
    at org.apache.spark.scheduler.Task.run(Task.scala:70)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
...
{noformat}
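Judging by the trace, the JDBC side seems to put the driver's raw java.math.BigDecimal into the row, while MutableRowWriteSupport expects Catalyst's Decimal wrapper. A hypothetical sketch of the mismatch (these helper names are made up, not Spark's actual internals):
{noformat}
import java.sql.ResultSet
import org.apache.spark.sql.types.Decimal

// Hypothetical illustration of the mismatch; not Spark's real code.
// What the trace suggests the JDBC side does: store the raw value...
def readDecimalRaw(rs: ResultSet, pos: Int): Any =
  rs.getBigDecimal(pos) // java.math.BigDecimal ends up in the row

// ...whereas the Parquet write path expects the Catalyst wrapper:
def readDecimalWrapped(rs: ResultSet, pos: Int): Any =
  Decimal(rs.getBigDecimal(pos)) // org.apache.spark.sql.types.Decimal
{noformat}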
> decimal precision lost when loading DataFrame from JDBC
> -------------------------------------------------------
>
> Key: SPARK-7196
> URL: https://issues.apache.org/jira/browse/SPARK-7196
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 1.3.1
> Reporter: Ken Geis
>
> I have a decimal database field that is defined as decimal(10,2) (i.e. ##########.##).
> When I load it into Spark via sqlContext.jdbc(..), the type of the
> corresponding field in the DataFrame is DecimalType, with precisionInfo None.
> Because of that loss of precision information, SPARK-4176 is triggered when I
> try to .saveAsTable(..).
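For illustration, the precision and scale are available from JDBC column metadata and could be carried into the Catalyst type instead of being dropped; a sketch with a hypothetical helper, not Spark's actual JDBCRDD code:
{noformat}
import java.sql.{ResultSetMetaData, Types}
import org.apache.spark.sql.types.{DataType, DecimalType}

// Hypothetical sketch: build a precise DecimalType from JDBC metadata
// rather than falling back to DecimalType with precisionInfo = None.
def decimalTypeFor(md: ResultSetMetaData, col: Int): DataType = {
  require(md.getColumnType(col) == Types.DECIMAL ||
          md.getColumnType(col) == Types.NUMERIC)
  DecimalType(md.getPrecision(col), md.getScale(col)) // e.g. DecimalType(10,2)
}
{noformat}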