[
https://issues.apache.org/jira/browse/SPARK-4988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-4988.
-------------------------------
Resolution: Fixed
Assignee: Davies Liu
Fix Version/s: 1.5.0
This should be fixed by the cleanup of the Row and InternalRow code.
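For context, a minimal Scala sketch (an illustration, not the actual patch) of the mismatch behind the ClassCastException quoted below: the Hive writer expects Catalyst's internal Decimal, while the sorted and limited rows still carry an external scala.math.BigDecimal. The import uses the newer org.apache.spark.sql.types.Decimal location, and the helper name is made up for this sketch.

import org.apache.spark.sql.types.Decimal

object DecimalNormalization {
  // Normalize a decimal-like value to Catalyst's internal representation
  // before handing it to the Hive ObjectInspector wrapper (illustrative only).
  def toCatalystDecimal(value: Any): Decimal = value match {
    case d: Decimal                => d            // already the internal type
    case bd: scala.math.BigDecimal => Decimal(bd)  // external Scala BigDecimal
    case bd: java.math.BigDecimal  => Decimal(bd)  // external Java BigDecimal
    case other =>
      throw new IllegalArgumentException(s"Not a decimal value: $other")
  }
}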
> "Create table ..as select ..from..order by .. limit 10" report error when one
> col is a Decimal
> ----------------------------------------------------------------------------------------------
>
> Key: SPARK-4988
> URL: https://issues.apache.org/jira/browse/SPARK-4988
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Reporter: guowei
> Assignee: Davies Liu
> Fix For: 1.5.0
>
> Attachments: spark-4988-1.txt
>
>
> Given a table 'test' with a decimal-type column 'a':
> create table test1 as select * from test order by a limit 10;
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 1 times, most recent failure: Lost task 0.0 in stage 2.0 (TID 2, localhost): java.lang.ClassCastException: scala.math.BigDecimal cannot be cast to org.apache.spark.sql.catalyst.types.decimal.Decimal
> at org.apache.spark.sql.hive.HiveInspectors$$anonfun$wrapperFor$2.apply(HiveInspectors.scala:339)
> at org.apache.spark.sql.hive.HiveInspectors$$anonfun$wrapperFor$2.apply(HiveInspectors.scala:339)
> at org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$org$apache$spark$sql$hive$execution$InsertIntoHiveTable$$writeToFile$1$1.apply(InsertIntoHiveTable.scala:111)
> at org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$org$apache$spark$sql$hive$execution$InsertIntoHiveTable$$writeToFile$1$1.apply(InsertIntoHiveTable.scala:108)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
> at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.org$apache$spark$sql$hive$execution$InsertIntoHiveTable$$writeToFile$1(InsertIntoHiveTable.scala:108)
> at org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$saveAsHiveFile$3.apply(InsertIntoHiveTable.scala:87)
> at org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$saveAsHiveFile$3.apply(InsertIntoHiveTable.scala:87)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
> at org.apache.spark.scheduler.Task.run(Task.scala:56)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:195)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
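A reproduction sketch in Scala, assuming a Spark build with Hive support and an existing Hive table 'test' that has a decimal column 'a' and at least a few rows; the object name and app name are placeholders.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object Spark4988Repro {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("SPARK-4988 repro").setMaster("local[1]"))
    val hive = new HiveContext(sc)

    // CTAS combined with ORDER BY ... LIMIT on the decimal column: on affected
    // builds this failed at write time with the ClassCastException quoted above;
    // with the 1.5.0 fix it should succeed.
    hive.sql("create table test1 as select * from test order by a limit 10")

    sc.stop()
  }
}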