Can you show the DDL for the table? It looks like the SerDe might be
declaring a decimal type but actually producing strings.
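
For example, assuming the decimal columns come from the lineitem table (the
table name is taken from the query below, so treat it as an assumption about
where l_extendedprice and l_discount live), the declared column types and the
SerDe class can be inspected from the Hive shell with:

hive> DESCRIBE FORMATTED lineitem;
hive> SHOW CREATE TABLE lineitem;

If the metastore declares l_extendedprice and l_discount as decimal while the
SerDe listed in that output only emits strings, the mismatch would line up
with the ClassCastException in the trace below.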

On Thu, Oct 23, 2014 at 3:17 PM, arthur.hk.c...@gmail.com <
arthur.hk.c...@gmail.com> wrote:

> Hi,
>
> My Spark is 1.1.0 and Hive is 0.12. I ran the same query in both
> Hive 0.12.0 and Spark 1.1.0: it works in HiveQL but fails in Spark SQL.
>
>
> hive> select l_orderkey, sum(l_extendedprice*(1-l_discount)) as revenue,
> o_orderdate, o_shippriority from customer c join orders o on c.c_mktsegment
> = 'BUILDING' and c.c_custkey = o.o_custkey join lineitem l on l.l_orderkey
> = o.o_orderkey where o_orderdate < '1995-03-15' and l_shipdate >
> '1995-03-15' group by l_orderkey, o_orderdate, o_shippriority order by
> revenue desc, o_orderdate limit 10;
> Ended Job = job_1414067367860_0011
> MapReduce Jobs Launched:
> Job 0: Map: 1  Reduce: 1   Cumulative CPU: 2.0 sec   HDFS Read: 261 HDFS
> Write: 96 SUCCESS
> Job 1: Map: 1  Reduce: 1   Cumulative CPU: 0.88 sec   HDFS Read: 458 HDFS
> Write: 0 SUCCESS
> Total MapReduce CPU Time Spent: 2 seconds 880 msec
> OK
> Time taken: 38.771 seconds
>
>
> scala> sqlContext.sql("""select l_orderkey,
> sum(l_extendedprice*(1-l_discount)) as revenue, o_orderdate, o_shippriority
> from customer c join orders o on c.c_mktsegment = 'BUILDING' and
> c.c_custkey = o.o_custkey join lineitem l on l.l_orderkey = o.o_orderkey
> where o_orderdate < '1995-03-15' and l_shipdate > '1995-03-15' group by
> l_orderkey, o_orderdate, o_shippriority order by revenue desc, o_orderdate
> limit 10""").collect().foreach(println);
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 14 in stage 5.0 failed 4 times, most recent failure: Lost task 14.3 in stage 5.0 (TID 568, m34): java.lang.ClassCastException: java.lang.String cannot be cast to scala.math.BigDecimal
>         scala.math.Numeric$BigDecimalIsFractional$.minus(Numeric.scala:182)
>         org.apache.spark.sql.catalyst.expressions.Subtract$$anonfun$eval$3.apply(arithmetic.scala:64)
>         org.apache.spark.sql.catalyst.expressions.Subtract$$anonfun$eval$3.apply(arithmetic.scala:64)
>         org.apache.spark.sql.catalyst.expressions.Expression.n2(Expression.scala:114)
>         org.apache.spark.sql.catalyst.expressions.Subtract.eval(arithmetic.scala:64)
>         org.apache.spark.sql.catalyst.expressions.Expression.n2(Expression.scala:108)
>         org.apache.spark.sql.catalyst.expressions.Multiply.eval(arithmetic.scala:70)
>         org.apache.spark.sql.catalyst.expressions.Coalesce.eval(nullFunctions.scala:47)
>         org.apache.spark.sql.catalyst.expressions.Expression.n2(Expression.scala:108)
>         org.apache.spark.sql.catalyst.expressions.Add.eval(arithmetic.scala:58)
>         org.apache.spark.sql.catalyst.expressions.MutableLiteral.update(literals.scala:69)
>         org.apache.spark.sql.catalyst.expressions.SumFunction.update(aggregates.scala:433)
>         org.apache.spark.sql.execution.Aggregate$$anonfun$execute$1$$anonfun$7.apply(Aggregate.scala:167)
>         org.apache.spark.sql.execution.Aggregate$$anonfun$execute$1$$anonfun$7.apply(Aggregate.scala:151)
>         org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:596)
>         org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:596)
>         org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>         org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>         org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
>         org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>         org.apache.spark.scheduler.Task.run(Task.scala:54)
>         org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
>         java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         java.lang.Thread.run(Thread.java:745)
> Driver stacktrace:
> at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
> at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
> at scala.Option.foreach(Option.scala:236)
> at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
> at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
> at akka.actor.ActorCell.invoke(ActorCell.scala:456)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
> at akka.dispatch.Mailbox.run(Mailbox.scala:219)
> at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>
>
> Regards
> Arthur
>
