Hello, could anybody give me a hand?

Recently, I upgraded my Druid server from 0.9.2 to the latest 0.12.3, and I ran 
into trouble when I ran my previous application against the new Druid. It worked 
fine on 0.9.2. After checking the logs, I found that for double fields in my 
schema, the query result now returns 0 instead of 0.0, which makes my JSON 
parser unhappy and throws an exception like this:

18/12/11 19:36:37 ERROR thriftserver.SparkExecuteStatementOperation: Error running hive query:
org.apache.hive.service.cli.HiveSQLException: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 11.0 failed 1 times, most recent failure: Lost task 0.0 in stage 11.0 (TID 11, localhost): java.lang.ClassCastException: scala.math.BigInt cannot be cast to java.lang.Double
        at scala.runtime.BoxesRunTime.unboxToDouble(BoxesRunTime.java:114)
        at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow$class.getDouble(rows.scala:44)
        at org.apache.spark.sql.catalyst.expressions.GenericInternalRow.getDouble(rows.scala:221)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
        at org.apache.spark.scheduler.Task.run(Task.scala:86)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
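
To show what I think is happening, here is a minimal sketch with Spark (the column 
name "metric_d" and the values are made up, not my real schema): when the JSON 
literal is 0.0, Spark infers a double column, but when it is 0 it infers a long, 
and downstream code that expects a double then fails. If I understand the issue 
correctly, forcing an explicit schema on read would be a possible workaround, as 
in the last part of the sketch.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{DoubleType, StructField, StructType}

object DoubleZeroSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("double-zero").getOrCreate()
    val sc = spark.sparkContext

    // Druid 0.9.2 serialized a double metric as 0.0, so Spark infers DoubleType.
    val oldStyle = spark.read.json(sc.parallelize(Seq("""{"metric_d": 0.0}""")))
    oldStyle.printSchema() // metric_d: double

    // Druid 0.12.3 serializes the same zero value as 0, so Spark infers LongType,
    // and anything downstream that expects a double starts failing.
    val newStyle = spark.read.json(sc.parallelize(Seq("""{"metric_d": 0}""")))
    newStyle.printSchema() // metric_d: long

    // Forcing an explicit schema on read keeps the field a double either way.
    val schema = StructType(Seq(StructField("metric_d", DoubleType)))
    val forced = spark.read.schema(schema).json(sc.parallelize(Seq("""{"metric_d": 0}""")))
    forced.printSchema() // metric_d: double
    forced.show()

    spark.stop()
  }
}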
   


Could anybody confirm whether this is a bug, or help me work around this 
annoying problem? Thanks a million!

Michael
