[ https://issues.apache.org/jira/browse/KYLIN-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17280522#comment-17280522 ]

ASF GitHub Bot commented on KYLIN-4889:
---------------------------------------

hit-lacus commented on pull request #1565:
URL: https://github.com/apache/kylin/pull/1565#issuecomment-774680450


   Merged in https://github.com/apache/kylin/pull/1580


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


> Query error when spark engine in local mode
> -------------------------------------------
>
>                 Key: KYLIN-4889
>                 URL: https://issues.apache.org/jira/browse/KYLIN-4889
>             Project: Kylin
>          Issue Type: Bug
>    Affects Versions: v4.0.0-alpha
>            Reporter: Feng Zhu
>            Assignee: Feng Zhu
>            Priority: Major
>             Fix For: v4.0.0-GA
>
>
> When I query with the Spark engine in local mode, with -Dspark.local=true, the
> Spark application was still submitted to YARN, and the following error
> occurred:
>
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 6, sandbox.hortonworks.com, executor 1): java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
>     at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2233)
>     at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1405)
>     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2291)
>     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2209)
>     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
>     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
>     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2285)
>     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2209)
>     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
>     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
>     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
>     at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
>     at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
>     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:88)
>     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
>     at org.apache.spark.scheduler.Task.run(Task.scala:123)
>     at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
>     at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> Driver stacktrace:
>
> while executing SQL: "select * from (select KYLIN_SALES.PART_DT , sum(KYLIN_SALES.PRICE ) from KYLIN_SALES group by KYLIN_SALES.PART_DT union select KYLIN_SALES.PART_DT , max(KYLIN_SALES.PRICE ) from KYLIN_SALES group by KYLIN_SALES.PART_DT union select KYLIN_SALES.PART_DT , count(*) from KYLIN_SALES group by KYLIN_SALES.PART_DT union select KYLIN_SALES.PART_DT , count(distinct KYLIN_SALES.PRICE) from KYLIN_SALES group by KYLIN_SALES.PART_DT) limit 501"
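For reference, the intended effect of the -Dspark.local=true flag can be sketched as below; the bug report says this flag was being ignored, so the application still went to YARN. The helper name resolveMaster and the "yarn" fallback are illustrative assumptions for this sketch, not Kylin's actual code.

```java
// Minimal sketch of selecting the Spark master from the -Dspark.local system
// property. Assumption: the flag, when true, should force "local[*]" instead
// of submitting to YARN, which is what the issue describes as the expected
// behavior.
public class SparkLocalFlag {

    static String resolveMaster() {
        // Defaults to false when -Dspark.local is not supplied on the JVM
        // command line.
        boolean local = Boolean.parseBoolean(System.getProperty("spark.local", "false"));
        return local ? "local[*]" : "yarn";
    }

    public static void main(String[] args) {
        System.setProperty("spark.local", "true");
        System.out.println(resolveMaster()); // prints "local[*]"
    }
}
```

The ClassCastException on List$SerializationProxy in the trace is a symptom commonly seen when tasks are deserialized by an executor JVM whose classloader setup differs from the driver's, which is consistent with the job unexpectedly running on YARN instead of in-process.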



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
