[ https://issues.apache.org/jira/browse/KYLIN-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16460408#comment-16460408 ]

Qianying Huang commented on KYLIN-3272:
---------------------------------------

Failed to build a cube with the Spark engine at "#7 Step Name: Build Cube with Spark" when running Spark 2.3.0.
{panel:title=Output Log}
...

Exception in thread "main" java.lang.RuntimeException: error execute org.apache.kylin.engine.spark.SparkCubingByLayer
 at org.apache.kylin.common.util.AbstractApplication.execute(AbstractApplication.java:42)
 at org.apache.kylin.common.util.SparkEntry.main(SparkEntry.java:44)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
 at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
 at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
 at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.SparkException: Job aborted.
 at org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:96)
 at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1083)
 at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1081)
 at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1081)
 at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
 at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
 at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
 at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1081)
 at org.apache.spark.api.java.JavaPairRDD.saveAsNewAPIHadoopDataset(JavaPairRDD.scala:831)
 at org.apache.kylin.engine.spark.SparkCubingByLayer.saveToHDFS(SparkCubingByLayer.java:241)
 at org.apache.kylin.engine.spark.SparkCubingByLayer.execute(SparkCubingByLayer.java:194)
 at org.apache.kylin.common.util.AbstractApplication.execute(AbstractApplication.java:37)
 ... 11 more
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 4, bigdata05, executor 1): java.lang.IllegalArgumentException: Class is not registered: org.apache.spark.internal.io.FileCommitProtocol$TaskCommitMessage
Note: To register this class use: kryo.register(org.apache.spark.internal.io.FileCommitProtocol$TaskCommitMessage.class);
 at com.esotericsoftware.kryo.Kryo.getRegistration(Kryo.java:488)
 at com.twitter.chill.KryoBase.getRegistration(KryoBase.scala:52)
 at com.esotericsoftware.kryo.util.DefaultClassResolver.writeClass(DefaultClassResolver.java:97)
 at com.esotericsoftware.kryo.Kryo.writeClass(Kryo.java:517)
 at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:622)
 at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:347)
 at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:393)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
 at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1599)
 at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1587)
 at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1586)
 at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
 at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1586)
 at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
 at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
 at scala.Option.foreach(Option.scala:257)
 at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
 at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1820)
 at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1769)
 at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1758)
 at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
 at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
 at org.apache.spark.SparkContext.runJob(SparkContext.scala:2027)
 at org.apache.spark.SparkContext.runJob(SparkContext.scala:2048)
 at org.apache.spark.SparkContext.runJob(SparkContext.scala:2080)
 at org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:78)
 ... 22 more
Caused by: java.lang.IllegalArgumentException: Class is not registered: org.apache.spark.internal.io.FileCommitProtocol$TaskCommitMessage
Note: To register this class use: kryo.register(org.apache.spark.internal.io.FileCommitProtocol$TaskCommitMessage.class);
 at com.esotericsoftware.kryo.Kryo.getRegistration(Kryo.java:488)
 at com.twitter.chill.KryoBase.getRegistration(KryoBase.scala:52)
 at com.esotericsoftware.kryo.util.DefaultClassResolver.writeClass(DefaultClassResolver.java:97)
 at com.esotericsoftware.kryo.Kryo.writeClass(Kryo.java:517)
 at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:622)
 at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:347)
 at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:393)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)

...
{panel}
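
Kryo only raises "Class is not registered" when spark.kryo.registrationRequired=true, so this looks like a missing Kryo registration rather than an HDFS write problem: per the trace, the executor fails inside Executor$TaskRunner while serializing its task result, the Spark-internal FileCommitProtocol$TaskCommitMessage that Spark 2.3's SparkHadoopWriter path produces, and that class is not on the registration list. Below is a minimal sketch of a possible workaround, not a confirmed fix; the class name is taken verbatim from the exception message, while KryoWorkaround and withTaskCommitMessage are hypothetical names of mine.

{code:java}
import org.apache.spark.SparkConf;

public final class KryoWorkaround {

    // Register the Spark-internal class named in the exception so Kryo can
    // serialize it when spark.kryo.registrationRequired=true. The class is
    // nested in a Scala companion object, so it is looked up reflectively
    // by its binary name rather than referenced in Java source.
    public static SparkConf withTaskCommitMessage(SparkConf conf) {
        try {
            Class<?> msg = Class.forName(
                    "org.apache.spark.internal.io.FileCommitProtocol$TaskCommitMessage");
            return conf.registerKryoClasses(new Class<?>[] { msg });
        } catch (ClassNotFoundException e) {
            // Older Spark versions do not ship this class; nothing to register.
            return conf;
        }
    }
}
{code}

Alternatively (untested), the same class could presumably be appended to spark.kryo.classesToRegister via kylin.properties, since kylin.engine.spark-conf.* entries are passed through to spark-submit.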

> Upgrade Spark dependency to 2.3.0
> ---------------------------------
>
>                 Key: KYLIN-3272
>                 URL: https://issues.apache.org/jira/browse/KYLIN-3272
>             Project: Kylin
>          Issue Type: Improvement
>          Components: Spark Engine
>            Reporter: Ted Yu
>            Priority: Minor
>
> Currently Spark 2.1.2 is used.
> Spark 2.3.0 was just released.
> We should upgrade the dependency to 2.3.0.
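
For reference, a sketch of what the bump might look like, assuming the parent pom centralizes the Spark version in a spark.version property (unverified; adjust to wherever the version is actually declared):

{code:xml}
<!-- Hypothetical sketch: bump the Spark dependency in the parent pom,
     assuming a centralized <spark.version> property is in use. -->
<properties>
  <spark.version>2.3.0</spark.version>
</properties>
{code}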


