On each Hadoop platform/environment we tested, we do NOT use the Spark provided by 
the environment (HDP, CDH, or AWS EMR), but download a specific version of Apache 
Spark. For CDH 6.2, we use Apache Spark 3.1.1 (and I think Spark 2.4.6 should work 
as well).
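
The setup above can be sketched as follows (a sketch, not an official procedure: the download URL, install path, and Spark/Hadoop build are assumptions to be adapted to your environment):

```shell
# Download a stock Apache Spark instead of using the platform-provided one.
# Pick the build that matches your Hadoop version.
wget https://archive.apache.org/dist/spark/spark-3.1.1/spark-3.1.1-bin-hadoop3.2.tgz
tar -zxf spark-3.1.1-bin-hadoop3.2.tgz -C /opt

# Point Kylin at this Spark via SPARK_HOME before starting Kylin.
export SPARK_HOME=/opt/spark-3.1.1-bin-hadoop3.2
```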

--

Best wishes to you!
From: Xiaoxiang Yu




At 2021-08-25 16:34:15, "washyou112" <[email protected]> wrote:

Maybe Spark 2.4.0 does not match.
 


This is my environment information:

washyou112 <[email protected]>
On 8/25/2021 15:57, Xiaoxiang Yu <[email protected]> wrote:
Hi, 
    This looks like it is caused by a class-loading conflict with jars provided by 
the environment. Here is a list of Hadoop platforms we have verified and tested: 
https://cwiki.apache.org/confluence/display/KYLIN/Support+Hadoop+Version+Matrix+of+Kylin+4.0.0
 . 
Please let us know your environment information (versions), or you can choose to use 
one of our recommended Hadoop platforms.








At 2021-08-25 15:38:25, "washyou112" <[email protected]> wrote:

Kylin 4.0 fails in the first step of building a cube. From kylin.log:
2021-08-24 10:45:00,232 ERROR [Thread-1] application.JobMonitor : Job failed the 1 times.
java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.JsonMappingException.<init>(Ljava/io/Closeable;Ljava/lang/String;)V
    at com.fasterxml.jackson.module.scala.JacksonModule$class.setupModule(JacksonModule.scala:61)
    at com.fasterxml.jackson.module.scala.DefaultScalaModule.setupModule(DefaultScalaModule.scala:17)
    at com.fasterxml.jackson.databind.ObjectMapper.registerModule(ObjectMapper.java:718)
    at org.apache.spark.util.JsonProtocol$.<init>(JsonProtocol.scala:60)
    at org.apache.spark.util.JsonProtocol$.<clinit>(JsonProtocol.scala)
    at org.apache.spark.scheduler.EventLoggingListener$.initEventLog(EventLoggingListener.scala:353)
    at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:135)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:533)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2549)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:944)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:935)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:935)
    at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:283)
    at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:89)
    at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2021-08-24 10:45:00,246 ERROR [Thread-1] application.JobWorkSpace : Job failed eventually. Reason: Error occurred when generate retry configuration.
java.util.NoSuchElementException: spark.executor.memory
    at org.apache.spark.SparkConf$$anonfun$get$1.apply(SparkConf.scala:245)
    at org.apache.spark.SparkConf$$anonfun$get$1.apply(SparkConf.scala:245)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.SparkConf.get(SparkConf.scala:245)
    at org.apache.spark.autoheal.ExceptionTerminator$.incMemory(ExceptionTerminator.scala:70)
    at org.apache.spark.autoheal.ExceptionTerminator$.resolveException(ExceptionTerminator.scala:45)
    at org.apache.spark.application.JobMonitor.handleResourceLack(JobMonitor.scala:53)
    at org.apache.spark.application.JobMonitor$$anon$1.onReceive(JobMonitor.scala:33)
    at org.apache.spark.scheduler.KylinJobEventLoop$$anonfun$onReceive$1.apply(KylinJobEventLoop.scala:42)
    at org.apache.spark.scheduler.KylinJobEventLoop$$anonfun$onReceive$1.apply(KylinJobEventLoop.scala:42)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at org.apache.spark.scheduler.KylinJobEventLoop.onReceive(KylinJobEventLoop.scala:42)
    at org.apache.spark.scheduler.KylinJobEventLoop.onReceive(KylinJobEventLoop.scala:29)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
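
The second error looks secondary: the retry logic apparently reads `spark.executor.memory` from the job's Spark conf, which was never populated because the first error aborted SparkContext creation. If executor settings need to be pinned explicitly, Kylin's `kylin.engine.spark-conf.*` passthrough in `kylin.properties` can be used (the values below are illustrative, not recommendations):

```properties
# Passed through to the Spark job as spark.executor.memory / spark.executor.cores
kylin.engine.spark-conf.spark.executor.memory=4G
kylin.engine.spark-conf.spark.executor.cores=2
```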


Is this a jar conflict problem? How can I solve it?
washyou112 <[email protected]>
