kylin created KYLIN-4360:
----------------------------

             Summary: Building a cube with the Spark engine fails
                 Key: KYLIN-4360
                 URL: https://issues.apache.org/jira/browse/KYLIN-4360
             Project: Kylin
          Issue Type: Bug
         Environment: CDH 6.3.2 Hadoop cluster; Kylin apache-kylin-3.0.0-bin-cdh60
            Reporter: kylin


I am using a CDH 6.3.2 Hadoop cluster, and the Kylin version is apache-kylin-3.0.0-bin-cdh60.

Building the cube with the MR engine succeeds, but with the Spark engine it fails at step "#8 Step Name: Convert Cuboid Data to HFile" (Duration: 0.59 mins, Waiting: 0 seconds). The error is as follows:

client token: N/A
 diagnostics: User class threw exception: java.lang.RuntimeException: error execute org.apache.kylin.storage.hbase.steps.SparkCubeHFile. Root cause: Job aborted.
 at org.apache.kylin.common.util.AbstractApplication.execute(AbstractApplication.java:42)
 at org.apache.kylin.common.util.SparkEntry.main(SparkEntry.java:44)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721)
Caused by: org.apache.spark.SparkException: Job aborted.
 at org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:100)
 at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1083)
 at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1081)
 at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1081)
 at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
 at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
 at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
 at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1081)
 at org.apache.spark.api.java.JavaPairRDD.saveAsNewAPIHadoopDataset(JavaPairRDD.scala:831)
 at org.apache.kylin.storage.hbase.steps.SparkCubeHFile.execute(SparkCubeHFile.java:238)
 at org.apache.kylin.common.util.AbstractApplication.execute(AbstractApplication.java:37)
 ... 6 more
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 1.0 failed 4 times, most recent failure: Lost task 1.3 in stage 1.0 (TID 7, prod-bigdata-server-02, executor 1): java.lang.NumberFormatException: For input string: "30s"
 at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Long.parseLong(Long.java:589)
 at java.lang.Long.parseLong(Long.java:631)
 at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:1311)
 at org.apache.hadoop.hdfs.DFSClient$Conf.<init>(DFSClient.java:502)
 at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:638)
 at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619)
 at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
 at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
 at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
 at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
 at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
 at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
 at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
 at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.<init>(FileOutputCommitter.java:113)
 at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.<init>(FileOutputCommitter.java:88)
 at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.getOutputCommitter(FileOutputFormat.java:309)
 at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.setupCommitter(HadoopMapReduceCommitProtocol.scala:100)
 at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.setupTask(HadoopMapReduceCommitProtocol.scala:217)
 at org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:117)
 at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:83)
 at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:78)
 at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
 at org.apache.spark.scheduler.Task.run(Task.scala:109)
 at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
 at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1651)
 at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1639)
 at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1638)
 at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
 at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1638)
 at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
 at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
 at scala.Option.foreach(Option.scala:257)
 at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
 at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1872)
 at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1821)
 at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1810)
 at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
 at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
 at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
 at org.apache.spark.SparkContext.runJob(SparkContext.scala:2055)
 at org.apache.spark.SparkContext.runJob(SparkContext.scala:2087)
 at org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:78)
 ... 16 more
Caused by: java.lang.NumberFormatException: For input string: "30s"
 at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Long.parseLong(Long.java:589)
 at java.lang.Long.parseLong(Long.java:631)
 at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:1311)
 at org.apache.hadoop.hdfs.DFSClient$Conf.<init>(DFSClient.java:502)
 at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:638)
 at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619)
 at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
 at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
 at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
 at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
 at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
 at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
 at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
 at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.<init>(FileOutputCommitter.java:113)
 at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.<init>(FileOutputCommitter.java:88)
 at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.getOutputCommitter(FileOutputFormat.java:309)
 at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.setupCommitter(HadoopMapReduceCommitProtocol.scala:100)
 at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.setupTask(HadoopMapReduceCommitProtocol.scala:217)
 at org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:117)
 at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:83)
 at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:78)
 at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
 at org.apache.spark.scheduler.Task.run(Task.scala:109)
 at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)

ApplicationMaster host: 10.3.13.2
 ApplicationMaster RPC port: 0
 queue: root.users.kylin
 start time: 1579771783420
 final status: FAILED
 tracking URL: http://prod-bigdata-server-01:8088/proxy/application_1579520750792_0230/
 user: kylin
Exception in thread "main" org.apache.spark.SparkException: Application application_1579520750792_0230 finished with failed status
 at org.apache.spark.deploy.yarn.Client.run(Client.scala:1165)
 at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1520)
 at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
 at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
 at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2020-01-23 17:30:09 INFO ShutdownHookManager:54 - Shutdown hook called
2020-01-23 17:30:09 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-4cae4b51-6197-4cac-8fed-d82a8744f00d
2020-01-23 17:30:09 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-6142de85-bba3-43e7-8a0d-5ea198636f94
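
From the stack trace, the root cause appears to be that the HDFS client used by this Spark build is a Hadoop 2.x client, while CDH 6.3.2 ships Hadoop 3 configuration files. Hadoop 3 allows duration values with time-unit suffixes (read via Configuration.getTimeDuration()), but the Hadoop 2.x code path in DFSClient$Conf reads the value with Configuration.getLong(), which hands the raw string to Long.parseLong() and therefore throws NumberFormatException on "30s". The log does not name the offending property; dfs.client.datanode-restart.timeout (Hadoop 3 default "30s") is a plausible candidate given the DFSClient$Conf frame, but that is an assumption. A minimal Java sketch of the two parse behaviors, assuming a recent hadoop-common on the classpath:

import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class DurationSuffixRepro {
    public static void main(String[] args) {
        // Start from an empty Configuration; the property name is the
        // assumed candidate above, set to the suffixed value from the log.
        Configuration conf = new Configuration(false);
        conf.set("dfs.client.datanode-restart.timeout", "30s");

        // Hadoop 2.x client path: getLong() -> Long.parseLong("30s")
        // -> NumberFormatException, matching the executor trace above.
        try {
            conf.getLong("dfs.client.datanode-restart.timeout", 30L);
        } catch (NumberFormatException e) {
            System.out.println("getLong fails: " + e.getMessage());
        }

        // Newer client path: getTimeDuration() understands the suffix.
        long seconds = conf.getTimeDuration(
                "dfs.client.datanode-restart.timeout", 30, TimeUnit.SECONDS);
        System.out.println("getTimeDuration parses it as " + seconds + " s");
    }
}

This also explains why the MR engine succeeds: it runs with the cluster's own Hadoop 3 jars, whereas the Spark job ships an older HDFS client.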
The command is:
export HADOOP_CONF_DIR=/data/kylin/hadoop_conf && /data/kylin/spark/bin/spark-submit --class org.apache.kylin.common.util.SparkEntry --name "Convert Cuboid Data to HFile" --conf spark.executor.cores=2 --conf spark.hadoop.yarn.timeline-service.enabled=false --conf spark.hadoop.mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.DefaultCodec --conf spark.master=yarn --conf spark.hadoop.mapreduce.output.fileoutputformat.compress=true --conf spark.executor.
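
If the diagnosis above is right, one commonly suggested workaround (an assumption, not verified on this cluster) is to give the old client a value it can parse by stripping the unit suffix, either directly in the hdfs-site.xml under the HADOOP_CONF_DIR used above (/data/kylin/hadoop_conf), or through Kylin's Spark passthrough properties, since kylin.engine.spark-conf.* entries become spark-submit --conf flags and spark.hadoop.* values are copied into the job's Hadoop Configuration. A hypothetical kylin.properties override:

kylin.engine.spark-conf.spark.hadoop.dfs.client.datanode-restart.timeout=30

Alternatively, pointing Kylin at a Spark build whose bundled Hadoop client matches CDH 6 avoids the version mismatch altogether.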



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
