Hi Zhuoran, are there any more messages before this error? This error is not
the root cause.
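
Normally the real cause appears earlier in the same log; the executor's task
failure only wraps it. If the job ran on YARN, you can pull the complete
application log with the yarn CLI (a minimal sketch; the application id below
is a placeholder, take the real one from the step output or the
ResourceManager UI):

    # assuming Spark on YARN; replace <application_id> with the real id
    yarn logs -applicationId <application_id> > spark_app.log

It may also be worth double-checking kylin.properties; the URL in the error
should match the metadata setting there, whose default is:

    # default in kylin.properties for Kylin 2.0
    kylin.metadata.url=kylin_metadata@hbase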

2017-05-17 10:27 GMT+08:00 吕卓然 <lvzhuo...@fosun.com>:

> Hi all,
>
> Currently I am using Kylin 2.0.0 with CDH 5.8. It works fine when I use the
> MapReduce engine. However, when I try to use the Spark engine to build a cube,
> it fails at step 7, "Build Cube with Spark". Here is the log info:
>
> 17/05/16 17:50:01 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, fonova-ahz-cdh34): java.lang.IllegalArgumentException: Failed to find metadata store by url: kylin_metadata@hbase
>         at org.apache.kylin.common.persistence.ResourceStore.createResourceStore(ResourceStore.java:99)
>         at org.apache.kylin.common.persistence.ResourceStore.getStore(ResourceStore.java:110)
>         at org.apache.kylin.cube.CubeDescManager.getStore(CubeDescManager.java:370)
>         at org.apache.kylin.cube.CubeDescManager.reloadAllCubeDesc(CubeDescManager.java:298)
>         at org.apache.kylin.cube.CubeDescManager.<init>(CubeDescManager.java:109)
>         at org.apache.kylin.cube.CubeDescManager.getInstance(CubeDescManager.java:81)
>         at org.apache.kylin.cube.CubeInstance.getDescriptor(CubeInstance.java:109)
>         at org.apache.kylin.cube.CubeSegment.getCubeDesc(CubeSegment.java:119)
>         at org.apache.kylin.cube.CubeSegment.isEnableSharding(CubeSegment.java:467)
>         at org.apache.kylin.cube.kv.RowKeyEncoder.<init>(RowKeyEncoder.java:48)
>         at org.apache.kylin.cube.kv.AbstractRowKeyEncoder.createInstance(AbstractRowKeyEncoder.java:48)
>         at org.apache.kylin.engine.spark.SparkCubingByLayer$2.call(SparkCubingByLayer.java:205)
>         at org.apache.kylin.engine.spark.SparkCubingByLayer$2.call(SparkCubingByLayer.java:193)
>         at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1018)
>         at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1018)
>         at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>         at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:191)
>         at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:64)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>         at org.apache.spark.scheduler.Task.run(Task.scala:89)
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
>
> Any suggestions would help.
>
> Thanks,
>
> Zhuoran
>



-- 
Best regards,

Shaofeng Shi 史少锋
