That's good. Thanks for the update.

Best regards,

Shaofeng Shi 史少锋
Apache Kylin PMC
Email: [email protected]

Apache Kylin FAQ: https://kylin.apache.org/docs/gettingstarted/faq.html
Join Kylin user mail group: [email protected]
Join Kylin dev mail group: [email protected]




li_cong521 <[email protected]> wrote on Mon, Oct 26, 2020 at 3:40 PM:

> Hello,
> The error has been solved.
> Fix: in hbase-site.xml, set hbase.rootdir=hdfs://master2/hbase
> Thanks~
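>
> For reference, a minimal hbase-site.xml snippet for the fix described above (values taken from this thread; adjust the nameservice to your own cluster):
>
>   <property>
>     <!-- HBase root directory; per the fix above, it should live on the same HDFS filesystem where the HFiles are written -->
>     <name>hbase.rootdir</name>
>     <value>hdfs://master2/hbase</value>
>   </property>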
>
> At 2020-10-26 14:45:18, "li_cong521" <[email protected]> wrote:
>
> Hello,
>
> Has anybody else hit this error? The cube build is stuck at step 20.
> The value I set in kylin.properties is:
> kylin.storage.hbase.cluster-fs=hdfs://mycluster/hbase
> The Hadoop fs.defaultFS value is hdfs://master2.
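>
> For context, the two filesystem settings described above, side by side (values as given in this mail):
>
>   kylin.properties:  kylin.storage.hbase.cluster-fs=hdfs://mycluster/hbase
>   core-site.xml:     fs.defaultFS=hdfs://master2
>
> They refer to two different HDFS nameservices (mycluster vs. master2), the same pair that appears in the "Wrong FS" message in the log below.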
> The log follows:
> org.apache.kylin.engine.mr.exception.HadoopShellException: java.io.IOException: BulkLoad encountered an unrecoverable problem
>   at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.bulkLoadPhase(LoadIncrementalHFiles.java:534)
>   at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:465)
>   at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:343)
>   at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:1069)
>   at org.apache.kylin.engine.mr.MRUtil.runMRJob(MRUtil.java:93)
>   at org.apache.kylin.storage.hbase.steps.BulkLoadJob.run(BulkLoadJob.java:102)
>   at org.apache.kylin.engine.mr.MRUtil.runMRJob(MRUtil.java:93)
>   at org.apache.kylin.engine.mr.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:63)
>   at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:167)
>   at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:71)
>   at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:167)
>   at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:114)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=35, exceptions:
> Mon Oct 26 11:15:55 CST 2020, RpcRetryingCaller{globalStartTime=1603682155923, pause=100, retries=35}, java.io.IOException: java.io.IOException: Wrong FS: hdfs://Master2/kylin/kylin_metadata/kylin-39e914b5-b9f5-3d14-83e8-45da3eb54657/kylin_sales_cube/hfile/F2/4be8dd587c7b4ddebac0d4c30eeaf260, expected: hdfs://mycluster
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2239)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: Wrong FS: hdfs://Master2/kylin/kylin_metadata/kylin-39e914b5-b9f5-3d14-83e8-45da3eb54657/kylin_sales_cube/hfile/F2/4be8dd587c7b4ddebac0d4c30eeaf260, expected: hdfs://mycluster
>   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:643)
>   at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:184)
>   at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:101)
>   at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1068)
>   at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
>   at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
>   at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:397)
>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1398)
>   at org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:387)
>   at org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:466)
>   at org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:780)
>   at org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:5404)
>   at org.apache.hadoop.hbase.regionserver.RSRpcServices.bulkLoadHFile(RSRpcServices.java:1970)
>   at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33650)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2188)
>   ... 4 more
> Mon Oct 26 11:15:56 CST 2020, RpcRetryingCaller{globalStartTime=1603682155923, pause=100, retries=35}, java.io.IOException: java.io.IOException: Wrong FS: hdfs://Master2/kylin/kylin_metadata/kylin-39e914b5-b9f5-3d14-83e8-45da3eb54657/kylin_sales_cube/hfile/F2/4be8dd587c7b4ddebac0d4c30eeaf260, expected: hdfs://mycluster
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2239)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:748)
>
