It seems your HBase expects to read data from hdfs://wanda, while the HFile
is on the other HDFS; please double-check.
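For context, the "Wrong FS" message below comes from Hadoop's
FileSystem.checkPath, which rejects any path whose scheme or authority differs
from the filesystem it was handed. A stdlib-only sketch of that comparison
(the URIs are taken from your log; the helper name and simplification are mine,
not Hadoop's actual code):

```java
import java.net.URI;
import java.util.Objects;

public class WrongFsCheck {
    // Simplified version of the check FileSystem.checkPath performs: a path
    // is rejected when its scheme or authority differs from the filesystem
    // it is handed to. A path with no scheme is resolved against the default
    // FS and therefore accepted.
    public static boolean sameFileSystem(URI fsUri, URI pathUri) {
        String pathScheme = pathUri.getScheme();
        if (pathScheme == null) {
            return true; // relative to the default FS
        }
        return pathScheme.equalsIgnoreCase(fsUri.getScheme())
            && Objects.equals(pathUri.getAuthority(), fsUri.getAuthority());
    }

    public static void main(String[] args) {
        // The region server's filesystem, per "expected:" in the log:
        URI regionServerFs = URI.create("hdfs://wanda");
        // Where the bulk load says the HFile lives:
        URI hfile = URI.create("hdfs://10.209.21.11:10010/tmp/kylin/hfile");
        System.out.println(sameFileSystem(regionServerFs, hfile)); // prints false
    }
}
```

Because the authorities differ (`wanda` vs. `10.209.21.11:10010`), the region
server refuses the path, which is exactly the IllegalArgumentException in your
stack trace.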

We don't have such a separate environment to test this, so if you can confirm
this is a bug and provide a patch, that would be helpful for other users.
Thanks!
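If the mismatch above is the cause, a sketch of the usual fix for a split
Hive/HBase deployment is to make kylin.hbase.cluster.fs match the filesystem
the region servers actually expect (the URI after "expected:" in the error,
hdfs://wanda here), rather than whichever address you happen to reach the
cluster by. This is an assumption to verify against your cluster, not a
confirmed fix:

```properties
# conf/kylin.properties -- sketch, assuming the region servers' fs.defaultFS
# really is the "expected:" URI from the error. Verify first with
# `hdfs getconf -confKey fs.defaultFS` on the HBase cluster before changing.
kylin.hbase.cluster.fs=hdfs://wanda
```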

2015-11-12 11:15 GMT+08:00 LIU Ze (刘则) <[email protected]>:

> thanks,
>
>  HDFS version of the Hive cluster:  2.7.1
>  HDFS version of the HBase cluster: 2.4.0
>  HBase version: 0.98.12
> ________________________________
> and kylin.hbase.cluster.fs is set to:
> kylin.hbase.cluster.fs=hdfs://10.209.21.11:10010
>
> full log:
>
> [pool-5-thread-1]:[2015-11-12
> 10:20:07,961][INFO][org.apache.kylin.job.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:57)]
> - parameters of the HadoopShellExecutable:
> [pool-5-thread-1]:[2015-11-12
> 10:20:07,962][INFO][org.apache.kylin.job.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:58)]
> -  -input
> hdfs://10.209.21.11:10010/tmp/kylin/kylin_metadata/kylin-061e3569-a973-4c0a-8d89-84c9e0b12117/test2/hfile
> -htablename KYLIN_S71CJH8UZB -cubename test2
> [pool-5-thread-1]:[2015-11-12
> 10:20:08,067][DEBUG][org.apache.kylin.job.hadoop.hbase.BulkLoadJob.run(BulkLoadJob.java:86)]
> - Start to run LoadIncrementalHFiles
> [pool-5-thread-1]:[2015-11-12
> 10:20:09,108][ERROR][org.apache.kylin.job.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:64)]
> - error execute
> HadoopShellExecutable{id=061e3569-a973-4c0a-8d89-84c9e0b12117-10, name=Load
> HFile to HBase Table, state=RUNNING}
> java.io.IOException: BulkLoad encountered an unrecoverable problem
>         at
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.bulkLoadPhase(LoadIncrementalHFiles.java:443)
>         at
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:375)
>         at
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:951)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>         at
> org.apache.kylin.job.hadoop.hbase.BulkLoadJob.run(BulkLoadJob.java:87)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>         at
> org.apache.kylin.job.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:62)
>         at
> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
>         at
> org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:51)
>         at
> org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:107)
>         at
> org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:130)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException:
> Failed after attempts=3, exceptions:
> Wed Nov 11 18:20:08 GMT-08:00 2015,
> org.apache.hadoop.hbase.client.RpcRetryingCaller@7544d58e,
> java.io.IOException: java.io.IOException: Wrong FS:
> hdfs://10.209.21.11:10010/tmp/kylin/kylin_metadata/kylin-061e3569-a973-4c0a-8d89-84c9e0b12117/test2/hfile/F1/fae5ec4c764e4bd8921d6a43d2295493,
> expected: hdfs://wanda
>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2132)
>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>         at
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>         at
> org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalArgumentException: Wrong FS:
> hdfs://10.209.21.11:10010/tmp/kylin/kylin_metadata/kylin-061e3569-a973-4c0a-8d89-84c9e0b12117/test2/hfile/F1/fae5ec4c764e4bd8921d6a43d2295493,
> expected: hdfs://wanda
>         at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:642)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:181)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:92)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1106)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
>         at
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
>         at
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:398)
>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
>         at
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:367)
>         at
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:446)
>         at
> org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:694)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3855)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3761)
>         at
> org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3426)
>         at
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:30948)
>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2093)
>         ... 4 more
>
> Wed Nov 11 18:20:08 GMT-08:00 2015,
> org.apache.hadoop.hbase.client.RpcRetryingCaller@7544d58e,
> java.io.IOException: java.io.IOException: Wrong FS:
> hdfs://10.209.21.11:10010/tmp/kylin/kylin_metadata/kylin-061e3569-a973-4c0a-8d89-84c9e0b12117/test2/hfile/F1/fae5ec4c764e4bd8921d6a43d2295493,
> expected: hdfs://wanda
>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2132)
>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>         at
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>         at
> org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalArgumentException: Wrong FS:
> hdfs://10.209.21.11:10010/tmp/kylin/kylin_metadata/kylin-061e3569-a973-4c0a-8d89-84c9e0b12117/test2/hfile/F1/fae5ec4c764e4bd8921d6a43d2295493,
> expected: hdfs://wanda
>         at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:642)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:181)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:92)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1106)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
>         at
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
>         at
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:398)
>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
>         at
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:367)
>         at
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:446)
>         at
> org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:694)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3855)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3761)
>         at
> org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3426)
>         at
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:30948)
>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2093)
>         ... 4 more
>
>
> Which Kylin version are you using? Besides, it seems the stack trace you
> pasted is truncated; please provide the full error stack so we can see
> which part of Kylin triggered it.
>
> Also, did you configure "kylin.hbase.cluster.fs" in "conf/kylin.properties"
> with your HBase HDFS host info?
>
> 2015-11-11 21:02 GMT+08:00 Xiaoyu Wang <[hidden email]>:
>
> > Which version Hadoop,HBase do you use?
> >
> >
> > On 2015-11-11 20:49, LIU Ze (刘则) wrote:
> >
> >> Hi all,
> >>
> >> In the "Load HFile to HBase Table" step, connecting to the region server
> >> is retried 35 times.
> >>
> >> The region server hosts have been added to /etc/hosts; why can't it
> >> connect to the region server?
> >> ________________________________
> >>
> >> 2015-11-11 20:32:04,549 DEBUG [LoadIncrementalHFiles-1]
> >> mapreduce.LoadIncrementalHFiles: Going to connect to server
> >>
> region=KYLIN_A1CZUZI0MU,,1447244994197.ae3549d9a76c212fea1330f849d4b4ae.,
> >> hostname=CDM1C22-209021018,10620,1444379096591, seqNum=1 for row  with
> >> hfile group [{[B@760ed940,
> >> hdfs://10.209.21.11:10010/tmp/kylin/kylin_metadata/kylin-dd2f8f9f-76b1-44bb-8a7e-96b9c3924e0a/test2/hfile/F1/b4507237baf944d2818933935f22441d}]
> >> 2015-11-11 20:32:04,558 INFO  [LoadIncrementalHFiles-1]
> >> client.RpcRetryingCaller: Call exception, tries=12, retries=35,
> >> retryTime=108455ms, msg=row '' on table 'KYLIN_A1CZUZI0MU' at
> >>
> region=KYLIN_A1CZUZI0MU,,1447244994197.ae3549d9a76c212fea1330f849d4b4ae.,
> >> hostname=CDM1C22-209021018,10620,1444379096591, seqNum=1
> >> 2015-11-11 20:32:24,577 DEBUG [LoadIncrementalHFiles-1]
> >> client.HConnectionManager$HConnectionImplementation: Removed
> >> CDM1C22-209021018:10620 as a location of
> >> KYLIN_A1CZUZI0MU,,1447244994197.ae3549d9a76c212fea1330f849d4b4ae. for
> >> tableName=KYLIN_A1CZUZI0MU from cache
> >>
> >> HDFS of HBase: hdfs://10.209.21.11:10010
> >> HDFS of Hive:  hdfs://wanda
> >>
> >> Wed Nov 11 04:30:16 GMT-08:00 2015,
> >> org.apache.hadoop.hbase.client.RpcRetryingCaller@53cb23f2,
> >> java.io.IOException: java.io.IOException: Wrong FS:
> >> hdfs://10.209.21.11:10010/tmp/kylin/kylin_metadata/kylin-dd2f8f9f-76b1-44bb-8a7e-96b9c3924e0a/test2/hfile/F1/b4507237baf944d2818933935f22441d,
> >> expected: hdfs://wanda
> >>          at
> >> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2132)
> >>          at
> >> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
> >>          at
> >>
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> >>          at
> >> org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> >>          at java.lang.Thread.run(Thread.java:745)
> >> Caused by: java.lang.IllegalArgumentException: Wrong FS:
> >> hdfs://10.209.21.11:10010/tmp/kylin/kylin_metadata/kylin-dd2f8f9f-76b1-44bb-8a7e-96b9c3924e0a/test2/hfile/F1/b4507237baf944d2818933935f22441d,
> >> expected: hdfs://wanda
> >>          at
> org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:642)
> >>          at
> >>
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:181)
> >>          at
> >>
> org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:92)
> >>          at
> >>
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1106)
> >>          at
> >>
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
> >>
> >
> >
> ... [rest of quote elided]
>
>
> --
> Best regards,
>
> Shaofeng Shi
>



-- 
Best regards,

Shaofeng Shi
