Thanks for your reply!
I have set the property
"kylin.hbase.cluster.fs=hdfs://CDM1C22-209021011.wdds.com:10010", but the
"kylin-coprocessor-1.0-incubating-${num}.jar" is still created on the HDFS of
Hive.
The nameservices of HDFS A and HDFS B are both named "wanda". HDFS A serves
Hive, HDFS B serves HBase.
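If I understand the Hadoop client behavior correctly, that shared name may be
the real problem: a logical nameservice is resolved purely from the local
hdfs-site.xml, so "hdfs://wanda" means cluster A on the Hive/Kylin nodes and
cluster B on the HBase nodes. A minimal sketch to check this on any node (the
class name is made up for illustration):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WhoIsWanda {
    public static void main(String[] args) throws Exception {
        // new Configuration() loads core-site.xml/hdfs-site.xml from the
        // local classpath, so "wanda" resolves to whichever cluster THIS
        // node's config describes.
        Configuration conf = new Configuration();
        // Prints cluster A's namenode on a Hive/Kylin node and cluster B's
        // on an HBase node, even though the logical name is identical.
        System.out.println(conf.get("dfs.namenode.rpc-address.wanda.nn1"));
        FileSystem fs = FileSystem.get(URI.create("hdfs://wanda/"), conf);
        // True where the coprocessor jar was uploaded (cluster A), false
        // where the HBase master looks for it (cluster B).
        System.out.println(fs.exists(
                new Path("/tmp/kylin/kylin_metadata/coprocessor")));
    }
}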
the "hdfs-site.xml" of hdfs A:
<property>
<name>dfs.nameservices</name>
<value>wanda</value>
</property>
<property>
<name>dfs.ha.namenodes.wanda</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.wanda.nn1</name>
<value>CDM1C16-209020011.wdds.com:10010</value>
</property>
<property>
<name>dfs.namenode.rpc-address.wanda.nn2</name>
<value>CDM1C16-209020012.wdds.com:10010</value>
</property>

The "hdfs-site.xml" of HDFS B:
<property>
  <name>dfs.nameservices</name>
  <value>wanda</value>
</property>
<property>
  <name>dfs.ha.namenodes.wanda</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.wanda.nn1</name>
  <value>CDM1C22-209021011.wdds.com:10010</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.wanda.nn2</name>
  <value>CDM1C22-209021012.wdds.com:10010</value>
</property>
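
Given that, I wonder whether the fix is to give the two clusters distinct
logical IDs in the client-side hdfs-site.xml on the Kylin node, so each can be
named unambiguously. A sketch, with made-up IDs "wandaA" and "wandaB" (each HA
nameservice a client talks to also needs a failover proxy provider):

<property>
  <name>dfs.nameservices</name>
  <value>wandaA,wandaB</value>
</property>
<property>
  <name>dfs.ha.namenodes.wandaB</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.wandaB.nn1</name>
  <value>CDM1C22-209021011.wdds.com:10010</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.wandaB.nn2</name>
  <value>CDM1C22-209021012.wdds.com:10010</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.wandaB</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

(plus the matching "wandaA" entries for cluster A). Then
"kylin.hbase.cluster.fs=hdfs://wandaB" would be unambiguous on the Kylin side,
though the HBase master would also have to resolve whatever name ends up in
the coprocessor jar path it checks.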
What does the nameservice "hdfs://wanda" stand for: HDFS A or HDFS B? I think
you should first check whether the property "kylin.hbase.cluster.fs" is set in
the Kylin config file; if not, set it to the nameservice of HDFS B. Then check
the Hadoop config file "hdfs-site.xml" and add the nameservice of HDFS B.
After that, run "hadoop fs -ls hdfs://b/" to make sure your local Kylin
environment can access HDFS B successfully.
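
For example, you can also list both clusters through their namenode RPC
addresses directly (use whichever namenode is active):

  hadoop fs -ls hdfs://CDM1C22-209021011.wdds.com:10010/tmp/kylin/kylin_metadata/coprocessor/
  hadoop fs -ls hdfs://CDM1C16-209020011.wdds.com:10010/tmp/kylin/kylin_metadata/coprocessor/

If the first listing (HDFS B) has no kylin-coprocessor jar while the second
(HDFS A) does, the jar was uploaded to the wrong cluster.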

2015-10-28 17:52 GMT+08:00 LIU Ze (刘则):

> Hi all,
>
> Hive uses HDFS A, HBase uses HDFS B.
> In the "Create HTable" step, Kylin uses the jar at
> "/tmp/kylin/kylin_metadata/coprocessor/kylin-coprocessor-1.0-incubating-${num}.jar"
> in HDFS. The jar is in HDFS A but not in HDFS B, which causes an error:
>
> pool-5-thread-10]:[2015-10-28 17:36:24,371][ERROR][org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:126)]
> - org.apache.hadoop.hbase.DoNotRetryIOException:
> java.io.FileNotFoundException: File does not exist:
> hdfs://wanda/tmp/kylin/kylin_metadata/coprocessor/kylin-coprocessor-1.0-incubating-2.jar
>     at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1910)
>     at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1850)
>     at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:2007)
>     at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:41479)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2093)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>     at org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:74)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: File does not exist:
> hdfs://wanda/tmp/kylin/kylin_metadata/coprocessor/kylin-coprocessor-1.0-incubating-2.jar
>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
>     at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337)
>     at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
>     at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2086)
>     at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2055)
>     at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2031)
>     at org.apache.hadoop.hbase.util.CoprocessorClassLoader.init(CoprocessorClassLoader.java:168)
>     at org.apache.hadoop.hbase.util.CoprocessorClassLoader.getClassLoader(CoprocessorClassLoader.java:250)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.testTableCoprocessorAttrs(RegionCoprocessorHost.java:305)
>     at org.apache.hadoop.hbase.master.HMaster.checkClassLoading(HMaster.java:1998)
>     at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1908)
>     ... 11 more
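
P.S. Until the naming question is sorted out, one possible stop-gap (not
necessarily the proper fix) would be to copy the missing jar to cluster B by
hand, addressing each cluster through a namenode RPC address instead of the
ambiguous "wanda" name, e.g. (use whichever namenode is active):

hadoop distcp \
  hdfs://CDM1C16-209020011.wdds.com:10010/tmp/kylin/kylin_metadata/coprocessor/kylin-coprocessor-1.0-incubating-2.jar \
  hdfs://CDM1C22-209021011.wdds.com:10010/tmp/kylin/kylin_metadata/coprocessor/

With the jar present at the same path on cluster B, the HBase master's sanity
check of the coprocessor path should no longer hit the FileNotFoundException
above.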