It seems a little strange that the jar file cannot be found in HDFS.
Can you manually check whether this file exists at exactly that path?
If it does exist, then it is probably a configuration or connection issue
in your hbase-site.xml.
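
For the manual check, here is a minimal sketch (assumptions: the cluster's
core-site.xml and hdfs-site.xml are on the classpath, and the path is the one
taken from your error log; a plain "hdfs dfs -ls <path>" from a shell on the
cluster is equivalent):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CheckCoprocessorJar {
      public static void main(String[] args) throws Exception {
        // Resolve the HA logical URI "bicluster" and probe for the jar.
        FileSystem fs = FileSystem.get(URI.create("hdfs://bicluster/"),
            new Configuration());
        Path jar = new Path(
            "/tmp/kylin_data/coprocessor/kylin-coprocessor-0.7.2-incubating-3.jar");
        System.out.println(jar + " exists: " + fs.exists(jar));
      }
    }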

liam <[email protected]>于2015年8月3日周一 下午4:14写道:

> Hi,
>     I tracked the code in hadoop-hdfs-2.6.0.jar and found this (the
> "RuntimeException" message matches the message in my error log):
>
> [org/apache/hadoop/hdfs/server/namenode/ha/ConfiguredFailoverProxyProvider.class]
>  public ConfiguredFailoverProxyProvider(Configuration conf, URI uri,
>      Class<T> xface) {
>    Preconditions.checkArgument(
>        xface.isAssignableFrom(NamenodeProtocols.class),
>        "Interface class %s is not a valid NameNode protocol!");
>    this.xface = xface;
>
>    this.conf = new Configuration(conf);
>    int maxRetries =
>        this.conf.getInt("dfs.client.failover.connection.retries", 0);
>    this.conf.setInt("ipc.client.connect.max.retries", maxRetries);
>
>    int maxRetriesOnSocketTimeouts =
>        this.conf.getInt("dfs.client.failover.connection.retries.on.timeouts", 0);
>    this.conf.setInt("ipc.client.connect.max.retries.on.timeouts",
>        maxRetriesOnSocketTimeouts);
>    try {
>      this.ugi = UserGroupInformation.getCurrentUser();
>
>      Map<String, Map<String, InetSocketAddress>> map =
>          DFSUtil.getHaNnRpcAddresses(conf);
>      Map<String, InetSocketAddress> addressesInNN =
>          (Map) map.get(uri.getHost());
>      if ((addressesInNN == null) || (addressesInNN.size() == 0)) {  // <== my error comes from here
>        throw new RuntimeException(
>            "Could not find any configured addresses for URI " + uri);
>      }
>      Collection<InetSocketAddress> addressesOfNns = addressesInNN.values();
>      for (InetSocketAddress address : addressesOfNns) {
>        this.proxies.add(new AddressRpcProxyPair(address));
>      }
>      HAUtil.cloneDelegationTokenForLogicalUri(this.ugi, uri, addressesOfNns);
>    } catch (IOException e) {
>      throw new RuntimeException(e);
>    }
>  }
>
> Then, I wrote some code to check my configuration:
>     Configuration configuration = new Configuration();
>     System.out.println("dfs.nameservices:"
>         + configuration.getTrimmedStringCollection("dfs.nameservices"));
>
>     Map<String, Map<String, InetSocketAddress>> map =
>         DFSUtil.getHaNnRpcAddresses(configuration);
>     System.out.println("getHaNnRpcAddresses map:" + map);
>
>     try {
>       URI uri = new URI(
>           "hdfs://bicluster/tmp/kylin_Data/coprocessor/kylin-coprocessor-0.7.2-incubating-0.jar");
>       Map<String, InetSocketAddress> addressesInNN =
>           (Map) map.get(uri.getHost());
>       if ((addressesInNN == null) || (addressesInNN.size() == 0)) {
>         throw new RuntimeException(
>             "Could not find any configured addresses for URI " + uri);
>       } else {
>         System.out.println("addressesInNN : " + addressesInNN);
>       }
>       String host = uri.getHost();
>       System.out.println(host);
>     } catch (Exception e) {
>       e.printStackTrace();
>     }
>
> I got the correct output:
>
> dfs.nameservices:[bicluster]
> getHaNnRpcAddresses map:{bicluster={nn1=h1.nn1/1X.1XX.1X.7X:8020,
> nn2=h1.nn2/1X.1XX.1X.7X:8020}}
> addressesInNN : {nn1=h1.nn1/1X.1XX.1X.7X:8020, nn2=h1.nn2/1X.1XX.1X.7X}
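>
> So the client-side configuration resolves the "bicluster" nameservice fine.
> As one more probe (just a sketch, using only the standard Hadoop client
> API), FileSystem.get goes through the same ConfiguredFailoverProxyProvider
> path that fails in the stack trace below; if it succeeds from here, the
> broken configuration is likely the one the HBase master loads, not this
> client's:
>
>     // Reuses the "configuration" object from the check code above.
>     FileSystem fs = FileSystem.get(URI.create("hdfs://bicluster/"), configuration);
>     System.out.println("connected, uri = " + fs.getUri());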
>
> Best Regards.
>
> 2015-08-03 15:43 GMT+08:00 liam <[email protected]>:
>
>> 2015-08-02 13:28 GMT+08:00 周千昊 <[email protected]>:
>>
>>> Hi, Liam
>>>      It seems that there is something wrong with your Hadoop
>>> configuration. Please double-check $KYLIN_HOME/conf/hbase-site.xml to see
>>> if any property is misconfigured.
>>>      Here is a post that may help:
>>> http://hortonworks.com/community/forums/topic/set-up-of-mapreduce-jobhistory-server-fails/
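>>>
>>>      For illustration (a sketch, not Kylin's own tooling): assuming the
>>> nameservice is "bicluster", as in the URI from your log, and the usual
>>> nn1/nn2 namenode ids, dumping these standard HDFS HA keys from the
>>> configuration HBase loads shows at a glance whether any of them is
>>> missing:
>>>
>>>     // Reads hbase-site.xml (and the Hadoop *-site.xml files on the classpath).
>>>     Configuration conf = HBaseConfiguration.create();
>>>     for (String key : new String[] {
>>>         "dfs.nameservices",
>>>         "dfs.ha.namenodes.bicluster",
>>>         "dfs.namenode.rpc-address.bicluster.nn1",
>>>         "dfs.namenode.rpc-address.bicluster.nn2",
>>>         "dfs.client.failover.proxy.provider.bicluster"}) {
>>>       System.out.println(key + " = " + conf.get(key));
>>>     }
>>>
>>>      If any of these prints null on the machine where the HBase master
>>> runs, that is the missing piece.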
>>>
>>> liam <[email protected]> wrote on Sat, Aug 1, 2015 at 6:12 PM:
>>>
>>> > Hi, all
>>> >
>>> >     Sorry to be a bother.
>>> >   As the log message below shows, the cube build failed at step 13.
>>> >   Should I run Kylin on the namenode?
>>> >   Or should I add some configuration someplace for "URI
>>> > hdfs://bicluster/tmp/kylin_data/coprocessor/kylin-coprocessor-0.7.2-incubating-3.jar"?
>>> >   Thanks.
>>> >
>>> > [------------ERROR Message-------------]
>>> >
>>> > [pool-7-thread-1]:[2015-08-01 17:34:28,868][INFO][org.apache.kylin.job.tools.DeployCoprocessorCLI.addCoprocessorOnHTable(DeployCoprocessorCLI.java:119)] - Add coprocessor on KYLIN_N1HVO0SD5I
>>> >
>>> > [pool-7-thread-1]:[2015-08-01 17:34:28,869][INFO][org.apache.kylin.job.tools.DeployCoprocessorCLI.deployCoprocessor(DeployCoprocessorCLI.java:99)] - hbase table [B@38f07651 deployed with coprocessor.)
>>> >
>>> > at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:93)
>>> > at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3389)
>>> > at org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync(HBaseAdmin.java:631)
>>> > at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:522)
>>> > at org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:123)
>>> > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>>> > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>>> > at org.apache.kylin.job.common.HadoopShellExecutable.doWork(HadoopShellExecutable.java:63)
>>> > at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:106)
>>> > at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:50)
>>> > at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:106)
>>> > at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:133)
>>> > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> > at java.lang.Thread.run(Thread.java:745)
>>> > Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: java.io.IOException: Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
>>> > at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1329)
>>> > at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1269)
>>> > at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:398)
>>> > at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:42436)
>>> > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>>> > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
>>> > at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>>> > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>>> > at java.lang.Thread.run(Thread.java:745)
>>> > Caused by: java.io.IOException: Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
>>> > at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:478)
>>> > at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:148)
>>> > at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:602)
>>> > at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:547)
>>> > at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:139)
>>> > at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
>>> > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
>>> > at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
>>> > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
>>> > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
>>> > at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>>> > at org.apache.hadoop.hbase.util.CoprocessorClassLoader.init(CoprocessorClassLoader.java:165)
>>> > at org.apache.hadoop.hbase.util.CoprocessorClassLoader.getClassLoader(CoprocessorClassLoader.java:250)
>>> > at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.testTableCoprocessorAttrs(RegionCoprocessorHost.java:316)
>>> > at org.apache.hadoop.hbase.master.HMaster.checkClassLoading(HMaster.java:1483)
>>> > at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1327)
>>> > ... 8 more
>>> > Caused by: java.lang.reflect.InvocationTargetException
>>> > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>> > at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>>> > at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>> > at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>>> > at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:461)
>>> > ... 23 more
>>> > Caused by: java.lang.RuntimeException: Could not find any configured addresses for URI hdfs://bicluster/tmp/kylin_data/coprocessor/kylin-coprocessor-0.7.2-incubating-3.jar
>>> > at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.<init>(ConfiguredFailoverProxyProvider.java:93)
>>> > ... 28 more
>>> >
>>> > at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1457)
>>> > at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
>>> > at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
>>> > at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.createTable(MasterProtos.java:44788)
>>> > at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$5.createTable(HConnectionManager.java:1949)
>>> > at org.apache.hadoop.hbase.client.HBaseAdmin$2.call(HBaseAdmin.java:635)
>>> > at org.apache.hadoop.hbase.client.HBaseAdmin$2.call(HBaseAdmin.java:631)
>>> > at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:117)
>>> > ... 15 more
>>> >
>>> > [pool-7-thread-1]:[2015-08-01 17:34:28,906][ERROR][org.apache.kylin.job.hadoop.hbase.CreateHTableJob.run(CreateHTableJob.java:130)] - org.apache.hadoop.hbase.DoNotRetryIOException: java.io.IOException: Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
>>> > at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1329)
>>> > at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1269)
>>> > at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:398)
>>> > at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:42436)
>>> > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>>> > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
>>> > at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>>> > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>>> > at java.lang.Thread.run(Thread.java:745)
>>> > Caused by: java.io.IOException: Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
>>> > at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:478)
>>> > at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:148)
>>> > at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:602)
>>> > at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:547)
>>> > at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:139)
>>> > at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
>>> > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
>>> > at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
>>> > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
>>> > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
>>> > at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>>> > at org.apache.hadoop.hbase.util.CoprocessorClassLoader.init(CoprocessorClassLoader.java:165)
>>> > at org.apache.hadoop.hbase.util.CoprocessorClassLoader.getClassLoader(CoprocessorClassLoader.java:250)
>>> > at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.testTableCoprocessorAttrs(RegionCoprocessorHost.java:316)
>>> > at org.apache.hadoop.hbase.master.HMaster.checkClassLoading(HMaster.java:1483)
>>> > at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1327)
>>> > ... 8 more
>>> > Caused by: java.lang.reflect.InvocationTargetException
>>> > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>> > at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>>> > at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>> > at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>>> > at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:461)
>>> > ... 23 more
>>> > Caused by: java.lang.RuntimeException: Could not find any configured addresses for URI hdfs://bicluster/tmp/kylin_data/coprocessor/kylin-coprocessor-0.7.2-incubating-3.jar
>>> > at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.<init>(ConfiguredFailoverProxyProvider.java:93)
>>> > ... 28 more
>>> >
>>> > --
>>>
>>> Best Regards,
>>> ZhouQianhao
--
Best Regards,
ZhouQianhao
