Were new snapshots taken in recent days?
Or were these snapshots left over from some time ago?

Please consider deleting old snapshots that are no longer needed.

See HBASE-8572, which was integrated into 0.98.9.
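For example, from the hbase shell (a sketch; 'old_snapshot_name' below is a placeholder, substitute the names reported by list_snapshots):

```shell
# Start the shell and inspect / prune snapshots non-interactively.
hbase shell <<'EOF'
list_snapshots                      # enumerate all completed snapshots
delete_snapshot 'old_snapshot_name' # placeholder: repeat per unneeded snapshot
EOF
```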

Cheers

> On Sep 15, 2015, at 4:20 AM, Akmal Abbasov <[email protected]> wrote:
> 
> The problem in my case was caused by a huge number of HBase 
> snapshots (currently I have > 10000); since those are processed on the 
> main page, it wasn't responding.
> Is there a way to disable the loading of all snapshots on the main page?
> On the other hand, is it normal to have such a large number of snapshots?
> 
> Thanks!
> 
>> On 15 Sep 2015, at 00:13, Akmal Abbasov <[email protected]> wrote:
>> 
>> Yes, there are a lot of the following messages:
>> 2015-09-14 22:03:14,930 WARN  [FifoRpcScheduler.handler1-thread-7] 
>> ipc.RpcServer: RpcServer.respondercallId: 122 service: MasterService 
>> methodName: GetCompletedSnapshots size: 29 connection: 192.168.0.54:56419: 
>> output error
>> 2015-09-14 22:03:14,930 WARN  [FifoRpcScheduler.handler1-thread-7] 
>> ipc.RpcServer: FifoRpcScheduler.handler1-thread-7: caught a 
>> ClosedChannelException, this means that the server was processing a request 
>> but the client went away. The error message was: null
>> 2015-09-14 22:03:14,931 WARN  [FifoRpcScheduler.handler1-thread-9] 
>> ipc.RpcServer: RpcServer.respondercallId: 120 service: MasterService 
>> methodName: GetCompletedSnapshots size: 29 connection: 192.168.0.54:56419: 
>> output error
>> 2015-09-14 22:03:14,931 WARN  [FifoRpcScheduler.handler1-thread-9] 
>> ipc.RpcServer: FifoRpcScheduler.handler1-thread-9: caught a 
>> ClosedChannelException, this means that the server was processing a request 
>> but the client went away. The error message was: null
>> 2015-09-14 22:03:14,931 WARN  [FifoRpcScheduler.handler1-thread-29] 
>> ipc.RpcServer: RpcServer.respondercallId: 96 service: MasterService 
>> methodName: GetCompletedSnapshots size: 29 connection: 192.168.0.54:56419: 
>> output error
>> 2015-09-14 22:03:14,931 WARN  [FifoRpcScheduler.handler1-thread-29] 
>> ipc.RpcServer: FifoRpcScheduler.handler1-thread-29: caught a 
>> ClosedChannelException, this means that the server was processing a request 
>> but the client went away. The error message was: null
>> 
>> Thanks.
>> 
>>> On 15 Sep 2015, at 00:04, Ted Yu <[email protected]> wrote:
>>> 
>>> Have you checked the log on 192.168.0.54 ?
>>> 
>>> Cheers
>>> 
>>> On Mon, Sep 14, 2015 at 3:02 PM, Akmal Abbasov <[email protected]>
>>> wrote:
>>> 
>>>> Yes, the hbase shell is functioning properly.
>>>> Actually, the WebUI is working; the only thing that isn't working is the
>>>> main page, http://hbase-test-master2:60010/master-status
>>>> The server isn't responding, and I'm getting the following messages in
>>>> the master logs:
>>>> 2015-09-14 21:56:29,282 WARN  [1407002739@qtp-1651370800-5]
>>>> client.HConnectionManager$HConnectionImplementation: Checking master
>>>> connection
>>>> com.google.protobuf.ServiceException: java.net.SocketTimeoutException:
>>>> Call to hbase-test-master2/192.168.0.54:60000 failed because
>>>> java.net.SocketTimeoutException: 60000 millis timeout while waiting for
>>>> channel to be ready for read. ch :
>>>> java.nio.channels.SocketChannel[connected local=/192.168.0.54:33876
>>>> remote=hbase-test-master2/192.168.0.54:60000]
>>>> So there is some info on the main page of the WebUI which isn't
>>>> responding; any insight into what it could be?
>>>> 
>>>> Thanks.
>>>> 
>>>> 
>>>>> On 14 Sep 2015, at 15:09, Ted Yu <[email protected]> wrote:
>>>>> 
>>>>> Can you check the master log for the period when you accessed the master web UI?
>>>>> Does the hbase shell function properly?
>>>>> 
>>>>> Thanks
>>>>> 
>>>>> 
>>>>> 
>>>>>> On Sep 14, 2015, at 4:36 AM, Akmal Abbasov <[email protected]>
>>>> wrote:
>>>>>> 
>>>>>> Hi all,
>>>>>> I'm having problems accessing the HBase WebUI on the active master
>>>>>> node, while I can still access the WebUI of the standby master.
>>>>>> I've checked the state of HBase using hbase hbck, and it is in a
>>>>>> consistent state.
>>>>>> I checked the node which holds the hbase:meta table; here is a snippet
>>>>>> from its logs:
>>>>>> 2015-09-09 02:41:02,481 INFO  [PostOpenDeployTasks:1588230740]
>>>> zookeeper.ZooKeeperNodeTracker: Setting hbase:meta region location in
>>>> ZooKeeper as test-rs5,60020,1441628944044
>>>>>> 2015-09-09 02:41:02,509 INFO  [PostOpenDeployTasks:1588230740]
>>>> regionserver.HRegionServer: Finished post open deploy task for
>>>> hbase:meta,,1.1588230740
>>>>>> 2015-09-09 02:41:02,522 DEBUG [RS_OPEN_META-test-rs5:60020-0]
>>>> handler.OpenRegionHandler: Opened hbase:meta,,1.1588230740 on
>>>> test-rs5,60020,1441628944044
>>>>>> I'm using hadoop-2.5.1 with hbase-0.98.7-hadoop2.
>>>>>> What could be wrong in this case?
>>>>>> 
>>>>>> Thanks.
> 
