Thanks. Maybe it is the same as HBase-10499.
I stopped the regionserver and then started it again, and HBase went back to normal.
This is the jstack output from when the 2 regions could not flush.

"Thread-17" prio=10 tid=0x00007f6210383800 nid=0x6540 waiting on condition
[0x00007f61e0a26000]
   java.lang.Thread.State: TIMED_WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for  <0x000000041ae0e6b8> (a java.util.concurrent.
locks.AbstractQueuedSynchronizer$ConditionObject)
        at
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitN
anos(AbstractQueuedSynchronizer.java:2025)
        at java.util.concurrent.DelayQueue.poll(DelayQueue.java:201)
        at java.util.concurrent.DelayQueue.poll(DelayQueue.java:39)
        at
org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemSto
reFlusher.java:228)
        at java.lang.Thread.run(Thread.java:662)
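
For context on where that thread is parked: the FlushHandler sits in DelayQueue.poll, which only hands an entry back once the entry's getDelay() has counted down to zero. Below is a minimal, self-contained Java sketch (not HBase code; the FlushRequest class and the delays are made up) of how a consumer thread stays in TIMED_WAITING inside poll(timeout) whenever nothing in the queue ever becomes ready, which is consistent with the suspicion below that an entry was pushed in but can never be polled back out.

import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayQueuePollDemo {

    // Hypothetical entry type, loosely shaped like a flush request.
    static class FlushRequest implements Delayed {
        final String region;
        final long readyAtNanos;

        FlushRequest(String region, long delayMs) {
            this.region = region;
            this.readyAtNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMs);
        }

        public long getDelay(TimeUnit unit) {
            // poll() will not return this entry until this reaches zero.
            return unit.convert(readyAtNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
        }

        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.NANOSECONDS),
                                other.getDelay(TimeUnit.NANOSECONDS));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<FlushRequest> queue = new DelayQueue<FlushRequest>();
        queue.add(new FlushRequest("regionA", 100));                          // ready after 100 ms
        queue.add(new FlushRequest("regionB", TimeUnit.DAYS.toMillis(365)));  // effectively never ready

        // Consumer loop, the same shape as the parked FlushHandler thread above.
        for (int i = 0; i < 3; i++) {
            FlushRequest req = queue.poll(1, TimeUnit.SECONDS); // TIMED_WAITING (parking) happens here
            System.out.println(req == null ? "poll timed out, nothing ready"
                                           : "flushing " + req.region);
        }
    }
}

The first poll returns regionA; the later ones just time out, the same pattern as a flusher that keeps waking up but never receives the stuck entries.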

-----Original Message-----
From: 冯宏华 [mailto:[email protected]]
Sent: June 3, 2014 16:34
To: [email protected]
Subject: Re: forcing flush not works

The same symptom as HBase-10499?

I still strongly suspect that something is wrong with the flush queue (some entry pushed into it can never be polled back out).
________________________________________
From: sunweiwei [[email protected]]
Sent: June 3, 2014 15:43
To: [email protected]
Subject: forcing flush not works

Hi



I'm running HBase 0.96 under a heavy write load. I found this in the regionserver log:

2014-06-03 15:13:19,445 INFO  [regionserver60020.logRoller] wal.FSHLog: Too many hlogs: logs=33, maxlogs=32; forcing flush of 3 regions(s): 1a7dda3c3815c19970ace39fd99abfe8, aff81bc46aa7d3ed51a01f11f23c8320, d5666e003f598147b4dda509f173a779

2014-06-03 15:13:23,869 INFO  [regionserver60020.logRoller] wal.FSHLog: Too many hlogs: logs=34, maxlogs=32; forcing flush of 2 regions(s): aff81bc46aa7d3ed51a01f11f23c8320, d5666e003f598147b4dda509f173a779

...

2014-06-03 15:18:14,778 INFO  [regionserver60020.logRoller] wal.FSHLog: Too many hlogs: logs=93, maxlogs=32; forcing flush of 2 regions(s): aff81bc46aa7d3ed51a01f11f23c8320, d5666e003f598147b4dda509f173a779
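
(Background on this message: the log roller counts the WAL files still kept on disk and, once that count exceeds hbase.regionserver.maxlogs, asks the regions whose un-flushed edits are pinning the oldest WAL to flush so the file can be archived. A rough sketch of that bookkeeping is below; it is my own illustration, not the FSHLog source, and the field names are invented.)

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustration only: why the same regions keep showing up in "forcing flush of ... regions(s)".
class WalRollerSketch {
    // Oldest un-flushed WAL sequence id per region (cleared once the region flushes).
    final Map<String, Long> oldestUnflushedSeqId = new ConcurrentHashMap<String, Long>();
    final int maxLogs = 32;              // hbase.regionserver.maxlogs
    int rolledLogCount;                  // WAL files still kept on disk
    long seqIdOfOldestRetainedWal;       // highest sequence id written to the oldest WAL

    // Called after each roll: which regions must flush before old WALs can be archived?
    List<String> findRegionsToFlush() {
        List<String> toFlush = new ArrayList<String>();
        if (rolledLogCount > maxLogs) {
            for (Map.Entry<String, Long> e : oldestUnflushedSeqId.entrySet()) {
                // This region still holds edits that live only in the oldest WAL file.
                if (e.getValue() <= seqIdOfOldestRetainedWal) {
                    toFlush.add(e.getKey());
                }
            }
        }
        return toFlush;
    }

    // If this never runs for a region (its flush request is lost or stuck),
    // the region is reported again on every roll and rolledLogCount keeps growing.
    void onRegionFlushed(String encodedRegionName) {
        oldestUnflushedSeqId.remove(encodedRegionName);
    }
}

That matches the log above: the flush of the two regions is requested over and over, but since it never completes, logs= climbs from 33 to 93.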





It seems these 2 regions cannot be flushed, and the WALs directory keeps growing. Then I found this in the client log:

INFO | AsyncProcess-waitForMaximumCurrentTasks [2014-06-03 15:30:53] - : Waiting for the global number of running tasks to be equals or less than 0, tasksSent=15819, tasksDone=15818, currentTasksDone=15818, tableName=BT_D_BF001_201406



Then the write speed becomes very slow.

After I flush the 2 regions manually, the write speed returns to normal, but only for a short while.
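
(For reference, the manual flush can be issued from the HBase shell with flush '<table or region name>', or through the client API. A minimal sketch against the 0.96 client API, using the table name from the client log above and omitting error handling:)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

// Minimal sketch: trigger a flush from code instead of the shell (HBase 0.96 client API).
public class ManualFlush {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
            // Flushes every region of the table; a region name can be passed instead.
            admin.flush("BT_D_BF001_201406");
        } finally {
            admin.close();
        }
    }
}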



Any suggestions will be appreciated. Thanks.

