[ https://issues.apache.org/jira/browse/HBASE-15436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15196668#comment-15196668 ]

Anoop Sam John commented on HBASE-15436:
----------------------------------------

The HBase cluster fully went down.  Is this a scenario the application should 
handle itself? I mean, should the clients (the NMs in this case) be shut down 
before the HBase cluster goes down?

Another thing: when the BufferedMutator is unable to flush (this can be because 
of temporary unavailability of the HBase RS(s)), do the put ops on it get 
blocked? I don't think so. Won't that make the client side (the NM) go out of 
memory at some point?  This we need to fix.
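
For reference, a minimal sketch of the kind of client-side BufferedMutator usage 
being discussed, assuming the HBase 1.x client API. The table name, buffer size 
and listener below are illustrative choices only, not values taken from this issue:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class BufferedWriteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // "timeline" is a hypothetical table name for illustration only.
      BufferedMutatorParams params = new BufferedMutatorParams(TableName.valueOf("timeline"))
          // Cap buffered mutations; mutate() triggers a flush once this is exceeded,
          // which is where back-pressure (or blocking, if RSs are unavailable) would show up.
          .writeBufferSize(4 * 1024 * 1024)
          // Failed mutations are reported asynchronously through this listener.
          .listener((e, m) -> {
            for (int i = 0; i < e.getNumExceptions(); i++) {
              System.err.println("Failed to write " + e.getRow(i));
            }
          });
      try (BufferedMutator mutator = conn.getBufferedMutator(params)) {
        Put put = new Put(Bytes.toBytes("row1"));
        put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
        mutator.mutate(put);   // buffered; may flush implicitly when the buffer fills
        mutator.flush();       // explicit flush -- the call this issue reports as stuck
      }
    }
  }
}
{code}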

When close() is called on the BufferedMutator, what is the expectation? That all 
the prior (async) writes get synced to the HBase RS?
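
Illustrative only (not something this ticket proposes): assuming close() is 
expected to perform that final flush, one way an application (e.g. an NM shutdown 
hook) could bound how long it waits for it, so a stuck flush does not hang 
shutdown forever. The helper below is hypothetical:

{code:java}
import java.util.concurrent.*;
import org.apache.hadoop.hbase.client.BufferedMutator;

public final class BoundedClose {
  public static void closeWithTimeout(BufferedMutator mutator, long timeoutSec)
      throws Exception {
    ExecutorService es = Executors.newSingleThreadExecutor();
    try {
      Future<?> f = es.submit(() -> {
        mutator.close();   // close() is expected to flush any buffered (async) writes
        return null;
      });
      f.get(timeoutSec, TimeUnit.SECONDS);
    } catch (TimeoutException te) {
      // Give up waiting; the close thread may still be stuck in flush() retries.
      System.err.println("BufferedMutator.close() did not finish within " + timeoutSec + "s");
    } finally {
      es.shutdownNow();
    }
  }
}
{code}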

> BufferedMutatorImpl.flush() appears to get stuck
> ------------------------------------------------
>
>                 Key: HBASE-15436
>                 URL: https://issues.apache.org/jira/browse/HBASE-15436
>             Project: HBase
>          Issue Type: Bug
>          Components: Client
>    Affects Versions: 1.0.2
>            Reporter: Sangjin Lee
>         Attachments: hbaseException.log, threaddump.log
>
>
> We noticed an instance where the thread that was executing a flush 
> ({{BufferedMutatorImpl.flush()}}) got stuck when the (local one-node) cluster 
> shut down and was unable to get out of that stuck state.
> The setup is a single node HBase cluster, and apparently the cluster went 
> away when the client was executing flush. The flush eventually logged a 
> failure after 30+ minutes of retrying. That is understandable.
> What is unexpected is that the thread is stuck in this state (i.e. in the 
> {{flush()}} call). I would have expected the {{flush()}} call to return after 
> the complete failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
