[ https://issues.apache.org/jira/browse/HBASE-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14030287#comment-14030287 ]

Jerry He commented on HBASE-10926:
----------------------------------

The normal layout of the znodes for the 'reached' phase:

: |-/hbase/flush-table-proc
:   |-reached
:   |----cluster_test
:   |-------hdtest010.svl.ibm.com,60020,1402634727563
:   |-------hdtest011.svl.ibm.com,60020,1402634727508

procName = ZKUtil.getNodeName(ZKUtil.getParent(path))   --> cluster_test
member = ZKUtil.getNodeName(path)   --> hdtest010.svl.ibm.com,60020,1402634727563
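
To make the parsing concrete, here is a minimal, self-contained sketch (plain string handling that mirrors the ZKUtil.getParent/ZKUtil.getNodeName semantics; the path is taken from the layout above):

public class ZNodePathParse {
  // Mirrors ZKUtil.getNodeName: the component after the last '/'.
  static String getNodeName(String path) {
    return path.substring(path.lastIndexOf('/') + 1);
  }

  // Mirrors ZKUtil.getParent: everything before the last '/'.
  static String getParent(String path) {
    int idx = path.lastIndexOf('/');
    return idx > 0 ? path.substring(0, idx) : null;
  }

  public static void main(String[] args) {
    String path = "/hbase/flush-table-proc/reached/cluster_test/"
        + "hdtest010.svl.ibm.com,60020,1402634727563";
    System.out.println("procName = " + getNodeName(getParent(path))); // cluster_test
    System.out.println("member = " + getNodeName(path)); // hdtest010.svl.ibm.com,...
  }
}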

But judging from the warning message, we seem to have the wrong layout (my 
deduction, not an actual test result):

: |-/hbase/flush-table-proc
:   |-reached
:   |----cluster_test

The leaf nodes were missing.  Therefore we got:
procName = ZKUtil.getNodeName(ZKUtil.getParent(path))   --> reached
member = ZKUtil.getNodeName(path)  -->  cluster_test

Was the leaf node either never created, or cleared due to a race?
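
If it is a race, one cheap guard on the watcher side would be to check the path depth before parsing, along these lines (a hypothetical helper, not from the attached patches; reachedZnode here would be /hbase/flush-table-proc/reached):

// Hypothetical guard: treat the path as a member znode only when its
// grandparent is the 'reached' barrier node itself.
static boolean isMemberPath(String path, String reachedZnode) {
  int last = path.lastIndexOf('/'); // strip the member name
  if (last <= 0) return false;
  String parent = path.substring(0, last); // .../reached/<procName>
  int prev = parent.lastIndexOf('/');
  if (prev <= 0) return false;
  return reachedZnode.equals(parent.substring(0, prev));
}

For the truncated layout above the check fails (the grandparent of 
/hbase/flush-table-proc/reached/cluster_test is /hbase/flush-table-proc), so 
the watcher could skip or log the node instead of misparsing it.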

> Use global procedure to flush table memstore cache
> --------------------------------------------------
>
>                 Key: HBASE-10926
>                 URL: https://issues.apache.org/jira/browse/HBASE-10926
>             Project: HBase
>          Issue Type: Improvement
>          Components: Admin
>    Affects Versions: 0.96.2, 0.98.1
>            Reporter: Jerry He
>            Assignee: Jerry He
>             Fix For: 0.99.0
>
>         Attachments: HBASE-10926-trunk-v1.patch, HBASE-10926-trunk-v2.patch, 
> HBASE-10926-trunk-v3.patch, HBASE-10926-trunk-v4.patch
>
>
> Currently, a user can trigger a table flush through the hbase shell or the 
> HBaseAdmin API.  To flush the table's cache, each region server hosting the 
> regions is contacted and flushed sequentially, which is inefficient.
> In HBase snapshots, a global procedure is used to coordinate and flush the 
> regions in a distributed way.
> Let's provide a distributed table flush for general use.
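
For reference, the client-side trigger looks roughly like this (a sketch against the 0.96/0.98-era HBaseAdmin API; the table name 'cluster_test' is just taken from the layout in the comment above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class FlushTableExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      // Asks each region server hosting the table's regions to flush its
      // memstores; before this issue the servers are contacted one by one.
      admin.flush("cluster_test");
    } finally {
      admin.close();
    }
  }
}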


