[ https://issues.apache.org/jira/browse/HBASE-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13935157#comment-13935157 ]

Jonathan Hsieh commented on HBASE-10319:
----------------------------------------

Basically, the DN enters decommissioning mode, stops accepting new blocks for 
write, and "hangs" until its open-for-write blocks are closed. An HBase log 
roll would close the WAL file's block, allowing the DN to finish shutting 
down, and the decommissioning DN would be excluded as a candidate location 
for blocks of the new WAL file.
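
For illustration, here is a minimal sketch of the kind of periodic forced
roll described above. The class and method names (PeriodicRollSketch,
rollWriter) are hypothetical and stand in for the region server's actual
roller; this is not the HBASE-10319 patch itself:

    // Hypothetical sketch: force a WAL roll once the roll period elapses,
    // even if no edits were written. Closing the current file finalizes its
    // open block, so a decommissioning DN hosting that block can complete,
    // and the new file's block is allocated on non-decommissioning DNs.
    public class PeriodicRollSketch implements Runnable {
      private final long rollPeriodMs;
      private volatile long lastRollTime = System.currentTimeMillis();

      public PeriodicRollSketch(long rollPeriodMs) {
        this.rollPeriodMs = rollPeriodMs;
      }

      @Override
      public void run() {
        while (!Thread.currentThread().isInterrupted()) {
          long now = System.currentTimeMillis();
          if (now - lastRollTime > rollPeriodMs) {
            rollWriter();          // close the current WAL file, open a new one
            lastRollTime = now;
          }
          try {
            Thread.sleep(1000);    // check roughly once a second
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
          }
        }
      }

      private void rollWriter() {
        // Placeholder: in a region server this would delegate to the WAL roller.
      }
    }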

> HLog should roll periodically to allow DN decommission to eventually complete.
> ------------------------------------------------------------------------------
>
>                 Key: HBASE-10319
>                 URL: https://issues.apache.org/jira/browse/HBASE-10319
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Jonathan Hsieh
>            Assignee: Matteo Bertozzi
>             Fix For: 0.98.0, 0.96.2, 0.94.17
>
>         Attachments: HBASE-10319-v0.patch, HBASE-10319-v1.patch
>
>
> We encountered a situation where we had an essentially read-only table and 
> attempted to do a clean HDFS DN decommission.  A DN cannot decommission if 
> it currently hosts open blocks being written to.  Because the HBase HLog 
> file was open and had some data (the HLog header), the DN could not 
> decommission itself.  Since no new data is ever written, the existing 
> periodic check is never triggered.
> After discussing with [~atm], it seems that although an HDFS semantics change 
> would be ideal (e.g. HBase wouldn't have to be aware of HDFS decommission and 
> the client would roll over), this would take much more effort than having 
> HBase periodically force a log roll.  This would enable the HDFS DN 
> decommission to complete.
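
For reference, the roll interval in question is the standard WAL roll period
setting. A small, hedged example of reading it from the HBase configuration,
assuming the property name hbase.regionserver.logroll.period with its usual
one-hour default:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RollPeriodCheck {
      public static void main(String[] args) {
        // Loads hbase-default.xml / hbase-site.xml from the classpath.
        Configuration conf = HBaseConfiguration.create();
        // The period after which the region server rolls its WAL; with the fix,
        // the roll happens even when the log has received no new edits.
        long rollPeriodMs = conf.getLong("hbase.regionserver.logroll.period", 3600000L);
        System.out.println("WAL roll period (ms): " + rollPeriodMs);
      }
    }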


