[ https://issues.apache.org/jira/browse/HBASE-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13935016#comment-13935016 ]

Himanshu Vashishtha commented on HBASE-10319:
---------------------------------------------

bq. having hbase periodically force a log roll. This would enable the hdfs dn 
decommission to complete.
A roll creates a new file, writes and syncs its header, and then closes the old 
writer. I wonder how it solves the original issue, because you would still have 
one open file?
I guess decommissioning blacklists a DN, so when a new file is created, the NN 
ensures no blocks are allocated on that DN. Just want to confirm.
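
For reference, a minimal sketch of the roll sequence described above, in plain 
Java with hypothetical names (WalWriter, createWriter) rather than HBase's 
actual FSHLog internals: the new file is opened and its header synced before 
the old writer is closed, so a roll never drops to zero open files, it just 
swaps which file is open.

{code:java}
import java.io.IOException;

public class RollSketch {

    // Hypothetical stand-in for an open WAL file on HDFS.
    interface WalWriter {
        void writeHeader() throws IOException; // append the WAL header
        void sync() throws IOException;        // flush header to the DN pipeline
        void close() throws IOException;       // finalize the open block on HDFS
    }

    // Illustrative only; a real implementation would open a file on HDFS.
    static WalWriter createWriter(String path) throws IOException {
        throw new UnsupportedOperationException("illustrative only");
    }

    // A roll swaps writers: the new file's header is written and synced
    // before the old writer is closed. Closing the old writer finalizes
    // its block, so a decommissioning DN holding a replica can drain it;
    // the NN will not place the new file's blocks on a blacklisted DN.
    static WalWriter roll(WalWriter current, String newPath) throws IOException {
        WalWriter next = createWriter(newPath);
        next.writeHeader();
        next.sync();
        current.close();
        return next;
    }
}
{code}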

> HLog should roll periodically to allow DN decommission to eventually complete.
> ------------------------------------------------------------------------------
>
>                 Key: HBASE-10319
>                 URL: https://issues.apache.org/jira/browse/HBASE-10319
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Jonathan Hsieh
>            Assignee: Matteo Bertozzi
>             Fix For: 0.98.0, 0.96.2, 0.94.17
>
>         Attachments: HBASE-10319-v0.patch, HBASE-10319-v1.patch
>
>
> We encountered a situation where we had an essentially read-only table and 
> attempted to do a clean HDFS DN decommission.  A DN cannot decommission while 
> it holds blocks that are currently open for writing.  Because the hbase 
> HLog file was open and contained some data (the hlog header), the DN could not 
> decommission itself.  Since no new data is ever written, the existing 
> periodic check is never triggered.
> After discussing with [~atm], it seems that although an hdfs semantics change 
> would be ideal (e.g. hbase doesn't have to be aware of hdfs decommission and 
> the client would roll over), this would take much more effort than having 
> hbase periodically force a log roll.  This would enable the hdfs dn 
> decommission to complete.
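
A minimal sketch of the periodic forced roll proposed above, assuming a 
rollWriter() hook along the lines of what the attached patches add; the timer 
mirrors hbase.regionserver.logroll.period in spirit, but this is not the 
actual LogRoller code:

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicRoller {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final Runnable rollWriter; // hypothetical hook that forces a roll
    private final long periodMs;

    PeriodicRoller(Runnable rollWriter, long periodMs) {
        this.rollWriter = rollWriter;
        this.periodMs = periodMs;
    }

    // Roll unconditionally on a timer, so even a write-idle log file is
    // eventually closed and its block finalized, letting the DN finish
    // decommissioning instead of waiting forever on the open block.
    void start() {
        scheduler.scheduleAtFixedRate(rollWriter, periodMs, periodMs,
                TimeUnit.MILLISECONDS);
    }

    void stop() {
        scheduler.shutdownNow();
    }
}
{code}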


