[
https://issues.apache.org/jira/browse/HBASE-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Matteo Bertozzi updated HBASE-10319:
------------------------------------
Resolution: Fixed
Fix Version/s: 0.94.18
0.96.2
0.98.0
Status: Resolved (was: Patch Available)
committed to 0.94, 0.96, 0.98 and trunk
> HLog should roll periodically to allow DN decommission to eventually complete.
> ------------------------------------------------------------------------------
>
> Key: HBASE-10319
> URL: https://issues.apache.org/jira/browse/HBASE-10319
> Project: HBase
> Issue Type: Bug
> Reporter: Jonathan Hsieh
> Assignee: Matteo Bertozzi
> Fix For: 0.98.0, 0.96.2, 0.94.18
>
> Attachments: HBASE-10319-v0.patch, HBASE-10319-v1.patch
>
>
> We encountered a situation where we had an essentially read-only table and
> attempted to do a clean HDFS DN decommission. A DN cannot decommission if
> it currently holds open blocks that are being written to. Because the hbase
> HLog file was open and held some data (the hlog header), the DN could not
> decommission itself. Since no new data is ever written, the existing
> periodic check is never activated.
> After discussing with [~atm], it seems that although an hdfs semantics change
> would be ideal (e.g. hbase doesn't have to be aware of hdfs decommission and
> the client would roll over), this would take much more effort than having
> hbase periodically force a log roll. This would enable the hdfs dn
> decommission to complete.
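The time-based roll described above can be sketched as follows. This is a minimal illustration of the idea, not HBase's actual LogRoller API; the class and method names are hypothetical. The check fires once the roll period has elapsed since the last roll, even when no edits have been written in the meantime, so an otherwise idle HLog still gets closed and reopened on new DNs.

```java
// Hypothetical sketch of a time-based log-roll check: decide to roll
// purely on elapsed time since the last roll, independent of whether
// any edits were appended. Names are illustrative, not HBase's API.
class PeriodicLogRoller {
    private final long rollPeriodMs;   // e.g. the configured roll period
    private long lastRollTimeMs;       // when the WAL was last rolled

    PeriodicLogRoller(long rollPeriodMs, long nowMs) {
        this.rollPeriodMs = rollPeriodMs;
        this.lastRollTimeMs = nowMs;
    }

    /** True once the roll period has elapsed, even with zero new writes. */
    boolean shouldRoll(long nowMs) {
        return nowMs - lastRollTimeMs >= rollPeriodMs;
    }

    /** Record that the log was rolled at time nowMs. */
    void markRolled(long nowMs) {
        lastRollTimeMs = nowMs;
    }
}
```

A background thread would call shouldRoll on each tick and, when it returns true, close the current log file and open a new one; closing the file finalizes its blocks, which is what lets the DN decommission finish.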
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)