[ https://issues.apache.org/jira/browse/HBASE-2728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12879252#action_12879252 ]

stack commented on HBASE-2728:
------------------------------

I looked at this patch again:

{code}
-      Runtime.getRuntime().removeShutdownHook(hdfsClientFinalizer);
+      boolean registered =
+          Runtime.getRuntime().removeShutdownHook(hdfsClientFinalizer);
+      if (!registered) {
+        LOG.info("The HDFS shutdown hook isn't where we expect it, " +
+            "will call close during shutdown");
+        hdfsSupportsAutoCloseDisabling = true;
+      }
{code}

In the above, I'd say you should put a better explanation in the log message;
mention that you are going to presume fs.automatic.close is in place.

Do you think it would pay to do better introspection earlier in this method,
looking for hdfsClientFinalizer explicitly in the Cache? Then you'd know you
have an hdfs w/ HADOOP-4829 in place. Maybe not. Maybe that's what I should do
on trunk and this is good enough for branch, presuming all tests pass on both
apache and cloudera?
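For reference, the check in the patch hinges on Runtime.removeShutdownHook returning false when the given Thread was never registered. A minimal standalone sketch of that semantics (hypothetical hook objects, not HBase's actual hdfsClientFinalizer):

{code}
public class ShutdownHookProbe {
  public static void main(String[] args) {
    // A hook we register, then remove: removeShutdownHook returns true.
    Thread hook = new Thread(() -> {});
    Runtime.getRuntime().addShutdownHook(hook);
    boolean removed = Runtime.getRuntime().removeShutdownHook(hook);
    System.out.println("registered hook removed: " + removed);   // true

    // Removing a hook that was never registered returns false --
    // this is the !registered branch the patch uses to infer that
    // HDFS is managing its own shutdown hook.
    boolean phantom =
        Runtime.getRuntime().removeShutdownHook(new Thread(() -> {}));
    System.out.println("unregistered hook removed: " + phantom); // false
  }
}
{code}

So a false return only tells you the hook isn't where we expect it; it doesn't by itself confirm fs.automatic.close support, hence the suggestion to introspect the Cache directly.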

> Support for HADOOP-4829
> -----------------------
>
>                 Key: HBASE-2728
>                 URL: https://issues.apache.org/jira/browse/HBASE-2728
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Jean-Daniel Cryans
>            Assignee: Jean-Daniel Cryans
>             Fix For: 0.20.6
>
>         Attachments: HBASE-2728.patch
>
>
> Users who have a HADOOP-4829-patched hadoop will run into the issue that 
> closing a RS cleanly results in data loss, because the FileSystem will be 
> closed before the regions are. Cloudera is an example. We need to support 
> those users.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
