[ https://issues.apache.org/jira/browse/HBASE-2372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12849449#action_12849449 ]

Jean-Daniel Cryans commented on HBASE-2372:
-------------------------------------------

I'm not a fan of this loop:

{code}
+    while (!recovered) {
+      try {
+        FSDataOutputStream out = fs.append(p);
+        out.close();
+        recovered = true;
+      } catch (IOException e) {
+        LOG.info("Failed open for append, waiting on lease recovery: " + p, e);
+        try {
+          Thread.sleep(1000);
+        } catch (InterruptedException ex) {
+          // ignore it and try again
+        }
+      }
+    }
{code}

Looks like if there's any other kind of exception, like a missing block or 
whatnot, we'll get stuck in that loop forever? I guess this also applies to 
0.20.
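
A bounded variant would only keep retrying while the failure still looks like lease recovery in progress, and would give up after a fixed number of attempts instead of spinning forever. A minimal sketch of that idea (the helper name, retry counts, and probe shape here are hypothetical, not from the patch):

{code}
import java.io.IOException;
import java.util.concurrent.Callable;

public class LeaseRecoveryRetry {
  /**
   * Retry an append-probe a bounded number of times instead of looping
   * forever. Returns true if the probe eventually succeeded, false once
   * the retries are exhausted so the caller can surface the failure.
   */
  static boolean recoverWithRetries(Callable<Void> appendProbe,
                                    int maxRetries, long sleepMs)
      throws InterruptedException {
    for (int attempt = 0; attempt < maxRetries; attempt++) {
      try {
        appendProbe.call();   // e.g. fs.append(p) followed by out.close()
        return true;          // lease recovered
      } catch (Exception e) {
        // A real implementation would inspect the exception and rethrow
        // anything that is not lease-recovery-in-progress (missing block,
        // permissions, etc.) instead of blindly retrying.
        Thread.sleep(sleepMs);
      }
    }
    return false;             // give up; let the caller decide what to do
  }

  public static void main(String[] args) throws Exception {
    // Simulate a file whose lease recovers on the third probe.
    final int[] calls = {0};
    Callable<Void> probe = () -> {
      if (++calls[0] < 3) throw new IOException("lease recovery in progress");
      return null;
    };
    boolean ok = recoverWithRetries(probe, 5, 10);
    System.out.println("recovered=" + ok + " after " + calls[0] + " attempts");
  }
}
{code}

That keeps the 1-second-sleep shape of the patch but caps the total wait and leaves room to distinguish recoverable from fatal IOExceptions.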

> hbase trunk should check dfs.support.append and use append() to recover 
> logfiles (as in branch)
> ------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-2372
>                 URL: https://issues.apache.org/jira/browse/HBASE-2372
>             Project: Hadoop HBase
>          Issue Type: Bug
>            Reporter: ryan rawson
>            Assignee: ryan rawson
>             Fix For: 0.21.0
>
>         Attachments: HBASE-2372.txt
>
>
> This is a backport of the feature in the hlog recovery where we call 
> fs.append(logfile) then out.close() to recover a logfile in a situation where 
> we run with HDFS-200 (i.e. dfs.support.append = true).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.