[ https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084107#comment-15084107 ]

Chris Nauroth commented on HADOOP-12678:
----------------------------------------

[~madhuch-ms], thank you for the patch.  I have a few notes.

# In {{NativeAzureFileSystem.FolderRenamePending#deleteRenamePendingFile}}, the 
exception handling logic is quite complex.  There is an unchecked downcast to 
{{StorageException}}, which could cause a {{ClassCastException}}.  There is no 
check that the cause is non-null.  I don't see why there is a nested {{catch 
(Exception e2)}}, because I don't see a possibility of any exception being 
thrown in that block, unless it was put in place to mask the 
{{ClassCastException}}.  There is no need to wrap the original {{IOException}} 
in a whole new {{IOException}} before throwing it.  That only makes the stack 
trace longer without adding new information.  The code was also incorrectly 
indented, which made it difficult to read.  I suggest simplifying the 
{{catch (IOException e)}} block to this:
{code}
        Throwable cause = e.getCause();
        if (cause != null && cause instanceof StorageException &&
            "BlobNotFound".equals(((StorageException)cause).getErrorCode())) {
          LOG.warn("rename pending file " + redoFile + " is already deleted");
        } else {
          throw e;
        }
{code}
# Is {{NativeAzureFileSystem.FolderRenamePending#deleteRenamePendingFile}} 
marked {{public}} only so that the tests can call it?  If so, then please make 
it package-private (remove {{public}}) and apply the {{VisibleForTesting}} 
annotation, as in the sketch after this list.
# Please ensure any lines that you are changing or adding are shorter than 80 
characters.
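
For #2, a minimal sketch of the visibility change, using Guava's 
{{VisibleForTesting}} annotation; the enclosing class name and the empty 
method body are illustrative only, not the actual patch:
{code}
import com.google.common.annotations.VisibleForTesting;

import java.io.IOException;

public class FolderRenamePendingSketch {

  // Package-private (no access modifier): production callers outside the
  // package cannot reach the method, but tests in the same package still can.
  // The annotation documents that the reduced visibility exists for testing.
  @VisibleForTesting
  void deleteRenamePendingFile() throws IOException {
    // ... existing deletion logic would remain here unchanged ...
  }
}
{code}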


> Handle empty rename pending metadata file during atomic rename in redo path
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-12678
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12678
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/azure
>            Reporter: madhumita chakraborty
>            Assignee: madhumita chakraborty
>            Priority: Critical
>         Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch, 
> HADOOP-12678.003.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path.
> During an atomic rename we create a metadata file (-renamePending.json) for 
> the rename.  We create it in two steps:
> 1. We create an empty blob corresponding to the .json file in its real 
> location.
> 2. We create a scratch file, write the rename-pending contents to it, and 
> then copy it over into the blob described in step 1.
> If the process crashes after step 1 and before step 2 completes, we are left 
> with a zero-size blob corresponding to the pending rename metadata file.
> This scenario can occur in the /hbase/.tmp folder because it is considered a 
> candidate folder for atomic rename.  When HMaster starts up, it executes 
> listStatus on the .tmp folder to clean up pending data.  At that stage, due 
> to the lazy pending-rename completion process, we look for these .json 
> files.  On seeing an empty file, the process simply throws a fatal 
> exception, assuming something went wrong.
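
As an illustration of the handling the description calls for, here is a hedged 
sketch of how the redo path might tolerate a zero-length -renamePending.json 
blob; the class and method names are hypothetical, not the actual patch:
{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RedoPendingRenameSketch {

  /**
   * If the -renamePending.json blob is empty, the crash happened after step 1
   * (create the blob) and before step 2 (write its contents), so there is no
   * rename to redo.  Delete the stale file and continue instead of failing.
   */
  static boolean redoIfPossible(FileSystem fs, Path renamePendingFile)
      throws IOException {
    FileStatus status = fs.getFileStatus(renamePendingFile);
    if (status.getLen() == 0) {
      fs.delete(renamePendingFile, false);
      return false;  // nothing to redo; the caller skips this entry
    }
    // ... parse the JSON contents and complete the pending rename as before ...
    return true;
  }
}
{code}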


