[
https://issues.apache.org/jira/browse/HDFS-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14302737#comment-14302737
]
Hadoop QA commented on HDFS-7707:
---------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12696056/HDFS-7707.001.patch
against trunk revision 8cb4731.
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new
or modified test file.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 javadoc{color}. There were no new javadoc warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with
eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 2.0.3) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:red}-1 core tests{color}. The patch failed these unit tests in
hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.server.namenode.TestCommitBlockSynchronization
Test results:
https://builds.apache.org/job/PreCommit-HDFS-Build/9406//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9406//console
This message is automatically generated.
> Edit log corruption due to delayed block removal again
> ------------------------------------------------------
>
> Key: HDFS-7707
> URL: https://issues.apache.org/jira/browse/HDFS-7707
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 2.6.0
> Reporter: Yongjun Zhang
> Assignee: Yongjun Zhang
> Attachments: HDFS-7707.001.patch, reproduceHDFS-7707.patch
>
>
> Edit log corruption is seen again, even with the fix of HDFS-6825.
> Prior to the HDFS-6825 fix, if dirX is deleted recursively, an OP_CLOSE can
> get into the edit log for a fileY under dirX, thus corrupting the edit log
> (restarting the NN with that edit log would fail).
> What HDFS-6825 does to fix this issue is detect whether fileY is already
> deleted by checking the ancestor dirs on its path: if any of them doesn't
> exist, then fileY is already deleted, and no OP_CLOSE is written to the edit
> log for the file.
> For this new edit log corruption, what I found was that the client first
> deleted dirX recursively, then created another dir with exactly the same
> name as dirX right away. Because HDFS-6825 counts on the namespace check
> (whether dirX exists in its parent dir) to decide whether a file has been
> deleted, the newly created dirX defeats that check, so an OP_CLOSE for the
> already deleted file still gets into the edit log, due to delayed block
> removal.
> What we need is a more robust way to detect whether a file has been
> deleted.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)