[
https://issues.apache.org/jira/browse/HDFS-5443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13813579#comment-13813579
]
Jing Zhao commented on HDFS-5443:
---------------------------------
bq. When we delete directory:
I think we still handle the files. After the "if-else" section, we have the
following code, which goes down into the subtree and cleans the files under the
deleted directory.
{code}
counts.add(cleanSubtreeRecursively(snapshot, prior, collectedBlocks,
    removedINodes, priorDeleted, countDiffChange));
{code}
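For reference, a simplified sketch of what that recursive clean-up does (this is
not the exact HDFS source; the signatures and bookkeeping are trimmed, so treat
the details as assumptions): it walks the children of the deleted directory and
lets each child, including the files, clean up its own diffs and blocks.
{code}
// Simplified sketch, not the real implementation: the recursive clean-up visits
// every child of the deleted directory and delegates to the child's own cleanSubtree.
Quota.Counts cleanSubtreeRecursively(Snapshot snapshot, Snapshot prior,
    BlocksMapUpdateInfo collectedBlocks, List<INode> removedINodes) {
  Quota.Counts counts = Quota.Counts.newInstance();
  for (INode child : getChildrenList(prior)) {
    // for file children this is where the blocks, including a trailing
    // 0-sized block, end up being collected
    counts.add(child.cleanSubtree(snapshot, prior, collectedBlocks,
        removedINodes, true));
  }
  return counts;
}
{code}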
And for INodeFile(UnderConstruction)WithSnapshot, we call
FileWithSnapshot.Util#collectBlocksAndClear to clear the blocks, which will
also remove the 0-sized block:
{code}
int n = 0;
// find the minimum n such that the size of the first n blocks covers max
for(long size = 0; n < oldBlocks.length && max > size; n++) {
  size += oldBlocks[n].getNumBytes();
}
// starting from block n, the data is beyond max.
{code}
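To make that concrete, here is a standalone illustration with made-up block sizes
(plain longs instead of BlockInfo objects, so it is only an analogy to the real
loop): when max equals the length of the fully written data, the loop stops before
the trailing 0-sized block, so that block falls into the "beyond max" range and is
collected for removal.
{code}
// Standalone illustration with made-up sizes; not HDFS code.
public class BeyondMaxExample {
  public static void main(String[] args) {
    long[] oldBlockSizes = { 128L << 20, 0L }; // last block allocated, 0 bytes written
    long max = 128L << 20;                     // file length recorded by the snapshot diff

    // same loop shape as above: find the first block index n whose data is beyond max
    int n = 0;
    for (long size = 0; n < oldBlockSizes.length && max > size; n++) {
      size += oldBlockSizes[n];
    }

    // prints 1: blocks[1..] -- here just the 0-sized block -- are beyond max
    // and would be collected for removal
    System.out.println("first block beyond max: index " + n);
  }
}
{code}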
> Namenode can get stuck in safemode on restart if it crashes just after addBlock
> logsync and after taking a snapshot for such a file.
> ----------------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-5443
> URL: https://issues.apache.org/jira/browse/HDFS-5443
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: snapshots
> Affects Versions: 3.0.0, 2.2.0
> Reporter: Uma Maheswara Rao G
> Assignee: sathish
>
> This issue was reported by Prakash and Sathish.
> On looking into the issue, the following sequence of events happens:
> 1) The client added a block at the NN and the NN just did a logsync,
> so the NN has the block ID persisted.
> 2) Before returning the addBlock response to the client, take a snapshot of the
> root or a parent directory of that file.
> 3) Delete the parent directory of that file.
> 4) Now crash the NN without responding success to the client for that addBlock
> call.
> Now, on restart, the Namenode will get stuck in safemode. (A rough test skeleton
> for these steps is sketched below.)
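As a purely illustrative aid, here is a hypothetical MiniDFSCluster-based skeleton
for the steps above; it is not the test attached to this JIRA, and the class name,
paths, and sizes are made up. Step 4 needs the NN to die right after the addBlock
edit is logsynced but before the RPC response reaches the client, which requires
fault injection that is not shown here; a plain restart only approximates it.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;

// Hypothetical skeleton, not the actual regression test for HDFS-5443.
public class SnapshotDeleteCrashSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      DistributedFileSystem fs = cluster.getFileSystem();
      Path root = new Path("/");
      Path dir = new Path("/parent");
      Path file = new Path(dir, "f");
      fs.mkdirs(dir);
      fs.allowSnapshot(root);

      // 1) start writing so an addBlock is issued and logsynced on the NN;
      //    the stream is deliberately left open, as in the report
      FSDataOutputStream out = fs.create(file);
      out.write(new byte[1]);
      out.hflush();

      // 2) take a snapshot of an ancestor (here the root) while the file is
      //    still under construction
      fs.createSnapshot(root, "s1");

      // 3) delete the parent directory of that file
      fs.delete(dir, true);

      // 4) the real scenario crashes the NN before the addBlock response reaches
      //    the client; that timing needs fault injection and is not modeled here
      cluster.restartNameNode();

      // the report is that the restarted NN then stays in safemode
      System.out.println("in safemode: "
          + fs.setSafeMode(SafeModeAction.SAFEMODE_GET));
    } finally {
      cluster.shutdown();
    }
  }
}
{code}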