[
https://issues.apache.org/jira/browse/HDFS-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14366016#comment-14366016
]
Kihwal Lee commented on HDFS-7587:
----------------------------------
Is it possible for the last block size to be greater than the preferred block
size?
{code}
+    final long diff = (file.getPreferredBlockSize() - lastBlock.getNumBytes())
+        * file.getBlockReplication();
{code}
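If the last block can in fact be larger than the preferred block size, the diff above goes negative and the quota adjustment would be wrong. A minimal sketch of that concern (method and class names are illustrative, not the actual HDFS-7587 patch; clamping to zero is just one possible handling):

{code:java}
// Hypothetical sketch of the quota-delta computation under discussion.
// If lastBlockNumBytes > preferredBlockSize, the raw diff is negative;
// here we clamp it to zero rather than reserve a negative amount.
public class QuotaDeltaSketch {
    static long remainingSpaceDelta(long preferredBlockSize,
                                    long lastBlockNumBytes,
                                    short replication) {
        final long diff = preferredBlockSize - lastBlockNumBytes;
        // Guard against a last block that already exceeds the preferred size.
        return Math.max(0L, diff) * replication;
    }

    public static void main(String[] args) {
        // Normal case: 128 MB preferred, 100 MB last block, replication 3.
        System.out.println(remainingSpaceDelta(128L << 20, 100L << 20, (short) 3));
        // Oversized last block: delta clamps to 0 instead of going negative.
        System.out.println(remainingSpaceDelta(128L << 20, 200L << 20, (short) 3));
    }
}
{code}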
> Edit log corruption can happen if append fails with a quota violation
> ---------------------------------------------------------------------
>
> Key: HDFS-7587
> URL: https://issues.apache.org/jira/browse/HDFS-7587
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Reporter: Kihwal Lee
> Assignee: Jing Zhao
> Priority: Blocker
> Attachments: HDFS-7587.001.patch, HDFS-7587.002.patch, HDFS-7587.patch
>
>
> We have seen a standby namenode crashing due to edit log corruption. It was
> complaining that {{OP_CLOSE}} cannot be applied because the file is not
> under-construction.
> When a client tried to append to the file, the remaining space quota was
> very small. This caused {{prepareFileForWrite()}} to fail, but only after the
> inode had already been converted for writing and a lease added. Since these
> changes were not undone when the quota violation was detected, the file was
> left under construction with an active lease, and {{OP_ADD}} was never logged.
> A subsequent {{append()}} eventually triggered a lease recovery after the soft
> limit period expired. This resulted in {{commitBlockSynchronization()}}, which
> closed the file and logged {{OP_CLOSE}}. Since there was no corresponding
> {{OP_ADD}}, replaying the edit log could not apply the {{OP_CLOSE}}.
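The failure mode above can be sketched with a toy replayer (class, method, and opcode handling are illustrative assumptions, not the actual HDFS implementation): {{OP_CLOSE}} is only valid for a file that a prior {{OP_ADD}} put under construction, so a missing {{OP_ADD}} makes replay fail.

{code:java}
import java.util.HashSet;
import java.util.Set;

// Toy edit-log replayer illustrating the described corruption: applying
// OP_CLOSE to a path that no OP_ADD marked under-construction throws,
// which is what crashed the standby namenode on replay.
public class EditReplaySketch {
    private final Set<String> underConstruction = new HashSet<>();

    void applyOpAdd(String path) {
        underConstruction.add(path);
    }

    void applyOpClose(String path) {
        if (!underConstruction.remove(path)) {
            throw new IllegalStateException(
                "OP_CLOSE for " + path + " but file is not under construction");
        }
    }
}
{code}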
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)