[
https://issues.apache.org/jira/browse/HDFS-16985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713362#comment-17713362
]
Chengwei Wang commented on HDFS-16985:
--------------------------------------
[~weichiu] thanks for the comment.
I have tried to explain the root cause in the description.
The code in BlockSender:
{code:java}
try {
  // check block file
  // check block meta file
} catch (FileNotFoundException e) {
  if ((e.getMessage() != null) && !(e.getMessage()
      .contains("Too many open files"))) {
    // The replica is on its volume map but not on disk
    datanode.notifyNamenodeDeletedBlock(block, replica.getStorageUuid());
    // the local block file will be deleted asynchronously
    datanode.data.invalidate(block.getBlockPoolId(),
        new Block[] {block.getLocalBlock()});
  }
  throw e;
} finally {
  if (!keepMetaInOpen) {
    IOUtils.closeStream(metaIn);
  }
}
{code}
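One way to avoid invalidating a replica on a transient volume fault is to confirm that the file is really gone before treating the FileNotFoundException as a permanently missing block. A minimal sketch of that idea, using only plain java.io (the helper name `confirmedMissing` is hypothetical, not an actual Hadoop API):

```java
import java.io.File;

public class TransientFnfeCheck {

  /**
   * Re-check whether a file is really absent before treating a
   * FileNotFoundException as a permanently missing block.
   * Returns true only if the file is still absent after all retries,
   * so a briefly degraded volume (e.g. a faulted EBS volume that
   * recovers) is less likely to trigger block invalidation.
   */
  static boolean confirmedMissing(File blockFile, int retries, long delayMs)
      throws InterruptedException {
    for (int i = 0; i < retries; i++) {
      if (blockFile.exists()) {
        return false; // the volume recovered; do not invalidate
      }
      Thread.sleep(delayMs);
    }
    return !blockFile.exists();
  }

  public static void main(String[] args) throws Exception {
    File missing = new File("definitely-not-here-98765.blk");
    // prints true: the file stays absent across all retries
    System.out.println(confirmedMissing(missing, 3, 10L));
  }
}
```

The delay and retry count are arbitrary here; in a real DataNode this decision would have to account for volume health checks rather than a fixed sleep.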
> delete local block file when FileNotFoundException occurred may lead to
> missing block.
> --------------------------------------------------------------------------------------
>
> Key: HDFS-16985
> URL: https://issues.apache.org/jira/browse/HDFS-16985
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Reporter: Chengwei Wang
> Assignee: Chengwei Wang
> Priority: Major
>
> We encountered several missing-block problems in our production cluster, where
> HDFS runs on AWS EC2 + EBS.
> The root cause:
> # the block has only 1 replica left and has not been reconstructed yet
> # the DN checks that the block file exists when constructing the BlockSender
> # the EBS check fails and throws FileNotFoundException (the EBS volume may be
> in a fault condition)
> # the DN invalidates the block and schedules async block deletion
> # the EBS volume is already back to normal when the DN deletes the block
> # the block file is deleted permanently and cannot be recovered
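The six steps above can be sketched as a toy model (not Hadoop code; the class and flags are illustrative): a read during a transient volume fault schedules an async delete, and the delete later runs against the recovered, healthy volume.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class AsyncDeleteRace {
  // toy state: whether the volume is faulted, and whether the
  // last replica of the block is still on disk
  static final AtomicBoolean volumeFaulty = new AtomicBoolean(true);
  static final AtomicBoolean blockPresent = new AtomicBoolean(true);

  static void readBlock(ExecutorService deleter) {
    if (volumeFaulty.get()) {
      // steps 2-4: the existence check fails during the fault,
      // and an async deletion is scheduled anyway
      deleter.submit(() -> blockPresent.set(false)); // step 6 runs later
    }
  }

  public static void main(String[] args) throws Exception {
    ExecutorService deleter = Executors.newSingleThreadExecutor();
    readBlock(deleter);       // read attempted while the volume is faulted
    volumeFaulty.set(false);  // step 5: the volume recovers
    deleter.shutdown();
    deleter.awaitTermination(1, TimeUnit.SECONDS);
    // the only replica is gone even though the volume is healthy again
    System.out.println("block present after recovery: " + blockPresent.get());
  }
}
```

The point of the sketch is that the deletion decision is made from stale information taken during the fault, so recovery of the volume cannot undo it.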
--
This message was sent by Atlassian Jira
(v8.20.10#820010)