[
https://issues.apache.org/jira/browse/HDFS-3519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ming Ma updated HDFS-3519:
--------------------------
Attachment: HDFS-3519-2.patch
The change of the slowness parameter from 2 to 20 causes
testReadsAllowedDuringCheckpoint to time out, due to the large number of edits
that test case writes. The motivation for adjusting this parameter is to make
sure both NNs are checkpointing in testBothNodesInStandbyState. That doesn't
seem to be an issue in trunk, given that the OIV image checkpoint adds extra
delay before the image upload. Still, we can adjust the value from 2 to 5, just
to be safe.
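For reference, a minimal, self-contained sketch of the kind of per-write delay
the slowness parameter stands for (the class below is illustrative, not the
actual hook in TestStandbyCheckpoints): slowing every write stretches the
checkpoint enough for testBothNodesInStandbyState to observe both NNs
checkpointing, but a large value compounds across every edit and pushes
testReadsAllowedDuringCheckpoint past its timeout.
{code:java}
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Illustrative only -- not the actual TestStandbyCheckpoints hook. Each write
// sleeps delayMs, so total checkpoint time grows with the amount of image
// data written times delayMs.
class SlowOutputStream extends FilterOutputStream {
  private final long delayMs;

  SlowOutputStream(OutputStream out, long delayMs) {
    super(out);
    this.delayMs = delayMs;
  }

  @Override
  public void write(int b) throws IOException {
    try {
      Thread.sleep(delayMs); // injected per-write slowness
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      throw new IOException("interrupted while simulating a slow checkpoint", e);
    }
    out.write(b);
  }
}
{code}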
> Checkpoint upload may interfere with a concurrent saveNamespace
> ---------------------------------------------------------------
>
> Key: HDFS-3519
> URL: https://issues.apache.org/jira/browse/HDFS-3519
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Reporter: Todd Lipcon
> Assignee: Ming Ma
> Priority: Critical
> Attachments: HDFS-3519-2.patch, HDFS-3519.patch, test-output.txt
>
>
> TestStandbyCheckpoints failed in [precommit build
> 2620|https://builds.apache.org/job/PreCommit-HDFS-Build/2620//testReport/]
> due to the following issue:
> - both nodes were in Standby state, and configured to checkpoint "as fast as
> possible"
> - NN1 starts to save its own namespace
> - NN2 starts to upload a checkpoint for the same txid. So, both threads are
> writing to the same file fsimage.ckpt_12, but the actual file contents
> correspond to the uploading thread's data.
> - NN1 finishes its saveNamespace operation while NN2 is still uploading, so
> it renames the ckpt file. However, the contents of the file are still empty
> since NN2 hasn't sent any bytes.
> - NN2 finishes the upload, and the rename() call fails, which causes the
> directory to be marked failed, etc.
> The result is that there is a file fsimage_12 which appears to be a finalized
> image but in fact is incompletely transferred. When the transfer completes,
> the problem "heals itself" so there wouldn't be persistent corruption unless
> the machine crashes at the same time. And even then, we'd still have the
> earlier checkpoint to restore from.
> I believe this same race could occur in a non-HA setup if a user puts the NN
> in safe mode and issues saveNamespace operations concurrently with a 2NN
> checkpoint.
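The report above boils down to two writers racing on fsimage.ckpt_<txid>. A
minimal sketch of one way to serialize them, using a per-txid guard (the class
and method names are hypothetical, not taken from the attached patch):
{code:java}
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical illustration: whoever wants to write fsimage.ckpt_<txid> must
// claim the txid first; the loser backs off instead of truncating or renaming
// a file another thread is still filling.
class CheckpointGuard {
  private final Set<Long> txidsBeingWritten =
      Collections.newSetFromMap(new ConcurrentHashMap<Long, Boolean>());

  /** Returns true if the caller now owns fsimage.ckpt_<txid>. */
  boolean tryClaim(long txid) {
    return txidsBeingWritten.add(txid);
  }

  /** Call after the ckpt file has been renamed or the attempt aborted. */
  void release(long txid) {
    txidsBeingWritten.remove(txid);
  }
}
{code}
With such a guard, a saveNamespace and a concurrent checkpoint upload for the
same txid could never interleave writes and renames on the same ckpt file.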
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)