[
https://issues.apache.org/jira/browse/HDFS-3734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428582#comment-13428582
]
Vinay commented on HDFS-3734:
-----------------------------
{quote}Because the replication factor in effect before the cluster restart is
persistently stored in the metadata of each file, I think we can calculate the
safe block count using the former replication rather than the modified one. The
cluster will then exit safe mode and run replication to reach the required
replication. If this approach is OK, I can fix it.{quote}
+1 for this.
[~umamaheswararao], what do you say?
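A minimal, self-contained sketch of the quoted idea (the class and method names
below are made up for illustration, not the real HDFS ones): a block counts as
safe once it has reached the smaller of the configured min replication and the
replication persisted in the file's metadata, so the NN can still leave startup
safemode and re-replication then raises the blocks to the new minimum.
{code:java}
import java.util.List;

public class SafeBlockSketch {

    // Hypothetical per-block view: the replication stored in file metadata
    // before the restart, and the replicas reported by DataNodes so far.
    static final class BlockInfo {
        final int storedFileReplication;
        final int liveReplicas;
        BlockInfo(int storedFileReplication, int liveReplicas) {
            this.storedFileReplication = storedFileReplication;
            this.liveReplicas = liveReplicas;
        }
    }

    // A block is "safe" once it reaches the smaller of the two thresholds.
    static boolean isSafe(BlockInfo b, int configuredMinReplication) {
        int needed = Math.min(configuredMinReplication, b.storedFileReplication);
        return b.liveReplicas >= needed;
    }

    // Safemode can be left once every block is safe; re-replication then
    // brings under-replicated blocks up to the new minimum.
    static boolean canLeaveSafeMode(List<BlockInfo> blocks, int configuredMinReplication) {
        return blocks.stream().allMatch(b -> isSafe(b, configuredMinReplication));
    }

    public static void main(String[] args) {
        // Two blocks written with replication 1, then min replication raised to 2.
        List<BlockInfo> blocks = List.of(new BlockInfo(1, 1), new BlockInfo(1, 1));
        System.out.println(canLeaveSafeMode(blocks, 2)); // true
        // Requiring liveReplicas >= 2 for every block instead would never be
        // satisfied here, which is the reported hang.
    }
}
{code}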
> TestFSEditLogLoader.testReplicationAdjusted() will hang if the number of
> blocks is more than one
> ---------------------------------------------------------------------------------------------
>
> Key: HDFS-3734
> URL: https://issues.apache.org/jira/browse/HDFS-3734
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 2.1.0-alpha, 3.0.0
> Reporter: Vinay
>
> TestFSEditLogLoader.testReplicationAdjusted(), which was added in HDFS-2003,
> will fail if the number of blocks before the cluster restart is more than one.
> Test Scenario:
> --------------
> 1. Write a file with min replication set to 1 and a replication factor of 1.
> 2. Change the min replication to 2 and restart the cluster.
> Expected: The new min replication should be satisfied automatically on cluster
> restart by creating additional replicas of the blocks.
> Currently, if the number of blocks before the restart is only one, then on
> restart the NN does not enter safemode, so re-replication happens and the min
> replication factor is satisfied.
> If the initial block count is more than one, with each block having a
> replication factor of 1, then on restart the NN enters safemode and never
> comes out (see the sketch below).