[ https://issues.apache.org/jira/browse/HDFS-3772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431637#comment-13431637 ]
Yanbo Liang commented on HDFS-3772:
-----------------------------------

{quote}
Client's write will succeed if it meets min replication. (So, it is possible that replication might not meet the replication factor for that file yet.)
{quote}
Indeed, that is exactly how the minimum replication is used. I also wonder whether we should ever restart a cluster with a bigger minimum replication, but the parameter in hdfs-site.xml is not restricted to values smaller than or equal to the previous one (a sample entry is sketched after the quoted description below). Should we add this limitation to the parameter description?

{quote}
Why can't you come out of safe mode explicitly by giving an admin command and let replication happen? After this, restarts will not have this issue.
{quote}
That works when we know clearly that the hang is caused by this issue. But if the cluster simply hangs in safe mode and users cannot tell whether this issue is the cause, it is dangerous to exit safe mode explicitly, and doing so may lead to HDFS metadata corruption. (The command in question is listed below for reference.)

> HDFS NN will hang in safe mode and never come out if we increase
> dfs.namenode.replication.min.
> -------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-3772
>                 URL: https://issues.apache.org/jira/browse/HDFS-3772
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 2.0.0-alpha
>            Reporter: Yanbo Liang
>
> If the NN restarts with a new minimum replication
> (dfs.namenode.replication.min), any files created with the old replication
> count are expected to be bumped up to the new minimum automatically upon
> restart. However, what actually happens is that if the NN restarts with a
> new minimum replication which is bigger than the old one, the NN hangs in
> safe mode and never comes out.
> The corresponding test case only passes because we are missing some test
> coverage; this was discussed in HDFS-3734.
> If the NN receives enough reported blocks satisfying the minimum
> replication, it exits safe mode. However, if we configure a bigger minimum
> replication, there will not be enough blocks satisfying the new limit.
> Look at this code segment in FSNamesystem.java:
>
>   private synchronized void incrementSafeBlockCount(short replication) {
>     if (replication == safeReplication) {
>       this.blockSafe++;
>       checkMode();
>     }
>   }
>
> The DNs report blocks to the NN, and if the replication equals
> safeReplication (which is assigned from the new minimum replication), we
> increment blockSafe. But if we configure a bigger minimum replication, all
> the blocks whose replication is lower than it can never satisfy this
> equality, even though the NN has actually received complete block
> information. As a result, blockSafe does not increment as usual, never
> reaches the amount needed to exit safe mode, and the NN hangs.
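For reference, the parameter discussed above is set in hdfs-site.xml as shown below; this is only a sketch, and the value 2 is purely illustrative, not a recommendation:

{code:xml}
<!-- hdfs-site.xml: minimal block replication the NN will enforce.
     The value 2 is only an example for this discussion. -->
<property>
  <name>dfs.namenode.replication.min</name>
  <value>2</value>
</property>
{code}

Nothing in the property itself prevents setting a value larger than the one the cluster was previously running with, which is why the restriction (or at least a warning) would have to live in the parameter description or in validation code.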
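The admin command mentioned in the second quote is the dfsadmin safe-mode switch; a cautious sequence (sketch only) would be to check the state before forcing an exit:

{code}
# Check whether the NN is still in safe mode.
hdfs dfsadmin -safemode get

# Force the NN out of safe mode. Only safe to run when we are certain the
# hang is caused by this issue and not by genuinely missing blocks.
hdfs dfsadmin -safemode leave
{code}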
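To make the failure path in the quoted description concrete, here is the same snippet from FSNamesystem.java with discussion comments added (the comments are mine, not part of the source):

{code:java}
private synchronized void incrementSafeBlockCount(short replication) {
  // 'replication' is the number of replicas reported so far for one block;
  // 'safeReplication' is taken from dfs.namenode.replication.min at startup.
  if (replication == safeReplication) {
    // Example: a block written when the minimum was 1 has only one replica.
    // After a restart with dfs.namenode.replication.min = 2, this method is
    // called with replication = 1, the equality never holds, and the block
    // is never counted as safe.
    this.blockSafe++;
    checkMode();
  }
  // blockSafe therefore stays below the safe-mode threshold and the NN never
  // leaves safe mode on its own.
}
{code}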