[
https://issues.apache.org/jira/browse/HDFS-14164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796585#comment-16796585
]
Arpit Agarwal commented on HDFS-14164:
--------------------------------------
Nice find [~kpalanisamy]. We should fix this.
[~shv], do you have any ideas how to fix TestBackupNode? It's not obvious how
it could be fixed.
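A minimal sketch of the kind of startup bounds check the issue proposes (not the actual patch; the class and method names here are hypothetical): reject dfs.namenode.safemode.threshold-pct values outside [0, 1] instead of silently accepting a permanent-safemode configuration.

{code:java}
// Hypothetical illustration only: validate the safemode threshold once at
// startup and fail fast when it is out of range.
public class SafeModeThresholdCheck {
    static float checkThreshold(float pct) {
        if (pct < 0.0f || pct > 1.0f) {
            // Refuse to start rather than enter a safemode that can
            // never be exited automatically.
            throw new IllegalArgumentException(
                "dfs.namenode.safemode.threshold-pct must be in [0, 1], got " + pct);
        }
        return pct;
    }

    public static void main(String[] args) {
        System.out.println(checkThreshold(0.90f)); // valid setting
        try {
            checkThreshold(90.0f); // the misconfigured 090 case from this issue
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
{code}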
> Namenode should not be started if safemode threshold is out of boundary
> -----------------------------------------------------------------------
>
> Key: HDFS-14164
> URL: https://issues.apache.org/jira/browse/HDFS-14164
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 2.7.3, 3.1.1
> Environment: #apache hadoop-3.1.1
> Reporter: Karthik Palanisamy
> Assignee: Karthik Palanisamy
> Priority: Minor
> Labels: patch
> Attachments: HDFS-14164-001.patch, HDFS-14164-002.patch,
> HDFS-14164-003.patch
>
>
> Mistakenly, a user configured the safemode threshold
> (dfs.namenode.safemode.threshold-pct) as 090 instead of 0.90. With this
> setting, the UI shows an incorrect summary, and the NameNode never leaves
> safemode without manual intervention: the number of reported blocks can
> never reach 90 times the total block count.
> For example:
> {code:java}
> Wrong setting: dfs.namenode.safemode.threshold-pct=090
> Summary:
> Safe mode is ON. The reported blocks 0 needs additional 360 blocks to reach
> the threshold 90.0000 of total blocks 4. The number of live datanodes 3 has
> reached the minimum number 0. Safe mode will be turned off automatically once
> the thresholds have been reached.
> 10 files and directories, 4 blocks (4 replicated blocks, 0 erasure coded
> block groups) = 14 total filesystem object(s).
> {code}
>
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]