[
https://issues.apache.org/jira/browse/HDFS-3368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13279390#comment-13279390
]
Tsz Wo (Nicholas), SZE commented on HDFS-3368:
----------------------------------------------
I just found that the enableDebugLogging constant is outdated. It still refers
to the "FSNamesystem logger", but the actual logger is BlockPlacementPolicy.LOG.
Could you also update it? Below is my suggested change.
{code}
public class BlockPlacementPolicyDefault extends BlockPlacementPolicy {
+ private static final String enableDebugLogging
+ = "For more information, please enable DEBUG log level on "
+ + ((Log4JLogger)LOG).getLogger().getName();
+
private boolean considerLoad;
private boolean preferLocalNode = true;
private NetworkTopology clusterMap;
private FSClusterStats stats;
- static final String enableDebugLogging = "For more information, please enable"
- + " DEBUG level logging on the "
- + "org.apache.hadoop.hdfs.server.namenode.FSNamesystem logger.";
{code}
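The idea behind the suggested change is to build the hint message from the logger object itself, so the message can never drift out of sync with the class name again. A minimal, self-contained sketch of that pattern (using java.util.logging from the JDK in place of Hadoop's commons-logging/Log4JLogger wrapper, and a hypothetical DebugHint class for illustration):

```java
import java.util.logging.Logger;

public class DebugHint {
    // Stand-in for BlockPlacementPolicy.LOG; the real patch unwraps
    // commons-logging's Log4JLogger to reach the underlying logger name.
    static final Logger LOG = Logger.getLogger(
        "org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy");

    // Derived from the logger at class-load time, so renaming or moving the
    // class automatically updates the message; no hardcoded class name.
    static final String ENABLE_DEBUG_LOGGING =
        "For more information, please enable DEBUG log level on "
        + LOG.getName();

    public static void main(String[] args) {
        System.out.println(ENABLE_DEBUG_LOGGING);
    }
}
```

This is the same fix the comment proposes: the stale hardcoded "FSNamesystem logger" string is replaced by whatever name the active logger actually carries.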
> Missing blocks due to bad DataNodes coming up and down.
> -------------------------------------------------------
>
> Key: HDFS-3368
> URL: https://issues.apache.org/jira/browse/HDFS-3368
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: name-node
> Affects Versions: 0.22.0, 1.0.0, 2.0.0, 3.0.0
> Reporter: Konstantin Shvachko
> Assignee: Konstantin Shvachko
> Attachments: blockDeletePolicy-0.22.patch,
> blockDeletePolicy-trunk.patch, blockDeletePolicy.patch
>
>
> All replicas of a block can be removed if bad DataNodes come up and down
> during cluster restart resulting in data loss.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira