[
https://issues.apache.org/jira/browse/HDFS-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
yunjiong zhao updated HDFS-9959:
--------------------------------
Attachment: HDFS-9959.2.patch
How about this one?
In an extreme case, for example when all DataNodes are dead, there may be more
than a hundred thousand missing blocks, and logging all of them could take a
very long time.
{code}
NameNode.blockStateChangeLog.warn("After removed " + dn +
", no live nodes contain the following " + missing.size() +
" blocks: " + missing);
{code}
Which solution is better:
1. Collect only the first thousand missing blocks and ignore the rest, or
2. Collect all missing blocks, but log only the first thousand when there are more than a thousand?
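A minimal sketch of option 2, truncating the logged list rather than the collection itself. The class, method, and constant names (MissingBlockLog, summarize, MAX_LOGGED_BLOCKS) are hypothetical illustrations, not from the attached patch:
{code}
import java.util.List;
import java.util.stream.Collectors;

public class MissingBlockLog {
    // Hypothetical cap on how many blocks appear in one log message.
    static final int MAX_LOGGED_BLOCKS = 1000;

    // Build the log suffix: print everything when the list is small,
    // otherwise print the first MAX_LOGGED_BLOCKS and a count of the rest.
    static String summarize(List<String> missing) {
        if (missing.size() <= MAX_LOGGED_BLOCKS) {
            return missing.toString();
        }
        String head = missing.stream()
            .limit(MAX_LOGGED_BLOCKS)
            .collect(Collectors.joining(", ", "[", ""));
        return head + ", ... " + (missing.size() - MAX_LOGGED_BLOCKS) + " more]";
    }
}
{code}
The warn() call would then log {{summarize(missing)}} instead of {{missing}}, keeping the message bounded regardless of how many blocks are missing.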
> add log when block removed from last live datanode
> --------------------------------------------------
>
> Key: HDFS-9959
> URL: https://issues.apache.org/jira/browse/HDFS-9959
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Reporter: yunjiong zhao
> Assignee: yunjiong zhao
> Priority: Minor
> Attachments: HDFS-9959.1.patch, HDFS-9959.2.patch, HDFS-9959.patch
>
>
> Adding a log message like "BLOCK* No live nodes contain block blk_1073741825_1001, last
> datanode contain it is node: 127.0.0.1:65341" to BlockStateChange should help
> identify which datanode to fix first in order to recover missing blocks.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)