[
https://issues.apache.org/jira/browse/HDFS-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15180572#comment-15180572
]
Haohui Mai commented on HDFS-9906:
----------------------------------
The information is useful for debugging, but I agree that we don't need to print
all of them. We could rate-limit these logs, e.g. print at most 5 log lines per
minute.
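A minimal sketch of such a rate limiter (class and method names here are hypothetical, not from the Hadoop codebase): it allows up to N messages per fixed time window and silently drops the rest.

```java
// Hypothetical sketch of a fixed-window log rate limiter;
// not the actual HDFS implementation.
public class LogRateLimiter {
    private final int maxPerWindow;
    private final long windowMillis;
    private long windowStart;
    private int count;

    public LogRateLimiter(int maxPerWindow, long windowMillis) {
        this.maxPerWindow = maxPerWindow;
        this.windowMillis = windowMillis;
        this.windowStart = System.currentTimeMillis();
    }

    // Returns true if the caller should emit this log line.
    public synchronized boolean shouldLog() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= windowMillis) {
            // Start a new window and reset the budget.
            windowStart = now;
            count = 0;
        }
        return ++count <= maxPerWindow;
    }
}
```

Callers would wrap the spammy WARN with `if (limiter.shouldLog()) { LOG.warn(...); }`, e.g. with a limiter of `new LogRateLimiter(5, 60_000L)` for 5 lines per minute.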
> Remove spammy log spew when a datanode is restarted
> ---------------------------------------------------
>
> Key: HDFS-9906
> URL: https://issues.apache.org/jira/browse/HDFS-9906
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 2.7.2
> Reporter: Elliott Clark
>
> {code}
> WARN BlockStateChange: BLOCK* addStoredBlock: Redundant addStoredBlock
> request received for blk_1109897077_36157149 on node 192.168.1.1:50010 size
> 268435456
> {code}
> This happens far too often to add any useful information. We should either
> move this to a different log level or only warn once per machine.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)