[
https://issues.apache.org/jira/browse/HDFS-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078904#comment-16078904
]
Weiwei Yang commented on HDFS-12082:
------------------------------------
Hi [~vagarychen]
Thanks for helping to review this. You are making a good point. On second
thought, I think it is better to ensure the effective invalidate block limit is
the larger of the value configured in hdfs-site.xml and 20 * heartbeat
interval. This way we don't throttle block deletion on the datanodes too much.
I have revised the patch accordingly; please let me know if the v3 patch makes
sense to you.
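A minimal sketch of that idea (not the literal v3 patch; {{conf}} and
{{intervalSeconds}} are assumed to be in scope as in the existing code):
{code}
// Sketch only: take the larger of the limit configured in hdfs-site.xml
// and 20 * heartbeat interval, so reconfiguring the heartbeat interval
// does not silently lower the effective invalidate limit.
final int configuredLimit = conf.getInt(
    DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_KEY,
    DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_DEFAULT);
this.blockInvalidateLimit = Math.max(
    20 * (int) intervalSeconds, configuredLimit);
{code}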
Thanks.
> BlockInvalidateLimit value is incorrectly set after namenode heartbeat
> interval reconfigured
> ---------------------------------------------------------------------------------------------
>
> Key: HDFS-12082
> URL: https://issues.apache.org/jira/browse/HDFS-12082
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs, namenode
> Reporter: Weiwei Yang
> Assignee: Weiwei Yang
> Attachments: HDFS-12082.001.patch, HDFS-12082.002.patch,
> HDFS-12082.003.patch
>
>
> HDFS-1477 provides an option to reconfigure the namenode heartbeat interval
> without restarting the namenode. When the heartbeat interval is reconfigured,
> {{blockInvalidateLimit}} gets recomputed:
> {code}
> this.blockInvalidateLimit = Math.max(20 * (int) (intervalSeconds),
> DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_DEFAULT);
> {code}
> This doesn't honor the existing value set by {{dfs.block.invalidate.limit}}.
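> For example (illustrative values), if {{dfs.block.invalidate.limit}} is set
> to 5000 and the heartbeat interval is reconfigured to 3 seconds, the limit is
> recomputed as max(20 * 3, 1000) = 1000, where 1000 is
> {{DFS_BLOCK_INVALIDATE_LIMIT_DEFAULT}}, silently dropping the configured 5000.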