[ https://issues.apache.org/jira/browse/HDFS-16502?focusedWorklogId=740649&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-740649 ]
ASF GitHub Bot logged work on HDFS-16502:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 14/Mar/22 03:49
Start Date: 14/Mar/22 03:49
Worklog Time Spent: 10m
Work Description: jojochuang commented on a change in pull request #4064:
URL: https://github.com/apache/hadoop/pull/4064#discussion_r825574426
##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
##########
@@ -2434,6 +2438,27 @@ String reconfigureSlowNodesParameters(final DatanodeManager datanodeManager,
     }
   }
+  private String reconfigureBlockInvalidateLimit(final DatanodeManager datanodeManager,
+      final String property, final String newVal) throws ReconfigurationException {
+    namesystem.writeLock();
+    try {
+      if (newVal == null) {
+        datanodeManager.setBlockInvalidateLimit(DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_DEFAULT);
+        return String.valueOf(DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_DEFAULT);
+      } else {
+        datanodeManager.setBlockInvalidateLimit(Integer.parseInt(newVal));
+        return String.valueOf(datanodeManager.getBlockInvalidateLimit());
+      }
Review comment:
Perhaps we should log the message here instead of in the finally block, because if an exception is thrown here, the property won't actually be reconfigured.
In addition, when newVal == null, the method returns DFS_BLOCK_INVALIDATE_LIMIT_DEFAULT rather than datanodeManager.getBlockInvalidateLimit().
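A minimal sketch of how the method could be reorganized along the lines of this comment: the value is logged inside the try block only after it has been applied, and both branches return what DatanodeManager actually holds. Only the identifiers visible in the diff above (namesystem, datanodeManager, DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_DEFAULT) are taken from the patch; the LOG field, getConf(), and the exception handling follow the usual NameNode reconfiguration pattern and are assumptions here, not the author's change.

```java
  private String reconfigureBlockInvalidateLimit(final DatanodeManager datanodeManager,
      final String property, final String newVal) throws ReconfigurationException {
    namesystem.writeLock();
    try {
      // Fall back to the default when the property is being unset.
      datanodeManager.setBlockInvalidateLimit(newVal == null
          ? DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_DEFAULT
          : Integer.parseInt(newVal));
      // Read the applied value back so the same value is both logged and
      // returned, regardless of which branch was taken above.
      final String result = String.valueOf(datanodeManager.getBlockInvalidateLimit());
      // Log inside the try block: if parsing or applying the value threw, we
      // never get here, so a success message is only emitted on success.
      // LOG and getConf() are assumed to be the usual NameNode members.
      LOG.info("RECONFIGURE* changed {} to {}", property, result);
      return result;
    } catch (NumberFormatException e) {
      throw new ReconfigurationException(property, newVal, getConf().get(property), e);
    } finally {
      namesystem.writeUnlock();
    }
  }
```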
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 740649)
Time Spent: 1h (was: 50m)
> Reconfigure Block Invalidate limit
> ----------------------------------
>
> Key: HDFS-16502
> URL: https://issues.apache.org/jira/browse/HDFS-16502
> Project: Hadoop HDFS
> Issue Type: Task
> Reporter: Viraj Jasani
> Assignee: Viraj Jasani
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h
> Remaining Estimate: 0h
>
> Based on cluster load, it can be helpful to tune the block invalidate limit
> (dfs.block.invalidate.limit). As of today, the only way to change it without
> restarting the Namenode is to reconfigure the heartbeat interval, because the
> effective limit is derived as
> {code:java}
> Math.max(heartbeatInt * 20, blockInvalidateLimit){code}
> This logic is not straightforward, operators are usually not aware of it
> (lack of documentation), and updating the heartbeat interval is not desirable
> in all cases.
> We should provide the ability to alter the block invalidate limit on a live
> cluster without affecting the heartbeat interval, so that operators can
> adjust load at the Datanode level.
> We should also take this opportunity to move the (heartbeatInterval * 20)
> computation into a common method.
--
This message was sent by Atlassian Jira
(v8.20.1#820001)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]