tasanuma commented on a change in pull request #3863:
URL: https://github.com/apache/hadoop/pull/3863#discussion_r837704290



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
##########
@@ -854,6 +865,43 @@ private String reconfSlowDiskParameters(String property, String newVal)
     }
   }
 
+  private String reconfDfsUsageParameters(String property, String newVal)
+      throws ReconfigurationException {
+    String result = null;
+    try {
+      LOG.info("Reconfiguring {} to {}", property, newVal);
+      if (property.equals(FS_DU_INTERVAL_KEY)) {
+        Preconditions.checkNotNull(data, "FsDatasetSpi has not been initialized.");
+        long interval = (newVal == null ? FS_DU_INTERVAL_DEFAULT :
+            Long.parseLong(newVal));
+        result = Long.toString(interval);
+        List<FsVolumeImpl> volumeList = data.getVolumeList();
+        for (FsVolumeImpl fsVolume : volumeList) {
+          Map<String, BlockPoolSlice> blockPoolSlices = fsVolume.getBlockPoolSlices();
+          for (Entry<String, BlockPoolSlice> entry : blockPoolSlices.entrySet()) {
+            entry.getValue().updateDfsUsageConfig(interval, null);
+          }
+        }
+      } else if (property.equals(FS_GETSPACEUSED_JITTER_KEY)) {
+        Preconditions.checkNotNull(data, "FsDatasetSpi has not been initialized.");
+        long jitter = (newVal == null ? FS_GETSPACEUSED_JITTER_DEFAULT :
+            Long.parseLong(newVal));
+        result = Long.toString(jitter);
+        List<FsVolumeImpl> volumeList = data.getVolumeList();
+        for (FsVolumeImpl fsVolume : volumeList) {
+          Map<String, BlockPoolSlice> blockPoolSlices = fsVolume.getBlockPoolSlices();
+          for (Entry<String, BlockPoolSlice> entry : blockPoolSlices.entrySet()) {
+            entry.getValue().updateDfsUsageConfig(null, jitter);
+          }

Review comment:
   ```suggestion
             for (BlockPoolSlice bp : blockPoolSlices.values()) {
               bp.updateDfsUsageConfig(null, jitter);
             }
   ```
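The hunk above follows a common reconfiguration pattern: a null `newVal` means "reset to the default", anything else is parsed as a long. A minimal stand-alone sketch of that pattern (not the actual DataNode code; the default value here is illustrative):

```java
public class ReconfSketch {
  // Illustrative default; in Hadoop the real value is
  // FS_DU_INTERVAL_DEFAULT from the configuration keys.
  static final long FS_DU_INTERVAL_DEFAULT = 600_000L;

  // null means "fall back to the default"; otherwise parse the new value,
  // mirroring the interval/jitter handling in the hunk above.
  static long effectiveInterval(String newVal) {
    return newVal == null ? FS_DU_INTERVAL_DEFAULT : Long.parseLong(newVal);
  }

  public static void main(String[] args) {
    System.out.println(effectiveInterval(null));    // default value
    System.out.println(effectiveInterval("30000")); // parsed value
  }
}
```

Note that `Long.parseLong` throws `NumberFormatException` on bad input; in the PR this surfaces through the surrounding try block as a `ReconfigurationException`.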

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
##########
@@ -854,6 +865,43 @@ private String reconfSlowDiskParameters(String property, String newVal)
     }
   }
 
+  private String reconfDfsUsageParameters(String property, String newVal)
+      throws ReconfigurationException {
+    String result = null;
+    try {
+      LOG.info("Reconfiguring {} to {}", property, newVal);
+      if (property.equals(FS_DU_INTERVAL_KEY)) {
+        Preconditions.checkNotNull(data, "FsDatasetSpi has not been initialized.");
+        long interval = (newVal == null ? FS_DU_INTERVAL_DEFAULT :
+            Long.parseLong(newVal));
+        result = Long.toString(interval);
+        List<FsVolumeImpl> volumeList = data.getVolumeList();
+        for (FsVolumeImpl fsVolume : volumeList) {
+          Map<String, BlockPoolSlice> blockPoolSlices = fsVolume.getBlockPoolSlices();
+          for (Entry<String, BlockPoolSlice> entry : blockPoolSlices.entrySet()) {
+            entry.getValue().updateDfsUsageConfig(interval, null);
+          }

Review comment:
       We can make it simpler.
   ```suggestion
             for (BlockPoolSlice bp : blockPoolSlices.values()) {
               bp.updateDfsUsageConfig(interval, null);
             }
   ```
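The suggestion swaps the `entrySet()` loop for `Map.values()`: when the key is never used, iterating the values directly drops the unused `Entry` binding. A small self-contained sketch of the two equivalent forms (the map contents here are made up):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ValuesVsEntrySet {
  // entrySet() form: binds a Map.Entry even though only the value is used.
  static int sumViaEntrySet(Map<String, Integer> m) {
    int sum = 0;
    for (Map.Entry<String, Integer> entry : m.entrySet()) {
      sum += entry.getValue();
    }
    return sum;
  }

  // values() form: iterates the values directly, as the suggestion does.
  static int sumViaValues(Map<String, Integer> m) {
    int sum = 0;
    for (int v : m.values()) {
      sum += v;
    }
    return sum;
  }

  public static void main(String[] args) {
    Map<String, Integer> m = new LinkedHashMap<>();
    m.put("bp-1", 1);
    m.put("bp-2", 2);
    // Both loops visit exactly the same values.
    System.out.println(sumViaEntrySet(m) == sumViaValues(m)); // prints true
  }
}
```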

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
##########
@@ -154,10 +155,19 @@ boolean running() {
   /**
    * How long in between runs of the background refresh.
    */
-  long getRefreshInterval() {
+  public long getRefreshInterval() {

Review comment:
   ```suggestion
     @VisibleForTesting
     public long getRefreshInterval() {
   ```
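Guava's `@VisibleForTesting` changes no behavior; it only documents that a member's visibility was widened for tests, so reviewers and static-analysis tools know production code should not call it. A minimal sketch using a local stand-in annotation so it compiles without Guava (the class name and value are illustrative, not the actual CachingGetSpaceUsed code):

```java
// Local stand-in for com.google.common.annotations.VisibleForTesting;
// Hadoop itself uses the Guava annotation.
@interface VisibleForTesting {}

public class RefreshIntervalHolder {
  private final long refreshInterval = 600_000L; // illustrative value

  // Widened to public only so tests can observe it; the annotation
  // records that intent.
  @VisibleForTesting
  public long getRefreshInterval() {
    return refreshInterval;
  }
}
```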




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


