[ 
https://issues.apache.org/jira/browse/HDFS-17853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18038693#comment-18038693
 ] 

ASF GitHub Bot commented on HDFS-17853:
---------------------------------------

ayushtkn commented on code in PR #8064:
URL: https://github.com/apache/hadoop/pull/8064#discussion_r2532117035


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java:
##########
@@ -580,6 +581,18 @@ String setProtectedDirectories(String protectedDirsString) {
     return Joiner.on(",").skipNulls().join(protectedDirectories);
   }
 
+  public void setMaxDirItems(int newVal) {
+    com.google.common.base.Preconditions.checkArgument(

Review Comment:
   I don't think we need the prefix "com.google.common.base.Preconditions."
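   For illustration, a minimal self-contained sketch of the suggested call-site shape. `checkArgument` below is a local stand-in mimicking Guava's `Preconditions.checkArgument`; the real code would instead import `Preconditions` normally (or Hadoop's shaded thirdparty variant) so the call site stays short:

   ```java
   public class PreconditionsStyleSketch {
     // Local stand-in for Guava's Preconditions.checkArgument; with a normal
     // import, the call site avoids the fully-qualified class name.
     static void checkArgument(boolean expression, Object errorMessage) {
       if (!expression) {
         throw new IllegalArgumentException(String.valueOf(errorMessage));
       }
     }

     private int maxDirItems;

     public void setMaxDirItems(int newVal) {
       checkArgument(newVal > 0,
           "dfs.namenode.fs-limits.max-directory-items must be positive, got " + newVal);
       this.maxDirItems = newVal;
     }

     public int getMaxDirItems() {
       return maxDirItems;
     }

     public static void main(String[] args) {
       PreconditionsStyleSketch fsd = new PreconditionsStyleSketch();
       fsd.setMaxDirItems(1024 * 1024);
       System.out.println(fsd.getMaxDirItems()); // prints 1048576
     }
   }
   ```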



##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java:
##########
@@ -2805,6 +2810,23 @@ private String reconfigureFSNamesystemLockMetricsParameters(final String propert
     }
   }
 
+  private String reconfMaxDirItems(String newVal) throws ReconfigurationException {

Review Comment:
   Can you rename this to reconfigureMaxDirItems?
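   For illustration only, a self-contained plain-Java sketch (not the PR's actual code; the Hadoop handler would throw ReconfigurationException rather than IllegalArgumentException) of the shape such a renamed handler typically takes: parse, validate, apply, and return the effective value:

   ```java
   public class ReconfigureMaxDirItemsSketch {
     // Hypothetical stand-in for the PB-derived cap in FSDirectory
     // (roughly 6.4 million entries).
     private static final int MAX_DIR_ITEMS_LIMIT = 64 * 100 * 1000;

     private volatile int maxDirItems = 1024 * 1024;

     // Hypothetical handler in the style of NameNode's reconfigureXxx methods.
     String reconfigureMaxDirItems(String newVal) {
       final int parsed;
       try {
         parsed = Integer.parseInt(newVal);
       } catch (NumberFormatException e) {
         throw new IllegalArgumentException("not an integer: " + newVal, e);
       }
       if (parsed <= 0 || parsed > MAX_DIR_ITEMS_LIMIT) {
         throw new IllegalArgumentException("out of range: " + newVal);
       }
       maxDirItems = parsed;
       return Integer.toString(maxDirItems);
     }

     int getMaxDirItems() {
       return maxDirItems;
     }

     public static void main(String[] args) {
       ReconfigureMaxDirItemsSketch nn = new ReconfigureMaxDirItemsSketch();
       System.out.println(nn.reconfigureMaxDirItems(Integer.toString(1024 * 1024 * 2))); // prints 2097152
     }
   }
   ```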



##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeReconfigure.java:
##########
@@ -865,6 +866,23 @@ public void testReconfigureSlowPeerCollectInterval() throws Exception {
     assertEquals(600000, datanodeManager.getSlowPeerCollectionInterval());
   }
 
+  @Test
+  public void testReconfigureMaxDirItems()
+      throws ReconfigurationException {
+    final NameNode nameNode = cluster.getNameNode();
+    final FSDirectory fsd = nameNode.namesystem.getFSDirectory();
+
+    // By default, DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY is 1024*1024.
+    assertEquals(1024*1024, fsd.getMaxDirItems());
+
+    // Reconfigure.
+    nameNode.reconfigureProperty(
+        DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY, Integer.toString(1024*1024*2));
+
+    // Assert DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY is now 1024*1024*2.
+    assertEquals(1024*1024*2, fsd.getMaxDirItems());

Review Comment:
   nit
   this doesn't look properly formatted; otherwise there should be some spacing around the `*` operators



##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java:
##########
@@ -217,6 +217,8 @@ private static INodeDirectory createRoot(FSNamesystem 
namesystem) {
   // authorizeWithContext() API or not.
   private boolean useAuthorizationWithContextAPI = false;
 
+  private static final int maxDirItemsLimit = 64 * 100 * 1000;

Review Comment:
   I didn't catch this change; why did we drop ``MAX_DIR_ITEMS`` in favour of this?



##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java:
##########
@@ -398,11 +400,10 @@ public enum DirOp {
     // We need a maximum maximum because by default, PB limits message sizes
     // to 64MB. This means we can only store approximately 6.7 million entries
     // per directory, but let's use 6.4 million for some safety.

Review Comment:
   the comment above becomes orphaned now



##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeReconfigure.java:
##########
@@ -865,6 +866,23 @@ public void testReconfigureSlowPeerCollectInterval() throws Exception {
     assertEquals(600000, datanodeManager.getSlowPeerCollectionInterval());
   }
 
+  @Test
+  public void testReconfigureMaxDirItems()
+      throws ReconfigurationException {
+    final NameNode nameNode = cluster.getNameNode();
+    final FSDirectory fsd = nameNode.namesystem.getFSDirectory();
+
+    // By default, DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY is 1024*1024.
+    assertEquals(1024*1024, fsd.getMaxDirItems());
+
+    // Reconfigure.
+    nameNode.reconfigureProperty(
+        DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY, Integer.toString(1024*1024*2));
+
+    // Assert DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY is now 1024*1024*2.
+    assertEquals(1024*1024*2, fsd.getMaxDirItems());
+  }

Review Comment:
   can you add the negative cases as well?
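   To illustrate the kind of negative cases being asked for, a self-contained plain-Java sketch (no JUnit; `setMaxDirItems` here is a hypothetical stand-in for FSDirectory's validated setter): invalid values should be rejected and the previous value retained:

   ```java
   public class MaxDirItemsNegativeCasesSketch {
     // Hypothetical stand-in for the PB-derived upper bound in FSDirectory.
     static final int MAX_DIR_ITEMS_LIMIT = 64 * 100 * 1000;

     private int maxDirItems = 1024 * 1024;

     void setMaxDirItems(int newVal) {
       if (newVal <= 0 || newVal > MAX_DIR_ITEMS_LIMIT) {
         throw new IllegalArgumentException("invalid max-directory-items: " + newVal);
       }
       maxDirItems = newVal;
     }

     int getMaxDirItems() {
       return maxDirItems;
     }

     public static void main(String[] args) {
       MaxDirItemsNegativeCasesSketch fsd = new MaxDirItemsNegativeCasesSketch();
       // Negative cases: zero, a negative value, and a value above the hard limit.
       for (int bad : new int[] {0, -1, MAX_DIR_ITEMS_LIMIT + 1}) {
         try {
           fsd.setMaxDirItems(bad);
           throw new AssertionError("expected rejection of " + bad);
         } catch (IllegalArgumentException expected) {
           // rejected; the old value must be unchanged
         }
       }
       System.out.println(fsd.getMaxDirItems()); // prints 1048576 (unchanged)
     }
   }
   ```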





> Support to make dfs.namenode.fs-limits.max-directory-items reconfigurable
> -------------------------------------------------------------------------
>
>                 Key: HDFS-17853
>                 URL: https://issues.apache.org/jira/browse/HDFS-17853
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 3.5.0
>            Reporter: caozhiqiang
>            Assignee: caozhiqiang
>            Priority: Major
>              Labels: pull-request-available
>
> Sometimes certain directories, such as temporary library directories, contain 
> too many subdirectories and files, exceeding the limit defined by the 
> {{dfs.namenode.fs-limits.max-directory-items}} configuration. This causes 
> many jobs to fail.
> To quickly restore job execution, we need to temporarily adjust this 
> configuration. However, since a change currently requires a NameNode restart 
> to take effect, we want to make it dynamically reconfigurable without 
> restarting the NameNode.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
