[jira] [Commented] (HDFS-17277) Delete invalid code logic in namenode format

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801294#comment-17801294
 ] 

ASF GitHub Bot commented on HDFS-17277:
---

tasanuma commented on code in PR #6323:
URL: https://github.com/apache/hadoop/pull/6323#discussion_r1438473858


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java:
##
@@ -1423,9 +1420,7 @@ private static boolean format(Configuration conf, boolean force,
       LOG.warn("Encountered exception during format", ioe);
       throw ioe;
     } finally {
-      if (fsImage != null) {
-        fsImage.close();
-      }
+      fsImage.close();

Review Comment:
   @slfan1989 Since the `fsImage` variable is initialized with `new FSImage`, 
it can't be null, and IntelliJ issues a warning that `fsImage != null` is 
always true. Generally, even in such situations, performing a null check is 
considered good practice, as future code changes could potentially cause the 
variable to become null. However, I thought that was unlikely in this case.
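
   As a minimal illustration (not the actual NameNode.format code; the constructor argument and helper below are placeholders):

   ```java
   // fsImage is assigned from a constructor call before the try block, so it can
   // never be null in the finally block; the removed null check was always true.
   FSImage fsImage = new FSImage(conf);      // simplified: the real code passes more arguments
   try {
     doFormatWork(fsImage);                  // placeholder for the actual format steps
   } finally {
     fsImage.close();                        // no null check needed here
   }
   ```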





> Delete invalid code logic in namenode format
> 
>
> Key: HDFS-17277
> URL: https://issues.apache.org/jira/browse/HDFS-17277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhangzhanchang
>Assignee: zhangzhanchang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> There is invalid logical processing in the namenode format process



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17277) Delete invalid code logic in namenode format

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801293#comment-17801293
 ] 

ASF GitHub Bot commented on HDFS-17277:
---

tasanuma commented on PR #6323:
URL: https://github.com/apache/hadoop/pull/6323#issuecomment-1872456130

   > Guess the commit misses the jira id
   
   Oh, that's my bad. I'm sorry.




> Delete invalid code logic in namenode format
> 
>
> Key: HDFS-17277
> URL: https://issues.apache.org/jira/browse/HDFS-17277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhangzhanchang
>Assignee: zhangzhanchang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> There is invalid logical processing in the namenode format process






[jira] [Commented] (HDFS-17254) DataNode httpServer has too many worker threads

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801287#comment-17801287
 ] 

ASF GitHub Bot commented on HDFS-17254:
---

2005hithlj commented on code in PR #6307:
URL: https://github.com/apache/hadoop/pull/6307#discussion_r1438459049


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java:
##
@@ -966,6 +966,9 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final String  DFS_DATANODE_HTTP_ADDRESS_DEFAULT = "0.0.0.0:" + DFS_DATANODE_HTTP_DEFAULT_PORT;
   public static final String  DFS_DATANODE_HTTP_INTERNAL_PROXY_PORT =
       "dfs.datanode.http.internal-proxy.port";
+  public static final String DFS_DATANODE_NETTY_WORKER_NUM_THREADS_KEY =
+      "dfs.datanode.netty.worker.threads";
+  public static final int DFS_DATANODE_NETTY_WORKER_NUM_THREADS_DEFAULT = 10;

Review Comment:
   @ayushtkn Thank you for your review, I have changed it.
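
   For context, a rough sketch (illustrative only, not the actual DatanodeHttpServer change) of how a key/default pair like the one above is typically consumed: read the value from the Configuration and use it to size the Netty worker EventLoopGroup.

   ```java
   // Assumes a Hadoop Configuration `conf` is in scope; the Netty types are
   // io.netty.channel.EventLoopGroup and io.netty.channel.nio.NioEventLoopGroup.
   int workerThreads = conf.getInt(
       DFSConfigKeys.DFS_DATANODE_NETTY_WORKER_NUM_THREADS_KEY,
       DFSConfigKeys.DFS_DATANODE_NETTY_WORKER_NUM_THREADS_DEFAULT);
   EventLoopGroup workerGroup = new NioEventLoopGroup(workerThreads);
   ```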





> DataNode httpServer has too many worker threads
> ---
>
> Key: HDFS-17254
> URL: https://issues.apache.org/jira/browse/HDFS-17254
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Liangjun He
>Assignee: Liangjun He
>Priority: Minor
>  Labels: pull-request-available
>
> When optimizing the thread count of high-density storage DNs, we found that 
> the number of worker threads for the DataNode httpServer is twice the number 
> of available cores on the node, resulting in too many threads. We can change 
> this to be configurable.






[jira] [Commented] (HDFS-17290) HDFS: add client rpc backoff metrics due to disconnection from lowest priority queue

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801286#comment-17801286
 ] 

ASF GitHub Bot commented on HDFS-17290:
---

hadoop-yetus commented on PR #6359:
URL: https://github.com/apache/hadoop/pull/6359#issuecomment-1872425565

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  16m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 47s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  16m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 36s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 37s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  17m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  16m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 13s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6359/10/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 2 new + 197 
unchanged - 0 fixed = 199 total (was 197)  |
   | +1 :green_heart: |  mvnsite  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 40s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  40m  1s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 19s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 57s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 250m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6359/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6359 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 328bbc48c941 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 92fb9e41a6a24e1ddcfd0620415c73a23664d7b9 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6359/10/testReport/ |
   | Max. 

[jira] [Commented] (HDFS-17305) Add avoid datanode reason count related metrics to namenode.

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801279#comment-17801279
 ] 

ASF GitHub Bot commented on HDFS-17305:
---

huangzhaobo99 closed pull request #6393: HDFS-17305. Add avoid datanode reason 
count related metrics to namenode.
URL: https://github.com/apache/hadoop/pull/6393




> Add avoid datanode reason count related metrics to namenode.
> 
>
> Key: HDFS-17305
> URL: https://issues.apache.org/jira/browse/HDFS-17305
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: huangzhaobo99
>Assignee: huangzhaobo99
>Priority: Minor
>  Labels: pull-request-available
>
> Now, there are slownode and load avoidance functions, mainly implemented in 
> the  BlockPlacementPolicyDefault class.
> 1. After triggering the exclusion condition, some logs will be printed on nn, 
> which can be used to troubleshoot anomalies in nn by checking the logs, the 
> code is as follows:
> {code:java}
> ...
> if (!node.isInService()) {
>   logNodeIsNotChosen(node, NodeNotChosenReason.NOT_IN_SERVICE);
>   return false;
> }
> if (avoidStaleNodes) {
>   if (node.isStale(this.staleInterval)) {
> logNodeIsNotChosen(node, NodeNotChosenReason.NODE_STALE);
> return false;
>   }
> }
> ...{code}
> 2. If the exclusion condition is triggered, we can record it through metrics 
> and count the total number of exclusions.






[jira] [Commented] (HDFS-17277) Delete invalid code logic in namenode format

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801278#comment-17801278
 ] 

ASF GitHub Bot commented on HDFS-17277:
---

slfan1989 commented on code in PR #6323:
URL: https://github.com/apache/hadoop/pull/6323#discussion_r1438449685


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java:
##
@@ -1423,9 +1420,7 @@ private static boolean format(Configuration conf, boolean force,
       LOG.warn("Encountered exception during format", ioe);
       throw ioe;
     } finally {
-      if (fsImage != null) {
-        fsImage.close();
-      }
+      fsImage.close();

Review Comment:
   Why should we delete this condition? @zzccctv @tasanuma 





> Delete invalid code logic in namenode format
> 
>
> Key: HDFS-17277
> URL: https://issues.apache.org/jira/browse/HDFS-17277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhangzhanchang
>Assignee: zhangzhanchang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> There is invalid logical processing in the namenode format process






[jira] [Updated] (HDFS-17305) Add avoid datanode reason count related metrics to namenode.

2023-12-29 Thread huangzhaobo99 (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangzhaobo99 updated HDFS-17305:
-
Description: 
Now, there are slownode and load avoidance functions, mainly implemented in the 
 BlockPlacementPolicyDefault class.

1. After triggering the exclusion condition, some logs will be printed on nn, 
which can be used to troubleshoot anomalies in nn by checking the logs, the 
code is as follows:
{code:java}
...
if (!node.isInService()) {
  logNodeIsNotChosen(node, NodeNotChosenReason.NOT_IN_SERVICE);
  return false;
}

if (avoidStaleNodes) {
  if (node.isStale(this.staleInterval)) {
logNodeIsNotChosen(node, NodeNotChosenReason.NODE_STALE);
return false;
  }
}
...{code}
2. If the exclusion condition is triggered, we can record it through metrics 
and count the total number of exclusions.

  was:
Now, there are slownode and load avoidance functions, mainly implemented in the 
 BlockPlacementPolicyDefault class.

1. After triggering the exclusion condition, some logs will be printed on nn, 
which can be used to troubleshoot anomalies in nn by checking the logs, the 
code is as follows:
{code:java}
...
if (!node.isInService()) {
  logNodeIsNotChosen(node, NodeNotChosenReason.NOT_IN_SERVICE);
  return false;
}

if (avoidStaleNodes) {
  if (node.isStale(this.staleInterval)) {
logNodeIsNotChosen(node, NodeNotChosenReason.NODE_STALE);
return false;
  }
}
...{code}
2. If the exclusion condition is triggered, we can record it through metrics 
and count the total number of exclusions.

3. These metrics through prometheus+grafana to observe the current situation of 
the cluster when selecting datanodes.


> Add avoid datanode reason count related metrics to namenode.
> 
>
> Key: HDFS-17305
> URL: https://issues.apache.org/jira/browse/HDFS-17305
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: huangzhaobo99
>Assignee: huangzhaobo99
>Priority: Minor
>  Labels: pull-request-available
>
> Now, there are slownode and load avoidance functions, mainly implemented in 
> the  BlockPlacementPolicyDefault class.
> 1. After triggering the exclusion condition, some logs will be printed on nn, 
> which can be used to troubleshoot anomalies in nn by checking the logs, the 
> code is as follows:
> {code:java}
> ...
> if (!node.isInService()) {
>   logNodeIsNotChosen(node, NodeNotChosenReason.NOT_IN_SERVICE);
>   return false;
> }
> if (avoidStaleNodes) {
>   if (node.isStale(this.staleInterval)) {
> logNodeIsNotChosen(node, NodeNotChosenReason.NODE_STALE);
> return false;
>   }
> }
> ...{code}
> 2. If the exclusion condition is triggered, we can record it through metrics 
> and count the total number of exclusions.






[jira] [Commented] (HDFS-17305) Add avoid datanode reason count related metrics to namenode.

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801269#comment-17801269
 ] 

ASF GitHub Bot commented on HDFS-17305:
---

hadoop-yetus commented on PR #6393:
URL: https://github.com/apache/hadoop/pull/6393#issuecomment-1872393246

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  18m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   3m 23s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6393/1/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  40m  2s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  2s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6393/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 95 unchanged - 
0 fixed = 101 total (was 95)  |
   | +1 :green_heart: |  mvnsite  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  40m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 253m 54s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6393/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 425m 17s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6393/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6393 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux f3b86a25ef3b 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d9a672c10c10bfb439db8cb3b2ff9778850644e6 |
   | Default Java | Private 

[jira] [Commented] (HDFS-17277) Delete invalid code logic in namenode format

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801257#comment-17801257
 ] 

ASF GitHub Bot commented on HDFS-17277:
---

ayushtkn commented on PR #6323:
URL: https://github.com/apache/hadoop/pull/6323#issuecomment-1872324389

   Guess the commit misses the jira id
   
https://github.com/apache/hadoop/commit/9f76fba6a44bdf281bc8a8874c03301eca73aafb




> Delete invalid code logic in namenode format
> 
>
> Key: HDFS-17277
> URL: https://issues.apache.org/jira/browse/HDFS-17277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhangzhanchang
>Assignee: zhangzhanchang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> There is invalid logical processing in the namenode format process






[jira] [Commented] (HDFS-17254) DataNode httpServer has too many worker threads

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801256#comment-17801256
 ] 

ASF GitHub Bot commented on HDFS-17254:
---

ayushtkn commented on code in PR #6307:
URL: https://github.com/apache/hadoop/pull/6307#discussion_r1438397084


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java:
##
@@ -966,6 +966,9 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final String  DFS_DATANODE_HTTP_ADDRESS_DEFAULT = "0.0.0.0:" + DFS_DATANODE_HTTP_DEFAULT_PORT;
   public static final String  DFS_DATANODE_HTTP_INTERNAL_PROXY_PORT =
       "dfs.datanode.http.internal-proxy.port";
+  public static final String DFS_DATANODE_NETTY_WORKER_NUM_THREADS_KEY =
+      "dfs.datanode.netty.worker.threads";
+  public static final int DFS_DATANODE_NETTY_WORKER_NUM_THREADS_DEFAULT = 10;

Review Comment:
   this needs to be changed as well





> DataNode httpServer has too many worker threads
> ---
>
> Key: HDFS-17254
> URL: https://issues.apache.org/jira/browse/HDFS-17254
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Liangjun He
>Assignee: Liangjun He
>Priority: Minor
>  Labels: pull-request-available
>
> When optimizing the thread count of high-density storage DNs, we found that 
> the number of worker threads for the DataNode httpServer is twice the number 
> of available cores on the node, resulting in too many threads. We can change 
> this to be configurable.






[jira] [Commented] (HDFS-17310) DiskBalancer: Enhance the log message for submitPlan

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801229#comment-17801229
 ] 

ASF GitHub Bot commented on HDFS-17310:
---

hadoop-yetus commented on PR #6391:
URL: https://github.com/apache/hadoop/pull/6391#issuecomment-1872242894

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  12m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 33s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 11s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  34m 12s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 208m 41s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 354m 57s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6391/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6391 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 00d50bf38832 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / dfc915e94292755877743d38975e5189f6ec68da |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6391/1/testReport/ |
   | Max. process+thread count | 3645 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6391/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was 

[jira] [Commented] (HDFS-17305) Add avoid datanode reason count related metrics to namenode.

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801221#comment-17801221
 ] 

ASF GitHub Bot commented on HDFS-17305:
---

huangzhaobo99 opened a new pull request, #6393:
URL: https://github.com/apache/hadoop/pull/6393

   
   
   ### Description of PR
   JIRA: https://issues.apache.org/jira/browse/HDFS-17305
   
   Now, there are slow-node and load avoidance functions, mainly implemented in 
the BlockPlacementPolicyDefault class.
   
   1. After an exclusion condition is triggered, some logs are printed on the 
NN, which can be used to troubleshoot anomalies on the NN by checking the logs. 
The code is as follows:
   ```java
   ...
   if (!node.isInService()) {
 logNodeIsNotChosen(node, NodeNotChosenReason.NOT_IN_SERVICE);
 return false;
   }
   
   if (avoidStaleNodes) {
 if (node.isStale(this.staleInterval)) {
   logNodeIsNotChosen(node, NodeNotChosenReason.NODE_STALE);
   return false;
 }
   }
   ...
   ```
   2. If an exclusion condition is triggered, we can record it through metrics 
and count the total number of exclusions (a rough sketch of the idea follows 
after this list).
   
   3. These metrics can be viewed through Prometheus + Grafana to observe the 
current state of the cluster when selecting datanodes.
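
   A minimal, self-contained sketch of the counting idea (plain JDK types and illustrative names only; the actual patch would hook into Hadoop's metrics2 framework and the real NodeNotChosenReason enum):

   ```java
   import java.util.EnumMap;
   import java.util.concurrent.atomic.LongAdder;

   class AvoidDataNodeCounters {
     // Illustrative subset of reasons; BlockPlacementPolicyDefault defines the full enum.
     enum Reason { NOT_IN_SERVICE, NODE_STALE, NODE_TOO_BUSY }

     private final EnumMap<Reason, LongAdder> counters = new EnumMap<>(Reason.class);

     AvoidDataNodeCounters() {
       for (Reason r : Reason.values()) {
         counters.put(r, new LongAdder());
       }
     }

     // Called wherever logNodeIsNotChosen fires, so each avoided datanode is counted per reason.
     void record(Reason reason) {
       counters.get(reason).increment();
     }

     // Total number of exclusions across all reasons, e.g. for an aggregate metric.
     long total() {
       return counters.values().stream().mapToLong(LongAdder::sum).sum();
     }
   }
   ```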
   
   
   ### How was this patch tested?
Add TestNameNodeMetrics#testAvoidTargetDataNodeMetrics UnitTest.




> Add avoid datanode reason count related metrics to namenode.
> 
>
> Key: HDFS-17305
> URL: https://issues.apache.org/jira/browse/HDFS-17305
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: huangzhaobo99
>Assignee: huangzhaobo99
>Priority: Minor
>
> Now, there are slownode and load avoidance functions, mainly implemented in 
> the  BlockPlacementPolicyDefault class.
> 1. After triggering the exclusion condition, some logs will be printed on nn, 
> which can be used to troubleshoot anomalies in nn by checking the logs, the 
> code is as follows:
> {code:java}
> ...
> if (!node.isInService()) {
>   logNodeIsNotChosen(node, NodeNotChosenReason.NOT_IN_SERVICE);
>   return false;
> }
> if (avoidStaleNodes) {
>   if (node.isStale(this.staleInterval)) {
> logNodeIsNotChosen(node, NodeNotChosenReason.NODE_STALE);
> return false;
>   }
> }
> ...{code}
> 2. If the exclusion condition is triggered, we can record it through metrics 
> and count the total number of exclusions.
> 3. These metrics through prometheus+grafana to observe the current situation 
> of the cluster when selecting datanodes.






[jira] [Updated] (HDFS-17305) Add avoid datanode reason count related metrics to namenode.

2023-12-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17305:
--
Labels: pull-request-available  (was: )

> Add avoid datanode reason count related metrics to namenode.
> 
>
> Key: HDFS-17305
> URL: https://issues.apache.org/jira/browse/HDFS-17305
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: huangzhaobo99
>Assignee: huangzhaobo99
>Priority: Minor
>  Labels: pull-request-available
>
> Now, there are slownode and load avoidance functions, mainly implemented in 
> the  BlockPlacementPolicyDefault class.
> 1. After triggering the exclusion condition, some logs will be printed on nn, 
> which can be used to troubleshoot anomalies in nn by checking the logs, the 
> code is as follows:
> {code:java}
> ...
> if (!node.isInService()) {
>   logNodeIsNotChosen(node, NodeNotChosenReason.NOT_IN_SERVICE);
>   return false;
> }
> if (avoidStaleNodes) {
>   if (node.isStale(this.staleInterval)) {
> logNodeIsNotChosen(node, NodeNotChosenReason.NODE_STALE);
> return false;
>   }
> }
> ...{code}
> 2. If the exclusion condition is triggered, we can record it through metrics 
> and count the total number of exclusions.
> 3. These metrics through prometheus+grafana to observe the current situation 
> of the cluster when selecting datanodes.






[jira] [Updated] (HDFS-17305) Add avoid datanode reason count related metrics to namenode.

2023-12-29 Thread huangzhaobo99 (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangzhaobo99 updated HDFS-17305:
-
Description: 
Now, there are slownode and load avoidance functions, mainly implemented in the 
 BlockPlacementPolicyDefault class.

1. After triggering the exclusion condition, some logs will be printed on nn, 
which can be used to troubleshoot anomalies in nn by checking the logs, the 
code is as follows:
{code:java}
...
if (!node.isInService()) {
  logNodeIsNotChosen(node, NodeNotChosenReason.NOT_IN_SERVICE);
  return false;
}

if (avoidStaleNodes) {
  if (node.isStale(this.staleInterval)) {
logNodeIsNotChosen(node, NodeNotChosenReason.NODE_STALE);
return false;
  }
}
...{code}
2. If the exclusion condition is triggered, we can record it through metrics 
and count the total number of exclusions.

3. These metrics through prometheus+grafana to observe the current situation of 
the cluster when selecting datanodes.

  was:
Now, there are slownode and load avoidance functions, mainly implemented in the 
 BlockPlacementPolicyDefault class.

1. After triggering the exclusion condition, some logs will be printed on nn, 
which can be used to troubleshoot anomalies in nn by checking the logs, the 
code is as follows:
{code:java}
...
if (!node.isInService()) {
  logNodeIsNotChosen(node, NodeNotChosenReason.NOT_IN_SERVICE);
  return false;
}

if (avoidStaleNodes) {
  if (node.isStale(this.staleInterval)) {
logNodeIsNotChosen(node, NodeNotChosenReason.NODE_STALE);
return false;
  }
}
...{code}
2. If the exclusion condition is triggered, can we record it through metrics 
and count the total number of exclusions?

3. These metrics through prometheus+grafana to observe the current situation of 
the cluster when selecting datanodes.


> Add avoid datanode reason count related metrics to namenode.
> 
>
> Key: HDFS-17305
> URL: https://issues.apache.org/jira/browse/HDFS-17305
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: huangzhaobo99
>Assignee: huangzhaobo99
>Priority: Minor
>
> Now, there are slownode and load avoidance functions, mainly implemented in 
> the  BlockPlacementPolicyDefault class.
> 1. After triggering the exclusion condition, some logs will be printed on nn, 
> which can be used to troubleshoot anomalies in nn by checking the logs, the 
> code is as follows:
> {code:java}
> ...
> if (!node.isInService()) {
>   logNodeIsNotChosen(node, NodeNotChosenReason.NOT_IN_SERVICE);
>   return false;
> }
> if (avoidStaleNodes) {
>   if (node.isStale(this.staleInterval)) {
> logNodeIsNotChosen(node, NodeNotChosenReason.NODE_STALE);
> return false;
>   }
> }
> ...{code}
> 2. If the exclusion condition is triggered, we can record it through metrics 
> and count the total number of exclusions.
> 3. These metrics through prometheus+grafana to observe the current situation 
> of the cluster when selecting datanodes.






[jira] [Commented] (HDFS-17276) The nn fetch editlog forbidden in kerberos environment

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801210#comment-17801210
 ] 

ASF GitHub Bot commented on HDFS-17276:
---

hadoop-yetus commented on PR #6326:
URL: https://github.com/apache/hadoop/pull/6326#issuecomment-1872200872

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 47s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6326/4/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 44s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 29s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 186m 56s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6326/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 27s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 271m 56s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6326/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6326 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8ac8d29ac7ad 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 665815c4485bdb44999476c1d7b2f594d2597171 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6326/4/testReport/ |
   | Max. process+thread count | 4923 (vs. 

[jira] [Assigned] (HDFS-17305) Add avoid datanode reason count related metrics to namenode.

2023-12-29 Thread huangzhaobo99 (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangzhaobo99 reassigned HDFS-17305:


Assignee: huangzhaobo99

> Add avoid datanode reason count related metrics to namenode.
> 
>
> Key: HDFS-17305
> URL: https://issues.apache.org/jira/browse/HDFS-17305
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: huangzhaobo99
>Assignee: huangzhaobo99
>Priority: Minor
>
> Now, there are slownode and load avoidance functions, mainly implemented in 
> the  BlockPlacementPolicyDefault class.
> 1. After triggering the exclusion condition, some logs will be printed on nn, 
> which can be used to troubleshoot anomalies in nn by checking the logs, the 
> code is as follows:
> {code:java}
> ...
> if (!node.isInService()) {
>   logNodeIsNotChosen(node, NodeNotChosenReason.NOT_IN_SERVICE);
>   return false;
> }
> if (avoidStaleNodes) {
>   if (node.isStale(this.staleInterval)) {
> logNodeIsNotChosen(node, NodeNotChosenReason.NODE_STALE);
> return false;
>   }
> }
> ...{code}
> 2. If the exclusion condition is triggered, can we record it through metrics 
> and count the total number of exclusions?
> 3. These metrics through prometheus+grafana to observe the current situation 
> of the cluster when selecting datanodes.






[jira] [Commented] (HDFS-17309) Fix Router Safemode check contidition error

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801190#comment-17801190
 ] 

ASF GitHub Bot commented on HDFS-17309:
---

hadoop-yetus commented on PR #6390:
URL: https://github.com/apache/hadoop/pull/6390#issuecomment-1872135656

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 13s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m  7s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  26m 17s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 163m 44s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6390/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6390 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 4136f3fd4455 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / af5d1aebaa3bac9c8774d62351b7c65912c1c53c |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6390/1/testReport/ |
   | Max. process+thread count | 2429 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6390/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message 

[jira] [Commented] (HDFS-17311) RBF: ConnectionManager creatorQueue should offer a pool that is not already in creatorQueue.

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801189#comment-17801189
 ] 

ASF GitHub Bot commented on HDFS-17311:
---

hadoop-yetus commented on PR #6392:
URL: https://github.com/apache/hadoop/pull/6392#issuecomment-1872134350

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  23m 21s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 161m 28s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6392/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6392 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux b8c93540ab42 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 58e16f2596df88f76298ddd88e53f4960cc28479 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6392/1/testReport/ |
   | Max. process+thread count | 2204 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6392/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message 

[jira] [Commented] (HDFS-17310) DiskBalancer: Enhance the log message for submitPlan

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801175#comment-17801175
 ] 

ASF GitHub Bot commented on HDFS-17310:
---

haiyang1987 commented on PR #6391:
URL: https://github.com/apache/hadoop/pull/6391#issuecomment-1872096054

   Hi @tasanuma, could you help review this small change when you have time? 
Thank you very much~




> DiskBalancer: Enhance the log message for submitPlan
> 
>
> Key: HDFS-17310
> URL: https://issues.apache.org/jira/browse/HDFS-17310
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> In order to conveniently troubleshoot problems, enhance the log message for 
> submitPlan.






[jira] [Updated] (HDFS-17277) Delete invalid code logic in namenode format

2023-12-29 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-17277:

Fix Version/s: 3.3.9

> Delete invalid code logic in namenode format
> 
>
> Key: HDFS-17277
> URL: https://issues.apache.org/jira/browse/HDFS-17277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhangzhanchang
>Assignee: zhangzhanchang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> There is invalid logical processing in the namenode format process






[jira] [Assigned] (HDFS-17277) Delete invalid code logic in namenode format

2023-12-29 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma reassigned HDFS-17277:
---

Assignee: zhangzhanchang

> Delete invalid code logic in namenode format
> 
>
> Key: HDFS-17277
> URL: https://issues.apache.org/jira/browse/HDFS-17277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhangzhanchang
>Assignee: zhangzhanchang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> There is invalid logical processing in the namenode format process






[jira] [Resolved] (HDFS-17277) Delete invalid code logic in namenode format

2023-12-29 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma resolved HDFS-17277.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> Delete invalid code logic in namenode format
> 
>
> Key: HDFS-17277
> URL: https://issues.apache.org/jira/browse/HDFS-17277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhangzhanchang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> There is invalid logical processing in the namenode format process



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17277) Delete invalid code logic in namenode format

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801174#comment-17801174
 ] 

ASF GitHub Bot commented on HDFS-17277:
---

tasanuma commented on PR #6323:
URL: https://github.com/apache/hadoop/pull/6323#issuecomment-1872086408

   Merged. Thanks for your contribution, @zzccctv.




> Delete invalid code logic in namenode format
> 
>
> Key: HDFS-17277
> URL: https://issues.apache.org/jira/browse/HDFS-17277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhangzhanchang
>Priority: Minor
>  Labels: pull-request-available
>
> There is invalid logical processing in the namenode format process



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17277) Delete invalid code logic in namenode format

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801173#comment-17801173
 ] 

ASF GitHub Bot commented on HDFS-17277:
---

tasanuma merged PR #6323:
URL: https://github.com/apache/hadoop/pull/6323




> Delete invalid code logic in namenode format
> 
>
> Key: HDFS-17277
> URL: https://issues.apache.org/jira/browse/HDFS-17277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhangzhanchang
>Priority: Minor
>  Labels: pull-request-available
>
> There is invalid logical processing in the namenode format process



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17310) DiskBalancer: Enhance the log message for submitPlan

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801141#comment-17801141
 ] 

ASF GitHub Bot commented on HDFS-17310:
---

slfan1989 commented on code in PR #6391:
URL: https://github.com/apache/hadoop/pull/6391#discussion_r1438161561


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java:
##
@@ -523,7 +524,7 @@ private void createWorkPlan(NodePlan plan) throws 
DiskBalancerException {
 
   String sourceVolBasePath = storageIDToVolBasePathMap.get(sourceVolUuid);
   if (sourceVolBasePath == null) {
-final String errMsg = "Disk Balancer - Unable to find volume: "
+final String errMsg = "Disk Balancer - Unable to find source volume: "

Review Comment:
   Can we use `{}` instead of `+` ?
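   As a minimal, self-contained illustration of the difference the reviewer is 
pointing at (the class name and the UUID value below are invented for the 
example, they are not taken from the PR):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingStyleExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(LoggingStyleExample.class);

  public static void main(String[] args) {
    String sourceVolUuid = "f3b0d6c4";  // hypothetical volume UUID

    // String concatenation: the message is built before the call, whether or
    // not it actually gets logged.
    LOG.error("Disk Balancer - Unable to find source volume: " + sourceVolUuid);

    // SLF4J placeholder: formatting is left to the logging framework and is
    // skipped entirely if the level is disabled.
    LOG.error("Disk Balancer - Unable to find source volume: {}", sourceVolUuid);
  }
}
```

   Worth noting: in the diff the message is first assigned to `errMsg`, so if 
that string is reused afterwards (for example in a thrown exception), it has to 
be built regardless of the log level, which may be why concatenation was kept 
there.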





> DiskBalancer: Enhance the log message for submitPlan
> 
>
> Key: HDFS-17310
> URL: https://issues.apache.org/jira/browse/HDFS-17310
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> To make troubleshooting easier, enhance the log message for submitPlan.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17310) DiskBalancer: Enhance the log message for submitPlan

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801143#comment-17801143
 ] 

ASF GitHub Bot commented on HDFS-17310:
---

slfan1989 commented on code in PR #6391:
URL: https://github.com/apache/hadoop/pull/6391#discussion_r1438161561


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java:
##
@@ -523,7 +524,7 @@ private void createWorkPlan(NodePlan plan) throws 
DiskBalancerException {
 
   String sourceVolBasePath = storageIDToVolBasePathMap.get(sourceVolUuid);
   if (sourceVolBasePath == null) {
-final String errMsg = "Disk Balancer - Unable to find volume: "
+final String errMsg = "Disk Balancer - Unable to find source volume: "

Review Comment:
   Can we use `{}` instead of `+` ?





> DiskBalancer: Enhance the log message for submitPlan
> 
>
> Key: HDFS-17310
> URL: https://issues.apache.org/jira/browse/HDFS-17310
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> To make troubleshooting easier, enhance the log message for submitPlan.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17310) DiskBalancer: Enhance the log message for submitPlan

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801139#comment-17801139
 ] 

ASF GitHub Bot commented on HDFS-17310:
---

haiyang1987 commented on code in PR #6391:
URL: https://github.com/apache/hadoop/pull/6391#discussion_r1438160021


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java:
##
@@ -443,7 +444,7 @@ private NodePlan verifyPlanHash(String planID, String plan)
   throws DiskBalancerException {
 final long sha1Length = 40;
 if (plan == null || plan.length() == 0) {
-  LOG.error("Disk Balancer -  Invalid plan.");
+  LOG.error("Disk Balancer -  Invalid plan ().");

Review Comment:
   Thanks @slfan1989 for your comment.
   This one does not need more information; I will just remove the ‘()’.





> DiskBalancer: Enhance the log message for submitPlan
> 
>
> Key: HDFS-17310
> URL: https://issues.apache.org/jira/browse/HDFS-17310
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> To make troubleshooting easier, enhance the log message for submitPlan.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17310) DiskBalancer: Enhance the log message for submitPlan

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801136#comment-17801136
 ] 

ASF GitHub Bot commented on HDFS-17310:
---

slfan1989 commented on code in PR #6391:
URL: https://github.com/apache/hadoop/pull/6391#discussion_r1438156687


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java:
##
@@ -443,7 +444,7 @@ private NodePlan verifyPlanHash(String planID, String plan)
   throws DiskBalancerException {
 final long sha1Length = 40;
 if (plan == null || plan.length() == 0) {
-  LOG.error("Disk Balancer -  Invalid plan.");
+  LOG.error("Disk Balancer -  Invalid plan ().");

Review Comment:
   This seems to be missing some information.





> DiskBalancer: Enhance the log message for submitPlan
> 
>
> Key: HDFS-17310
> URL: https://issues.apache.org/jira/browse/HDFS-17310
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> To make troubleshooting easier, enhance the log message for submitPlan.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17310) DiskBalancer: Enhance the log message for submitPlan

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801134#comment-17801134
 ] 

ASF GitHub Bot commented on HDFS-17310:
---

slfan1989 commented on code in PR #6391:
URL: https://github.com/apache/hadoop/pull/6391#discussion_r1438156687


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java:
##
@@ -443,7 +444,7 @@ private NodePlan verifyPlanHash(String planID, String plan)
   throws DiskBalancerException {
 final long sha1Length = 40;
 if (plan == null || plan.length() == 0) {
-  LOG.error("Disk Balancer -  Invalid plan.");
+  LOG.error("Disk Balancer -  Invalid plan ().");

Review Comment:
   planId ?
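   For illustration, a minimal stand-alone sketch of what carrying the planID 
in that message might look like (hypothetical example code, not the wording the 
PR ends up using):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class InvalidPlanLogExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(InvalidPlanLogExample.class);

  static void verifyPlan(String planID, String plan) {
    if (plan == null || plan.isEmpty()) {
      // Carrying the planID lets an operator correlate the error with the
      // specific submitPlan call that failed.
      LOG.error("Disk Balancer - Invalid plan, planID: {}", planID);
      throw new IllegalArgumentException("Invalid plan for planID " + planID);
    }
  }

  public static void main(String[] args) {
    try {
      verifyPlan("plan-1234", "");  // illustrative values
    } catch (IllegalArgumentException expected) {
      // expected: the empty plan above triggers the error path
    }
  }
}
```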





> DiskBalancer: Enhance the log message for submitPlan
> 
>
> Key: HDFS-17310
> URL: https://issues.apache.org/jira/browse/HDFS-17310
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> To make troubleshooting easier, enhance the log message for submitPlan.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17311) RBF: ConnectionManager creatorQueue should offer a pool that is not already in creatorQueue.

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801124#comment-17801124
 ] 

ASF GitHub Bot commented on HDFS-17311:
---

LiuGuH opened a new pull request, #6392:
URL: https://github.com/apache/hadoop/pull/6392

   …eatorQueue.
   
   
   
   ### Description of PR
   2023-12-29 15:18:54,799 ERROR 
org.apache.hadoop.hdfs.server.federation.router.ConnectionManager: Cannot add 
more than 2048 connections at the same time
   
   In my environment, the ConnectionManager creatorQueue is full, but the 
cluster does not have nearly enough users to reach 2048 pairs in the Router.
   
   Under heavy concurrency, the same pool can be offered to the creatorQueue 
more than once.
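   A minimal sketch of the idea (the class below is an illustrative stand-in, 
not the actual ConnectionManager code): remember which pools are already 
waiting in the queue and skip duplicate offers, so concurrent callers cannot 
exhaust the 2048 slots with the same pool.

```java
import java.util.Set;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

/** Sketch of a creator queue that refuses duplicate entries. */
public class DedupingCreatorQueue<T> {
  private final BlockingQueue<T> queue;
  // Tracks what is currently queued so duplicate offers can be skipped cheaply.
  private final Set<T> queued = ConcurrentHashMap.newKeySet();

  public DedupingCreatorQueue(int capacity) {
    this.queue = new ArrayBlockingQueue<>(capacity);
  }

  /** Offer the pool only if an identical entry is not already waiting. */
  public boolean offerIfAbsent(T pool) {
    if (!queued.add(pool)) {
      // Already queued (or just taken and about to be processed): treat the
      // request as satisfied by the in-flight creation.
      return true;
    }
    boolean accepted = queue.offer(pool);
    if (!accepted) {
      queued.remove(pool);  // queue full; undo the bookkeeping
    }
    return accepted;
  }

  /** Take the next pool and clear its "queued" marker before handing it out. */
  public T take() throws InterruptedException {
    T pool = queue.take();
    queued.remove(pool);
    return pool;
  }
}
```

   An alternative with a similar effect would be a flag on the pool object 
itself (for example an AtomicBoolean "creation scheduled") that is set before 
offering and cleared by the creator thread.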
   
   
   
   
   




> RBF: ConnectionManager creatorQueue should offer a pool that is not already 
> in creatorQueue.
> 
>
> Key: HDFS-17311
> URL: https://issues.apache.org/jira/browse/HDFS-17311
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: liuguanghua
>Priority: Major
>
> 2023-12-29 15:18:54,799 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.ConnectionManager: Cannot add 
> more than 2048 connections at the same time
> In my environment, the ConnectionManager creatorQueue is full, but the 
> cluster does not have nearly enough users to reach 2048 pairs in the Router.
> Under heavy concurrency, the same pool can be offered to the creatorQueue 
> more than once.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17310) DiskBalancer: Enhance the log message for submitPlan

2023-12-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17310:
--
Labels: pull-request-available  (was: )

> DiskBalancer: Enhance the log message for submitPlan
> 
>
> Key: HDFS-17310
> URL: https://issues.apache.org/jira/browse/HDFS-17310
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> To make troubleshooting easier, enhance the log message for submitPlan.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17311) RBF: ConnectionManager creatorQueue should offer a pool that is not already in creatorQueue.

2023-12-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17311:
--
Labels: pull-request-available  (was: )

> RBF: ConnectionManager creatorQueue should offer a pool that is not already 
> in creatorQueue.
> 
>
> Key: HDFS-17311
> URL: https://issues.apache.org/jira/browse/HDFS-17311
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: liuguanghua
>Priority: Major
>  Labels: pull-request-available
>
> 2023-12-29 15:18:54,799 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.ConnectionManager: Cannot add 
> more than 2048 connections at the same time
> In my environment, the ConnectionManager creatorQueue is full, but the 
> cluster does not have nearly enough users to reach 2048 pairs in the Router.
> Under heavy concurrency, the same pool can be offered to the creatorQueue 
> more than once.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17310) DiskBalancer: Enhance the log message for submitPlan

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801123#comment-17801123
 ] 

ASF GitHub Bot commented on HDFS-17310:
---

haiyang1987 opened a new pull request, #6391:
URL: https://github.com/apache/hadoop/pull/6391

   ### Description of PR
   https://issues.apache.org/jira/browse/HDFS-17310
   
   To make troubleshooting easier, enhance the log message for submitPlan.
   
   
   
   




> DiskBalancer: Enhance the log message for submitPlan
> 
>
> Key: HDFS-17310
> URL: https://issues.apache.org/jira/browse/HDFS-17310
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>
> To make troubleshooting easier, enhance the log message for submitPlan.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17311) RBF: ConnectionManager creatorQueue should offer a pool that is not already in creatorQueue.

2023-12-29 Thread liuguanghua (Jira)
liuguanghua created HDFS-17311:
--

 Summary: RBF: ConnectionManager creatorQueue should offer a pool 
that is not already in creatorQueue.
 Key: HDFS-17311
 URL: https://issues.apache.org/jira/browse/HDFS-17311
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: liuguanghua


2023-12-29 15:18:54,799 ERROR 
org.apache.hadoop.hdfs.server.federation.router.ConnectionManager: Cannot add 
more than 2048 connections at the same time

In my environment, the ConnectionManager creatorQueue is full, but the cluster 
does not have nearly enough users to reach 2048 pairs in the Router.

Under heavy concurrency, the same pool can be offered to the creatorQueue more 
than once.

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17310) DiskBalancer: Enhance the log message for submitPlan

2023-12-29 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HDFS-17310:
--
Description: 
To make troubleshooting easier, enhance the log message for submitPlan.


> DiskBalancer: Enhance the log message for submitPlan
> 
>
> Key: HDFS-17310
> URL: https://issues.apache.org/jira/browse/HDFS-17310
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>
> To make troubleshooting easier, enhance the log message for submitPlan.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17310) DiskBalancer: Enhance the log message for submitPlan

2023-12-29 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HDFS-17310:
--
Summary: DiskBalancer: Enhance the log message for submitPlan  (was: 
DiskBalancer: Enhance the log message)

> DiskBalancer: Enhance the log message for submitPlan
> 
>
> Key: HDFS-17310
> URL: https://issues.apache.org/jira/browse/HDFS-17310
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17310) DiskBalancer: Enhance the log message

2023-12-29 Thread Haiyang Hu (Jira)
Haiyang Hu created HDFS-17310:
-

 Summary: DiskBalancer: Enhance the log message
 Key: HDFS-17310
 URL: https://issues.apache.org/jira/browse/HDFS-17310
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haiyang Hu
Assignee: Haiyang Hu






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17277) Delete invalid code logic in namenode format

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801121#comment-17801121
 ] 

ASF GitHub Bot commented on HDFS-17277:
---

zzccctv commented on PR #6323:
URL: https://github.com/apache/hadoop/pull/6323#issuecomment-1871865255

   @tasanuma Can you help me check this PR?




> Delete invalid code logic in namenode format
> 
>
> Key: HDFS-17277
> URL: https://issues.apache.org/jira/browse/HDFS-17277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhangzhanchang
>Priority: Minor
>  Labels: pull-request-available
>
> There is invalid logical processing in the namenode format process



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17223) Add journalnode maintenance node list

2023-12-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801116#comment-17801116
 ] 

ASF GitHub Bot commented on HDFS-17223:
---

gp1314 commented on PR #6183:
URL: https://github.com/apache/hadoop/pull/6183#issuecomment-1871841096

   Unfortunately, I didn't reproduce the problem. In the past, stopping a JN 
and then restarting the NN made initialization take a long time. I will pay 
more attention to the root cause of the problem.
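   For context, a minimal sketch of the kind of client-side tuning the issue 
description (quoted below) mentions: shortening the IPC connect timeout and the 
retry count so the NameNode gives up on an unreachable JournalNode sooner. The 
values are illustrative assumptions only; in practice these keys would normally 
be set in core-site.xml rather than in code.

```java
import org.apache.hadoop.conf.Configuration;

public class QjmClientTuningSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Default is 20000 ms; a lower value makes a dead JournalNode fail faster.
    conf.setInt("ipc.client.connect.timeout", 5000);

    // Default is 45 retries on connect timeouts; fewer retries shortens the
    // total time spent waiting for a JournalNode that will not come back.
    conf.setInt("ipc.client.connect.max.retries.on.timeouts", 3);

    System.out.println("connect timeout = "
        + conf.get("ipc.client.connect.timeout") + " ms, retries on timeouts = "
        + conf.get("ipc.client.connect.max.retries.on.timeouts"));
  }
}
```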




> Add journalnode maintenance node list
> -
>
> Key: HDFS-17223
> URL: https://issues.apache.org/jira/browse/HDFS-17223
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: qjm
>Affects Versions: 3.3.6
>Reporter: kuper
>Priority: Major
>  Labels: pull-request-available
>
> * With 3 journal nodes configured in HDFS, if only 2 journal nodes are 
> available and 1 journal node fails to start due to machine issues, namenode 
> initialization takes a long time (around 30-40 minutes, depending on the IPC 
> timeout and retry policy configuration). 
> * The failed journal node cannot recover immediately, but HDFS can still 
> function in this situation. In our production environment we encountered 
> this issue and had to reduce the IPC timeout and adjust the retry policy to 
> accelerate namenode initialization and restore service. 
> * Would it be possible to have a journal node maintenance list to speed up 
> namenode initialization when it is known in advance that one journal node 
> cannot provide service?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org