[jira] [Commented] (HDFS-17451) RBF: fix spotbugs for redundant nullcheck of dns.
[ https://issues.apache.org/jira/browse/HDFS-17451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833394#comment-17833394 ]

ASF GitHub Bot commented on HDFS-17451:
---------------------------------------

hadoop-yetus commented on PR #6697:
URL: https://github.com/apache/hadoop/pull/6697#issuecomment-2033529207

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 31s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | | | | _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 45m 9s | | trunk passed |
| +1 :green_heart: | compile | 0m 41s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | compile | 0m 38s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | checkstyle | 0m 36s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 45s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 46s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| -1 :x: | spotbugs | 1m 25s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6697/1/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html) | hadoop-hdfs-project/hadoop-hdfs-rbf in trunk has 1 extant spotbugs warnings. |
| +1 :green_heart: | shadedclient | 33m 43s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 31s | | the patch passed |
| +1 :green_heart: | compile | 0m 33s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javac | 0m 33s | | the patch passed |
| +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | javac | 0m 30s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 19s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 33s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 29s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 0m 25s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 1m 23s | | hadoop-hdfs-project/hadoop-hdfs-rbf generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) |
| +1 :green_heart: | shadedclient | 33m 36s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| +1 :green_heart: | unit | 29m 37s | | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 0m 39s | | The patch does not generate ASF License warnings. |
| | | 158m 11s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6697/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6697 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux f9c60ec11db1 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 4ae49e955dab61cfcfc8f3b58d314d65e2765e52 |
| Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6
[jira] [Commented] (HDFS-17438) RBF: The newest STANDBY and UNAVAILABLE nn should be the lowest priority.
[ https://issues.apache.org/jira/browse/HDFS-17438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833373#comment-17833373 ]

ASF GitHub Bot commented on HDFS-17438:
---------------------------------------

KeeProMise commented on PR #6655:
URL: https://github.com/apache/hadoop/pull/6655#issuecomment-2033451894

@goiri @slfan1989 hi, to avoid modifying code unrelated to this function, I fixed the spotbugs warning separately in HDFS-17451. You can post comments here: https://github.com/apache/hadoop/pull/6697, thanks.

> RBF: The newest STANDBY and UNAVAILABLE nn should be the lowest priority.
> -------------------------------------------------------------------------
>
>                 Key: HDFS-17438
>                 URL: https://issues.apache.org/jira/browse/HDFS-17438
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Jian Zhang
>            Assignee: Jian Zhang
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HDFS-17438.001.patch
>
> At present, when the status of all namenodes in an ns in the router is the same, the namenode that reported most recently is placed at the top of the cache. When a client accesses the ns through the router, it will access that namenode first.
> If multiple namenodes in this ns are in the active state, or there are multiple namenodes in the observer state, the existing logic is not a problem: the most recently reported active or observer namenode has a higher probability of truly being active or observer than a namenode that reported that state a long time ago.
> Similarly, a namenode that most recently reported standby or unavailable status has a higher probability of actually being standby or unavailable than one that reported that status a long time ago. Therefore, the newest namenode reported as standby or unavailable should have the lowest access priority, and the oldest namenode reported as standby or unavailable should have a higher access priority.
-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
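The ordering argument in the description above can be sketched as a comparator. The following is a hypothetical illustration, not the actual router resolver code: the `NnState` enum, `NnReport` class, and `withinState` method are invented names for the example.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative only: these are not the actual HDFS router classes.
enum NnState { ACTIVE, OBSERVER, STANDBY, UNAVAILABLE }

class NnReport {
    final String id;
    final NnState state;
    final long reportTime; // when this state was last reported, in millis

    NnReport(String id, NnState state, long reportTime) {
        this.id = id;
        this.state = state;
        this.reportTime = reportTime;
    }
}

public class PriorityDemo {
    // Within a group of namenodes sharing one state, which should be tried
    // first? Healthy states (ACTIVE/OBSERVER): newest report first, since a
    // recent report is more likely to still be true. Unhealthy states
    // (STANDBY/UNAVAILABLE): oldest report first, since a stale "standby"
    // report is the one most likely to have become active in the meantime.
    static Comparator<NnReport> withinState(NnState s) {
        Comparator<NnReport> oldestFirst =
            Comparator.comparingLong(r -> r.reportTime);
        return (s == NnState.ACTIVE || s == NnState.OBSERVER)
            ? oldestFirst.reversed()
            : oldestFirst;
    }

    public static void main(String[] args) {
        List<NnReport> standbys = new ArrayList<>(List.of(
            new NnReport("nn0", NnState.STANDBY, 100L),
            new NnReport("nn1", NnState.STANDBY, 200L),
            new NnReport("nn2", NnState.STANDBY, 300L)));
        standbys.sort(withinState(NnState.STANDBY));
        // Oldest standby report (nn0) is tried first, newest (nn2) last.
        System.out.println(standbys.get(0).id + " before " + standbys.get(2).id);
    }
}
```

This matches the test comments in the PR discussion below, where the selected entry among three standbys changes from the newest to the oldest.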
[jira] [Commented] (HDFS-17438) RBF: The newest STANDBY and UNAVAILABLE nn should be the lowest priority.
[ https://issues.apache.org/jira/browse/HDFS-17438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833370#comment-17833370 ]

ASF GitHub Bot commented on HDFS-17438:
---------------------------------------

KeeProMise commented on code in PR #6655:
URL: https://github.com/apache/hadoop/pull/6655#discussion_r1548852305

## hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestNamenodeResolver.java:

@@ -427,4 +466,14 @@ private static InetSocketAddress getInetSocketAddress(String rpcAddr) {
     String hostname = rpcAddrArr[0];
     return new InetSocketAddress(hostname, port);
   }
+
+  private boolean registerNamenode(String nsId,
+      String nnId, HAServiceState haServiceState) {
+    try {
+      return namenodeResolver.registerNamenode(
+          createNamenodeReport(nsId, nnId, haServiceState));
+    }catch (IOException e) {

Review Comment: Thank you for your review, done.
[jira] [Commented] (HDFS-17451) RBF: fix spotbugs for redundant nullcheck of dns.
[ https://issues.apache.org/jira/browse/HDFS-17451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833363#comment-17833363 ]

ASF GitHub Bot commented on HDFS-17451:
---------------------------------------

KeeProMise commented on code in PR #6697:
URL: https://github.com/apache/hadoop/pull/6697#discussion_r1548840150

## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java:

@@ -1090,7 +1090,7 @@ DatanodeInfo[] getCachedDatanodeReport(DatanodeReportType type)
       throws IOException {
     try {
       DatanodeInfo[] dns = this.dnCache.get(type);
-      if (dns == null) {
+      if (dns.length == 0) {

Review Comment: RouterRpcServer.java:[line 1093]: the method 'get' inherits a non-null annotation from class LoadingCache, so it is treated as 'non-null'; this is the cause given by spotbugs.

> RBF: fix spotbugs for redundant nullcheck of dns.
> -------------------------------------------------
>
>                 Key: HDFS-17451
>                 URL: https://issues.apache.org/jira/browse/HDFS-17451
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Jian Zhang
>            Assignee: Jian Zhang
>            Priority: Major
>              Labels: pull-request-available
>
> h2. Dodgy code Warnings
> ||Code||Warning||
> |RCN|Redundant nullcheck of dns, which is known to be non-null in org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getCachedDatanodeReport(HdfsConstants$DatanodeReportType)|
> | |[Bug type RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE (click for details)|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6655/8/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html#RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE] In class org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer In method org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getCachedDatanodeReport(HdfsConstants$DatanodeReportType) Value loaded from dns Return value of org.apache.hadoop.thirdparty.com.google.common.cache.LoadingCache.get(Object) of type Object Redundant null check at RouterRpcServer.java:[line 1093]|
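The review exchange above turns on the contract of `LoadingCache.get`: it loads the value on a miss and never returns null, so a null check after it is dead code, and the meaningful "nothing cached" signal becomes an empty array. A minimal map-backed stand-in (deliberately not the Guava class, just a sketch of the same contract) makes the pattern concrete:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Map-backed stand-in for a loading cache whose get() never returns null:
// on a miss it runs the loader and stores the (non-null) result.
class NonNullCache<K, V> {
    private final Map<K, V> map = new HashMap<>();
    private final Function<K, V> loader;

    NonNullCache(Function<K, V> loader) {
        this.loader = loader;
    }

    V get(K key) {
        // computeIfAbsent stores and returns the loader's value, so callers
        // observe null only if the loader itself returned null.
        return map.computeIfAbsent(key, loader);
    }
}

public class RedundantNullCheckDemo {
    public static void main(String[] args) {
        // Mirrors the patched code's assumption: a report type with no data
        // yields an empty array, never null.
        NonNullCache<String, String[]> dnCache =
            new NonNullCache<>(type -> new String[0]);

        String[] dns = dnCache.get("LIVE");
        // The patch replaces "dns == null" (always false under this contract)
        // with a length check, the condition that can actually occur.
        if (dns.length == 0) {
            System.out.println("no cached datanodes for this report type");
        }
    }
}
```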
[jira] [Commented] (HDFS-17451) RBF: fix spotbugs for redundant nullcheck of dns.
[ https://issues.apache.org/jira/browse/HDFS-17451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833362#comment-17833362 ]

ASF GitHub Bot commented on HDFS-17451:
---------------------------------------

KeeProMise commented on code in PR #6697:
URL: https://github.com/apache/hadoop/pull/6697#discussion_r1548840150

## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java:

@@ -1090,7 +1090,7 @@ DatanodeInfo[] getCachedDatanodeReport(DatanodeReportType type)
       throws IOException {
     try {
       DatanodeInfo[] dns = this.dnCache.get(type);
-      if (dns == null) {
+      if (dns.length == 0) {

Review Comment: In line 1092, the method 'get' inherits a non-null annotation from class LoadingCache, so it is treated as 'non-null'; this is the cause given by spotbugs.
[jira] [Commented] (HDFS-17438) RBF: The newest STANDBY and UNAVAILABLE nn should be the lowest priority.
[ https://issues.apache.org/jira/browse/HDFS-17438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833361#comment-17833361 ]

ASF GitHub Bot commented on HDFS-17438:
---------------------------------------

slfan1989 commented on code in PR #6655:
URL: https://github.com/apache/hadoop/pull/6655#discussion_r1548836707

## hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestNamenodeResolver.java:

@@ -427,4 +466,14 @@ private static InetSocketAddress getInetSocketAddress(String rpcAddr) {
     String hostname = rpcAddrArr[0];
     return new InetSocketAddress(hostname, port);
   }
+
+  private boolean registerNamenode(String nsId,
+      String nnId, HAServiceState haServiceState) {
+    try {
+      return namenodeResolver.registerNamenode(
+          createNamenodeReport(nsId, nnId, haServiceState));
+    }catch (IOException e) {

Review Comment: Small comment: there should be a space before `catch`.
[jira] [Commented] (HDFS-17451) RBF: fix spotbugs for redundant nullcheck of dns.
[ https://issues.apache.org/jira/browse/HDFS-17451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833360#comment-17833360 ]

ASF GitHub Bot commented on HDFS-17451:
---------------------------------------

slfan1989 commented on code in PR #6697:
URL: https://github.com/apache/hadoop/pull/6697#discussion_r1548835509

## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java:

@@ -1090,7 +1090,7 @@ DatanodeInfo[] getCachedDatanodeReport(DatanodeReportType type)
       throws IOException {
     try {
       DatanodeInfo[] dns = this.dnCache.get(type);
-      if (dns == null) {
+      if (dns.length == 0) {

Review Comment: Can we confirm it won't be null?
[jira] [Updated] (HDFS-17451) RBF: fix spotbugs for redundant nullcheck of dns.
[ https://issues.apache.org/jira/browse/HDFS-17451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HDFS-17451:
----------------------------------
    Labels: pull-request-available  (was: )
[jira] [Updated] (HDFS-17451) RBF: fix spotbugs for redundant nullcheck of dns.
[ https://issues.apache.org/jira/browse/HDFS-17451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jian Zhang updated HDFS-17451:
------------------------------
    Description:
h2. Dodgy code Warnings
||Code||Warning||
|RCN|Redundant nullcheck of dns, which is known to be non-null in org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getCachedDatanodeReport(HdfsConstants$DatanodeReportType)|
| |[Bug type RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE (click for details)|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6655/8/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html#RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE] In class org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer In method org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getCachedDatanodeReport(HdfsConstants$DatanodeReportType) Value loaded from dns Return value of org.apache.hadoop.thirdparty.com.google.common.cache.LoadingCache.get(Object) of type Object Redundant null check at RouterRpcServer.java:[line 1093]|
[jira] [Created] (HDFS-17451) RBF: fix spotbugs for redundant nullcheck of dns.
Jian Zhang created HDFS-17451:
------------------------------
             Summary: RBF: fix spotbugs for redundant nullcheck of dns.
                 Key: HDFS-17451
                 URL: https://issues.apache.org/jira/browse/HDFS-17451
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: Jian Zhang
[jira] [Commented] (HDFS-17438) RBF: The newest STANDBY and UNAVAILABLE nn should be the lowest priority.
[ https://issues.apache.org/jira/browse/HDFS-17438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833352#comment-17833352 ]

ASF GitHub Bot commented on HDFS-17438:
---------------------------------------

KeeProMise commented on code in PR #6655:
URL: https://github.com/apache/hadoop/pull/6655#discussion_r1548803547

## hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestNamenodeResolver.java:

@@ -359,11 +359,12 @@ public void testRegistrationNamenodeSelection()
         FederationNamenodeServiceState.ACTIVE);

     // 1) ns0:nn0 - Standby (oldest)
-    // 2) ns0:nn1 - Standby (newest)
-    // 3) ns0:nn2 - Standby
-    // Verify the selected entry is the newest standby entry
+    // 2) ns0:nn1 - Standby
+    // 3) ns0:nn2 - Standby (newest)
+    // Verify the selected entry is the oldest standby entry
     assertTrue(namenodeResolver.registerNamenode(createNamenodeReport(
         NAMESERVICES[0], NAMENODES[0], HAServiceState.STANDBY)));
+    Thread.sleep(1500);

Review Comment: This suggestion sounds good. I added a GenericTestUtils#atLeastWaitFor method, please help to take a look.
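The comment above replaces a bare `Thread.sleep(1500)` with a `GenericTestUtils#atLeastWaitFor` helper. The actual Hadoop method may differ in name and signature; what follows is only a minimal sketch of what a "wait at least this long" helper typically looks like, and why it is preferable to a single `sleep` call:

```java
// Sketch of a helper that guarantees at least minMillis of wall-clock time
// elapses before returning, even if sleep() wakes early. The name mirrors
// the method mentioned in the review; the real signature may differ.
public class AtLeastWait {
    static void atLeastWaitFor(long minMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + minMillis;
        long remaining;
        // Re-sleep until the deadline has truly passed, so tests that depend
        // on "at least N ms between registrations" cannot flake on an early
        // wake-up.
        while ((remaining = deadline - System.currentTimeMillis()) > 0) {
            Thread.sleep(remaining);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        atLeastWaitFor(50);
        System.out.println("elapsed >= 50 ms: "
            + (System.currentTimeMillis() - start >= 50));
    }
}
```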
[jira] [Commented] (HDFS-17438) RBF: The newest STANDBY and UNAVAILABLE nn should be the lowest priority.
[ https://issues.apache.org/jira/browse/HDFS-17438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833325#comment-17833325 ]

ASF GitHub Bot commented on HDFS-17438:
---------------------------------------

hadoop-yetus commented on PR #6655:
URL: https://github.com/apache/hadoop/pull/6655#issuecomment-2033163166

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 48s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 19s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 36m 43s | | trunk passed |
| +1 :green_heart: | compile | 19m 20s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | compile | 17m 24s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | checkstyle | 4m 38s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 33s | | trunk passed |
| +1 :green_heart: | javadoc | 2m 4s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 1m 34s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| -1 :x: | spotbugs | 1m 27s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6655/8/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html) | hadoop-hdfs-project/hadoop-hdfs-rbf in trunk has 1 extant spotbugs warnings. |
| +1 :green_heart: | shadedclient | 39m 37s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 40m 3s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
| | | | | _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 27s | | the patch passed |
| +1 :green_heart: | compile | 18m 22s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javac | 18m 22s | | the patch passed |
| +1 :green_heart: | compile | 17m 40s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | javac | 17m 40s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 4m 40s | | the patch passed |
| +1 :green_heart: | mvnsite | 2m 34s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 56s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 1m 33s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 2m 42s | | hadoop-common in the patch passed. |
| +1 :green_heart: | spotbugs | 1m 40s | | hadoop-hdfs-project/hadoop-hdfs-rbf generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) |
| +1 :green_heart: | shadedclient | 39m 35s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| +1 :green_heart: | unit | 19m 23s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 32m 14s | | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 1m 2s | | The patch does not generate ASF License warnings. |
| | | 294m 55s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6655/8/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6655 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 780c5ef9a5b7 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk
[jira] [Commented] (HDFS-17307) docker-compose.yaml sets namenode directory wrong causing datanode failures on restart
[ https://issues.apache.org/jira/browse/HDFS-17307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833219#comment-17833219 ] ASF GitHub Bot commented on HDFS-17307: --- matthewrossi commented on PR #6387: URL: https://github.com/apache/hadoop/pull/6387#issuecomment-2032214029 This is what I've found diving into the project history: - `docker-compose.yaml` was always configured with `ENSURE_NAMENODE_DIR: "/tmp/hadoop-root/dfs/name"` - the namenode [base image](https://github.com/apache/hadoop/blob/docker-hadoop-runner/Dockerfile) always specified the use of the `hadoop` user (so my initial assumption about the previous use of the `root` user was wrong) - the default configurations of [Hadoop](https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml#L37) and [HDFS](https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml#L442) are the ones determining the use of the `/tmp/hadoop-${user.name}/dfs/name` directory, but they date back before the creation of the `docker-compose.yaml` So, it looks like the issue has always been there. > docker-compose.yaml sets namenode directory wrong causing datanode failures > on restart > -- > > Key: HDFS-17307 > URL: https://issues.apache.org/jira/browse/HDFS-17307 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, namenode >Reporter: Matthew Rossi >Priority: Major > Labels: pull-request-available > > Restarting existing services using the docker-compose.yaml, causes the > datanode to crash after a few seconds. 
> How to reproduce:
> {code:java}
> $ docker-compose up -d # everything starts ok
> $ docker-compose stop  # stop services without removing containers
> $ docker-compose up -d # everything starts, but datanode crashes after a few seconds{code}
> The log produced by the datanode suggests the issue is due to a mismatch in
> the clusterIDs of the namenode and the datanode:
> {code:java}
> datanode_1 | 2023-12-28 11:17:15 WARN Storage:420 - Failed to add
> storage directory [DISK]file:/tmp/hadoop-hadoop/dfs/data
> datanode_1 | java.io.IOException: Incompatible clusterIDs in
> /tmp/hadoop-hadoop/dfs/data: namenode clusterID =
> CID-250bae07-6a8a-45ce-84bb-8828b37b10b7; datanode clusterID =
> CID-2c1c7105-7fdf-4a19-8ef8-7cb763e5b701 {code}
> After some troubleshooting I found out the namenode is not reusing the
> clusterID of the previous run because it cannot find it in the directory set
> by ENSURE_NAMENODE_DIR=/tmp/hadoop-root/dfs/name. This is due to a change of
> the default user of the namenode, which is now "hadoop", so the namenode is
> actually writing this information to /tmp/hadoop-hadoop/dfs/name.
-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
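Given the findings above, one possible fix is to point `ENSURE_NAMENODE_DIR` at the directory the `hadoop` user actually writes to, so the namenode finds its previous clusterID on restart. This is a sketch, not the committed patch; the service name and surrounding structure are assumed from the stock `docker-compose.yaml`:

```yaml
# Sketch of a fix: align ENSURE_NAMENODE_DIR with the default
# /tmp/hadoop-${user.name}/dfs/name path, where user.name is "hadoop".
namenode:
  environment:
    ENSURE_NAMENODE_DIR: "/tmp/hadoop-hadoop/dfs/name"
```

The alternative would be to override `dfs.namenode.name.dir` to match the old `/tmp/hadoop-root` path, but aligning the env var with the defaults keeps the compose file consistent with the `hadoop` user the base image runs as.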
[jira] [Commented] (HDFS-17424) [FGL] DelegationTokenSecretManager supports fine-grained lock
[ https://issues.apache.org/jira/browse/HDFS-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833061#comment-17833061 ] ASF GitHub Bot commented on HDFS-17424: --- hadoop-yetus commented on PR #6696: URL: https://github.com/apache/hadoop/pull/6696#issuecomment-2031372820

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 20s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ HDFS-17384 Compile Tests _ |
| +1 :green_heart: | mvninstall | 33m 5s | | HDFS-17384 passed |
| +1 :green_heart: | compile | 0m 43s | | HDFS-17384 passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | compile | 0m 38s | | HDFS-17384 passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | checkstyle | 0m 39s | | HDFS-17384 passed |
| +1 :green_heart: | mvnsite | 0m 46s | | HDFS-17384 passed |
| +1 :green_heart: | javadoc | 0m 41s | | HDFS-17384 passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 1m 10s | | HDFS-17384 passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 1m 45s | | HDFS-17384 passed |
| +1 :green_heart: | shadedclient | 21m 21s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| -1 :x: | mvninstall | 0m 25s | [/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6696/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. |
| -1 :x: | compile | 0m 26s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6696/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1. |
| -1 :x: | javac | 0m 26s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6696/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1. |
| -1 :x: | compile | 0m 26s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6696/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) | hadoop-hdfs in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06. |
| -1 :x: | javac | 0m 26s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6696/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) | hadoop-hdfs in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06. |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 29s | | the patch passed |
| -1 :x: | mvnsite | 0m 23s | [/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6696/1/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. |
| -1 :x: | javadoc | 0m 22s | [/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6696/1/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-
[jira] [Commented] (HDFS-17424) [FGL] DelegationTokenSecretManager supports fine-grained lock
[ https://issues.apache.org/jira/browse/HDFS-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833039#comment-17833039 ] ASF GitHub Bot commented on HDFS-17424: --- yuanboliu commented on PR #6696: URL: https://github.com/apache/hadoop/pull/6696#issuecomment-2031233895

The delegation token is a very independent system, so it's proper to use a separate r/w lock instead of the FS lock for getting/renewing/expiring/canceling tokens or updating the master key. We can separate it from the FS lock in phase-2.

> [FGL] DelegationTokenSecretManager supports fine-grained lock
> -
>
> Key: HDFS-17424
> URL: https://issues.apache.org/jira/browse/HDFS-17424
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: ZanderXu
> Assignee: Yuanbo Liu
> Priority: Major
> Labels: pull-request-available
>
> DelegationTokenSecretManager supports fine-grained lock
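The separate r/w lock idea above can be sketched as follows. This is a hypothetical illustration, not Hadoop's actual `DelegationTokenSecretManager` API (the class and method names here are invented): token lookups take the shared read lock, token mutations take the exclusive write lock, and neither acquires the global FS lock.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of a token store guarded by its own ReadWriteLock
// rather than the namesystem's global lock.
public class TokenStoreSketch {
  private final ReadWriteLock lock = new ReentrantReadWriteLock();
  private final Map<String, Long> tokenExpiry = new HashMap<>();

  // Mutations (add/renew/cancel/expire) take the exclusive write lock.
  public void addToken(String id, long expiryMillis) {
    lock.writeLock().lock();
    try {
      tokenExpiry.put(id, expiryMillis);
    } finally {
      lock.writeLock().unlock();
    }
  }

  // Lookups take the shared read lock, so concurrent reads don't block
  // each other (and never contend with FS-lock holders).
  public Long getExpiry(String id) {
    lock.readLock().lock();
    try {
      return tokenExpiry.get(id);
    } finally {
      lock.readLock().unlock();
    }
  }
}
```

Because token operations never nest inside the FS lock in this scheme, they cannot deadlock against it, which is what makes splitting them out in a later phase ("phase-2" above) a self-contained change.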
[jira] [Commented] (HDFS-17424) [FGL] DelegationTokenSecretManager supports fine-grained lock
[ https://issues.apache.org/jira/browse/HDFS-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833038#comment-17833038 ] ASF GitHub Bot commented on HDFS-17424: --- yuanboliu opened a new pull request, #6696: URL: https://github.com/apache/hadoop/pull/6696

### Description of PR

### How was this patch tested?

### For code changes:

- [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?

> [FGL] DelegationTokenSecretManager supports fine-grained lock
> -
>
> Key: HDFS-17424
> URL: https://issues.apache.org/jira/browse/HDFS-17424
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: ZanderXu
> Assignee: Yuanbo Liu
> Priority: Major
>
> DelegationTokenSecretManager supports fine-grained lock
[jira] [Updated] (HDFS-17424) [FGL] DelegationTokenSecretManager supports fine-grained lock
[ https://issues.apache.org/jira/browse/HDFS-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDFS-17424: -- Labels: pull-request-available (was: )

> [FGL] DelegationTokenSecretManager supports fine-grained lock
> -
>
> Key: HDFS-17424
> URL: https://issues.apache.org/jira/browse/HDFS-17424
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: ZanderXu
> Assignee: Yuanbo Liu
> Priority: Major
> Labels: pull-request-available
>
> DelegationTokenSecretManager supports fine-grained lock