[jira] [Work logged] (HDFS-16514) Reduce the failover sleep time if multiple namenode are configured

2022-04-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16514?focusedWorklogId=757291&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-757291
 ]

ASF GitHub Bot logged work on HDFS-16514:
-

Author: ASF GitHub Bot
Created on: 15/Apr/22 04:20
Start Date: 15/Apr/22 04:20
Worklog Time Spent: 10m 
  Work Description: liubingxing commented on PR #4088:
URL: https://github.com/apache/hadoop/pull/4088#issuecomment-1099836851

   @tasanuma Please take a look at this. Thanks a lot.




Issue Time Tracking
---

Worklog Id: (was: 757291)
Time Spent: 1.5h  (was: 1h 20m)

> Reduce the failover sleep time if multiple namenode are configured
> --
>
> Key: HDFS-16514
> URL: https://issues.apache.org/jira/browse/HDFS-16514
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: qinyuren
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-03-21-18-11-37-191.png
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Recently, we used the [Standby Read] feature in our test cluster, and 
> deployed 4 namenodes as follows:
> node1 -> active nn
> node2 -> standby nn
> node3 -> observer nn
> node4 -> observer nn
> If we set 'dfs.client.failover.random.order=true', the client may fail over 
> twice and wait a long time before it can send msync to the active namenode. 
> !image-2022-03-21-18-11-37-191.png|width=698,height=169!
> I think we can reduce the sleep time of the first several failovers based on 
> the number of namenodes.
> For example, if 4 namenodes are configured, the sleep time of the first three 
> failover operations is set to zero.
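
A minimal sketch of the proposed delay rule (hypothetical helper names, not the
actual RetryPolicies patch): with nnSize namenodes configured, the first
nnSize - 1 failovers sleep 0 ms so the client can reach the active namenode
quickly, and only later failovers fall back to the usual backoff.

{code:java}
// Sketch only: illustrates the proposed rule, not the actual Hadoop change.
public final class FailoverDelaySketch {

  // Simplified stand-in for the exponential backoff used by RetryPolicies.
  private static long backoff(long delayMillis, int retries, long maxDelayBase) {
    long delay = delayMillis * (1L << Math.min(retries, 30));
    return Math.min(delay, maxDelayBase);
  }

  // With nnSize namenodes, failovers 1 .. nnSize-1 retry immediately.
  static long failoverSleepTime(int failovers, int nnSize,
      long delayMillis, long maxDelayBase) {
    if (failovers < nnSize) {
      return 0L;
    }
    return backoff(delayMillis, failovers - nnSize + 1, maxDelayBase);
  }

  public static void main(String[] args) {
    // 4 namenodes: the first three failovers sleep 0 ms, the fourth starts backing off.
    for (int f = 1; f <= 5; f++) {
      System.out.println("failover " + f + " -> "
          + failoverSleepTime(f, 4, 500, 15000) + " ms");
    }
  }
}
{code}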



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16514) Reduce the failover sleep time if multiple namenode are configured

2022-04-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16514?focusedWorklogId=757290&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-757290
 ]

ASF GitHub Bot logged work on HDFS-16514:
-

Author: ASF GitHub Bot
Created on: 15/Apr/22 04:19
Start Date: 15/Apr/22 04:19
Worklog Time Spent: 10m 
  Work Description: liubingxing commented on code in PR #4088:
URL: https://github.com/apache/hadoop/pull/4088#discussion_r851047073


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryPolicies.java:
##
@@ -639,19 +647,24 @@ public FailoverOnNetworkExceptionRetry(RetryPolicy fallbackPolicy,
 
 public FailoverOnNetworkExceptionRetry(RetryPolicy fallbackPolicy,
     int maxFailovers, int maxRetries, long delayMillis, long maxDelayBase) {
+  this(fallbackPolicy, maxFailovers, maxRetries, delayMillis, maxDelayBase, 2);
+}
+public FailoverOnNetworkExceptionRetry(RetryPolicy fallbackPolicy,
+    int maxFailovers, int maxRetries, long delayMillis, long maxDelayBase, int nnSize) {
   this.fallbackPolicy = fallbackPolicy;
   this.maxFailovers = maxFailovers;
   this.maxRetries = maxRetries;
   this.delayMillis = delayMillis;
   this.maxDelayBase = maxDelayBase;
+  this.nnSize = nnSize;
 }
 
 /**
  * @return 0 if this is our first failover/retry (i.e., retry immediately),

Review Comment:
   @cndaimin I updated the code. Please take a look.



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryPolicies.java:
##
@@ -639,19 +647,24 @@ public FailoverOnNetworkExceptionRetry(RetryPolicy fallbackPolicy,
 
 public FailoverOnNetworkExceptionRetry(RetryPolicy fallbackPolicy,
     int maxFailovers, int maxRetries, long delayMillis, long maxDelayBase) {
+  this(fallbackPolicy, maxFailovers, maxRetries, delayMillis, maxDelayBase, 2);
+}
+public FailoverOnNetworkExceptionRetry(RetryPolicy fallbackPolicy,
+    int maxFailovers, int maxRetries, long delayMillis, long maxDelayBase, int nnSize) {
   this.fallbackPolicy = fallbackPolicy;
   this.maxFailovers = maxFailovers;
   this.maxRetries = maxRetries;
   this.delayMillis = delayMillis;
   this.maxDelayBase = maxDelayBase;
+  this.nnSize = nnSize;
 }
 
 /**
  * @return 0 if this is our first failover/retry (i.e., retry immediately),

Review Comment:
   @cndaimin Thanks for the review. I will add the comments later.





Issue Time Tracking
---

Worklog Id: (was: 757290)
Time Spent: 1h 20m  (was: 1h 10m)

> Reduce the failover sleep time if multiple namenode are configured
> --
>
> Key: HDFS-16514
> URL: https://issues.apache.org/jira/browse/HDFS-16514
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: qinyuren
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-03-21-18-11-37-191.png
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Recently, we used the [Standby Read] feature in our test cluster, and 
> deployed 4 namenodes as follows:
> node1 -> active nn
> node2 -> standby nn
> node3 -> observer nn
> node4 -> observer nn
> If we set 'dfs.client.failover.random.order=true', the client may fail over 
> twice and wait a long time before it can send msync to the active namenode. 
> !image-2022-03-21-18-11-37-191.png|width=698,height=169!
> I think we can reduce the sleep time of the first several failovers based on 
> the number of namenodes.
> For example, if 4 namenodes are configured, the sleep time of the first three 
> failover operations is set to zero.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16541) Fix a typo in NameNodeLayoutVersion.

2022-04-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16541?focusedWorklogId=757274&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-757274
 ]

ASF GitHub Bot logged work on HDFS-16541:
-

Author: ASF GitHub Bot
Created on: 15/Apr/22 02:57
Start Date: 15/Apr/22 02:57
Worklog Time Spent: 10m 
  Work Description: Happy-shi opened a new pull request, #4176:
URL: https://github.com/apache/hadoop/pull/4176

   JIRA: [HDFS-16541](https://issues.apache.org/jira/browse/HDFS-16541).
   
   Fix a typo in NameNodeLayoutVersion.
   
   




Issue Time Tracking
---

Worklog Id: (was: 757274)
Remaining Estimate: 0h
Time Spent: 10m

> Fix a typo in NameNodeLayoutVersion.
> 
>
> Key: HDFS-16541
> URL: https://issues.apache.org/jira/browse/HDFS-16541
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: ZhiWei Shi
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Fix a typo in NameNodeLayoutVersion.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16541) Fix a typo in NameNodeLayoutVersion.

2022-04-14 Thread ZhiWei Shi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhiWei Shi updated HDFS-16541:
--
Summary: Fix a typo in NameNodeLayoutVersion.  (was: Fix typo in 
NameNodeLayoutVersion.)

> Fix a typo in NameNodeLayoutVersion.
> 
>
> Key: HDFS-16541
> URL: https://issues.apache.org/jira/browse/HDFS-16541
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: ZhiWei Shi
>Priority: Minor
>  Labels: pull-request-available
>
> Fix typo in NameNodeLayoutVersion.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16541) Fix a typo in NameNodeLayoutVersion.

2022-04-14 Thread ZhiWei Shi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhiWei Shi updated HDFS-16541:
--
Description: Fix a typo in NameNodeLayoutVersion.  (was: Fix typo in 
NameNodeLayoutVersion.)

> Fix a typo in NameNodeLayoutVersion.
> 
>
> Key: HDFS-16541
> URL: https://issues.apache.org/jira/browse/HDFS-16541
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: ZhiWei Shi
>Priority: Minor
>  Labels: pull-request-available
>
> Fix a typo in NameNodeLayoutVersion.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16541) Fix typo in NameNodeLayoutVersion.

2022-04-14 Thread ZhiWei Shi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhiWei Shi updated HDFS-16541:
--
Description: Fix typo in NameNodeLayoutVersion.  (was: Fix typo for 
NameNodeLayoutVersion.)

> Fix typo in NameNodeLayoutVersion.
> --
>
> Key: HDFS-16541
> URL: https://issues.apache.org/jira/browse/HDFS-16541
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: ZhiWei Shi
>Priority: Minor
>  Labels: pull-request-available
>
> Fix typo in NameNodeLayoutVersion.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16541) Fix typo in NameNodeLayoutVersion.

2022-04-14 Thread ZhiWei Shi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhiWei Shi updated HDFS-16541:
--
Summary: Fix typo in NameNodeLayoutVersion.  (was: Fix typo for 
NameNodeLayoutVersion.)

> Fix typo in NameNodeLayoutVersion.
> --
>
> Key: HDFS-16541
> URL: https://issues.apache.org/jira/browse/HDFS-16541
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: ZhiWei Shi
>Priority: Minor
>  Labels: pull-request-available
>
> Fix typo for NameNodeLayoutVersion.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16540) Data locality is lost when DataNode pod restarts in kubernetes

2022-04-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16540?focusedWorklogId=757262&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-757262
 ]

ASF GitHub Bot logged work on HDFS-16540:
-

Author: ASF GitHub Bot
Created on: 15/Apr/22 01:26
Start Date: 15/Apr/22 01:26
Worklog Time Spent: 10m 
  Work Description: tomscut commented on code in PR #4170:
URL: https://github.com/apache/hadoop/pull/4170#discussion_r850967827


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java:
##
@@ -1189,16 +1190,26 @@ public void registerDatanode(DatanodeRegistration nodeReg)
  nodes with its data cleared (or user can just remove the StorageID
  value in "VERSION" file under the data directory of the datanode,
  but this is might not work if VERSION file format has changed 
- */
+ */
+  // Check if nodeS's host information is same as nodeReg's, if not,
+  // it needs to update host2DatanodeMap accordingly.
+  updateHost2DatanodeMap = !nodeS.getIpAddr().equals(nodeReg.getAddress()) ||

Review Comment:
   `nodeReg.getAddress()` contains port, but `nodeS.getIpAddr()` doesn't, so 
`updateHost2DatanodeMap` is always `true`, right?
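
   For a concrete view of the point raised above (values are made up): an ip:port string never equals a bare IP, so the comparison as written would report a host change on every registration unless the port is stripped first.

{code:java}
// Illustration only (made-up values, not the actual DatanodeManager code):
// comparing nodeS.getIpAddr() (bare IP) against nodeReg.getAddress() (ip:port)
// can never be equal, so the flag would always come out true.
public class AddressCompareSketch {
  public static void main(String[] args) {
    String registeredAddress = "10.0.0.12:9866"; // what nodeReg.getAddress() might return
    String storedIp = "10.0.0.12";               // what nodeS.getIpAddr() might return

    System.out.println(storedIp.equals(registeredAddress));                  // false: port included
    System.out.println(storedIp.equals(registeredAddress.split(":", 2)[0])); // true: IP only
  }
}
{code}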





Issue Time Tracking
---

Worklog Id: (was: 757262)
Time Spent: 0.5h  (was: 20m)

> Data locality is lost when DataNode pod restarts in kubernetes 
> ---
>
> Key: HDFS-16540
> URL: https://issues.apache.org/jira/browse/HDFS-16540
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.3.2
>Reporter: Huaxiang Sun
>Assignee: Huaxiang Sun
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We have an HBase RegionServer and an HDFS DataNode running in one pod. When the pod 
> restarts, we found that data locality is lost after we do a major compaction 
> of HBase regions. After some debugging, we found that upon pod restart, its 
> IP changes. In DatanodeManager, maps like networktopology are updated with 
> the new info, but host2DatanodeMap is not updated accordingly. When an HDFS client 
> with the new IP tries to find a local DataNode, it fails. 
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16538) EC decoding failed due to not enough valid inputs

2022-04-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16538?focusedWorklogId=757118&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-757118
 ]

ASF GitHub Bot logged work on HDFS-16538:
-

Author: ASF GitHub Bot
Created on: 14/Apr/22 17:28
Start Date: 14/Apr/22 17:28
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4167:
URL: https://github.com/apache/hadoop/pull/4167#issuecomment-1099446513

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  0s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 57s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   6m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   6m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m  0s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   6m 40s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   6m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   6m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   6m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m  7s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 27s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 234m 43s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 389m 33s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4167/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4167 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux a7b6b3da85bb 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 22359a90c8e8cd1dce2291ba8b69ca0a25161872 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4167/3/testReport/ |
   | Max. process+thread count | 3058 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client 

[jira] [Commented] (HDFS-16507) [SBN read] Avoid purging edit log which is in progress

2022-04-14 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17522432#comment-17522432
 ] 

Erik Krogen commented on HDFS-16507:


Sounds good, thanks for that context.

> [SBN read] Avoid purging edit log which is in progress
> --
>
> Key: HDFS-16507
> URL: https://issues.apache.org/jira/browse/HDFS-16507
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: tomscut
>Assignee: tomscut
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.4
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> We introduced the [Standby Read] feature in branch-3.1.0, but found a FATAL 
> exception. It looks like it is purging an edit log which is still in progress.
> According to the analysis, I suspect that the in-progress editlog to be 
> purged (after the SNN checkpoint) is not finalized (see HDFS-14317) before the ANN 
> rolls its own edits. 
> The stack:
> {code:java}
> java.lang.Thread.getStackTrace(Thread.java:1552)
>     org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1032)
>     
> org.apache.hadoop.hdfs.server.namenode.FileJournalManager.purgeLogsOlderThan(FileJournalManager.java:185)
>     
> org.apache.hadoop.hdfs.server.namenode.JournalSet$5.apply(JournalSet.java:623)
>     
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:388)
>     
> org.apache.hadoop.hdfs.server.namenode.JournalSet.purgeLogsOlderThan(JournalSet.java:620)
>     
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.purgeLogsOlderThan(FSEditLog.java:1512)
> org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.purgeOldStorage(NNStorageRetentionManager.java:177)
>     
> org.apache.hadoop.hdfs.server.namenode.FSImage.purgeOldStorage(FSImage.java:1249)
>     
> org.apache.hadoop.hdfs.server.namenode.ImageServlet$2.run(ImageServlet.java:617)
>     
> org.apache.hadoop.hdfs.server.namenode.ImageServlet$2.run(ImageServlet.java:516)
>     java.security.AccessController.doPrivileged(Native Method)
>     javax.security.auth.Subject.doAs(Subject.java:422)
>     
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>     
> org.apache.hadoop.hdfs.server.namenode.ImageServlet.doPut(ImageServlet.java:515)
>     javax.servlet.http.HttpServlet.service(HttpServlet.java:710)
>     javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>     org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
>     
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
>     
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1604)
>     
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>     org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>     
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>     org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>     
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>     
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>     
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>     
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>     org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>     
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>     
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>     
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>     
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>     
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>     org.eclipse.jetty.server.Server.handle(Server.java:539)
>     org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
>     
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>     
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>     org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>     
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>     
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>     
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>     
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>     
> 

[jira] [Work logged] (HDFS-16509) Fix decommission UnsupportedOperationException: Remove unsupported

2022-04-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16509?focusedWorklogId=756967&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756967
 ]

ASF GitHub Bot logged work on HDFS-16509:
-

Author: ASF GitHub Bot
Created on: 14/Apr/22 13:43
Start Date: 14/Apr/22 13:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4172:
URL: https://github.com/apache/hadoop/pull/4172#issuecomment-1099201582

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   8m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m  3s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 52s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 11s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  spotbugs  |   3m  8s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  shadedclient  |  15m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   2m 47s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 190m 10s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4172/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 274m 15s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer |
   |   | hadoop.hdfs.TestReconstructStripedFileWithValidator |
   |   | hadoop.hdfs.server.namenode.TestFsck |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4172/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4172 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux ef95e6fb8f5a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / d36fd04c27cbfad77507d82a17faeb8795f50a84 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4172/1/testReport/ |
   | Max. process+thread count | 2810 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4172/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue Time Tracking
---

Worklog Id: (was: 756967)
Time Spent: 3h 40m  (was: 3.5h)

> Fix decommission UnsupportedOperationException: Remove unsupported
> --
>
> Key: HDFS-16509
> URL: https://issues.apache.org/jira/browse/HDFS-16509
> Project: Hadoop HDFS
>  Issue Type: Bug
>  

[jira] [Created] (HDFS-16541) Fix typo for NameNodeLayoutVersion.

2022-04-14 Thread ZhiWei Shi (Jira)
ZhiWei Shi created HDFS-16541:
-

 Summary: Fix typo for NameNodeLayoutVersion.
 Key: HDFS-16541
 URL: https://issues.apache.org/jira/browse/HDFS-16541
 Project: Hadoop HDFS
  Issue Type: Wish
Reporter: ZhiWei Shi


Fix typo for NameNodeLayoutVersion.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16422) Fix thread safety of EC decoding during concurrent preads

2022-04-14 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17522275#comment-17522275
 ] 

Steve Loughran commented on HDFS-16422:
---

this patch is going in to 3.3.3, for people who need it

> Fix thread safety of EC decoding during concurrent preads
> -
>
> Key: HDFS-16422
> URL: https://issues.apache.org/jira/browse/HDFS-16422
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfsclient, ec, erasure-coding
>Affects Versions: 3.3.0, 3.3.1
>Reporter: daimin
>Assignee: daimin
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.3, 3.3.4
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Reading data from an erasure-coded file with missing replicas (internal blocks of 
> a block group) will cause online reconstruction: the dataUnits part of the data is 
> read and decoded into the missing target data. Each DFSStripedInputStream 
> object has a RawErasureDecoder object, and when we do preads concurrently, 
> RawErasureDecoder.decode is invoked concurrently too. 
> RawErasureDecoder.decode is not thread safe, and as a result we occasionally get 
> wrong data from pread.
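
One generic way to remove this kind of race, sketched under the assumption of a shared stateful decoder (the actual HDFS-16422 patch may fix it differently, e.g. by not sharing the decoder): serialize calls to decode().

{code:java}
// Sketch only: serializing access to a shared, non-thread-safe decoder.
// The decoder type below is a simplified placeholder, not the real
// RawErasureDecoder API.
import java.nio.ByteBuffer;

class SynchronizedDecodeSketch {
  private final Object decodeLock = new Object();
  private final StatefulDecoder decoder = new StatefulDecoder();

  void decode(ByteBuffer[] inputs, int[] erasedIndexes, ByteBuffer[] outputs) {
    // Concurrent preads may call this from several threads; the lock ensures the
    // stateful decoder never runs two decode() calls at the same time.
    synchronized (decodeLock) {
      decoder.decode(inputs, erasedIndexes, outputs);
    }
  }

  /** Placeholder for a decoder whose decode() keeps internal state. */
  static class StatefulDecoder {
    void decode(ByteBuffer[] inputs, int[] erasedIndexes, ByteBuffer[] outputs) {
      // ... stateful decoding work ...
    }
  }
}
{code}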



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16538) EC decoding failed due to not enough valid inputs

2022-04-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16538?focusedWorklogId=756948&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756948
 ]

ASF GitHub Bot logged work on HDFS-16538:
-

Author: ASF GitHub Bot
Created on: 14/Apr/22 12:32
Start Date: 14/Apr/22 12:32
Worklog Time Spent: 10m 
  Work Description: liubingxing commented on PR #4167:
URL: https://github.com/apache/hadoop/pull/4167#issuecomment-1099137583

   @tasanuma Please take a look at this.




Issue Time Tracking
---

Worklog Id: (was: 756948)
Time Spent: 50m  (was: 40m)

>  EC decoding failed due to not enough valid inputs
> --
>
> Key: HDFS-16538
> URL: https://issues.apache.org/jira/browse/HDFS-16538
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: qinyuren
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently, we get this error if #StripeReader.readStripe() has more 
> than one failed block read.
> We use the EC policy ec(6+3) in our cluster.
> {code:java}
> Caused by: org.apache.hadoop.HadoopIllegalArgumentException: No enough valid 
> inputs are provided, not recoverable
>         at 
> org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.checkInputBuffers(ByteBufferDecodingState.java:119)
>         at 
> org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.<init>(ByteBufferDecodingState.java:47)
>         at 
> org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:86)
>         at 
> org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:170)
>         at 
> org.apache.hadoop.hdfs.StripeReader.decodeAndFillBuffer(StripeReader.java:462)
>         at 
> org.apache.hadoop.hdfs.StatefulStripeReader.decode(StatefulStripeReader.java:94)
>         at 
> org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:406)
>         at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:327)
>         at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:420)
>         at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:892)
>         at java.base/java.io.DataInputStream.read(DataInputStream.java:149)
>         at java.base/java.io.DataInputStream.read(DataInputStream.java:149) 
> {code}
>  
> {code:java}
> while (!futures.isEmpty()) {
>   try {
> StripingChunkReadResult r = StripedBlockUtil
> .getNextCompletedStripedRead(service, futures, 0);
> dfsStripedInputStream.updateReadStats(r.getReadStats());
> DFSClient.LOG.debug("Read task returned: {}, for stripe {}",
> r, alignedStripe);
> StripingChunk returnedChunk = alignedStripe.chunks[r.index];
> Preconditions.checkNotNull(returnedChunk);
> Preconditions.checkState(returnedChunk.state == StripingChunk.PENDING);
> if (r.state == StripingChunkReadResult.SUCCESSFUL) {
>   returnedChunk.state = StripingChunk.FETCHED;
>   alignedStripe.fetchedChunksNum++;
>   updateState4SuccessRead(r);
>   if (alignedStripe.fetchedChunksNum == dataBlkNum) {
> clearFutures();
> break;
>   }
> } else {
>   returnedChunk.state = StripingChunk.MISSING;
>   // close the corresponding reader
>   dfsStripedInputStream.closeReader(readerInfos[r.index]);
>   final int missing = alignedStripe.missingChunksNum;
>   alignedStripe.missingChunksNum++;
>   checkMissingBlocks();
>   readDataForDecoding();
>   readParityChunks(alignedStripe.missingChunksNum - missing);
> } {code}
> This error can be triggered by #StatefulStripeReader.decode.
> The reason is that:
>  # If more than one *data block* read fails, 
> #readDataForDecoding will be called multiple times;
>  # The *decodeInputs array* will be initialized repeatedly.
>  # The *parity* *data* in the *decodeInputs array* which was previously filled by 
> #readParityChunks will be set to null.
>  
>  
>  
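
A sketch of one plausible guard for the failure mode described above (illustrative only; the field and method names mirror the description, but the real StripeReader fix may differ): allocate the decode inputs once, so a repeated readDataForDecoding() call cannot wipe parity buffers that readParityChunks() already filled.

{code:java}
// Sketch only: a lazily-allocated, reused decode-input array.
class DecodeInputsSketch {
  private static final int DATA_BLK_NUM = 6;   // ec(6+3) data units
  private static final int PARITY_BLK_NUM = 3; // ec(6+3) parity units

  private Object[] decodeInputs; // stand-in for the real ECChunk[] buffers

  private Object[] ensureDecodeInputs() {
    if (decodeInputs == null) {
      // Allocate once; later calls reuse the same array instead of re-initializing it.
      decodeInputs = new Object[DATA_BLK_NUM + PARITY_BLK_NUM];
    }
    return decodeInputs;
  }

  void readDataForDecoding() {
    Object[] inputs = ensureDecodeInputs();
    for (int i = 0; i < DATA_BLK_NUM; i++) {
      if (inputs[i] == null) {
        inputs[i] = readChunk(i); // fill only data units that are still missing
      }
    }
  }

  void readParityChunks(int num) {
    Object[] inputs = ensureDecodeInputs();
    for (int i = DATA_BLK_NUM, filled = 0; i < inputs.length && filled < num; i++) {
      if (inputs[i] == null) {
        inputs[i] = readChunk(i); // parity units survive later readDataForDecoding() calls
        filled++;
      }
    }
  }

  private Object readChunk(int index) {
    return "chunk-" + index; // placeholder for the actual block read
  }
}
{code}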



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16514) Reduce the failover sleep time if multiple namenode are configured

2022-04-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16514?focusedWorklogId=756928&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756928
 ]

ASF GitHub Bot logged work on HDFS-16514:
-

Author: ASF GitHub Bot
Created on: 14/Apr/22 11:14
Start Date: 14/Apr/22 11:14
Worklog Time Spent: 10m 
  Work Description: liubingxing commented on code in PR #4088:
URL: https://github.com/apache/hadoop/pull/4088#discussion_r850340871


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryPolicies.java:
##
@@ -639,19 +647,24 @@ public FailoverOnNetworkExceptionRetry(RetryPolicy fallbackPolicy,
 
 public FailoverOnNetworkExceptionRetry(RetryPolicy fallbackPolicy,
     int maxFailovers, int maxRetries, long delayMillis, long maxDelayBase) {
+  this(fallbackPolicy, maxFailovers, maxRetries, delayMillis, maxDelayBase, 2);
+}
+public FailoverOnNetworkExceptionRetry(RetryPolicy fallbackPolicy,
+    int maxFailovers, int maxRetries, long delayMillis, long maxDelayBase, int nnSize) {
   this.fallbackPolicy = fallbackPolicy;
   this.maxFailovers = maxFailovers;
   this.maxRetries = maxRetries;
   this.delayMillis = delayMillis;
   this.maxDelayBase = maxDelayBase;
+  this.nnSize = nnSize;
 }
 
 /**
  * @return 0 if this is our first failover/retry (i.e., retry immediately),

Review Comment:
   @cndaimin Thanks for the review. I will add the comments later.





Issue Time Tracking
---

Worklog Id: (was: 756928)
Time Spent: 1h 10m  (was: 1h)

> Reduce the failover sleep time if multiple namenode are configured
> --
>
> Key: HDFS-16514
> URL: https://issues.apache.org/jira/browse/HDFS-16514
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: qinyuren
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-03-21-18-11-37-191.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Recently, we used the [Standby Read] feature in our test cluster, and 
> deployed 4 namenodes as follows:
> node1 -> active nn
> node2 -> standby nn
> node3 -> observer nn
> node4 -> observer nn
> If we set 'dfs.client.failover.random.order=true', the client may fail over 
> twice and wait a long time before it can send msync to the active namenode. 
> !image-2022-03-21-18-11-37-191.png|width=698,height=169!
> I think we can reduce the sleep time of the first several failovers based on 
> the number of namenodes.
> For example, if 4 namenodes are configured, the sleep time of the first three 
> failover operations is set to zero.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16539) RBF: Support refreshing/changing router fairness policy controller without rebooting router

2022-04-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16539?focusedWorklogId=756927&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756927
 ]

ASF GitHub Bot logged work on HDFS-16539:
-

Author: ASF GitHub Bot
Created on: 14/Apr/22 11:11
Start Date: 14/Apr/22 11:11
Worklog Time Spent: 10m 
  Work Description: kokonguyen191 commented on PR #4168:
URL: https://github.com/apache/hadoop/pull/4168#issuecomment-1099076819

   Don't think the failed install is related since it failed on yarn




Issue Time Tracking
---

Worklog Id: (was: 756927)
Time Spent: 40m  (was: 0.5h)

> RBF: Support refreshing/changing router fairness policy controller without 
> rebooting router
> ---
>
> Key: HDFS-16539
> URL: https://issues.apache.org/jira/browse/HDFS-16539
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Felix N
>Assignee: Felix N
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Add support for refreshing/changing router fairness policy controller without 
> the need to reboot a router.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16538) EC decoding failed due to not enough valid inputs

2022-04-14 Thread qinyuren (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

qinyuren updated HDFS-16538:

Description: 
Currently, we get this error if #StripeReader.readStripe() has more than 
one failed block read.

We use the EC policy ec(6+3) in our cluster.
{code:java}
Caused by: org.apache.hadoop.HadoopIllegalArgumentException: No enough valid 
inputs are provided, not recoverable
        at 
org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.checkInputBuffers(ByteBufferDecodingState.java:119)
        at 
org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.<init>(ByteBufferDecodingState.java:47)
        at 
org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:86)
        at 
org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:170)
        at 
org.apache.hadoop.hdfs.StripeReader.decodeAndFillBuffer(StripeReader.java:462)
        at 
org.apache.hadoop.hdfs.StatefulStripeReader.decode(StatefulStripeReader.java:94)
        at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:406)
        at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:327)
        at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:420)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:892)
        at java.base/java.io.DataInputStream.read(DataInputStream.java:149)
        at java.base/java.io.DataInputStream.read(DataInputStream.java:149) 
{code}
 
{code:java}
while (!futures.isEmpty()) {
  try {
StripingChunkReadResult r = StripedBlockUtil
.getNextCompletedStripedRead(service, futures, 0);
dfsStripedInputStream.updateReadStats(r.getReadStats());
DFSClient.LOG.debug("Read task returned: {}, for stripe {}",
r, alignedStripe);
StripingChunk returnedChunk = alignedStripe.chunks[r.index];
Preconditions.checkNotNull(returnedChunk);
Preconditions.checkState(returnedChunk.state == StripingChunk.PENDING);

if (r.state == StripingChunkReadResult.SUCCESSFUL) {
  returnedChunk.state = StripingChunk.FETCHED;
  alignedStripe.fetchedChunksNum++;
  updateState4SuccessRead(r);
  if (alignedStripe.fetchedChunksNum == dataBlkNum) {
clearFutures();
break;
  }
} else {
  returnedChunk.state = StripingChunk.MISSING;
  // close the corresponding reader
  dfsStripedInputStream.closeReader(readerInfos[r.index]);

  final int missing = alignedStripe.missingChunksNum;
  alignedStripe.missingChunksNum++;
  checkMissingBlocks();

  readDataForDecoding();
  readParityChunks(alignedStripe.missingChunksNum - missing);
} {code}
This error can be triggered by #StatefulStripeReader.decode.

The reason is that:
 # If more than one *data block* read fails, 
#readDataForDecoding will be called multiple times;
 # The *decodeInputs array* will be initialized repeatedly.
 # The *parity* *data* in the *decodeInputs array* which was previously filled by 
#readParityChunks will be set to null.

 

 

 

  was:
Currently, we found this error if the #StripeReader.readStripe() have more than 
one block read failed.

We use the EC policy ec(6+3) in our cluster.
{code:java}
Caused by: org.apache.hadoop.HadoopIllegalArgumentException: No enough valid 
inputs are provided, not recoverable
        at 
org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.checkInputBuffers(ByteBufferDecodingState.java:119)
        at 
org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.<init>(ByteBufferDecodingState.java:47)
        at 
org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:86)
        at 
org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:170)
        at 
org.apache.hadoop.hdfs.StripeReader.decodeAndFillBuffer(StripeReader.java:462)
        at 
org.apache.hadoop.hdfs.StatefulStripeReader.decode(StatefulStripeReader.java:94)
        at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:406)
        at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:327)
        at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:420)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:892)
        at java.base/java.io.DataInputStream.read(DataInputStream.java:149)
        at java.base/java.io.DataInputStream.read(DataInputStream.java:149) 
{code}
 
{code:java}
while (!futures.isEmpty()) {
  try {
StripingChunkReadResult r = StripedBlockUtil
.getNextCompletedStripedRead(service, futures, 0);
dfsStripedInputStream.updateReadStats(r.getReadStats());
DFSClient.LOG.debug("Read task returned: {}, for stripe {}",
r, alignedStripe);

[jira] [Updated] (HDFS-16538) EC decoding failed due to not enough valid inputs

2022-04-14 Thread qinyuren (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

qinyuren updated HDFS-16538:

Description: 
Currently, we get this error if #StripeReader.readStripe() has more than 
one failed block read.

We use the EC policy ec(6+3) in our cluster.
{code:java}
Caused by: org.apache.hadoop.HadoopIllegalArgumentException: No enough valid 
inputs are provided, not recoverable
        at 
org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.checkInputBuffers(ByteBufferDecodingState.java:119)
        at 
org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.<init>(ByteBufferDecodingState.java:47)
        at 
org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:86)
        at 
org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:170)
        at 
org.apache.hadoop.hdfs.StripeReader.decodeAndFillBuffer(StripeReader.java:462)
        at 
org.apache.hadoop.hdfs.StatefulStripeReader.decode(StatefulStripeReader.java:94)
        at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:406)
        at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:327)
        at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:420)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:892)
        at java.base/java.io.DataInputStream.read(DataInputStream.java:149)
        at java.base/java.io.DataInputStream.read(DataInputStream.java:149) 
{code}
 
{code:java}
while (!futures.isEmpty()) {
  try {
StripingChunkReadResult r = StripedBlockUtil
.getNextCompletedStripedRead(service, futures, 0);
dfsStripedInputStream.updateReadStats(r.getReadStats());
DFSClient.LOG.debug("Read task returned: {}, for stripe {}",
r, alignedStripe);
StripingChunk returnedChunk = alignedStripe.chunks[r.index];
Preconditions.checkNotNull(returnedChunk);
Preconditions.checkState(returnedChunk.state == StripingChunk.PENDING);

if (r.state == StripingChunkReadResult.SUCCESSFUL) {
  returnedChunk.state = StripingChunk.FETCHED;
  alignedStripe.fetchedChunksNum++;
  updateState4SuccessRead(r);
  if (alignedStripe.fetchedChunksNum == dataBlkNum) {
clearFutures();
break;
  }
} else {
  returnedChunk.state = StripingChunk.MISSING;
  // close the corresponding reader
  dfsStripedInputStream.closeReader(readerInfos[r.index]);

  final int missing = alignedStripe.missingChunksNum;
  alignedStripe.missingChunksNum++;
  checkMissingBlocks();

  readDataForDecoding();
  readParityChunks(alignedStripe.missingChunksNum - missing);
} {code}
This error can be triggered by #StatefulStripeReader.decode.

The reason is that:

1. If more than one *data block* read fails, 
#readDataForDecoding will be called multiple times;

The *decodeInputs array* will be initialized repeatedly, and the *parity* 
*data* in the *decodeInputs array* which was filled by #readParityChunks will be set to 
null.

 

 

 

  was:
Currently, we found this error if the #StripeReader.readStripe() have more than 
one block read failed.

We use the EC policy ec(6+3) in our cluster.
{code:java}
Caused by: org.apache.hadoop.HadoopIllegalArgumentException: No enough valid 
inputs are provided, not recoverable
        at 
org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.checkInputBuffers(ByteBufferDecodingState.java:119)
        at 
org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.<init>(ByteBufferDecodingState.java:47)
        at 
org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:86)
        at 
org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:170)
        at 
org.apache.hadoop.hdfs.StripeReader.decodeAndFillBuffer(StripeReader.java:462)
        at 
org.apache.hadoop.hdfs.StatefulStripeReader.decode(StatefulStripeReader.java:94)
        at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:406)
        at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:327)
        at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:420)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:892)
        at java.base/java.io.DataInputStream.read(DataInputStream.java:149)
        at java.base/java.io.DataInputStream.read(DataInputStream.java:149) 
{code}
 

 
{code:java}
while (!futures.isEmpty()) {
  try {
StripingChunkReadResult r = StripedBlockUtil
.getNextCompletedStripedRead(service, futures, 0);
dfsStripedInputStream.updateReadStats(r.getReadStats());
DFSClient.LOG.debug("Read task returned: {}, for stripe {}",
r, alignedStripe);
StripingChunk 

[jira] [Updated] (HDFS-16456) EC: Decommission a rack with only one dn will fail when the rack number is equal with replication

2022-04-14 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-16456:

Fix Version/s: 3.4.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> EC: Decommission a rack with only one dn will fail when the rack number is 
> equal with replication
> 
>
> Key: HDFS-16456
> URL: https://issues.apache.org/jira/browse/HDFS-16456
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, namenode
>Affects Versions: 3.4.0
>Reporter: caozhiqiang
>Assignee: caozhiqiang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HDFS-16456.001.patch, HDFS-16456.002.patch, 
> HDFS-16456.003.patch, HDFS-16456.004.patch, HDFS-16456.005.patch, 
> HDFS-16456.006.patch, HDFS-16456.007.patch, HDFS-16456.008.patch, 
> HDFS-16456.009.patch, HDFS-16456.010.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In the below scenario, decommission will fail with the TOO_MANY_NODES_ON_RACK reason:
>  # Enable an EC policy, such as RS-6-3-1024k.
>  # The rack number in this cluster is equal to or less than the replication 
> number (9).
>  # A rack only has one DN, and we decommission this DN.
> The root cause is in the 
> BlockPlacementPolicyRackFaultTolerant::getMaxNodesPerRack() function: it 
> computes a limit parameter maxNodesPerRack for choosing targets. In this scenario, 
> maxNodesPerRack is 1, which means only one datanode can be chosen from each 
> rack.
> {code:java}
>   protected int[] getMaxNodesPerRack(int numOfChosen, int numOfReplicas) {
>...
>     // If more replicas than racks, evenly spread the replicas.
>     // This calculation rounds up.
>     int maxNodesPerRack = (totalNumOfReplicas - 1) / numOfRacks + 1;
> return new int[] {numOfReplicas, maxNodesPerRack};
>   } {code}
> int maxNodesPerRack = (totalNumOfReplicas - 1) / numOfRacks + 1;
> is evaluated here with totalNumOfReplicas=9 and numOfRacks=9.
> When we decommission a DN which is the only node in its rack, 
> chooseOnce() in BlockPlacementPolicyRackFaultTolerant::chooseTargetInOrder() 
> will throw NotEnoughReplicasException, but the exception is not caught, so it 
> fails to fall back to the chooseEvenlyFromRemainingRacks() function.
> During decommission, after choosing targets, the verifyBlockPlacement() function 
> returns a total rack number that still contains the invalid rack, so 
> BlockPlacementStatusDefault::isPlacementPolicySatisfied() returns false, 
> which also causes the decommission to fail.
> {code:java}
>   public BlockPlacementStatus verifyBlockPlacement(DatanodeInfo[] locs,
>       int numberOfReplicas) {
>     if (locs == null)
>       locs = DatanodeDescriptor.EMPTY_ARRAY;
>     if (!clusterMap.hasClusterEverBeenMultiRack()) {
>       // only one rack
>       return new BlockPlacementStatusDefault(1, 1, 1);
>     }
>     // Count locations on different racks.
>     Set<String> racks = new HashSet<>();
>     for (DatanodeInfo dn : locs) {
>       racks.add(dn.getNetworkLocation());
>     }
>     return new BlockPlacementStatusDefault(racks.size(), numberOfReplicas,
>         clusterMap.getNumOfRacks());
>   } {code}
> {code:java}
>   public boolean isPlacementPolicySatisfied() {
>     return requiredRacks <= currentRacks || currentRacks >= totalRacks;
>   }{code}
> According to the above description, we should make the following modifications to fix it:
>  # In startDecommission() or stopDecommission(), we should also update 
> numOfRacks in class NetworkTopology. Otherwise choosing targets may fail because 
> maxNodesPerRack is too small, and even if choosing targets succeeds, 
> isPlacementPolicySatisfied will still return false and cause the decommission to fail.
>  # In BlockPlacementPolicyRackFaultTolerant::chooseTargetInOrder(), the first 
> chooseOnce() call should also be put in try..catch..., or it will not 
> fall back to chooseEvenlyFromRemainingRacks() when an exception is thrown.
>  # In verifyBlockPlacement, we need to remove invalid racks from the total 
> numOfRacks, or isPlacementPolicySatisfied() will return false and cause data 
> reconstruction to fail.
>  
>  
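
Plugging the scenario's numbers into the snippets quoted above (a worked illustration; the currentRacks value after decommission is an assumption for the example):

{code:java}
// Worked illustration of the quoted formulas; not new Hadoop code.
public class RackPlacementMath {
  public static void main(String[] args) {
    int totalNumOfReplicas = 9; // RS-6-3-1024k: 6 data + 3 parity
    int numOfRacks = 9;         // rack count equals the replication number

    // getMaxNodesPerRack(): (9 - 1) / 9 + 1 = 1, so only one target per rack.
    int maxNodesPerRack = (totalNumOfReplicas - 1) / numOfRacks + 1;
    System.out.println("maxNodesPerRack = " + maxNodesPerRack); // 1

    // After decommissioning the only DN on one rack, replicas can span at most
    // 8 racks (assumed), while the topology still reports 9 racks in total.
    int currentRacks = 8;   // racks.size() in verifyBlockPlacement (assumption)
    int requiredRacks = 9;  // numberOfReplicas passed to BlockPlacementStatusDefault
    int totalRacks = 9;     // clusterMap.getNumOfRacks() still counts the empty rack

    boolean satisfied = requiredRacks <= currentRacks || currentRacks >= totalRacks;
    System.out.println("isPlacementPolicySatisfied = " + satisfied); // false -> decommission fails
  }
}
{code}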



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16456) EC: Decommission a rack with only one dn will fail when the rack number is equal with replication

2022-04-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16456?focusedWorklogId=756899&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756899
 ]

ASF GitHub Bot logged work on HDFS-16456:
-

Author: ASF GitHub Bot
Created on: 14/Apr/22 09:43
Start Date: 14/Apr/22 09:43
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on PR #4126:
URL: https://github.com/apache/hadoop/pull/4126#issuecomment-1098943763

   Merged. Thanks for your contribution, @lfxy!




Issue Time Tracking
---

Worklog Id: (was: 756899)
Time Spent: 2h  (was: 1h 50m)

> EC: Decommission a rack with only one dn will fail when the rack number is 
> equal with replication
> 
>
> Key: HDFS-16456
> URL: https://issues.apache.org/jira/browse/HDFS-16456
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, namenode
>Affects Versions: 3.4.0
>Reporter: caozhiqiang
>Assignee: caozhiqiang
>Priority: Critical
>  Labels: pull-request-available
> Attachments: HDFS-16456.001.patch, HDFS-16456.002.patch, 
> HDFS-16456.003.patch, HDFS-16456.004.patch, HDFS-16456.005.patch, 
> HDFS-16456.006.patch, HDFS-16456.007.patch, HDFS-16456.008.patch, 
> HDFS-16456.009.patch, HDFS-16456.010.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In the below scenario, decommission will fail with the TOO_MANY_NODES_ON_RACK reason:
>  # Enable an EC policy, such as RS-6-3-1024k.
>  # The rack number in this cluster is equal to or less than the replication 
> number (9).
>  # A rack only has one DN, and we decommission this DN.
> The root cause is in the 
> BlockPlacementPolicyRackFaultTolerant::getMaxNodesPerRack() function: it 
> computes a limit parameter maxNodesPerRack for choosing targets. In this scenario, 
> maxNodesPerRack is 1, which means only one datanode can be chosen from each 
> rack.
> {code:java}
>   protected int[] getMaxNodesPerRack(int numOfChosen, int numOfReplicas) {
>...
>     // If more replicas than racks, evenly spread the replicas.
>     // This calculation rounds up.
>     int maxNodesPerRack = (totalNumOfReplicas - 1) / numOfRacks + 1;
> return new int[] {numOfReplicas, maxNodesPerRack};
>   } {code}
> int maxNodesPerRack = (totalNumOfReplicas - 1) / numOfRacks + 1;
> is evaluated here with totalNumOfReplicas=9 and numOfRacks=9.
> When we decommission a DN which is the only node in its rack, 
> chooseOnce() in BlockPlacementPolicyRackFaultTolerant::chooseTargetInOrder() 
> will throw NotEnoughReplicasException, but the exception is not caught, so it 
> fails to fall back to the chooseEvenlyFromRemainingRacks() function.
> During decommission, after choosing targets, the verifyBlockPlacement() function 
> returns a total rack number that still contains the invalid rack, so 
> BlockPlacementStatusDefault::isPlacementPolicySatisfied() returns false, 
> which also causes the decommission to fail.
> {code:java}
>   public BlockPlacementStatus verifyBlockPlacement(DatanodeInfo[] locs,
>       int numberOfReplicas) {
>     if (locs == null)
>       locs = DatanodeDescriptor.EMPTY_ARRAY;
>     if (!clusterMap.hasClusterEverBeenMultiRack()) {
>       // only one rack
>       return new BlockPlacementStatusDefault(1, 1, 1);
>     }
>     // Count locations on different racks.
>     Set<String> racks = new HashSet<>();
>     for (DatanodeInfo dn : locs) {
>       racks.add(dn.getNetworkLocation());
>     }
>     return new BlockPlacementStatusDefault(racks.size(), numberOfReplicas,
>         clusterMap.getNumOfRacks());
>   } {code}
> {code:java}
>   public boolean isPlacementPolicySatisfied() {
>     return requiredRacks <= currentRacks || currentRacks >= totalRacks;
>   }{code}
> According to the above description, we should make the following modifications to fix it:
>  # In startDecommission() or stopDecommission(), we should also update 
> numOfRacks in class NetworkTopology. Otherwise choosing targets may fail because 
> maxNodesPerRack is too small, and even if choosing targets succeeds, 
> isPlacementPolicySatisfied will still return false and cause the decommission to fail.
>  # In BlockPlacementPolicyRackFaultTolerant::chooseTargetInOrder(), the first 
> chooseOnce() call should also be put in try..catch..., or it will not 
> fall back to chooseEvenlyFromRemainingRacks() when an exception is thrown.
>  # In verifyBlockPlacement, we need to remove invalid racks from the total 
> numOfRacks, or isPlacementPolicySatisfied() will return false and cause data 
> reconstruction to fail.
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[jira] [Work logged] (HDFS-16456) EC: Decommission a rack with only one DN will fail when the rack number is equal to the replication number

2022-04-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16456?focusedWorklogId=756897=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756897
 ]

ASF GitHub Bot logged work on HDFS-16456:
-

Author: ASF GitHub Bot
Created on: 14/Apr/22 09:42
Start Date: 14/Apr/22 09:42
Worklog Time Spent: 10m 
  Work Description: tasanuma merged PR #4126:
URL: https://github.com/apache/hadoop/pull/4126




Issue Time Tracking
---

Worklog Id: (was: 756897)
Time Spent: 1h 50m  (was: 1h 40m)

> EC: Decommission a rack with only one DN will fail when the rack number is 
> equal to the replication number
> 
>
> Key: HDFS-16456
> URL: https://issues.apache.org/jira/browse/HDFS-16456
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, namenode
>Affects Versions: 3.4.0
>Reporter: caozhiqiang
>Assignee: caozhiqiang
>Priority: Critical
>  Labels: pull-request-available
> Attachments: HDFS-16456.001.patch, HDFS-16456.002.patch, 
> HDFS-16456.003.patch, HDFS-16456.004.patch, HDFS-16456.005.patch, 
> HDFS-16456.006.patch, HDFS-16456.007.patch, HDFS-16456.008.patch, 
> HDFS-16456.009.patch, HDFS-16456.010.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> In the scenario below, decommission will fail with the TOO_MANY_NODES_ON_RACK 
> reason:
>  # Enable an EC policy, such as RS-6-3-1024k.
>  # The number of racks in the cluster is equal to or less than the 
> replication number (9).
>  # A rack has only one DN, and that DN is being decommissioned.
> The root cause is in the 
> BlockPlacementPolicyRackFaultTolerant::getMaxNodesPerRack() function, which 
> computes the limit parameter maxNodesPerRack used when choosing targets. In 
> this scenario, maxNodesPerRack is 1, which means only one datanode can be 
> chosen per rack.
> {code:java}
>   protected int[] getMaxNodesPerRack(int numOfChosen, int numOfReplicas) {
>...
>     // If more replicas than racks, evenly spread the replicas.
>     // This calculation rounds up.
>     int maxNodesPerRack = (totalNumOfReplicas - 1) / numOfRacks + 1;
>     return new int[] {numOfReplicas, maxNodesPerRack};
>   } {code}
> Here, int maxNodesPerRack = (totalNumOfReplicas - 1) / numOfRacks + 1; is 
> evaluated with totalNumOfReplicas=9 and numOfRacks=9, so integer division 
> gives (9 - 1) / 9 + 1 = 1.
> When we decommission a DN that is the only node in its rack, chooseOnce() in 
> BlockPlacementPolicyRackFaultTolerant::chooseTargetInOrder() throws 
> NotEnoughReplicasException, but the exception is not caught, so the code 
> fails to fall back to the chooseEvenlyFromRemainingRacks() function.
> During decommission, after targets are chosen, the verifyBlockPlacement() 
> function returns a total rack count that still includes the invalid rack, so 
> BlockPlacementStatusDefault::isPlacementPolicySatisfied() returns false, 
> which also causes the decommission to fail (a short illustration of this 
> check follows the quoted description below).
> {code:java}
>   public BlockPlacementStatus verifyBlockPlacement(DatanodeInfo[] locs,
>       int numberOfReplicas) {
>     if (locs == null)
>       locs = DatanodeDescriptor.EMPTY_ARRAY;
>     if (!clusterMap.hasClusterEverBeenMultiRack()) {
>       // only one rack
>       return new BlockPlacementStatusDefault(1, 1, 1);
>     }
>     // Count locations on different racks.
>     Set<String> racks = new HashSet<>();
>     for (DatanodeInfo dn : locs) {
>       racks.add(dn.getNetworkLocation());
>     }
>     return new BlockPlacementStatusDefault(racks.size(), numberOfReplicas,
>         clusterMap.getNumOfRacks());
>   } {code}
> {code:java}
>   public boolean isPlacementPolicySatisfied() {
>     return requiredRacks <= currentRacks || currentRacks >= totalRacks;
>   }{code}
> Based on the description above, the following changes should fix it:
>  # In startDecommission() or stopDecommission(), we should also update 
> numOfRacks in the NetworkTopology class. Otherwise choosing targets may fail 
> because maxNodesPerRack is too small, and even if choosing targets succeeds, 
> isPlacementPolicySatisfied() will still return false and the decommission 
> will fail.
>  # In BlockPlacementPolicyRackFaultTolerant::chooseTargetInOrder(), the first 
> chooseOnce() call should also be wrapped in try..catch; otherwise it will not 
> fall back to chooseEvenlyFromRemainingRacks() when an exception is thrown.
>  # In verifyBlockPlacement(), we need to remove invalid racks from the total 
> numOfRacks; otherwise isPlacementPolicySatisfied() will return false and data 
> reconstruction will fail.
>  
>  
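As a rough illustration of the placement check quoted above, the sketch below 
restates the isPlacementPolicySatisfied() expression with hypothetical numbers 
that mirror this scenario; the argument mapping (current racks, required racks, 
total racks) is inferred from the quoted constructor call and is an assumption, 
not a statement about the committed fix:

{code:java}
// Illustration only; the values and argument mapping are assumptions that
// mirror the scenario in the description, not Hadoop internals.
public class PlacementCheckExample {
  static boolean isPlacementPolicySatisfied(int currentRacks,
                                            int requiredRacks,
                                            int totalRacks) {
    // Same boolean expression as the quoted isPlacementPolicySatisfied().
    return requiredRacks <= currentRacks || currentRacks >= totalRacks;
  }

  public static void main(String[] args) {
    // 8 racks still hold replicas, but the topology still reports 9 racks
    // because the decommissioning rack was not excluded.
    System.out.println(isPlacementPolicySatisfied(8, 9, 9)); // false
    // If the invalid rack were removed from the totals, the same 8 racks
    // would satisfy the check.
    System.out.println(isPlacementPolicySatisfied(8, 8, 8)); // true
  }
}
{code}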



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16519) Add throttler to EC reconstruction

2022-04-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16519?focusedWorklogId=756884=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756884
 ]

ASF GitHub Bot logged work on HDFS-16519:
-

Author: ASF GitHub Bot
Created on: 14/Apr/22 09:17
Start Date: 14/Apr/22 09:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4101:
URL: https://github.com/apache/hadoop/pull/4101#issuecomment-1098910667

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 43s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 59s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 32s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 31s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 227m 32s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4101/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 334m 20s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.tools.TestHdfsConfigFields |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4101/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4101 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 69679b31d92a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7285e95af89b8ea5892cbf9234e4d2ec32452e5c |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 

[jira] [Work logged] (HDFS-16509) Fix decommission UnsupportedOperationException: Remove unsupported

2022-04-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16509?focusedWorklogId=756882=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756882
 ]

ASF GitHub Bot logged work on HDFS-16509:
-

Author: ASF GitHub Bot
Created on: 14/Apr/22 09:12
Start Date: 14/Apr/22 09:12
Worklog Time Spent: 10m 
  Work Description: cndaimin commented on PR #4077:
URL: https://github.com/apache/hadoop/pull/4077#issuecomment-1098904092

   @jojochuang I have submitted a new PR here: [Backport HDFS-16509 to branch 
branch-3.2](https://github.com/apache/hadoop/pull/4172) to resolve the 
conflicts. Please take a look, thanks!




Issue Time Tracking
---

Worklog Id: (was: 756882)
Time Spent: 3.5h  (was: 3h 20m)

> Fix decommission UnsupportedOperationException: Remove unsupported
> --
>
> Key: HDFS-16509
> URL: https://issues.apache.org/jira/browse/HDFS-16509
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.3.1, 3.3.2
>Reporter: daimin
>Assignee: daimin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> We encountered an "UnsupportedOperationException: Remove unsupported" error 
> when some datanodes were in decommission. The reason for the exception is 
> that datanode.getBlockIterator() returns an Iterator that does not support 
> remove(), yet DatanodeAdminDefaultMonitor#processBlocksInternal invokes 
> it.remove() when a block is not found, e.g. when the file containing the 
> block has been deleted (a generic reproduction of this failure mode follows 
> the quoted description below).
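For readers unfamiliar with this failure mode, the snippet below is a generic 
Java reproduction that uses no HDFS classes: calling remove() on an iterator 
whose backing collection does not support removal throws 
UnsupportedOperationException.

{code:java}
import java.util.Arrays;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;

// Generic reproduction of the failure mode described above; no HDFS classes.
public class IteratorRemoveExample {
  public static void main(String[] args) {
    // An unmodifiable list's iterator does not support remove(), much like
    // the iterator described in the issue.
    List<String> blocks =
        Collections.unmodifiableList(Arrays.asList("blk_1", "blk_2", "blk_3"));
    Iterator<String> it = blocks.iterator();
    while (it.hasNext()) {
      if ("blk_2".equals(it.next())) {
        it.remove(); // throws java.lang.UnsupportedOperationException
      }
    }
  }
}
{code}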



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16509) Fix decommission UnsupportedOperationException: Remove unsupported

2022-04-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16509?focusedWorklogId=756880=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756880
 ]

ASF GitHub Bot logged work on HDFS-16509:
-

Author: ASF GitHub Bot
Created on: 14/Apr/22 09:08
Start Date: 14/Apr/22 09:08
Worklog Time Spent: 10m 
  Work Description: cndaimin opened a new pull request, #4172:
URL: https://github.com/apache/hadoop/pull/4172

   Fix cherry-pick conflicts. 
   Tested by `TestDecommission#testDecommissionWithUnknownBlock` and `mvn clean 
install -DskipTests`




Issue Time Tracking
---

Worklog Id: (was: 756880)
Time Spent: 3h 20m  (was: 3h 10m)

> Fix decommission UnsupportedOperationException: Remove unsupported
> --
>
> Key: HDFS-16509
> URL: https://issues.apache.org/jira/browse/HDFS-16509
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.3.1, 3.3.2
>Reporter: daimin
>Assignee: daimin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> We encountered an "UnsupportedOperationException: Remove unsupported" error 
> when some datanodes were in decommission. The reason for the exception is 
> that datanode.getBlockIterator() returns an Iterator that does not support 
> remove(), yet DatanodeAdminDefaultMonitor#processBlocksInternal invokes 
> it.remove() when a block is not found, e.g. when the file containing the 
> block has been deleted (a generic pattern for avoiding this class of error 
> is sketched after the quoted description below).
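Purely as an illustration, and not as a description of the committed fix, one 
generic way to avoid this class of error is to collect stale entries during 
iteration and remove them afterwards through an API that does support removal; 
isFileDeleted below is a hypothetical stand-in for the "file containing the 
block is deleted" condition.

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustration only: a generic "collect, then remove afterwards" pattern.
// This is not a description of the committed HDFS fix.
public class DeferredRemovalExample {
  // Hypothetical stand-in for "the file containing the block was deleted".
  private static boolean isFileDeleted(String block) {
    return "blk_2".equals(block);
  }

  public static void main(String[] args) {
    List<String> pendingBlocks =
        new ArrayList<>(Arrays.asList("blk_1", "blk_2", "blk_3"));
    List<String> stale = new ArrayList<>();

    // Iterate read-only and remember which entries should go away.
    for (String block : pendingBlocks) {
      if (isFileDeleted(block)) {
        stale.add(block);
      }
    }
    // Remove afterwards via the collection itself, not the iterator.
    pendingBlocks.removeAll(stale);
    System.out.println(pendingBlocks); // [blk_1, blk_3]
  }
}
{code}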



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16539) RBF: Support refreshing/changing router fairness policy controller without rebooting router

2022-04-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16539?focusedWorklogId=756870=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756870
 ]

ASF GitHub Bot logged work on HDFS-16539:
-

Author: ASF GitHub Bot
Created on: 14/Apr/22 08:20
Start Date: 14/Apr/22 08:20
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4168:
URL: https://github.com/apache/hadoop/pull/4168#issuecomment-1098839448

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 43s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  14m  6s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4168/2/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   1m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 59s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 31s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  21m  4s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 101m 42s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4168/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4168 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux d3b3d0889d6e 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 
10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c6694d92bf1e06d218f1a8ef78020bc3d408706f |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4168/2/testReport/ |
   | Max. process+thread count | 2185 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4168/2/console |
   

[jira] [Updated] (HDFS-16509) Fix decommission UnsupportedOperationException: Remove unsupported

2022-04-14 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-16509:
---
Fix Version/s: 3.3.4

> Fix decommission UnsupportedOperationException: Remove unsupported
> --
>
> Key: HDFS-16509
> URL: https://issues.apache.org/jira/browse/HDFS-16509
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.3.1, 3.3.2
>Reporter: daimin
>Assignee: daimin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> We encountered an "UnsupportedOperationException: Remove unsupported" error 
> when some datanodes were in decommission. The reason for the exception is 
> that datanode.getBlockIterator() returns an Iterator that does not support 
> remove(), yet DatanodeAdminDefaultMonitor#processBlocksInternal invokes 
> it.remove() when a block is not found, e.g. when the file containing the 
> block has been deleted.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16509) Fix decommission UnsupportedOperationException: Remove unsupported

2022-04-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16509?focusedWorklogId=756869=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756869
 ]

ASF GitHub Bot logged work on HDFS-16509:
-

Author: ASF GitHub Bot
Created on: 14/Apr/22 08:18
Start Date: 14/Apr/22 08:18
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on PR #4077:
URL: https://github.com/apache/hadoop/pull/4077#issuecomment-1098838363

   Cherry-picked cleanly and pushed to branch-3.3. 
   The patch doesn't apply in branch-3.2, though (due to the renaming of the 
decommission manager class), and will require a new PR to resolve code conflicts.




Issue Time Tracking
---

Worklog Id: (was: 756869)
Time Spent: 3h 10m  (was: 3h)

> Fix decommission UnsupportedOperationException: Remove unsupported
> --
>
> Key: HDFS-16509
> URL: https://issues.apache.org/jira/browse/HDFS-16509
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.3.1, 3.3.2
>Reporter: daimin
>Assignee: daimin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> We encountered an "UnsupportedOperationException: Remove unsupported" error 
> when some datanodes were in decommission. The reason for the exception is 
> that datanode.getBlockIterator() returns an Iterator that does not support 
> remove(), yet DatanodeAdminDefaultMonitor#processBlocksInternal invokes 
> it.remove() when a block is not found, e.g. when the file containing the 
> block has been deleted.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16535) SlotReleaser should reuse the domain socket based on socket paths

2022-04-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16535?focusedWorklogId=756862=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756862
 ]

ASF GitHub Bot logged work on HDFS-16535:
-

Author: ASF GitHub Bot
Created on: 14/Apr/22 07:55
Start Date: 14/Apr/22 07:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4158:
URL: https://github.com/apache/hadoop/pull/4158#issuecomment-1098816342

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 10s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   6m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   5m 46s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   6m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 42s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 59s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   5m 59s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   5m 49s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m  8s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m 50s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 23s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 233m 13s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 374m 32s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4158/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4158 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 07f572f61ba0 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 
10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 02bc8ad3ff4d4b2696217d22d764f0a79fe96e0b |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4158/2/testReport/ |
   | Max. process+thread count | 3097 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client