[jira] [Work logged] (HDFS-16590) Fix Junit Test Deprecated assertThat

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16590?focusedWorklogId=775691&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775691
 ]

ASF GitHub Bot logged work on HDFS-16590:
-

Author: ASF GitHub Bot
Created on: 29/May/22 04:53
Start Date: 29/May/22 04:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4349:
URL: https://github.com/apache/hadoop/pull/4349#issuecomment-1140377089

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 58s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 60 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 53s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 50s |  |  trunk passed  |
   | -1 :x: |  compile  |   4m  6s | 
[/branch-compile-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4349/5/artifact/out/branch-compile-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  root in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  compile  |   3m 36s | 
[/branch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4349/5/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  root in trunk failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | +1 :green_heart: |  checkstyle  |   3m 57s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  11m 54s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   9m 57s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   9m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +0 :ok: |  spotbugs  |   0m 56s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  20m 50s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   8m  9s |  |  the patch passed  |
   | -1 :x: |  compile  |   3m 51s | 
[/patch-compile-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4349/5/artifact/out/patch-compile-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  root in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  javac  |   3m 51s | 
[/patch-compile-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4349/5/artifact/out/patch-compile-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  root in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  compile  |   3m 21s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4349/5/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | -1 :x: |  javac  |   3m 21s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4349/5/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 38s |  |  root: The patch generated 
0 new + 688 unchanged - 2 fixed = 688 total (was 690)  |
   | +1 :green_heart: |  mvnsite  |   9m 44s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   7m 41s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   7m 33s 

[jira] [Work logged] (HDFS-16463) Make dirent cross platform compatible

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16463?focusedWorklogId=775690&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775690
 ]

ASF GitHub Bot logged work on HDFS-16463:
-

Author: ASF GitHub Bot
Created on: 29/May/22 04:38
Start Date: 29/May/22 04:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4370:
URL: https://github.com/apache/hadoop/pull/4370#issuecomment-1140376017

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 12s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 13s |  |  trunk passed  |
   | -1 :x: |  compile  |   0m 36s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/6/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in trunk failed.  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  64m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 21s |  |  the patch passed  |
   | -1 :x: |  compile  |   0m 23s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/6/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in the patch failed.  |
   | -1 :x: |  cc  |   0m 23s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/6/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in the patch failed.  |
   | -1 :x: |  golang  |   0m 23s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/6/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in the patch failed.  |
   | -1 :x: |  javac  |   0m 23s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/6/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in the patch failed.  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/6/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 33s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  93m  2s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4370 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit 
codespell detsecrets golang |
   | uname | Linux 3d85363852f9 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 64920302e60d4d3f0f1de2b32ee5ec74ae082ef9 |
   | Default Java | Red Hat, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/6/testReport/ |
   | Max. process+thread count | 606 (vs. ulimit of 5500) |
   | modules | C: 

[jira] [Work logged] (HDFS-16595) Slow peer metrics - add median, mad and upper latency limits

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16595?focusedWorklogId=775689&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775689
 ]

ASF GitHub Bot logged work on HDFS-16595:
-

Author: ASF GitHub Bot
Created on: 29/May/22 04:37
Start Date: 29/May/22 04:37
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on code in PR #4357:
URL: https://github.com/apache/hadoop/pull/4357#discussion_r884209200


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestSlowPeerTracker.java:
##
@@ -79,9 +79,12 @@ public void testEmptyReports() {
 
   @Test
   public void testReportsAreRetrieved() {
-tracker.addReport("node2", "node1", 1.2);
-tracker.addReport("node3", "node1", 2.1);
-tracker.addReport("node3", "node2", 1.22);
+OutlierMetrics outlierMetrics1 = new OutlierMetrics(0.0, 0.0, 0.0, 1.2);

Review Comment:
   Got it, addressed this change in the latest commit, thanks @tomscut 



##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestSlowPeerTracker.java:
##
@@ -161,17 +176,28 @@ public void testGetJson() throws IOException {
 
   @Test
   public void testGetJsonSizeIsLimited() throws IOException {
-tracker.addReport("node1", "node2", 1.634);
-tracker.addReport("node1", "node3", 2.3566);
-tracker.addReport("node2", "node3", 3.869);
-tracker.addReport("node2", "node4", 4.1356);
-tracker.addReport("node3", "node4", 1.73057);
-tracker.addReport("node3", "node5", 2.4956730);
-tracker.addReport("node4", "node6", 3.29847);
-tracker.addReport("node5", "node6", 4.13444);
-tracker.addReport("node5", "node7", 5.10845);
-tracker.addReport("node6", "node8", 2.37464);
-tracker.addReport("node6", "node7", 1.29475656);
+OutlierMetrics outlierMetrics1 = new OutlierMetrics(0.0, 0.0, 0.0, 1.634);

Review Comment:
   Done, thanks





Issue Time Tracking
---

Worklog Id: (was: 775689)
Time Spent: 1h 40m  (was: 1.5h)

> Slow peer metrics - add median, mad and upper latency limits
> 
>
> Key: HDFS-16595
> URL: https://issues.apache.org/jira/browse/HDFS-16595
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Slow datanode metrics include the slow node and its reporting node details. With 
> HDFS-16582, we added the aggregate latency as perceived by the reporting 
> nodes.
> In order to get more insight into how the outlier slow node's latencies 
> differ from those of the rest of the nodes, we should also expose the median, 
> median absolute deviation, and the calculated upper latency limit.
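
For intuition, here is a minimal, self-contained sketch of how a median, a 
median absolute deviation (MAD), and an upper latency limit can be computed 
from reported latencies. This is an illustration only, not the actual HDFS 
implementation; the 1.4826 normal-consistency constant and the 3x multiplier 
are assumed defaults.
{code:java}
import java.util.Arrays;

final class OutlierStatsSketch {

  /**
   * Returns {median, mad, upperLimit} for the given latencies, where
   * upperLimit = median + 3 * MAD is one common outlier cutoff.
   */
  static double[] outlierStats(double[] latencies) {
    double median = median(latencies);
    double[] deviations = new double[latencies.length];
    for (int i = 0; i < latencies.length; i++) {
      deviations[i] = Math.abs(latencies[i] - median);
    }
    // 1.4826 makes the MAD comparable to a standard deviation
    // under a normal distribution.
    double mad = 1.4826 * median(deviations);
    return new double[] {median, mad, median + 3 * mad};
  }

  private static double median(double[] values) {
    double[] sorted = values.clone();
    Arrays.sort(sorted);
    int n = sorted.length;
    return (n % 2 == 1) ? sorted[n / 2]
        : (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;
  }
}
{code}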



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16603) Improve DatanodeHttpServer With Netty recommended method

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16603?focusedWorklogId=775688&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775688
 ]

ASF GitHub Bot logged work on HDFS-16603:
-

Author: ASF GitHub Bot
Created on: 29/May/22 03:43
Start Date: 29/May/22 03:43
Worklog Time Spent: 10m 
  Work Description: slfan1989 opened a new pull request, #4372:
URL: https://github.com/apache/hadoop/pull/4372

   JIRA: HDFS-16603. Improve DatanodeHttpServer With Netty recommended method.
   
   When reading the code, I found that some of the methods in use are deprecated 
due to the upgrade of the Netty components.
   1.DatanodeHttpServer#Constructor
   ```
   @Deprecated
   public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK =
       valueOf("WRITE_BUFFER_HIGH_WATER_MARK");
   // Deprecated. Use WRITE_BUFFER_WATER_MARK
   
   @Deprecated
   public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK =
       valueOf("WRITE_BUFFER_LOW_WATER_MARK");
   // Deprecated. Use WRITE_BUFFER_WATER_MARK
   -
   this.httpServer.childOption(
       ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
       conf.getInt(
           DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
           DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));
   
   this.httpServer.childOption(
       ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
       conf.getInt(
           DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
           DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));
   
   ```
   2.Duplicate code
   ```
   ChannelFuture f = httpServer.bind(infoAddr);
   try {
     f.syncUninterruptibly();
   } catch (Throwable e) {
     if (e instanceof BindException) {
       throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
           infoAddr.getPort(), (SocketException) e);
     } else {
       throw e;
     }
   }
   httpAddress = (InetSocketAddress) f.channel().localAddress();
   ```
   3.io.netty.bootstrap.ChannelFactory Deprecated
   use io.netty.channel.ChannelFactory instead.
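   
   As a hedged sketch of what the cleanup could look like, the two deprecated
   water-mark options collapse into a single WriteBufferWaterMark, and the
   duplicated bind logic can be factored into one helper. This is illustrative
   only: the class and method names here (DatanodeHttpServerSketch,
   setWaterMarks, bindAndGetAddress) are assumptions, not taken from the patch.
   ```java
   import java.io.IOException;
   import java.net.BindException;
   import java.net.InetSocketAddress;
   import java.net.SocketException;
   
   import io.netty.bootstrap.ServerBootstrap;
   import io.netty.channel.ChannelFuture;
   import io.netty.channel.ChannelOption;
   import io.netty.channel.WriteBufferWaterMark;
   
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.hdfs.DFSConfigKeys;
   import org.apache.hadoop.net.NetUtils;
   
   final class DatanodeHttpServerSketch {
   
     /** Replaces the two deprecated water-mark child options with the combined one. */
     static void setWaterMarks(ServerBootstrap httpServer, Configuration conf) {
       int low = conf.getInt(DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
           DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT);
       int high = conf.getInt(DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
           DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT);
       // One WriteBufferWaterMark(low, high) supersedes both
       // WRITE_BUFFER_LOW_WATER_MARK and WRITE_BUFFER_HIGH_WATER_MARK.
       httpServer.childOption(ChannelOption.WRITE_BUFFER_WATER_MARK,
           new WriteBufferWaterMark(low, high));
     }
   
     /** Factors the duplicated bind/exception-wrapping logic into one place. */
     static InetSocketAddress bindAndGetAddress(ServerBootstrap bootstrap,
         InetSocketAddress addr) throws IOException {
       ChannelFuture f = bootstrap.bind(addr);
       try {
         f.syncUninterruptibly();
       } catch (Throwable e) {
         if (e instanceof BindException) {
           throw NetUtils.wrapException(null, 0, addr.getHostName(),
               addr.getPort(), (SocketException) e);
         }
         throw e;
       }
       return (InetSocketAddress) f.channel().localAddress();
     }
   }
   ```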
   




Issue Time Tracking
---

Worklog Id: (was: 775688)
Remaining Estimate: 0h
Time Spent: 10m

> Improve DatanodeHttpServer With Netty recommended method
> 
>
> Key: HDFS-16603
> URL: https://issues.apache.org/jira/browse/HDFS-16603
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
> Fix For: 3.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When reading the code, I found that some of the methods in use are deprecated 
> due to the upgrade of the Netty components.
> {color:#172b4d}*1.DatanodeHttpServer#Constructor*{color}
> {code:java}
> @Deprecated
> public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK = 
> valueOf("WRITE_BUFFER_HIGH_WATER_MARK"); 
> // Deprecated. Use WRITE_BUFFER_WATER_MARK
> @Deprecated
> public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK = 
> valueOf("WRITE_BUFFER_LOW_WATER_MARK");
> // Deprecated. Use WRITE_BUFFER_WATER_MARK
> -
> this.httpServer.childOption(
>           ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
>           conf.getInt(
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));
> this.httpServer.childOption(
>           ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
>           conf.getInt(
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));
> {code}
> *2.Duplicate code* 
> {code:java}
> ChannelFuture f = httpServer.bind(infoAddr);
> try {
>   f.syncUninterruptibly();
> } catch (Throwable e) {
>   if (e instanceof BindException) {
>     throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
>         infoAddr.getPort(), (SocketException) e);
>   } else {
>     throw e;
>   }
> }
> httpAddress = (InetSocketAddress) f.channel().localAddress();
> LOG.info("Listening HTTP traffic on " + httpAddress);{code}
> *3.io.netty.bootstrap.ChannelFactory Deprecated*
> *use io.netty.channel.ChannelFactory instead.*
> {code:java}
> /** @deprecated */
> @Deprecated
> public interface ChannelFactory<T extends Channel> {
>     T newChannel();
> }{code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16603) Improve DatanodeHttpServer With Netty recommended method

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16603:
--
Labels: pull-request-available  (was: )

> Improve DatanodeHttpServer With Netty recommended method
> 
>
> Key: HDFS-16603
> URL: https://issues.apache.org/jira/browse/HDFS-16603
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When reading the code, I found that some of the methods in use are deprecated 
> due to the upgrade of the Netty components.
> {color:#172b4d}*1.DatanodeHttpServer#Constructor*{color}
> {code:java}
> @Deprecated
> public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK = 
> valueOf("WRITE_BUFFER_HIGH_WATER_MARK"); 
> // Deprecated. Use WRITE_BUFFER_WATER_MARK
> @Deprecated
> public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK = 
> valueOf("WRITE_BUFFER_LOW_WATER_MARK");
> // Deprecated. Use WRITE_BUFFER_WATER_MARK
> -
> this.httpServer.childOption(
>           ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
>           conf.getInt(
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));
> this.httpServer.childOption(
>           ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
>           conf.getInt(
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));
> {code}
> *2.Duplicate code* 
> {code:java}
> ChannelFuture f = httpServer.bind(infoAddr);
> try {
>   f.syncUninterruptibly();
> } catch (Throwable e) {
>   if (e instanceof BindException) {
>     throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
>         infoAddr.getPort(), (SocketException) e);
>   } else {
>     throw e;
>   }
> }
> httpAddress = (InetSocketAddress) f.channel().localAddress();
> LOG.info("Listening HTTP traffic on " + httpAddress);{code}
> *3.io.netty.bootstrap.ChannelFactory Deprecated*
> *use io.netty.channel.ChannelFactory instead.*
> {code:java}
> /** @deprecated */
> @Deprecated
> public interface ChannelFactory<T extends Channel> {
>     T newChannel();
> }{code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16603) Improve DatanodeHttpServer With Netty recommended method

2022-05-28 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16603:
-
Description: 
When reading the code, I found that some of the methods in use are deprecated due 
to the upgrade of the Netty components.

{color:#172b4d}*1.DatanodeHttpServer#Constructor*{color}
{code:java}
@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK = 
valueOf("WRITE_BUFFER_HIGH_WATER_MARK"); 
// Deprecated. Use WRITE_BUFFER_WATER_MARK

@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK = 
valueOf("WRITE_BUFFER_LOW_WATER_MARK");
// Deprecated. Use WRITE_BUFFER_WATER_MARK
-

this.httpServer.childOption(
          ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
          conf.getInt(
              DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
              DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));

this.httpServer.childOption(
          ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
          conf.getInt(
              DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
              DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));

{code}
*2.Duplicate code* 
{code:java}
ChannelFuture f = httpServer.bind(infoAddr);
try {
  f.syncUninterruptibly();
} catch (Throwable e) {
  if (e instanceof BindException) {
    throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
        infoAddr.getPort(), (SocketException) e);
  } else {
    throw e;
  }
}
httpAddress = (InetSocketAddress) f.channel().localAddress();
LOG.info("Listening HTTP traffic on " + httpAddress);{code}
*3.io.netty.bootstrap.ChannelFactory Deprecated*

*use io.netty.channel.ChannelFactory instead.*
{code:java}
/** @deprecated */
@Deprecated
public interface ChannelFactory<T extends Channel> {
    T newChannel();
}{code}

  was:
When reading the code, I found that some of the methods in use are deprecated due 
to the upgrade of the Netty components.

{color:#172b4d}*1.DatanodeHttpServer#Constructor*{color}
{code:java}
@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK = 
valueOf("WRITE_BUFFER_HIGH_WATER_MARK"); 
// Deprecated. Use WRITE_BUFFER_WATER_MARK

@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK = 
valueOf("WRITE_BUFFER_LOW_WATER_MARK");
// Deprecated. Use WRITE_BUFFER_WATER_MARK
-

this.httpServer.childOption(
          ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
          conf.getInt(
              DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
              DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));

this.httpServer.childOption(
          ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
          conf.getInt(
              DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
              DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));

{code}
*2.Duplicate code* 
{code:java}
ChannelFuture f = httpServer.bind(infoAddr);
try {
  f.syncUninterruptibly();
} catch (Throwable e) {
  if (e instanceof BindException) {
    throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
        infoAddr.getPort(), (SocketException) e);
  } else {
    throw e;
  }
}
httpAddress = (InetSocketAddress) f.channel().localAddress();
LOG.info("Listening HTTP traffic on " + httpAddress);{code}
*3.io.netty.bootstrap.ChannelFactory Deprecated*
{code:java}
/** @deprecated */
@Deprecated
public interface ChannelFactory<T extends Channel> {
    T newChannel();
}{code}


> Improve DatanodeHttpServer With Netty recommended method
> 
>
> Key: HDFS-16603
> URL: https://issues.apache.org/jira/browse/HDFS-16603
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
> Fix For: 3.4.0
>
>
> When reading the code, I found that some of the methods in use are deprecated 
> due to the upgrade of the Netty components.
> {color:#172b4d}*1.DatanodeHttpServer#Constructor*{color}
> {code:java}
> @Deprecated
> public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK = 
> valueOf("WRITE_BUFFER_HIGH_WATER_MARK"); 
> // Deprecated. Use WRITE_BUFFER_WATER_MARK
> @Deprecated
> public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK = 
> valueOf("WRITE_BUFFER_LOW_WATER_MARK");
> // Deprecated. Use WRITE_BUFFER_WATER_MARK
> -
> this.httpServer.childOption(
>           ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
>           conf.getInt(
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));
> this.httpServer.childOption(
>           ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
>           conf.getInt(
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));
> {code}
> *2.Duplicate code* 
> {code:java}
> ChannelFuture f = httpServer.bind(infoAddr);
> try {
>   f.syncUninterruptibly();
> } catch 

[jira] [Updated] (HDFS-16603) Improve DatanodeHttpServer With Netty recommended method

2022-05-28 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16603:
-
Description: 
When reading the code, I found that some of the methods in use are deprecated due 
to the upgrade of the Netty components.

{color:#172b4d}*1.DatanodeHttpServer#Constructor*{color}
{code:java}
@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK = 
valueOf("WRITE_BUFFER_HIGH_WATER_MARK"); 
// Deprecated. Use WRITE_BUFFER_WATER_MARK

@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK = 
valueOf("WRITE_BUFFER_LOW_WATER_MARK");
// Deprecated. Use WRITE_BUFFER_WATER_MARK
-

this.httpServer.childOption(
          ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
          conf.getInt(
              DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
              DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));

this.httpServer.childOption(
          ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
          conf.getInt(
              DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
              DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));

{code}
*2.Duplicate code* 
{code:java}
ChannelFuture f = httpServer.bind(infoAddr);
try {
  f.syncUninterruptibly();
} catch (Throwable e) {
  if (e instanceof BindException) {
    throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
        infoAddr.getPort(), (SocketException) e);
  } else {
    throw e;
  }
}
httpAddress = (InetSocketAddress) f.channel().localAddress();
LOG.info("Listening HTTP traffic on " + httpAddress);{code}
*3.io.netty.bootstrap.ChannelFactory Deprecated*
{code:java}
/** @deprecated */
@Deprecated
public interface ChannelFactory<T extends Channel> {
    T newChannel();
}{code}

  was:
When reading the code, I found that some of the methods in use are deprecated due 
to the upgrade of the Netty components.

{color:#172b4d}*1.DatanodeHttpServer#Constructor*{color}
{code:java}
@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK = 
valueOf("WRITE_BUFFER_HIGH_WATER_MARK"); 
// Deprecated. Use WRITE_BUFFER_WATER_MARK

@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK = 
valueOf("WRITE_BUFFER_LOW_WATER_MARK");
// Deprecated. Use WRITE_BUFFER_WATER_MARK
-

this.httpServer.childOption(
          ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
          conf.getInt(
              DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
              DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));

this.httpServer.childOption(
          ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
          conf.getInt(
              DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
              DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));

{code}
*2.Duplicate code* 
{code:java}
ChannelFuture f = httpServer.bind(infoAddr);
      try {
        f.syncUninterruptibly();
      } catch (Throwable e) {
        if (e instanceof BindException) {
          throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
              infoAddr.getPort(), (SocketException) e);
        } else {
          throw e;
        }
      }
      httpAddress = (InetSocketAddress) f.channel().localAddress(); {code}


> Improve DatanodeHttpServer With Netty recommended method
> 
>
> Key: HDFS-16603
> URL: https://issues.apache.org/jira/browse/HDFS-16603
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
> Fix For: 3.4.0
>
>
> When reading the code, I found that some of the methods in use are deprecated 
> due to the upgrade of the Netty components.
> {color:#172b4d}*1.DatanodeHttpServer#Constructor*{color}
> {code:java}
> @Deprecated
> public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK = 
> valueOf("WRITE_BUFFER_HIGH_WATER_MARK"); 
> // Deprecated. Use WRITE_BUFFER_WATER_MARK
> @Deprecated
> public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK = 
> valueOf("WRITE_BUFFER_LOW_WATER_MARK");
> // Deprecated. Use WRITE_BUFFER_WATER_MARK
> -
> this.httpServer.childOption(
>           ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
>           conf.getInt(
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));
> this.httpServer.childOption(
>           ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
>           conf.getInt(
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));
> {code}
> *2.Duplicate code* 
> {code:java}
> ChannelFuture f = httpServer.bind(infoAddr);
> try {
>   f.syncUninterruptibly();
> } catch (Throwable e) {
>   if (e instanceof BindException) {
>     throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
>         infoAddr.getPort(), (SocketException) e);
>   } else {
>  

[jira] [Work logged] (HDFS-16590) Fix Junit Test Deprecated assertThat

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16590?focusedWorklogId=775687&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775687
 ]

ASF GitHub Bot logged work on HDFS-16590:
-

Author: ASF GitHub Bot
Created on: 29/May/22 03:18
Start Date: 29/May/22 03:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4349:
URL: https://github.com/apache/hadoop/pull/4349#issuecomment-1140368949

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 60 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 41s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 50s |  |  trunk passed  |
   | -1 :x: |  compile  |   4m 23s | 
[/branch-compile-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4349/4/artifact/out/branch-compile-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  root in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  compile  |   3m 37s | 
[/branch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4349/4/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  root in trunk failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | +1 :green_heart: |  checkstyle  |   4m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   9m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   7m 43s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   7m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  15m 58s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   7m  9s |  |  the patch passed  |
   | -1 :x: |  compile  |   4m  4s | 
[/patch-compile-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4349/4/artifact/out/patch-compile-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  root in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  javac  |   4m  4s | 
[/patch-compile-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4349/4/artifact/out/patch-compile-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  root in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  compile  |   3m 28s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4349/4/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | -1 :x: |  javac  |   3m 28s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4349/4/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m  0s |  |  root: The patch generated 
0 new + 688 unchanged - 2 fixed = 688 total (was 690)  |
   | +1 :green_heart: |  mvnsite  |   8m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   6m 12s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   6m 18s |  |  the patch passed with JDK 
Private 

[jira] [Updated] (HDFS-16603) Improve DatanodeHttpServer With Netty recommended method

2022-05-28 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16603:
-
Description: 
When reading the code, I found that some of the methods in use are deprecated due 
to the upgrade of the Netty components.

{color:#172b4d}*1.DatanodeHttpServer#Constructor*{color}
{code:java}
@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK = 
valueOf("WRITE_BUFFER_HIGH_WATER_MARK"); 
// Deprecated. Use WRITE_BUFFER_WATER_MARK

@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK = 
valueOf("WRITE_BUFFER_LOW_WATER_MARK");
// Deprecated. Use WRITE_BUFFER_WATER_MARK
-

this.httpServer.childOption(
          ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
          conf.getInt(
              DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
              DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));

this.httpServer.childOption(
          ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
          conf.getInt(
              DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
              DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));

{code}
*2.Duplicate code* 
{code:java}
ChannelFuture f = httpServer.bind(infoAddr);
      try {
        f.syncUninterruptibly();
      } catch (Throwable e) {
        if (e instanceof BindException) {
          throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
              infoAddr.getPort(), (SocketException) e);
        } else {
          throw e;
        }
      }
      httpAddress = (InetSocketAddress) f.channel().localAddress(); {code}

  was:
When reading the code, I found that some of the methods in use are deprecated due 
to the upgrade of the Netty components.

{color:#172b4d}*1.DatanodeHttpServer#Constructor*{color}

{code:java}
@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK = 
valueOf("WRITE_BUFFER_HIGH_WATER_MARK"); 
// Deprecated. Use WRITE_BUFFER_WATER_MARK

@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK = 
valueOf("WRITE_BUFFER_LOW_WATER_MARK");
// Deprecated. Use WRITE_BUFFER_WATER_MARK
-

this.httpServer.childOption(
          ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
          conf.getInt(
              DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
              DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));

this.httpServer.childOption(
          ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
          conf.getInt(
              DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
              DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));

{code}
 

*2.Duplicate code* 
{code:java}
ChannelFuture f = httpServer.bind(infoAddr);
      try {
        f.syncUninterruptibly();
      } catch (Throwable e) {
        if (e instanceof BindException) {
          throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
              infoAddr.getPort(), (SocketException) e);
        } else {
          throw e;
        }
      }
      httpAddress = (InetSocketAddress) f.channel().localAddress(); {code}


> Improve DatanodeHttpServer With Netty recommended method
> 
>
> Key: HDFS-16603
> URL: https://issues.apache.org/jira/browse/HDFS-16603
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
> Fix For: 3.4.0
>
>
> When reading the code, I found that some of the methods in use are deprecated 
> due to the upgrade of the Netty components.
> {color:#172b4d}*1.DatanodeHttpServer#Constructor*{color}
>  
> {code:java}
> @Deprecated
> public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK = 
> valueOf("WRITE_BUFFER_HIGH_WATER_MARK"); 
> // Deprecated. Use WRITE_BUFFER_WATER_MARK
> @Deprecated
> public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK = 
> valueOf("WRITE_BUFFER_LOW_WATER_MARK");
> // Deprecated. Use WRITE_BUFFER_WATER_MARK
> this.httpServer.childOption(
>           ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
>           conf.getInt(
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));
> this.httpServer.childOption(
>           ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
>           conf.getInt(
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));
> {code}
> *2.Duplicate code* 
> {code:java}
> ChannelFuture f = httpServer.bind(infoAddr);
>       try {
>         f.syncUninterruptibly();
>       } catch (Throwable e) {
>         if (e instanceof BindException) {
>           throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
>               infoAddr.getPort(), (SocketException) e);
>         } else {
>           throw e;
>         }
>       }
>       httpAddress = 

[jira] [Updated] (HDFS-16603) Improve DatanodeHttpServer With Netty recommended method

2022-05-28 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16603:
-
Description: 
When reading the code, I found that some of the methods in use are deprecated due 
to the upgrade of the Netty components.

{color:#172b4d}*1.DatanodeHttpServer#Constructor*{color}

{code:java}
@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK = 
valueOf("WRITE_BUFFER_HIGH_WATER_MARK"); 
// Deprecated. Use WRITE_BUFFER_WATER_MARK

@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK = 
valueOf("WRITE_BUFFER_LOW_WATER_MARK");
// Deprecated. Use WRITE_BUFFER_WATER_MARK
-

this.httpServer.childOption(
          ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
          conf.getInt(
              DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
              DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));

this.httpServer.childOption(
          ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
          conf.getInt(
              DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
              DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));

{code}
 

*2.Duplicate code* 
{code:java}
ChannelFuture f = httpServer.bind(infoAddr);
      try {
        f.syncUninterruptibly();
      } catch (Throwable e) {
        if (e instanceof BindException) {
          throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
              infoAddr.getPort(), (SocketException) e);
        } else {
          throw e;
        }
      }
      httpAddress = (InetSocketAddress) f.channel().localAddress(); {code}

  was:When reading the code, I found that some of the methods in use are 
deprecated due to the upgrade of the Netty components.


> Improve DatanodeHttpServer With Netty recommended method
> 
>
> Key: HDFS-16603
> URL: https://issues.apache.org/jira/browse/HDFS-16603
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
> Fix For: 3.4.0
>
>
> When reading the code, I found that some of the methods in use are deprecated 
> due to the upgrade of the Netty components.
> {color:#172b4d}*1.DatanodeHttpServer#Constructor*{color}
>  
> {code:java}
> @Deprecated
> public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK = 
> valueOf("WRITE_BUFFER_HIGH_WATER_MARK"); 
> // Deprecated. Use WRITE_BUFFER_WATER_MARK
> @Deprecated
> public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK = 
> valueOf("WRITE_BUFFER_LOW_WATER_MARK");
> // Deprecated. Use WRITE_BUFFER_WATER_MARK
> -
> this.httpServer.childOption(
>           ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
>           conf.getInt(
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));
> this.httpServer.childOption(
>           ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
>           conf.getInt(
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));
> {code}
>  
> *2.Duplicate code* 
> {code:java}
> ChannelFuture f = httpServer.bind(infoAddr);
>       try {
>         f.syncUninterruptibly();
>       } catch (Throwable e) {
>         if (e instanceof BindException) {
>           throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
>               infoAddr.getPort(), (SocketException) e);
>         } else {
>           throw e;
>         }
>       }
>       httpAddress = (InetSocketAddress) f.channel().localAddress(); {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16603) Improve Datanode HttpServer With Netty recommended method

2022-05-28 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16603:
-
Description: When reading the code, I found that some of the methods in use 
are deprecated due to the upgrade of the Netty components.

> Improve Datanode HttpServer With Netty recommended method
> -
>
> Key: HDFS-16603
> URL: https://issues.apache.org/jira/browse/HDFS-16603
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
> Fix For: 3.4.0
>
>
> When reading the code, I found that some of the methods in use are deprecated 
> due to the upgrade of the Netty components.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16603) Improve DatanodeHttpServer With Netty recommended method

2022-05-28 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16603:
-
Summary: Improve DatanodeHttpServer With Netty recommended method  (was: 
Improve Datanode HttpServer With Netty recommended method)

> Improve DatanodeHttpServer With Netty recommended method
> 
>
> Key: HDFS-16603
> URL: https://issues.apache.org/jira/browse/HDFS-16603
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
> Fix For: 3.4.0
>
>
> When reading the code, I found that some of the methods in use are deprecated 
> due to the upgrade of the Netty components.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-16603) Improve Datanode HttpServer With Netty recommended method

2022-05-28 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16603 started by fanshilun.

> Improve Datanode HttpServer With Netty recommended method
> -
>
> Key: HDFS-16603
> URL: https://issues.apache.org/jira/browse/HDFS-16603
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
> Fix For: 3.4.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16603) Improve Datanode HttpServer With Netty recommended method

2022-05-28 Thread fanshilun (Jira)
fanshilun created HDFS-16603:


 Summary: Improve Datanode HttpServer With Netty recommended method
 Key: HDFS-16603
 URL: https://issues.apache.org/jira/browse/HDFS-16603
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: fanshilun
Assignee: fanshilun
 Fix For: 3.4.0






--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2022-05-28 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-13245:
-
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Baolong Mao
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch, 
> HDFS-13245.009.patch, HDFS-13245.010.patch, HDFS-13245.011.patch, 
> HDFS-13245.012.patch
>
>
> Add a DBMS implementation for the State Store.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13245) RBF: State store DBMS implementation

2022-05-28 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17543558#comment-17543558
 ] 

fanshilun commented on HDFS-13245:
--

I think this jira can be closed; the related functionality has been implemented in 
YARN-3663.

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Baolong Mao
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch, 
> HDFS-13245.009.patch, HDFS-13245.010.patch, HDFS-13245.011.patch, 
> HDFS-13245.012.patch
>
>
> Add a DBMS implementation for the State Store.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16595) Slow peer metrics - add median, mad and upper latency limits

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16595?focusedWorklogId=775685&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775685
 ]

ASF GitHub Bot logged work on HDFS-16595:
-

Author: ASF GitHub Bot
Created on: 29/May/22 01:08
Start Date: 29/May/22 01:08
Worklog Time Spent: 10m 
  Work Description: tomscut commented on code in PR #4357:
URL: https://github.com/apache/hadoop/pull/4357#discussion_r884077287


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestSlowPeerTracker.java:
##
@@ -79,9 +79,12 @@ public void testEmptyReports() {
 
   @Test
   public void testReportsAreRetrieved() {
-tracker.addReport("node2", "node1", 1.2);
-tracker.addReport("node3", "node1", 2.1);
-tracker.addReport("node3", "node2", 1.22);
+OutlierMetrics outlierMetrics1 = new OutlierMetrics(0.0, 0.0, 0.0, 1.2);

Review Comment:
   Could we avoid defining a new variable `outlierMetrics1` here, just to make it 
a little bit cleaner?





Issue Time Tracking
---

Worklog Id: (was: 775685)
Time Spent: 1h 20m  (was: 1h 10m)

> Slow peer metrics - add median, mad and upper latency limits
> 
>
> Key: HDFS-16595
> URL: https://issues.apache.org/jira/browse/HDFS-16595
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Slow datanode metrics include the slow node and its reporting node details. With 
> HDFS-16582, we added the aggregate latency as perceived by the reporting 
> nodes.
> In order to get more insight into how the outlier slow node's latencies 
> differ from those of the rest of the nodes, we should also expose the median, 
> median absolute deviation, and the calculated upper latency limit.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16595) Slow peer metrics - add median, mad and upper latency limits

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16595?focusedWorklogId=775686&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775686
 ]

ASF GitHub Bot logged work on HDFS-16595:
-

Author: ASF GitHub Bot
Created on: 29/May/22 01:08
Start Date: 29/May/22 01:08
Worklog Time Spent: 10m 
  Work Description: tomscut commented on code in PR #4357:
URL: https://github.com/apache/hadoop/pull/4357#discussion_r884195601


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestSlowPeerTracker.java:
##
@@ -79,9 +79,12 @@ public void testEmptyReports() {
 
   @Test
   public void testReportsAreRetrieved() {
-tracker.addReport("node2", "node1", 1.2);
-tracker.addReport("node3", "node1", 2.1);
-tracker.addReport("node3", "node2", 1.22);
+OutlierMetrics outlierMetrics1 = new OutlierMetrics(0.0, 0.0, 0.0, 1.2);

Review Comment:
   I mean just write it like this:
   tracker.addReport("node2", "node1", new OutlierMetrics(0.0, 0.0, 0.0, 1.2));
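   
   For reference, inlining all three reports would look like this (a sketch; 
   the latency values mirror the original addReport calls in the diff above):
   ```java
   tracker.addReport("node2", "node1", new OutlierMetrics(0.0, 0.0, 0.0, 1.2));
   tracker.addReport("node3", "node1", new OutlierMetrics(0.0, 0.0, 0.0, 2.1));
   tracker.addReport("node3", "node2", new OutlierMetrics(0.0, 0.0, 0.0, 1.22));
   ```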





Issue Time Tracking
---

Worklog Id: (was: 775686)
Time Spent: 1.5h  (was: 1h 20m)

> Slow peer metrics - add median, mad and upper latency limits
> 
>
> Key: HDFS-16595
> URL: https://issues.apache.org/jira/browse/HDFS-16595
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Slow datanode metrics include the slow node and its reporting node details. With 
> HDFS-16582, we added the aggregate latency as perceived by the reporting 
> nodes.
> In order to get more insight into how the outlier slow node's latencies 
> differ from those of the rest of the nodes, we should also expose the median, 
> median absolute deviation, and the calculated upper latency limit.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15646) Track failing tests in HDFS

2022-05-28 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17543477#comment-17543477
 ] 

fanshilun edited comment on HDFS-15646 at 5/29/22 1:05 AM:
---

JUnit test failures are a big source of trouble for developers, as in the following 
example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Failed.
{code:java}
Error Details
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find:

[https://github.com/apache/hadoop/pull/4339]

The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Succeeded.

All Tests
|[Test 
name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1
 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the compile environment.

Is it possible that the memory allocated to the build is too small? Can the 
memory used by some of the compilations be increased?
{panel:title=patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt}
Caused by: org.apache.maven.surefire.booter.SurefireBooterForkException: The 
forked VM terminated without properly saying goodbye. VM crash or System.exit 
called? Command was /bin/sh -c cd 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs
 && {color:#ff0000}/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java 
-Xmx2048m{color} -XX:+HeapDumpOnOutOfMemoryError 
-DminiClusterDedicatedDirs=true -jar 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter3248290060089244263.jar
 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire
 2022-05-27T03-04-44_807-jvmRun2 surefire7444566443427356222tmp 
surefire_6685537493283452808462tmp Error occurred in starting fork, check 
output in log Process Exit Code: 1 Crashed tests:
{panel}
 

 

 


was (Author: slfan1989):
JUnit test failures are a big source of trouble for developers, as in the following 
example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Failed.
{code:java}
Error Details
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find:

[https://github.com/apache/hadoop/pull/4339]

The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Success.

All Tests:
|[Test 
name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1
 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.

Is it possible that the memory for compilation is too small? Can you increase the memory used by some compilations?
{panel:title=patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt}
Caused by: 

[jira] [Comment Edited] (HDFS-15646) Track failing tests in HDFS

2022-05-28 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17543477#comment-17543477
 ] 

fanshilun edited comment on HDFS-15646 at 5/29/22 1:04 AM:
---

JUnit test failures are a big problem for developers, as in the following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Failed.
{code:java}
 Error DetailsSome writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find:

[https://github.com/apache/hadoop/pull/4339]

The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Success.

All Tests:
|[Test 
name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1
 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.

Is it possible that the memory for compilation is too small? Can you increase the memory used by some compilations?
{panel:title=patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt}
Caused by: org.apache.maven.surefire.booter.SurefireBooterForkException: The 
forked VM terminated without properly saying goodbye. VM crash or System.exit 
called? Command was /bin/sh -c cd 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs
 && {color:#ff}/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java 
-Xmx2048m{color} -XX:+HeapDumpOnOutOfMemoryError 
-DminiClusterDedicatedDirs=true -jar 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter3248290060089244263.jar
 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire
 2022-05-27T03-04-44_807-jvmRun2 surefire7444566443427356222tmp 
surefire_6685537493283452808462tmp Error occurred in starting fork, check 
output in log Process Exit Code: 1 Crashed tests:
{panel}
[~elgoiri] [~aajisaka] [~ayushtkn] [~stev...@iseran.com] I hope you can help solve these problems or offer some suggestions.

 

 


was (Author: slfan1989):
JUnit test failures are a big problem for developers, as in the following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Failed.
{code:java}
 Error DetailsSome writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find:

[pr-4339|https://github.com/apache/hadoop/pull/4339]

The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Success.

All Tests:
|[Test 
name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1
 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.

Is it possible that the memory for compilation is too small? Can you increase the memory used by some

[jira] [Work logged] (HDFS-16463) Make dirent cross platform compatible

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16463?focusedWorklogId=775679=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775679
 ]

ASF GitHub Bot logged work on HDFS-16463:
-

Author: ASF GitHub Bot
Created on: 28/May/22 17:21
Start Date: 28/May/22 17:21
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4370:
URL: https://github.com/apache/hadoop/pull/4370#issuecomment-1140300771

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  docker  |   2m 13s |  |  Docker failed to build run-specific 
yetus/hadoop:tp-13568}.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/4370 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/5/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue Time Tracking
---

Worklog Id: (was: 775679)
Time Spent: 1h  (was: 50m)

> Make dirent cross platform compatible
> -
>
> Key: HDFS-16463
> URL: https://issues.apache.org/jira/browse/HDFS-16463
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> [jnihelper.c|https://github.com/apache/hadoop/blob/1fed18bb2d8ac3dbaecc3feddded30bed918d556/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c#L28]
>  in HDFS native client uses *dirent.h*. This header file isn't available on 
> Windows. Thus, we need to replace this with a cross platform compatible 
> implementation for dirent.
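As a rough, hedged sketch of the general technique only (the wrapper names x_dir, x_open_dir, x_read_dir and x_close_dir are hypothetical, and this is not necessarily the approach taken in the PR): a thin shim can expose one directory-iteration interface and map it onto {{dirent.h}} on POSIX and the {{FindFirstFile}} family on Windows.
{code:c}
/* Sketch of a cross-platform replacement for dirent.h.
 * All x_* names are hypothetical illustrations, not the PR's API. */
#include <stdio.h>
#include <stdlib.h>

#ifdef _WIN32
#include <windows.h>

typedef struct {
  HANDLE handle;          /* FindFirstFileA search handle */
  WIN32_FIND_DATAA data;  /* entry most recently read */
  int first;              /* FindFirstFileA already returned one entry */
} x_dir;

static x_dir *x_open_dir(const char *path) {
  char pattern[MAX_PATH];
  x_dir *d = (x_dir *)malloc(sizeof(x_dir));
  if (d == NULL) return NULL;
  snprintf(pattern, sizeof(pattern), "%s\\*", path);
  d->handle = FindFirstFileA(pattern, &d->data);
  if (d->handle == INVALID_HANDLE_VALUE) { free(d); return NULL; }
  d->first = 1;
  return d;
}

static const char *x_read_dir(x_dir *d) {
  if (d->first) { d->first = 0; return d->data.cFileName; }
  return FindNextFileA(d->handle, &d->data) ? d->data.cFileName : NULL;
}

static void x_close_dir(x_dir *d) { FindClose(d->handle); free(d); }

#else /* POSIX: keep dirent.h */
#include <dirent.h>

typedef DIR x_dir;

static x_dir *x_open_dir(const char *path) { return opendir(path); }

static const char *x_read_dir(x_dir *d) {
  struct dirent *e = readdir(d);
  return e ? e->d_name : NULL;
}

static void x_close_dir(x_dir *d) { closedir(d); }
#endif

/* Usage: list the entries of the current directory. */
int main(void) {
  x_dir *d = x_open_dir(".");
  const char *name;
  if (d == NULL) return 1;
  while ((name = x_read_dir(d)) != NULL) {
    printf("%s\n", name);
  }
  x_close_dir(d);
  return 0;
}
{code}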



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16463) Make dirent cross platform compatible

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16463?focusedWorklogId=775678=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775678
 ]

ASF GitHub Bot logged work on HDFS-16463:
-

Author: ASF GitHub Bot
Created on: 28/May/22 17:10
Start Date: 28/May/22 17:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4370:
URL: https://github.com/apache/hadoop/pull/4370#issuecomment-1140298917

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  docker  |   2m 14s |  |  Docker failed to build run-specific 
yetus/hadoop:tp-24724}.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/4370 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/4/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue Time Tracking
---

Worklog Id: (was: 775678)
Time Spent: 50m  (was: 40m)

> Make dirent cross platform compatible
> -
>
> Key: HDFS-16463
> URL: https://issues.apache.org/jira/browse/HDFS-16463
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> [jnihelper.c|https://github.com/apache/hadoop/blob/1fed18bb2d8ac3dbaecc3feddded30bed918d556/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c#L28]
>  in HDFS native client uses *dirent.h*. This header file isn't available on 
> Windows. Thus, we need to replace this with a cross platform compatible 
> implementation for dirent.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15646) Track failing tests in HDFS

2022-05-28 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17543477#comment-17543477
 ] 

fanshilun edited comment on HDFS-15646 at 5/28/22 2:46 PM:
---

JUnit test failures are a big problem for developers, as in the following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Failed.
{code:java}
 Error DetailsSome writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find:

[pr-4339|https://github.com/apache/hadoop/pull/4339]

The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Success.

All Tests:
|[Test 
name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1
 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.

Is it possible that the memory for compilation is too small? Can you increase the memory used by some compilations?
{panel:title=patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt}

Caused by: org.apache.maven.surefire.booter.SurefireBooterForkException: The 
forked VM terminated without properly saying goodbye. VM crash or System.exit 
called? Command was /bin/sh -c cd 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs
 && {color:#ff}/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java 
-Xmx2048m{color} -XX:+HeapDumpOnOutOfMemoryError 
-DminiClusterDedicatedDirs=true -jar 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter3248290060089244263.jar
 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire
 2022-05-27T03-04-44_807-jvmRun2 surefire7444566443427356222tmp 
surefire_6685537493283452808462tmp Error occurred in starting fork, check 
output in log Process Exit Code: 1 Crashed tests:
{panel}
[~elgoiri] [~aajisaka] [~ayushtkn] [~stev...@iseran.com] I hope you can help solve these problems or offer some suggestions.

 

 


was (Author: slfan1989):
JUnit test failures are a big problem for developers, as in the following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Failed.
{code:java}
 Error DetailsSome writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find:

[pr-4339|https://github.com/apache/hadoop/pull/4339]

The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Success.

All Tests:
|[Test 
name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1
 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.
{panel:title=My title}
Caused by: org.apache.maven.surefire.booter.SurefireBooterForkException: The 

[jira] [Comment Edited] (HDFS-15646) Track failing tests in HDFS

2022-05-28 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17543477#comment-17543477
 ] 

fanshilun edited comment on HDFS-15646 at 5/28/22 2:42 PM:
---

JUnit test failures are a big problem for developers, as in the following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Failed.
{code:java}
 Error DetailsSome writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find:

[pr-4339|https://github.com/apache/hadoop/pull/4339]

The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Success.

All Tests:
|[Test 
name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1
 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.
{panel:title=My title}
Caused by: org.apache.maven.surefire.booter.SurefireBooterForkException: The 
forked VM terminated without properly saying goodbye. VM crash or System.exit 
called? Command was /bin/sh -c cd 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs
 && {color:#FF}/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java 
-Xmx2048m{color} -XX:+HeapDumpOnOutOfMemoryError 
-DminiClusterDedicatedDirs=true -jar 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter3248290060089244263.jar
 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire
 2022-05-27T03-04-44_807-jvmRun2 surefire7444566443427356222tmp 
surefire_6685537493283452808462tmp Error occurred in starting fork, check 
output in log Process Exit Code: 1 Crashed tests:{panel}
 

 

 


was (Author: slfan1989):
JUnit test failures are a big problem for developers, as in the following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Failed.
{code:java}
 Error DetailsSome writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find:

[pr-4339|https://github.com/apache/hadoop/pull/4339]

The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Success.

All Tests:
|[Test 
name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1
 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.

 

 

 

> Track failing tests in HDFS
> ---
>
> Key: HDFS-15646
> URL: https://issues.apache.org/jira/browse/HDFS-15646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ahmed Hussein
>Priority: Blocker
>
> There are several Units that are 

[jira] [Comment Edited] (HDFS-15646) Track failing tests in HDFS

2022-05-28 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17543477#comment-17543477
 ] 

fanshilun edited comment on HDFS-15646 at 5/28/22 2:40 PM:
---

JUnit test failures are a big problem for developers, as in the following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Failed.
{code:java}
 Error DetailsSome writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find:

[pr-4339|https://github.com/apache/hadoop/pull/4339]

The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Success.

All Tests:
|[Test 
name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1
 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.

 

 

 


was (Author: slfan1989):
JUnit test failures are a big problem for developers, as in the following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Failed.
{code:java}
 Error DetailsSome writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find:

[pr-4339|https://github.com/apache/hadoop/pull/4339]

The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Success.

All Tests:
|[Test 
name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1
 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.

 

 

 

> Track failing tests in HDFS
> ---
>
> Key: HDFS-15646
> URL: https://issues.apache.org/jira/browse/HDFS-15646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ahmed Hussein
>Priority: Blocker
>
> There are several unit tests that have been consistently failing on Yetus for a long 
> period of time.
>  The list keeps growing and it is driving the repository into unstable 
> status. Qbt reports more than *40 failing unit tests* on average.
> Personally, over the last week, with every submitted patch, I have to spend 
> considerable time looking at the same stack traces to double-check whether or 
> not the patch contributes to those failures.
> I found out that the majority of those tests had been failing for quite some time 
> but +no Jiras were filed+.
> The main problem with those consistent failures is that they have a side effect 
> on the runtime of the other JUnits by sucking up resources such as memory and 
> ports.
> {{StripedFile}} and {{EC}} tests in particular show up 100% of the time in the 
> list of bad tests.
>  I looked at those tests and they certainly need some improvements (i.e., 
> HDFS-15459). Is anyone interested in those test cases? Can we 

[jira] [Comment Edited] (HDFS-15646) Track failing tests in HDFS

2022-05-28 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17543477#comment-17543477
 ] 

fanshilun edited comment on HDFS-15646 at 5/28/22 2:39 PM:
---

JUnit test failures are a big problem for developers, as in the following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Failed.
{code:java}
 Error DetailsSome writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find:

[pr-4339|https://github.com/apache/hadoop/pull/4339]

The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Success.

All Tests:
|[Test 
name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1
 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.

 

 

 


was (Author: slfan1989):
JUnit test failures are a big problem for developers, as in the following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Failed.
{code:java}
 Error DetailsSome writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find:

[pr-4339|https://github.com/apache/hadoop/pull/4339]

The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Success.

All Tests:
|[Test 
name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1
 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.

 

 

 

> Track failing tests in HDFS
> ---
>
> Key: HDFS-15646
> URL: https://issues.apache.org/jira/browse/HDFS-15646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ahmed Hussein
>Priority: Blocker
>
> There are several unit tests that have been consistently failing on Yetus for a long 
> period of time.
>  The list keeps growing and it is driving the repository into unstable 
> status. Qbt reports more than *40 failing unit tests* on average.
> Personally, over the last week, with every submitted patch, I have to spend 
> considerable time looking at the same stack traces to double-check whether or 
> not the patch contributes to those failures.
> I found out that the majority of those tests had been failing for quite some time 
> but +no Jiras were filed+.
> The main problem with those consistent failures is that they have a side effect 
> on the runtime of the other JUnits by sucking up resources such as memory and 
> ports.
> {{StripedFile}} and {{EC}} tests in particular show up 100% of the time in the 
> list of bad tests.
>  I looked at those tests and they certainly need some improvements (i.e., 
> HDFS-15459). Is anyone interested in those test cases? Can we 

[jira] [Commented] (HDFS-15646) Track failing tests in HDFS

2022-05-28 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17543477#comment-17543477
 ] 

fanshilun commented on HDFS-15646:
--

JUnit test failures are a big problem for developers, as in the following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Failed.
{code:java}
 Error DetailsSome writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find:

[pr-4339|https://github.com/apache/hadoop/pull/4339]

The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites
 Success.

All Tests:
|[Test 
name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1
 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.

 

 

 

> Track failing tests in HDFS
> ---
>
> Key: HDFS-15646
> URL: https://issues.apache.org/jira/browse/HDFS-15646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ahmed Hussein
>Priority: Blocker
>
> There are several unit tests that have been consistently failing on Yetus for a long 
> period of time.
>  The list keeps growing and it is driving the repository into unstable 
> status. Qbt reports more than *40 failing unit tests* on average.
> Personally, over the last week, with every submitted patch, I have to spend 
> considerable time looking at the same stack traces to double-check whether or 
> not the patch contributes to those failures.
> I found out that the majority of those tests had been failing for quite some time 
> but +no Jiras were filed+.
> The main problem with those consistent failures is that they have a side effect 
> on the runtime of the other JUnits by sucking up resources such as memory and 
> ports.
> {{StripedFile}} and {{EC}} tests in particular show up 100% of the time in the 
> list of bad tests.
>  I looked at those tests and they certainly need some improvements (i.e., 
> HDFS-15459). Is anyone interested in those test cases? Can we just turn them 
> off?
> I would like to give a heads-up that we need more collaboration to enforce 
> the stability of the code base.
>  * For all developers, please, {color:#ff}file a Jira once you see a 
> failing test, whether it is related to your patch or not{color}. This gives a 
> heads-up to other developers about the potential failures. Please do not stop 
> at commenting on your patch "_+this is unrelated to my work+_".
>  * Volunteer to dedicate more time to fixing flaky tests.
>  * Periodically, make sure that the list of failing tests does not exceed a 
> certain number. We have Qbt reports to monitor that, but there is no 
> follow-up on their status.
>  * We should consider aggressive strategies such as blocking any merges until 
> the code is brought back to stability.
>  * We need a clear and well-defined process to address Yetus issues: 
> configuration, investigating running out of memory, slowness, etc.
>  * Turn off the JUnits within the modules that are not being actively used in 
> the community (i.e., EC, stripedFiles, etc.).
>  
> CC: [~aajisaka], [~elgoiri], [~kihwal], [~daryn], [~weichiu]
> Do you guys have any thoughts on the current status of HDFS?
>  
> +The following list is a quick list of failing Junits from Qbt reports:+
>  
> !https://ci-hadoop.apache.org/static/0ead8630/images/16x16/document_add.png!  
> [org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSProviderCaching|https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/testReport/org.apache.hadoop.crypto.key.kms.server/TestKMS/testKMSProviderCaching/]1.5
>  
> sec[1|https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/]
> !https://ci-hadoop.apache.org/static/0ead8630/images/16x16/document_add.png!  
> 

[jira] [Work logged] (HDFS-16463) Make dirent cross platform compatible

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16463?focusedWorklogId=775669=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775669
 ]

ASF GitHub Bot logged work on HDFS-16463:
-

Author: ASF GitHub Bot
Created on: 28/May/22 12:54
Start Date: 28/May/22 12:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4370:
URL: https://github.com/apache/hadoop/pull/4370#issuecomment-1140256061

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  docker  |   2m 13s |  |  Docker failed to build run-specific 
yetus/hadoop:tp-14850}.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/4370 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/3/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue Time Tracking
---

Worklog Id: (was: 775669)
Time Spent: 40m  (was: 0.5h)

> Make dirent cross platform compatible
> -
>
> Key: HDFS-16463
> URL: https://issues.apache.org/jira/browse/HDFS-16463
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> [jnihelper.c|https://github.com/apache/hadoop/blob/1fed18bb2d8ac3dbaecc3feddded30bed918d556/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c#L28]
>  in HDFS native client uses *dirent.h*. This header file isn't available on 
> Windows. Thus, we need to replace this with a cross platform compatible 
> implementation for dirent.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16463) Make dirent cross platform compatible

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16463?focusedWorklogId=775667=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775667
 ]

ASF GitHub Bot logged work on HDFS-16463:
-

Author: ASF GitHub Bot
Created on: 28/May/22 12:33
Start Date: 28/May/22 12:33
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4370:
URL: https://github.com/apache/hadoop/pull/4370#issuecomment-1140252106

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  docker  |   8m 55s |  |  Docker failed to build run-specific 
yetus/hadoop:tp-3776}.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/4370 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/2/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue Time Tracking
---

Worklog Id: (was: 775667)
Time Spent: 0.5h  (was: 20m)

> Make dirent cross platform compatible
> -
>
> Key: HDFS-16463
> URL: https://issues.apache.org/jira/browse/HDFS-16463
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> [jnihelper.c|https://github.com/apache/hadoop/blob/1fed18bb2d8ac3dbaecc3feddded30bed918d556/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c#L28]
>  in HDFS native client uses *dirent.h*. This header file isn't available on 
> Windows. Thus, we need to replace this with a cross platform compatible 
> implementation for dirent.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16599) Fix typo in hadoop-hdfs-rbf module

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16599?focusedWorklogId=775662=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775662
 ]

ASF GitHub Bot logged work on HDFS-16599:
-

Author: ASF GitHub Bot
Created on: 28/May/22 10:50
Start Date: 28/May/22 10:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4368:
URL: https://github.com/apache/hadoop/pull/4368#issuecomment-1140235894

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 20 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m  3s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 53s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 42s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m  9s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  39m 42s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 142m 46s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4368/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4368 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 95395939cb45 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 788606367c93407c7509a79fa9c4c3a732301d52 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4368/8/testReport/ |
   | Max. process+thread count | 2108 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4368/8/console |
   

[jira] [Work logged] (HDFS-16601) Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16601?focusedWorklogId=775661=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775661
 ]

ASF GitHub Bot logged work on HDFS-16601:
-

Author: ASF GitHub Bot
Created on: 28/May/22 10:43
Start Date: 28/May/22 10:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4369:
URL: https://github.com/apache/hadoop/pull/4369#issuecomment-1140234577

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  54m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 46s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 48s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 32s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 243m  7s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 15s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 369m 21s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4369/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4369 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 2a881ab226da 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ea78ff70fe4e1e527ebca5486eba4fd67203fa37 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4369/1/testReport/ |
   | Max. process+thread count | 3466 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4369/1/console |
   | versions | 

[jira] [Work logged] (HDFS-16599) Fix typo in hadoop-hdfs-rbf module

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16599?focusedWorklogId=775660=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775660
 ]

ASF GitHub Bot logged work on HDFS-16599:
-

Author: ASF GitHub Bot
Created on: 28/May/22 10:25
Start Date: 28/May/22 10:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4368:
URL: https://github.com/apache/hadoop/pull/4368#issuecomment-1140230506

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 20 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 50s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 50s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 54s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  21m 25s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 120m 58s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4368/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4368 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux fec93da87c1e 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 788606367c93407c7509a79fa9c4c3a732301d52 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4368/7/testReport/ |
   | Max. process+thread count | 2405 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4368/7/console |
   | 

[jira] [Work logged] (HDFS-16463) Make dirent cross platform compatible

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16463?focusedWorklogId=775658=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775658
 ]

ASF GitHub Bot logged work on HDFS-16463:
-

Author: ASF GitHub Bot
Created on: 28/May/22 09:56
Start Date: 28/May/22 09:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4370:
URL: https://github.com/apache/hadoop/pull/4370#issuecomment-1140225506

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  52m  9s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 41s |  |  trunk passed  |
   | -1 :x: |  compile  |   0m 36s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/1/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in trunk failed.  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  64m  7s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 19s |  |  the patch passed  |
   | -1 :x: |  compile  |   0m 26s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in the patch failed.  |
   | -1 :x: |  cc  |   0m 26s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in the patch failed.  |
   | -1 :x: |  golang  |   0m 26s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in the patch failed.  |
   | -1 :x: |  javac  |   0m 26s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in the patch failed.  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/1/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   0m 29s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 143m  9s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4370 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit 
codespell detsecrets golang |
   | uname | Linux c4194da9ee24 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 64920302e60d4d3f0f1de2b32ee5ec74ae082ef9 |
   | Default Java | Red Hat, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/1/testReport/ |
   | Max. process+thread count | 575 (vs. ulimit of 5500) |
   | modules | C: 

[jira] [Work logged] (HDFS-16602) Use "defined" directive along with #if

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16602?focusedWorklogId=775657=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775657
 ]

ASF GitHub Bot logged work on HDFS-16602:
-

Author: ASF GitHub Bot
Created on: 28/May/22 09:49
Start Date: 28/May/22 09:49
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4371:
URL: https://github.com/apache/hadoop/pull/4371#issuecomment-1140223971

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  41m 29s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 51s |  |  trunk passed  |
   | -1 :x: |  compile  |   0m 46s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4371/1/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in trunk failed.  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  58m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 24s |  |  the patch passed  |
   | -1 :x: |  compile  |   0m 27s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4371/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in the patch failed.  |
   | -1 :x: |  cc  |   0m 27s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4371/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in the patch failed.  |
   | -1 :x: |  golang  |   0m 27s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4371/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in the patch failed.  |
   | -1 :x: |  javac  |   0m 27s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4371/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in the patch failed.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 48s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   0m 38s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4371/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 124m  4s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4371/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4371 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit 
codespell detsecrets golang |
   | uname | Linux 9577bc635904 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ac6811fcd0704464c90361c78f46e7bc4617bddc |
   | Default Java | Red Hat, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4371/1/testReport/ |
   | Max. process+thread count | 545 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4371/1/console |

[jira] [Work logged] (HDFS-16600) Deadlock on DataNode

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16600?focusedWorklogId=775656=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775656
 ]

ASF GitHub Bot logged work on HDFS-16600:
-

Author: ASF GitHub Bot
Created on: 28/May/22 09:36
Start Date: 28/May/22 09:36
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4367:
URL: https://github.com/apache/hadoop/pull/4367#issuecomment-1140222126

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 11s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 15s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 236m 34s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 339m  3s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4367/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4367 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8e8fbc25f7bf 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 
10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d477636990330ffb7029eb22ec41f99d6dc67fdf |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4367/1/testReport/ |
   | Max. process+thread count | 2717 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Work logged] (HDFS-16599) Fix typo in hadoop-hdfs-rbf module

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16599?focusedWorklogId=775654=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775654
 ]

ASF GitHub Bot logged work on HDFS-16599:
-

Author: ASF GitHub Bot
Created on: 28/May/22 08:51
Start Date: 28/May/22 08:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4368:
URL: https://github.com/apache/hadoop/pull/4368#issuecomment-1140215061

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 20 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 54s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 35s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  42m  3s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 148m 49s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4368/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4368 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 412ea3c30793 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 
19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 65668a30df79066a204355bd1f2f1fbcecbf55d4 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4368/5/testReport/ |
   | Max. process+thread count | 2333 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4368/5/console |

[jira] [Work logged] (HDFS-16599) Fix typo in hadoop-hdfs-rbf module

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16599?focusedWorklogId=775653=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775653
 ]

ASF GitHub Bot logged work on HDFS-16599:
-

Author: ASF GitHub Bot
Created on: 28/May/22 08:40
Start Date: 28/May/22 08:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4368:
URL: https://github.com/apache/hadoop/pull/4368#issuecomment-1140213128

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 20 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 36s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 53s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m  5s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  30m 35s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 134m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4368/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4368 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux d7b8a44a70c0 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 65668a30df79066a204355bd1f2f1fbcecbf55d4 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4368/6/testReport/ |
   | Max. process+thread count | 2504 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4368/6/console |
   

[jira] [Work logged] (HDFS-16599) Fix typo in hadoop-hdfs-rbf module

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16599?focusedWorklogId=775652=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775652
 ]

ASF GitHub Bot logged work on HDFS-16599:
-

Author: ASF GitHub Bot
Created on: 28/May/22 08:36
Start Date: 28/May/22 08:36
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4368:
URL: https://github.com/apache/hadoop/pull/4368#issuecomment-1140212393

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 20 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 41s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 32s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  37m  5s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 137m 48s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4368/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4368 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9869eb6f61bf 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a368485b688293527caf309e6fbd56447535fbba |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4368/4/testReport/ |
   | Max. process+thread count | 2728 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4368/4/console |

[jira] [Work logged] (HDFS-16599) Fix typo in hadoop-hdfs-rbf module

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16599?focusedWorklogId=775651=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775651
 ]

ASF GitHub Bot logged work on HDFS-16599:
-

Author: ASF GitHub Bot
Created on: 28/May/22 08:32
Start Date: 28/May/22 08:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4368:
URL: https://github.com/apache/hadoop/pull/4368#issuecomment-1140210638

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 58s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 51s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 51s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 25s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  37m  4s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 136m 57s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4368/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4368 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux c62b144b84c6 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 54294a286d091638c28580c472e06273578b4c6c |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4368/3/testReport/ |
   | Max. process+thread count | 2730 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 

[jira] [Work logged] (HDFS-16599) Fix typo in hadoop-hdfs-rbf module

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16599?focusedWorklogId=775650=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775650
 ]

ASF GitHub Bot logged work on HDFS-16599:
-

Author: ASF GitHub Bot
Created on: 28/May/22 08:30
Start Date: 28/May/22 08:30
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4368:
URL: https://github.com/apache/hadoop/pull/4368#issuecomment-1140209976

   @ayushtkn Thanks a lot for your suggestion, I've done the fix.




Issue Time Tracking
---

Worklog Id: (was: 775650)
Time Spent: 1.5h  (was: 1h 20m)

> Fix typo in hadoop-hdfs-rbf module
> -
>
> Key: HDFS-16599
> URL: https://issues.apache.org/jira/browse/HDFS-16599
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16595) Slow peer metrics - add median, mad and upper latency limits

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16595?focusedWorklogId=775649=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775649
 ]

ASF GitHub Bot logged work on HDFS-16595:
-

Author: ASF GitHub Bot
Created on: 28/May/22 08:07
Start Date: 28/May/22 08:07
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on code in PR #4357:
URL: https://github.com/apache/hadoop/pull/4357#discussion_r884094166


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestSlowPeerTracker.java:
##
@@ -79,9 +79,12 @@ public void testEmptyReports() {
 
   @Test
   public void testReportsAreRetrieved() {
-tracker.addReport("node2", "node1", 1.2);
-tracker.addReport("node3", "node1", 2.1);
-tracker.addReport("node3", "node2", 1.22);
+OutlierMetrics outlierMetrics1 = new OutlierMetrics(0.0, 0.0, 0.0, 1.2);

Review Comment:
   Sorry I didn't get it. It's already `outlierMetrics1` :)





Issue Time Tracking
---

Worklog Id: (was: 775649)
Time Spent: 1h 10m  (was: 1h)

> Slow peer metrics - add median, mad and upper latency limits
> 
>
> Key: HDFS-16595
> URL: https://issues.apache.org/jira/browse/HDFS-16595
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Slow datanode metrics include the slow node and its reporting node details. With 
> HDFS-16582, we added the aggregate latency that is perceived by the reporting 
> nodes.
> In order to get more insight into how the outlier slow node's latencies 
> differ from those of the rest of the nodes, we should also expose the median, 
> median absolute deviation and the calculated upper latency limit details.
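
As background on the statistics named above, here is a minimal C sketch of
deriving a median, median absolute deviation (MAD), and an upper latency limit
from a window of reported latencies. It is illustrative only: the actual
implementation is Java, and the 3.0 multiplier and aggregation details here
are assumptions, not values taken from the patch.

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static int cmp_double(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* Sorts v in place and returns its median. */
static double median(double *v, size_t n) {
    qsort(v, n, sizeof *v, cmp_double);
    return (n % 2 == 1) ? v[n / 2] : (v[n / 2 - 1] + v[n / 2]) / 2.0;
}

int main(void) {
    double lat[] = {1.2, 2.1, 1.22, 1.5, 9.8};   /* reported latencies */
    size_t n = sizeof lat / sizeof lat[0];

    double med = median(lat, n);

    double dev[5];                               /* |latency - median| */
    for (size_t i = 0; i < n; i++) {
        dev[i] = fabs(lat[i] - med);
    }
    double mad = median(dev, n);

    double upper = med + 3.0 * mad;              /* assumed outlier bound */
    printf("median=%.2f mad=%.2f upperLimit=%.2f\n", med, mad, upper);
    return 0;
}
```

With the sample window above this prints `median=1.50 mad=0.30
upperLimit=2.40`, i.e. the 9.8s report would sit well above the upper limit.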



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16602) Use "defined" directive along with #if

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16602?focusedWorklogId=775645=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775645
 ]

ASF GitHub Bot logged work on HDFS-16602:
-

Author: ASF GitHub Bot
Created on: 28/May/22 07:43
Start Date: 28/May/22 07:43
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra opened a new pull request, #4371:
URL: https://github.com/apache/hadoop/pull/4371

   
   
   ### Description of PR
   The `#if` preprocessor directive expects a boolean expression. Thus, we need 
to use the `defined` directive as well to check if the macro has been defined.
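   
   For context, `defined` is strictly a preprocessor operator used inside
`#if`/`#elif` expressions. A minimal, hypothetical C snippet (not taken from
the patch) showing why the guard matters when a macro is defined without a
value:
   
   ```c
   #include <stdio.h>
   
   #define FEATURE_X   /* defined, but expands to nothing */
   
   /*
    * "#if FEATURE_X" would be a compile error here: after macro expansion
    * the directive is left with no expression to evaluate. "defined" tests
    * for the macro's existence instead, which always yields 0 or 1.
    */
   #if defined(FEATURE_X)
   static const char *msg = "FEATURE_X is defined";
   #else
   static const char *msg = "FEATURE_X is not defined";
   #endif
   
   int main(void) {
       puts(msg);   /* prints: FEATURE_X is defined */
       return 0;
   }
   ```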
   
   
   ### How was this patch tested?
   In progress.
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




Issue Time Tracking
---

Worklog Id: (was: 775645)
Remaining Estimate: 0h
Time Spent: 10m

> Use "defined" directive along with #if
> --
>
> Key: HDFS-16602
> URL: https://issues.apache.org/jira/browse/HDFS-16602
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The #if preprocessor directive expects a boolean expression. Thus, we need to 
> use the "defined" directive as well to check if the macro has been defined.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16602) Use "defined" directive along with #if

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16602:
--
Labels: libhdfscpp pull-request-available  (was: libhdfscpp)

> Use "defined" directive along with #if
> --
>
> Key: HDFS-16602
> URL: https://issues.apache.org/jira/browse/HDFS-16602
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The #if preprocessor directive expects a boolean expression. Thus, we need to 
> use the "defined" directive as well to check if the macro has been defined.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16602) Use "defined" directive along with #if

2022-05-28 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-16602:
-

 Summary: Use "defined" directive along with #if
 Key: HDFS-16602
 URL: https://issues.apache.org/jira/browse/HDFS-16602
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The #if preprocessor directive expects a boolean expression. Thus, we need to 
use the "defined" directive as well to check if the macro has been defined.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16599) Fix typo in hadoop-hdfs-rbf module

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16599?focusedWorklogId=775643=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775643
 ]

ASF GitHub Bot logged work on HDFS-16599:
-

Author: ASF GitHub Bot
Created on: 28/May/22 07:36
Start Date: 28/May/22 07:36
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on code in PR #4368:
URL: https://github.com/apache/hadoop/pull/4368#discussion_r884086228


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMBean.java:
##
@@ -345,7 +345,7 @@ public interface FederationMBean {
   long getHighestPriorityLowRedundancyECBlocks();
 
   /**
-   * Returns the number of paths to be processed by storage policy satisfier.
+   * Returns the number of paths to be processed by storage policy satisfies.

Review Comment:
   SPS abbreviates to Storage Policy Satisfier. So this is correct only



##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCPerformanceMonitor.java:
##
@@ -246,7 +246,7 @@ public void routerFailureLocked() {
 
 
   /**
-   * Get time between we receiving the operation and sending it to the 
Namenode.
+   * Get time between we're receiving the operation and sending it to the 
Namenode.

Review Comment:
   not required, it is ok only



##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/NamenodePriorityComparator.java:
##
@@ -60,7 +60,7 @@ public int compare(FederationNamenodeContext o1,
*/
   private int compareModDates(FederationNamenodeContext o1,
   FederationNamenodeContext o2) {
-// Reverse sort, lowest position is highest priority.
+// Reverse sort, the lowest position is the highest priority.

Review Comment:
   avoid, it is ok only



##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java:
##
@@ -325,14 +325,14 @@ protected static boolean isUnavailableSubclusterException(
   /**
* Check if a remote method can be retried in other subclusters when it
* failed in the original destination. This method returns the list of
-   * locations to retry in. This is used by fault tolerant mount points.
+   * locations to retry in. This is used by fault-tolerant mount points.

Review Comment:
   avoid



##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/StateStoreDriver.java:
##
@@ -32,8 +32,8 @@
 
 /**
  * Driver class for an implementation of a {@link StateStoreService}
- * provider. Driver implementations will extend this class and implement some 
of
- * the default methods.
+ * provider. Driver implementations will extend this class and implement some
+ * default methods.

Review Comment:
   correct only, revert



##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java:
##
@@ -1830,8 +1830,8 @@ public HAServiceProtocol.HAServiceState 
getHAServiceState() {
   }
 
   /**
-   * Determines combinations of eligible src/dst locations for a rename. A
-   * rename cannot change the namespace. Renames are only allowed if there is 
an
+   * Determines combinations of eligible src/dst locations for a renamed. A
+   * renamed cannot change the namespace. Renames are only allowed if there is 
an

Review Comment:
   revert. It was better previously 



##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterFaultTolerant.java:
##
@@ -178,7 +178,7 @@ public void cleanup() throws Exception {
   }
 
   /**
-   * Update a mount table entry to be fault tolerant.
+   * Update a mount table entry to be fault-tolerant.

Review Comment:
   no need of -, it is ok, revert



##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java:
##
@@ -924,7 +924,7 @@ public void waitRouterRegistrationQuorum(RouterContext 
router,
 
   /**
* Wait for name spaces to be active.
-   * @throws Exception If we cannot check the status or we timeout.
+   * @throws Exception If we cannot check the status or we time out.

Review Comment:
   timeout is well accepted, revert



##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTableCacheRefresh.java:
##
@@ -308,9 +308,9 @@ public void run() {
 TimeUnit.SECONDS);
 mountTableRefresherService.init(config);
 // One router is not responding for 1 minute, still refresh should
-// finished in 5 second as cache update timeout is set 5 second.
+// be finished in 5 second as cache update timeout 

[jira] [Work logged] (HDFS-16463) Make dirent cross platform compatible

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16463?focusedWorklogId=775642=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775642
 ]

ASF GitHub Bot logged work on HDFS-16463:
-

Author: ASF GitHub Bot
Created on: 28/May/22 07:32
Start Date: 28/May/22 07:32
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra opened a new pull request, #4370:
URL: https://github.com/apache/hadoop/pull/4370

   
   
   ### Description of PR
   
[jnihelper.c](https://github.com/apache/hadoop/blob/1fed18bb2d8ac3dbaecc3feddded30bed918d556/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c#L28)
 in HDFS native client uses dirent.h. This header file isn't available on 
Windows. Thus, we need to replace this with a cross platform compatible 
implementation for dirent.
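   
   A hypothetical C sketch (not the patch itself) of one way to keep the
dirent dependency behind a portability guard so that a Windows port can
supply its own directory walker behind the same signature:
   
   ```c
   #include <stdio.h>
   
   #ifdef _WIN32
   /* No <dirent.h> on Windows; a real port would wrap FindFirstFile /
    * FindNextFile from <windows.h> behind this same function. */
   static int list_directory(const char *path) {
       (void)path;
       return -1;   /* Windows branch elided in this sketch */
   }
   #else
   #include <dirent.h>
   
   static int list_directory(const char *path) {
       DIR *dir = opendir(path);
       if (dir == NULL) {
           return -1;
       }
       struct dirent *entry;
       while ((entry = readdir(dir)) != NULL) {
           printf("%s\n", entry->d_name);   /* one entry name per line */
       }
       closedir(dir);
       return 0;
   }
   #endif
   
   int main(void) {
       return list_directory(".") == 0 ? 0 : 1;
   }
   ```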
   
   
   ### How was this patch tested?
   In progress.
   
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




Issue Time Tracking
---

Worklog Id: (was: 775642)
Remaining Estimate: 0h
Time Spent: 10m

> Make dirent cross platform compatible
> -
>
> Key: HDFS-16463
> URL: https://issues.apache.org/jira/browse/HDFS-16463
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [jnihelper.c|https://github.com/apache/hadoop/blob/1fed18bb2d8ac3dbaecc3feddded30bed918d556/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c#L28]
>  in HDFS native client uses *dirent.h*. This header file isn't available on 
> Windows. Thus, we need to replace this with a cross platform compatible 
> implementation for dirent.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16463) Make dirent cross platform compatible

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16463:
--
Labels: libhdfscpp pull-request-available  (was: libhdfscpp)

> Make dirent cross platform compatible
> -
>
> Key: HDFS-16463
> URL: https://issues.apache.org/jira/browse/HDFS-16463
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [jnihelper.c|https://github.com/apache/hadoop/blob/1fed18bb2d8ac3dbaecc3feddded30bed918d556/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c#L28]
>  in HDFS native client uses *dirent.h*. This header file isn't available on 
> Windows. Thus, we need to replace this with a cross platform compatible 
> implementation for dirent.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16463) Make dirent cross platform compatible

2022-05-28 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra updated HDFS-16463:
--
Summary: Make dirent cross platform compatible  (was: Make dirent.h cross 
platform compatible)

> Make dirent cross platform compatible
> -
>
> Key: HDFS-16463
> URL: https://issues.apache.org/jira/browse/HDFS-16463
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp
>
> [jnihelper.c|https://github.com/apache/hadoop/blob/1fed18bb2d8ac3dbaecc3feddded30bed918d556/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c#L28]
>  in HDFS native client uses *dirent.h*. This header file isn't available on 
> Windows. Thus, we need to replace this with a cross platform compatible 
> implementation for dirent.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16598) All datanodes [DatanodeInfoWithStorage[127.0.0.1:57448,DS-1b5f7e33-a2bf-4edc-9122-a74c995a99f5,DISK]] are bad. Aborting...

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16598?focusedWorklogId=775641=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775641
 ]

ASF GitHub Bot logged work on HDFS-16598:
-

Author: ASF GitHub Bot
Created on: 28/May/22 07:28
Start Date: 28/May/22 07:28
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4366:
URL: https://github.com/apache/hadoop/pull/4366#issuecomment-1140194556

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 44s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 42s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 59s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 12s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 246m 21s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 11s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 357m 51s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4366/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4366 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 7d5221a52297 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / bb5d4d05683b21249aa6372f1d578e3d2fc49c31 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4366/1/testReport/ |
   | Max. process+thread count | 3217 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Work logged] (HDFS-16599) Fix typo in hadoop-hdfs-rbf module

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16599?focusedWorklogId=775640=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775640
 ]

ASF GitHub Bot logged work on HDFS-16599:
-

Author: ASF GitHub Bot
Created on: 28/May/22 07:17
Start Date: 28/May/22 07:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4368:
URL: https://github.com/apache/hadoop/pull/4368#issuecomment-1140192710

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 54s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 41s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  39m 40s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 150m  4s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4368/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4368 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 0725d393f282 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 170d9377f3d47f026c54ba66d1c8836fddb953d0 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4368/2/testReport/ |
   | Max. process+thread count | 2233 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 

[jira] [Work logged] (HDFS-16599) Fix typo in hadoop-hdfs-rbf module

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16599?focusedWorklogId=775638&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775638
 ]

ASF GitHub Bot logged work on HDFS-16599:
-

Author: ASF GitHub Bot
Created on: 28/May/22 06:28
Start Date: 28/May/22 06:28
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4368:
URL: https://github.com/apache/hadoop/pull/4368#issuecomment-1140185740

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 41s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m  3s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  0s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  3s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 44s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  21m 46s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 121m 53s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4368/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4368 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9d86aeb7392f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 262035b9e8469cd603c72bc1174961a645de5ff9 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4368/1/testReport/ |
   | Max. process+thread count | 2402 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 

[jira] [Updated] (HDFS-16599) Fix typo in hadoop-hdfs-rbf module

2022-05-28 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16599:
-
Summary: Fix typo in hadoop-hdfs-rbf module  (was: Fix typo in 
RouterRpcClient)

> Fix typo in hadoop-hdfs-rbf module
> -
>
> Key: HDFS-16599
> URL: https://issues.apache.org/jira/browse/HDFS-16599
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-16599) Fix typo in RouterRpcClient

2022-05-28 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16599 started by fanshilun.

> Fix typo in RouterRpcClient
> ---
>
> Key: HDFS-16599
> URL: https://issues.apache.org/jira/browse/HDFS-16599
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16599) Fix typo in RouterRpcClient

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16599?focusedWorklogId=775637&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775637
 ]

ASF GitHub Bot logged work on HDFS-16599:
-

Author: ASF GitHub Bot
Created on: 28/May/22 06:22
Start Date: 28/May/22 06:22
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on code in PR #4368:
URL: https://github.com/apache/hadoop/pull/4368#discussion_r884080726


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java:
##
@@ -875,7 +875,7 @@ public <T> T invokeSingle(final String nsId, RemoteMethod method,
* @param method The remote method and parameters to invoke.
* @param clazz Class for the return type.
* @return The result of invoking the method.
-   * @throws IOException If the invoke generated an error.
+   * @throws IOException If to invoke generated an error.

Review Comment:
   OK, I will fix it.
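   For context, the hunk above is the javadoc of invokeSingle in 
   RouterRpcClient, and the patch's replacement wording ("If to invoke 
   generated an error") is itself ungrammatical. A corrected tag might read 
   as follows (a sketch only; the actual wording is whatever the follow-up 
   commit settles on):

       /**
        * @param method The remote method and parameters to invoke.
        * @param clazz Class for the return type.
        * @return The result of invoking the method.
        * @throws IOException If invoking the method resulted in an error.
        */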





Issue Time Tracking
---

Worklog Id: (was: 775637)
Time Spent: 50m  (was: 40m)

> Fix typo in RouterRpcClient
> ---
>
> Key: HDFS-16599
> URL: https://issues.apache.org/jira/browse/HDFS-16599
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16599) Fix typo in RouterRpcClient

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16599?focusedWorklogId=775635&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775635
 ]

ASF GitHub Bot logged work on HDFS-16599:
-

Author: ASF GitHub Bot
Created on: 28/May/22 06:22
Start Date: 28/May/22 06:22
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on code in PR #4368:
URL: https://github.com/apache/hadoop/pull/4368#discussion_r884080701


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java:
##
@@ -624,7 +624,7 @@ private void addClientIpToCallerContext() {
* @param nsId Identifier for the namespace
* @param retryCount Current retry times
* @param method Method to invoke
-   * @param obj Target object for the method
+   * @param obj Target Object for the method

Review Comment:
   OK, I will fix it.
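   For context, this hunk in RouterRpcClient needlessly capitalizes 
   "object" in a @param tag; the fix presumably restores the original 
   lowercase form (a sketch, not the actual commit):

       * @param obj Target object for the method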





Issue Time Tracking
---

Worklog Id: (was: 775635)
Time Spent: 40m  (was: 0.5h)

> Fix typo in RouterRpcClient
> ---
>
> Key: HDFS-16599
> URL: https://issues.apache.org/jira/browse/HDFS-16599
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16557) BootstrapStandby failed because of checking gap for inprogress EditLogInputStream

2022-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16557?focusedWorklogId=775636&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-775636
 ]

ASF GitHub Bot logged work on HDFS-16557:
-

Author: ASF GitHub Bot
Created on: 28/May/22 06:22
Start Date: 28/May/22 06:22
Worklog Time Spent: 10m 
  Work Description: tomscut commented on PR #4219:
URL: https://github.com/apache/hadoop/pull/4219#issuecomment-1140184555

   Hi @ayushtkn, could you please also take a look at this? Thanks.




Issue Time Tracking
---

Worklog Id: (was: 775636)
Time Spent: 2.5h  (was: 2h 20m)

> BootstrapStandby failed because of checking gap for inprogress 
> EditLogInputStream
> -
>
> Key: HDFS-16557
> URL: https://issues.apache.org/jira/browse/HDFS-16557
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-04-22-17-17-14-577.png, 
> image-2022-04-22-17-17-14-618.png, image-2022-04-22-17-17-23-113.png, 
> image-2022-04-22-17-17-32-487.png
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> The lastTxId of an in-progress EditLogInputStream isn't necessarily 
> HdfsServerConstants.INVALID_TXID. We can determine its status directly by 
> EditLogInputStream#isInProgress.
> We introduced [SBN READ] and set dfs.ha.tail-edits.in-progress=true. Then, 
> during bootstrapStandby, the in-progress EditLogInputStream is misjudged, 
> resulting in a gap check failure, which causes bootstrapStandby to fail.
> hdfs namenode -bootstrapStandby
> (See the attached screenshots image-2022-04-22-17-17-32-487.png and 
> image-2022-04-22-17-17-14-577.png for the failing command output.)
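A minimal sketch of the check the description argues for, assuming the 
standard EditLogInputStream API (isInProgress(), getLastTxId()) and 
HdfsServerConstants.INVALID_TXID; elis is a hypothetical EditLogInputStream 
reference, and the surrounding gap-check logic in bootstrapStandby is 
omitted:

    // Fragile: with dfs.ha.tail-edits.in-progress=true, an in-progress
    // stream can report a real lastTxId rather than INVALID_TXID, so
    // this test misclassifies the stream and the gap check fails:
    //   boolean inProgress =
    //       elis.getLastTxId() == HdfsServerConstants.INVALID_TXID;

    // Robust: ask the stream for its status directly.
    boolean inProgress = elis.isInProgress();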



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org