[jira] [Commented] (HADOOP-18832) Upgrade aws-java-sdk to 1.12.499+

2023-07-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17749045#comment-17749045
 ] 

ASF GitHub Bot commented on HADOOP-18832:
-

virajjasani commented on PR #5908:
URL: https://github.com/apache/hadoop/pull/5908#issuecomment-1657726850

   With encryption enabled:
   
   ```
   $ mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale -Dprefetch
   
   [INFO] -

> Upgrade aws-java-sdk to 1.12.499+
> -
>
> Key: HADOOP-18832
> URL: https://issues.apache.org/jira/browse/HADOOP-18832
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> AWS SDK versions < 1.12.499 use a vulnerable version of netty and hence 
> show up in security CVE scans (CVE-2023-34462). The safe version of netty 
> is 4.1.94.Final, which is used by aws-java-sdk:1.12.499+.






[GitHub] [hadoop] virajjasani commented on pull request #5908: HADOOP-18832. Upgrade aws-java-sdk to 1.12.499

2023-07-30 Thread via GitHub


virajjasani commented on PR #5908:
URL: https://github.com/apache/hadoop/pull/5908#issuecomment-1657726850

   With encryption enabled:
   
   ```
   $ mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale -Dprefetch
   
   [INFO] --- maven-failsafe-plugin:3.0.0-M1:integration-test 
(default-integration-test) @ hadoop-aws ---
   [INFO] 
   [INFO] ---
   [INFO]  T E S T S
   [INFO] ---
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 1170, Failures: 0, Errors: 0, Skipped: 143
   
   
   [INFO] --- maven-failsafe-plugin:3.0.0-M1:integration-test 
(sequential-integration-tests) @ hadoop-aws ---
   [INFO] 
   [INFO] ---
   [INFO]  T E S T S
   [INFO] ---
   
   [INFO] Results:
   [INFO] 
   [ERROR] Errors: 
   [ERROR]   
ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testRecursiveRootListing:266
 » TestTimedOut
   [INFO] 
   [ERROR] Tests run: 135, Failures: 0, Errors: 1, Skipped: 10
   
   
   [INFO] 

   [INFO] BUILD SUCCESS
   [INFO] 

   [INFO] Total time:  33:28 min
   [INFO] Finished at: 2023-07-30T16:57:05-07:00
   [INFO] 

   
   ```





[GitHub] [hadoop] Tre2878 commented on a diff in pull request #5855: HDFS-17093. In the case of all datanodes sending FBR when the namenode restarts (large clusters), there is an issue with incomplete

2023-07-30 Thread via GitHub


Tre2878 commented on code in PR #5855:
URL: https://github.com/apache/hadoop/pull/5855#discussion_r1278828777


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:
##
@@ -2904,7 +2908,8 @@ public boolean processReport(final DatanodeID nodeID,
   }
   if (namesystem.isInStartupSafeMode()
   && !StorageType.PROVIDED.equals(storageInfo.getStorageType())
-  && storageInfo.getBlockReportCount() > 0) {
+  && storageInfo.getBlockReportCount() > 0
+  && totalReportNum == currentReportNum) {

Review Comment:
   @zhangshuyan0 Thank you for your reply. This change can achieve the same 
effect, but I think node.hasStaleStorages() is also a datanode-level operation 
that should likewise only be called on the last disk; logically and functionally, 
though, it is not that different. Let's hear other people's opinions. 
@Hexiaoqiao, what do you think?
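
   To make the idea above concrete, here is a small, self-contained Java sketch 
(hypothetical code, not from Hadoop; all names are illustrative) of deferring 
datanode-level work, such as the staleness check, until the last per-storage 
report of a full block report has been processed, mirroring the 
`totalReportNum == currentReportNum` guard in the patch:

   ```
   import java.util.ArrayList;
   import java.util.List;

   public class LastStorageReportGate {

     /** Processes the i-th storage report (1-based) of totalReportNum for one datanode. */
     static boolean processStorageReport(String storageId, int currentReportNum,
         int totalReportNum) {
       System.out.println("processed storage " + storageId);
       if (currentReportNum == totalReportNum) {
         // Only after the last storage of this full block report is it safe to
         // run datanode-level checks such as a staleness check.
         System.out.println("last storage reached: running datanode-level checks");
         return true;
       }
       return false;
     }

     public static void main(String[] args) {
       List<String> storages = new ArrayList<>();
       storages.add("DS-1");
       storages.add("DS-2");
       storages.add("DS-3");
       for (int i = 0; i < storages.size(); i++) {
         processStorageReport(storages.get(i), i + 1, storages.size());
       }
     }
   }
   ```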









[jira] [Commented] (HADOOP-18832) Upgrade aws-java-sdk to 1.12.499+

2023-07-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17749043#comment-17749043
 ] 

ASF GitHub Bot commented on HADOOP-18832:
-

virajjasani commented on PR #5908:
URL: https://github.com/apache/hadoop/pull/5908#issuecomment-1657720539

   us-west-2
   
   ```
   $ mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale -Dprefetch
   
   [INFO] -

> Upgrade aws-java-sdk to 1.12.499+
> -
>
> Key: HADOOP-18832
> URL: https://issues.apache.org/jira/browse/HADOOP-18832
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> AWS SDK versions < 1.12.499 use a vulnerable version of netty and hence 
> show up in security CVE scans (CVE-2023-34462). The safe version of netty 
> is 4.1.94.Final, which is used by aws-java-sdk:1.12.499+.






[GitHub] [hadoop] virajjasani commented on pull request #5908: HADOOP-18832. Upgrade aws-java-sdk to 1.12.499

2023-07-30 Thread via GitHub


virajjasani commented on PR #5908:
URL: https://github.com/apache/hadoop/pull/5908#issuecomment-1657720539

   us-west-2
   
   ```
   $ mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale -Dprefetch
   
   [INFO] --- maven-failsafe-plugin:3.0.0-M1:integration-test 
(default-integration-test) @ hadoop-aws ---
   [INFO] 
   [INFO] ---
   [INFO]  T E S T S
   [INFO] ---
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 1170, Failures: 0, Errors: 0, Skipped: 148
   
   
   [INFO] --- maven-failsafe-plugin:3.0.0-M1:integration-test 
(sequential-integration-tests) @ hadoop-aws ---
   [INFO] 
   [INFO] ---
   [INFO]  T E S T S
   [INFO] ---
   
   [INFO] Results:
   [INFO] 
   [ERROR] Errors: 
   [ERROR]   
ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testRecursiveRootListing:266
 » TestTimedOut
   [INFO] 
   [ERROR] Tests run: 135, Failures: 0, Errors: 1, Skipped: 10
   
   
   [INFO] 

   [INFO] BUILD SUCCESS
   [INFO] 

   [INFO] Total time:  29:35 min
   [INFO] Finished at: 2023-07-30T14:54:18-07:00
   [INFO] 

   
   ```





[jira] [Updated] (HADOOP-18832) Upgrade aws-java-sdk to 1.12.499+

2023-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18832:

Labels: pull-request-available  (was: )

> Upgrade aws-java-sdk to 1.12.499+
> -
>
> Key: HADOOP-18832
> URL: https://issues.apache.org/jira/browse/HADOOP-18832
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> AWS SDK versions < 1.12.499 use a vulnerable version of netty and hence 
> show up in security CVE scans (CVE-2023-34462). The safe version of netty 
> is 4.1.94.Final, which is used by aws-java-sdk:1.12.499+.






[jira] [Commented] (HADOOP-18832) Upgrade aws-java-sdk to 1.12.499+

2023-07-30 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17749042#comment-17749042
 ] 

ASF GitHub Bot commented on HADOOP-18832:
-

virajjasani opened a new pull request, #5908:
URL: https://github.com/apache/hadoop/pull/5908

   (no comment)




> Upgrade aws-java-sdk to 1.12.499+
> -
>
> Key: HADOOP-18832
> URL: https://issues.apache.org/jira/browse/HADOOP-18832
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>
> AWS SDK versions < 1.12.499 use a vulnerable version of netty and hence 
> show up in security CVE scans (CVE-2023-34462). The safe version of netty 
> is 4.1.94.Final, which is used by aws-java-sdk:1.12.499+.






[GitHub] [hadoop] virajjasani opened a new pull request, #5908: HADOOP-18832. Upgrade aws-java-sdk to 1.12.499

2023-07-30 Thread via GitHub


virajjasani opened a new pull request, #5908:
URL: https://github.com/apache/hadoop/pull/5908

   (no comment)





[GitHub] [hadoop] hadoop-yetus commented on pull request #5900: HDFS-17134. RBF: Fix duplicate results of getListing through Router.

2023-07-30 Thread via GitHub


hadoop-yetus commented on PR #5900:
URL: https://github.com/apache/hadoop/pull/5900#issuecomment-1657664433

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 27s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  34m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  22m 36s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 151m  8s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5900/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5900 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux c930b2114c43 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0e7a196e709f6d1272de04c9bf56b81a49c6e382 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5900/5/testReport/ |
   | Max. process+thread count | 2720 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5900/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] hadoop-yetus commented on pull request #5901: YARN-7402. BackPort [GPG] Fix potential connection leak in GPGUtils.

2023-07-30 Thread via GitHub


hadoop-yetus commented on PR #5901:
URL: https://github.com/apache/hadoop/pull/5901#issuecomment-1657660013

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m 37s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 23s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 20s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 12s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 36s |  |  
hadoop-yarn-server-globalpolicygenerator in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 136m 48s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5901/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5901 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 19be9cefd5c0 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 25facff80fbfd6006cab3d487fad03eae59dbe0c |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5901/2/testReport/ |
   | Max. process+thread count | 529 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator
 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5901/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


[GitHub] [hadoop] hadoop-yetus commented on pull request #5901: YARN-7402. BackPort [GPG] Fix potential connection leak in GPGUtils.

2023-07-30 Thread via GitHub


hadoop-yetus commented on PR #5901:
URL: https://github.com/apache/hadoop/pull/5901#issuecomment-1657645681

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 44s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 42s |  |  
hadoop-yarn-server-globalpolicygenerator in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 124m 35s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5901/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5901 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 6070db9fdfe9 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 25facff80fbfd6006cab3d487fad03eae59dbe0c |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5901/3/testReport/ |
   | Max. process+thread count | 665 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator
 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5901/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] haiyang1987 commented on pull request #5904: HDFS-17135. Update fsck -blockId to display excess state info of blocks

2023-07-30 Thread via GitHub


haiyang1987 commented on PR #5904:
URL: https://github.com/apache/hadoop/pull/5904#issuecomment-1657544481

   Thanks @Hexiaoqiao @slfan1989 @tomscut for helping me review and merge it!





[GitHub] [hadoop] slfan1989 commented on pull request #5903: YARN-3660. [Addendum] Fix GPG Pom.xml Typo.

2023-07-30 Thread via GitHub


slfan1989 commented on PR #5903:
URL: https://github.com/apache/hadoop/pull/5903#issuecomment-1657511587

   @ayushtkn Can you help review this pr? Thank you very much! 





[GitHub] [hadoop] zhangshuyan0 commented on a diff in pull request #5855: HDFS-17093. In the case of all datanodes sending FBR when the namenode restarts (large clusters), there is an issue with incom

2023-07-30 Thread via GitHub


zhangshuyan0 commented on code in PR #5855:
URL: https://github.com/apache/hadoop/pull/5855#discussion_r1278756125


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockReportLease.java:
##
@@ -269,4 +272,84 @@ private StorageBlockReport[] createReports(DatanodeStorage[] dnStorages,
 }
 return storageBlockReports;
   }
+
+  @Test

Review Comment:
   Need to add a timeout here.
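
   For example, a minimal JUnit 4 sketch (illustration only; the class name, 
test name, and timeout value are placeholders, not the patch's actual test):

   ```
   import org.junit.Test;

   public class TimeoutExample {
     // Placeholder test: JUnit 4 fails the test if it runs longer than the
     // given number of milliseconds.
     @Test(timeout = 300000)
     public void testWithTimeout() throws Exception {
       // test body goes here
     }
   }
   ```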



##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:
##
@@ -2904,7 +2908,8 @@ public boolean processReport(final DatanodeID nodeID,
   }
   if (namesystem.isInStartupSafeMode()
   && !StorageType.PROVIDED.equals(storageInfo.getStorageType())
-  && storageInfo.getBlockReportCount() > 0) {
+  && storageInfo.getBlockReportCount() > 0
+  && totalReportNum == currentReportNum) {

Review Comment:
   If a datanode reports twice during namenode safe mode, the second report will 
be almost completely processed, which may extend startup time. How about 
modifying the code like this? This also avoids changes to the method signature.
   ```
   if (namesystem.isInStartupSafeMode()
       && !StorageType.PROVIDED.equals(storageInfo.getStorageType())
       && storageInfo.getBlockReportCount() > 0) {
     blockLog.info("BLOCK* processReport 0x{} with lease ID 0x{}: "
         + "discarded non-initial block report from datanode {} storage {} "
         + " because namenode still in startup phase",
         strBlockReportId, fullBrLeaseId, nodeID, storageInfo.getStorageID());
     boolean needRemoveLease = true;
     for (DatanodeStorageInfo sInfo : node.getStorageInfos()) {
       if (sInfo.getBlockReportCount() == 0) {
         needRemoveLease = false;
       }
     }
     if (needRemoveLease) {
       blockReportLeaseManager.removeLease(node);
     }
     return !node.hasStaleStorages();
   }
   ```






[GitHub] [hadoop] zhangshuyan0 commented on pull request #5900: HDFS-17134. RBF: Fix duplicate results of getListing through Router.

2023-07-30 Thread via GitHub


zhangshuyan0 commented on PR #5900:
URL: https://github.com/apache/hadoop/pull/5900#issuecomment-1657439169

   The failed tests pass in my local environment.





[GitHub] [hadoop] zhiubok commented on pull request #5907: HDFS-17136. Fix annotation description and typo in BlockPlacementPolicyDefault Class

2023-07-30 Thread via GitHub


zhiubok commented on PR #5907:
URL: https://github.com/apache/hadoop/pull/5907#issuecomment-1657410140

   > The failed unit test is unrelated to this change.
   > 
   > Thanks @zhiubok for your contribution. Thanks @slfan1989 for your review.
   
   Thanks for the merge. The commit info is incorrect; would it be convenient to 
fix it? The author should be 'huangzhaobo' with no co-authors, which makes it 
easier for me to track my contributions.
   





[GitHub] [hadoop] slfan1989 commented on pull request #5896: YARN-11543: Fix checkstyle issues after YARN-11520.

2023-07-30 Thread via GitHub


slfan1989 commented on PR #5896:
URL: https://github.com/apache/hadoop/pull/5896#issuecomment-1657390072

   @brumi1024 This PR is ready to be merged to trunk, but I don't know whether 
merging it directly would affect your development plan, so please merge it 
yourself at the right time. Thanks for your contribution!





[GitHub] [hadoop] Tre2878 commented on pull request #5855: HDFS-17093. In the case of all datanodes sending FBR when the namenode restarts (large clusters), there is an issue with incomplete block rep

2023-07-30 Thread via GitHub


Tre2878 commented on PR #5855:
URL: https://github.com/apache/hadoop/pull/5855#issuecomment-1657373806

   @zhangshuyan0 Do the new unit tests account for the bug?





[GitHub] [hadoop] tomscut merged pull request #5907: HDFS-17136. Fix annotation description and typo in BlockPlacementPolicyDefault Class

2023-07-30 Thread via GitHub


tomscut merged PR #5907:
URL: https://github.com/apache/hadoop/pull/5907





[GitHub] [hadoop] tomscut commented on pull request #5907: HDFS-17136. Fix annotation description and typo in BlockPlacementPolicyDefault Class

2023-07-30 Thread via GitHub


tomscut commented on PR #5907:
URL: https://github.com/apache/hadoop/pull/5907#issuecomment-1657366732

   The failed unit test is unrelated to this change.
   
   Thanks @zhiubok for your contribution. Thanks @slfan1989 for your review.





[GitHub] [hadoop] tomscut commented on pull request #5904: HDFS-17135. Update fsck -blockId to display excess state info of blocks

2023-07-30 Thread via GitHub


tomscut commented on PR #5904:
URL: https://github.com/apache/hadoop/pull/5904#issuecomment-1657356379

   Thanks @haiyang1987 for your contribution. Thanks @Hexiaoqiao and @slfan1989 
for the review.





[GitHub] [hadoop] tomscut merged pull request #5904: HDFS-17135. Update fsck -blockId to display excess state info of blocks

2023-07-30 Thread via GitHub


tomscut merged PR #5904:
URL: https://github.com/apache/hadoop/pull/5904





[jira] [Commented] (HADOOP-18832) Upgrade aws-java-sdk to 1.12.499+

2023-07-30 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17748981#comment-17748981
 ] 

Viraj Jasani commented on HADOOP-18832:
---

ITestS3AFileContextStatistics#testStatistics is flaky:
{code:java}
[ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 3.983 s 
<<< FAILURE! - in 
org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextStatistics
[ERROR] 
testStatistics(org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextStatistics)
  Time elapsed: 1.776 s  <<< FAILURE!
java.lang.AssertionError: expected:<512> but was:<448>
    at org.junit.Assert.fail(Assert.java:89)
    at org.junit.Assert.failNotEquals(Assert.java:835)
    at org.junit.Assert.assertEquals(Assert.java:647)
    at org.junit.Assert.assertEquals(Assert.java:633)
    at 
org.apache.hadoop.fs.FCStatisticsBaseTest.testStatistics(FCStatisticsBaseTest.java:108)
 {code}
This only happened once; I am now unable to reproduce it locally.

> Upgrade aws-java-sdk to 1.12.499+
> -
>
> Key: HADOOP-18832
> URL: https://issues.apache.org/jira/browse/HADOOP-18832
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>
> AWS SDK versions < 1.12.499 use a vulnerable version of netty and hence 
> show up in security CVE scans (CVE-2023-34462). The safe version of netty 
> is 4.1.94.Final, which is used by aws-java-sdk:1.12.499+.






[jira] [Commented] (HADOOP-18832) Upgrade aws-java-sdk to 1.12.499+

2023-07-30 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17748980#comment-17748980
 ] 

Viraj Jasani commented on HADOOP-18832:
---

Testing in progress: Test results look good with -scale and -prefetch so far.

Now running some encryption tests (bucket with algo: SSE-KMS).

> Upgrade aws-java-sdk to 1.12.499+
> -
>
> Key: HADOOP-18832
> URL: https://issues.apache.org/jira/browse/HADOOP-18832
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>
> AWS SDK versions < 1.12.499 use a vulnerable version of netty and hence 
> show up in security CVE scans (CVE-2023-34462). The safe version of netty 
> is 4.1.94.Final, which is used by aws-java-sdk:1.12.499+.






[GitHub] [hadoop] slfan1989 commented on pull request #5890: YARN-11538. CS UI: queue filter do not work as expected when submitti…

2023-07-30 Thread via GitHub


slfan1989 commented on PR #5890:
URL: https://github.com/apache/hadoop/pull/5890#issuecomment-1657295820

   @yangjiandan Thanks for your contribution! Merged Into Trunk.





[GitHub] [hadoop] slfan1989 merged pull request #5890: YARN-11538. CS UI: queue filter do not work as expected when submitti…

2023-07-30 Thread via GitHub


slfan1989 merged PR #5890:
URL: https://github.com/apache/hadoop/pull/5890





[GitHub] [hadoop] slfan1989 commented on pull request #5904: HDFS-17135. Update fsck -blockId to display excess state info of blocks

2023-07-30 Thread via GitHub


slfan1989 commented on PR #5904:
URL: https://github.com/apache/hadoop/pull/5904#issuecomment-1657295614

   LGTM +1.





[GitHub] [hadoop] hadoop-yetus commented on pull request #5902: YARN-7708. BackPort [GPG] Load based policy generator.

2023-07-30 Thread via GitHub


hadoop-yetus commented on PR #5902:
URL: https://github.com/apache/hadoop/pull/5902#issuecomment-1657213519

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  1s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 45s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  38m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   8m 13s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   7m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   5m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |  15m 42s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   7m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   7m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 54s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5902/2/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn-project/hadoop-yarn: The patch generated 19 new + 164 unchanged 
- 0 fixed = 183 total (was 164)  |
   | +1 :green_heart: |  mvnsite  |   5m  7s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   5m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |  16m 20s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m 46s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 246m  9s |  |  hadoop-yarn in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 11s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 39s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 58s |  |  
hadoop-yarn-server-globalpolicygenerator in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 492m 43s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5902/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5902 |
   | Optional Tests | dupname asflicense codespell detsecrets xmllint compile 
javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle |
   | uname | Linux 97888681cfc5 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 801e9a9c231f73b31a057f273b01751d0d683178 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-g

[jira] [Resolved] (HADOOP-18817) Upgrade version of aws-java-sdk-bundle to 1.12.368 avoid verify error.

2023-07-30 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18817.
-
Resolution: Cannot Reproduce

> Upgrade version of aws-java-sdk-bundle to 1.12.368 avoid verify error. 
> ---
>
> Key: HADOOP-18817
> URL: https://issues.apache.org/jira/browse/HADOOP-18817
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.6
>Reporter: kuper
>Priority: Major
>  Labels: pull-request-available
>
> The compilation failed when I packaged with the Maven command
>  
> {code:java}
> mvn clean install -DskipTests -Dtar -Pdist -Pnative  {code}
>  
>  
> and it reported the following error:
>  
> {code:java}
> [WARNING]
> Dependency convergence error for 
> com.amazonaws:aws-java-sdk-simpleworkflow:1.12.367 paths to dependency are:
> +-org.apache.hadoop:hadoop-aws:3.3.6
>   +-com.amazonaws:aws-java-sdk-bundle:1.12.367
>     +-com.amazonaws:aws-java-sdk:1.12.367
>       +-com.amazonaws:aws-java-sdk-simpleworkflow:1.12.367
> and
> +-org.apache.hadoop:hadoop-aws:3.3.6
>   +-com.amazonaws:aws-java-sdk-bundle:1.12.367
>     +-com.amazonaws:aws-java-sdk:1.12.367
>       +-com.amazonaws:aws-java-sdk-swf-libraries:1.11.22
>         +-com.amazonaws:aws-java-sdk-simpleworkflow:1.11.22
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> Failed while enforcing releasability. See above detailed error message. {code}
>  
> com.amazonaws:aws-java-sdk-swf-libraries is not required
>  






[jira] [Commented] (HADOOP-18817) Upgrade version of aws-java-sdk-bundle to 1.12.368 avoid verify error.

2023-07-30 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17748943#comment-17748943
 ] 

Steve Loughran commented on HADOOP-18817:
-

I suspect that. Good to know it's been resolved. Every so often it's good to 
prune your Maven repository, even if the penalty is that the following week's 
builds have to download everything again.

> Upgrade version of aws-java-sdk-bundle to 1.12.368 avoid verify error. 
> ---
>
> Key: HADOOP-18817
> URL: https://issues.apache.org/jira/browse/HADOOP-18817
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.6
>Reporter: kuper
>Priority: Major
>  Labels: pull-request-available
>
> The compilation failed when I packaged with the Maven command
>  
> {code:java}
> mvn clean install -DskipTests -Dtar -Pdist -Pnative  {code}
>  
>  
> and it reported the following error:
>  
> {code:java}
> [WARNING]
> Dependency convergence error for 
> com.amazonaws:aws-java-sdk-simpleworkflow:1.12.367 paths to dependency are:
> +-org.apache.hadoop:hadoop-aws:3.3.6
>   +-com.amazonaws:aws-java-sdk-bundle:1.12.367
>     +-com.amazonaws:aws-java-sdk:1.12.367
>       +-com.amazonaws:aws-java-sdk-simpleworkflow:1.12.367
> and
> +-org.apache.hadoop:hadoop-aws:3.3.6
>   +-com.amazonaws:aws-java-sdk-bundle:1.12.367
>     +-com.amazonaws:aws-java-sdk:1.12.367
>       +-com.amazonaws:aws-java-sdk-swf-libraries:1.11.22
>         +-com.amazonaws:aws-java-sdk-simpleworkflow:1.11.22
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> Failed while enforcing releasability. See above detailed error message. {code}
>  
> com.amazonaws:aws-java-sdk-swf-libraries is not required
>  






[GitHub] [hadoop] zhangshuyan0 commented on pull request #5900: HDFS-17134. RBF: Fix duplicate results of getListing through Router.

2023-07-30 Thread via GitHub


zhangshuyan0 commented on PR #5900:
URL: https://github.com/apache/hadoop/pull/5900#issuecomment-1657180745

   @Hexiaoqiao Thanks for your review. The code style problems have been fixed. 
I'll look into the failed unit tests.





[GitHub] [hadoop] Hexiaoqiao commented on a diff in pull request #5900: HDFS-17134. RBF: Fix duplicate results of getListing through Router.

2023-07-30 Thread via GitHub


Hexiaoqiao commented on code in PR #5900:
URL: https://github.com/apache/hadoop/pull/5900#discussion_r1278571071


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java:
##
@@ -819,20 +821,34 @@ public void renewLease(String clientName, List<String> namespaces)
 }
   }
 
+  /**
+   * For {@link this#getListing(String,byte[],boolean)} to sort results.

Review Comment:
   This javadoc is not compliant with the rules. Try the following style:
   `For {@link #getListing(String,byte[],boolean) getListing} to sort results.`



##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java:
##
@@ -819,20 +821,34 @@ public void renewLease(String clientName, List<String> namespaces)
 }
   }
 
+  /**
+   * For {@link this#getListing(String,byte[],boolean)} to sort results.
+   */
+  private static class GetListingComparator
+  implements Comparator<byte[]>, Serializable {
+@Override
+public int compare(byte[] o1, byte[] o2) {
+  return DFSUtilClient.compareBytes(o1, o2);
+}
+  }
+
+  private static final GetListingComparator comparator =

Review Comment:
   Don't need `final` modifier.
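
   For background, a `byte[]` comparator like this is typically what lets listing 
entries merged from several subclusters stay sorted and de-duplicated by name. 
Here is a small, self-contained sketch of that idea (illustration only, not the 
actual Router code; a plain unsigned lexicographic compare stands in for 
DFSUtilClient.compareBytes):

   ```
   import java.nio.charset.StandardCharsets;
   import java.util.Comparator;
   import java.util.TreeMap;

   public class MergeListingSketch {

     // Stand-in for DFSUtilClient.compareBytes: unsigned lexicographic byte order.
     static final Comparator<byte[]> NAME_COMPARATOR = (a, b) -> {
       int n = Math.min(a.length, b.length);
       for (int i = 0; i < n; i++) {
         int cmp = (a[i] & 0xff) - (b[i] & 0xff);
         if (cmp != 0) {
           return cmp;
         }
       }
       return a.length - b.length;
     };

     public static void main(String[] args) {
       // Keying a TreeMap by entry name keeps results sorted and drops names
       // that appear in more than one subcluster's listing.
       TreeMap<byte[], String> merged = new TreeMap<>(NAME_COMPARATOR);
       String[][] subclusterListings = {
           {"dir1", "file-a"},   // e.g. from subcluster ns0
           {"dir1", "file-b"}    // e.g. from subcluster ns1 ("dir1" is duplicated)
       };
       for (String[] listing : subclusterListings) {
         for (String name : listing) {
           merged.put(name.getBytes(StandardCharsets.UTF_8), name);
         }
       }
       merged.values().forEach(System.out::println); // prints dir1, file-a, file-b
     }
   }
   ```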






[GitHub] [hadoop] haiyang1987 commented on pull request #5904: HDFS-17135. Update fsck -blockId to display excess state info of blocks

2023-07-30 Thread via GitHub


haiyang1987 commented on PR #5904:
URL: https://github.com/apache/hadoop/pull/5904#issuecomment-1657162074

   Hi @Hexiaoqiao @ayushtkn @tomscut @slfan1989, could you please help me review 
this minor change when you have free time? Thanks a lot~





[GitHub] [hadoop] hadoop-yetus commented on pull request #5900: HDFS-17134. RBF: Fix duplicate results of getListing through Router.

2023-07-30 Thread via GitHub


hadoop-yetus commented on PR #5900:
URL: https://github.com/apache/hadoop/pull/5900#issuecomment-1657148455

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5900/4/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 2 
unchanged - 0 fixed = 3 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 32s | 
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5900/4/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.txt)
 |  hadoop-hdfs-rbf in the patch failed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  34m 12s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  22m 43s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5900/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 150m 40s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterRPCMultipleDestinationMountTableResolver
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5900/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5900 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5acb949303d7 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 97c3eac894208f0e53266531001e75590b182e9d |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd

[GitHub] [hadoop] hadoop-yetus commented on pull request #5879: HDFS-17130. Blocks on IN_MAINTENANCE DNs should be sorted properly in LocatedBlocks.

2023-07-30 Thread via GitHub


hadoop-yetus commented on PR #5879:
URL: https://github.com/apache/hadoop/pull/5879#issuecomment-1657101842

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 57s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 58s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 52s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 26s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 189m 13s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 282m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5879/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5879 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9bafcc5c606e 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / df79c4ec7da571017caf3739890c7c25a1c08c16 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5879/11/testReport/ |
   | Max. process+thread count | 3922 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5879/11/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about 

[jira] [Updated] (HADOOP-18835) Hdfs client will easily to oom when enable hedged read

2023-07-30 Thread Smith Cruise (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Smith Cruise updated HADOOP-18835:
--
Description: 
In the same workload, when I disable hedged read, JVM heap is:

```bash

Heap Configuration:
   MinHeapFreeRatio         = 40
   MaxHeapFreeRatio         = 70
   MaxHeapSize              = 32178700288 (30688.0MB)
   NewSize                  = 1363144 (1.254223632812MB)
   MaxNewSize               = 19306381312 (18412.0MB)
   OldSize                  = 5452592 (5.169482421875MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 21807104 (20.796875MB)
   CompressedClassSpaceSize = 1073741824 (1024.0MB)
   MaxMetaspaceSize         = 17592186044415 MB
   G1HeapRegionSize         = 4194304 (4.0MB)
Heap Usage:
G1 Heap:
   regions  = 7672
   capacity = 32178700288 (30688.0MB)
   used     = 794118192 (757.3301239013672MB)
   free     = 31384582096 (29930.669876098633MB)
   2.467837994986207% used
G1 Young Generation:
Eden Space:
   regions  = 177
   capacity = 1732247552 (1652.0MB)
   used     = 742391808 (708.0MB)
   free     = 989855744 (944.0MB)
   42.857142857142854% used
Survivor Space:
   regions  = 6
   capacity = 25165824 (24.0MB)
   used     = 25165824 (24.0MB)
   free     = 0 (0.0MB)
   100.0% used
G1 Old Generation:
   regions  = 7
   capacity = 1035993088 (988.0MB)
   used     = 26560560 (25.330123901367188MB)
   free     = 1009432528 (962.6698760986328MB)
   2.56322810444% used

```

 

When I enable hedged read, it easily OOMs:

```bash

preadDirect: FSDataInputStream#read error:
OutOfMemoryError: Java heap spacejava.lang.OutOfMemoryError: Java heap space
preadDirect: FSDataInputStream#read error:
OutOfMemoryError: Java heap spacejava.lang.OutOfMemoryError: Java heap space
    at java.base/java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:61)
    at java.base/java.nio.ByteBuffer.allocate(ByteBuffer.java:348)
    at 
org.apache.hadoop.hdfs.DFSInputStream.hedgedFetchBlockByteRange(DFSInputStream.java:1292)
    at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1493)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1705)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:259)

```

 

```bash

Heap Configuration:
   MinHeapFreeRatio         = 40
   MaxHeapFreeRatio         = 70
   MaxHeapSize              = 32178700288 (30688.0MB)
   NewSize                  = 1363144 (1.254223632812MB)
   MaxNewSize               = 19306381312 (18412.0MB)
   OldSize                  = 5452592 (5.169482421875MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 21807104 (20.796875MB)
   CompressedClassSpaceSize = 1073741824 (1024.0MB)
   MaxMetaspaceSize         = 17592186044415 MB
   G1HeapRegionSize         = 4194304 (4.0MB)
Heap Usage:
G1 Heap:
   regions  = 7672
   capacity = 32178700288 (30688.0MB)
   used     = 14680397040 (14000.317611694336MB)
   free     = 17498303248 (16687.682388305664MB)
   45.62147292653264% used
G1 Young Generation:
Eden Space:
   regions  = 1
   capacity = 11991515136 (11436.0MB)
   used     = 4194304 (4.0MB)
   free     = 11987320832 (11432.0MB)
   0.03497726477789437% used
Survivor Space:
   regions  = 1
   capacity = 4194304 (4.0MB)
   used     = 4194304 (4.0MB)
   free     = 0 (0.0MB)
   100.0% used
G1 Old Generation:
   regions  = 3500
   capacity = 20182990848 (19248.0MB)
   used     = 14672008432 (13992.317611694336MB)
   free     = 5510982416 (5255.682388305664MB)
   72.69491693523658% used

```

 

Any idea about this?

Looking at the hedged read metrics, TotalHedgedReadOpsWin/TotalHedgedReadOps is 0, but TotalHedgedReadOpsInCurThread has a large value (177117).

  was:
In the same workload, when I disable hedged read, JVM heap is:

```bash

Heap Configuration:
   MinHeapFreeRatio         = 40
   MaxHeapFreeRatio         = 70
   MaxHeapSize              = 32178700288 (30688.0MB)
   NewSize                  = 1363144 (1.254223632812MB)
   MaxNewSize               = 19306381312 (18412.0MB)
   OldSize                  = 5452592 (5.169482421875MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 21807104 (20.796875MB)
   CompressedClassSpaceSize = 1073741824 (1024.0MB)
   MaxMetaspaceSize         = 17592186044415 MB
   G1HeapRegionSize         = 4194304 (4.0MB)
Heap Usage:
G1 Heap:
   regions  = 7672
   capacity = 32178700288 (30688.0MB)
   used     = 794118192 (757.3301239013672MB)
   free     = 31384582096 (29930.669876098633MB)
   2.467837994986207% used
G1 Young Generation:
Eden Space:
   regions  = 177
   capacity = 1732247552 (1652.0MB)
   used     = 742391808 (708.0MB)
   free     = 989855744 (944.0MB)
   42.857142857142854% used
Survivor Space:
   regions  = 6
   capacity = 25165824 (24.0MB)
   used     = 25165824 (24.0MB)
   free     = 0 (0.0MB)

[jira] [Updated] (HADOOP-18835) Hdfs client will easily to oom when enable hedged read

2023-07-30 Thread Smith Cruise (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Smith Cruise updated HADOOP-18835:
--
Description: 
In the same workload, when I disable hedged read, JVM heap is:

```bash

Heap Configuration:
   MinHeapFreeRatio         = 40
   MaxHeapFreeRatio         = 70
   MaxHeapSize              = 32178700288 (30688.0MB)
   NewSize                  = 1363144 (1.254223632812MB)
   MaxNewSize               = 19306381312 (18412.0MB)
   OldSize                  = 5452592 (5.169482421875MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 21807104 (20.796875MB)
   CompressedClassSpaceSize = 1073741824 (1024.0MB)
   MaxMetaspaceSize         = 17592186044415 MB
   G1HeapRegionSize         = 4194304 (4.0MB)
Heap Usage:
G1 Heap:
   regions  = 7672
   capacity = 32178700288 (30688.0MB)
   used     = 794118192 (757.3301239013672MB)
   free     = 31384582096 (29930.669876098633MB)
   2.467837994986207% used
G1 Young Generation:
Eden Space:
   regions  = 177
   capacity = 1732247552 (1652.0MB)
   used     = 742391808 (708.0MB)
   free     = 989855744 (944.0MB)
   42.857142857142854% used
Survivor Space:
   regions  = 6
   capacity = 25165824 (24.0MB)
   used     = 25165824 (24.0MB)
   free     = 0 (0.0MB)
   100.0% used
G1 Old Generation:
   regions  = 7
   capacity = 1035993088 (988.0MB)
   used     = 26560560 (25.330123901367188MB)
   free     = 1009432528 (962.6698760986328MB)
   2.56322810444% used

```

 

When I enable hedged read, it easily OOMs:

```bash

preadDirect: FSDataInputStream#read error:
OutOfMemoryError: Java heap spacejava.lang.OutOfMemoryError: Java heap space
preadDirect: FSDataInputStream#read error:
OutOfMemoryError: Java heap spacejava.lang.OutOfMemoryError: Java heap space
    at java.base/java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:61)
    at java.base/java.nio.ByteBuffer.allocate(ByteBuffer.java:348)
    at 
org.apache.hadoop.hdfs.DFSInputStream.hedgedFetchBlockByteRange(DFSInputStream.java:1292)
    at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1493)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1705)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:259)

```

 

```bash

Heap Configuration:
   MinHeapFreeRatio         = 40
   MaxHeapFreeRatio         = 70
   MaxHeapSize              = 32178700288 (30688.0MB)
   NewSize                  = 1363144 (1.254223632812MB)
   MaxNewSize               = 19306381312 (18412.0MB)
   OldSize                  = 5452592 (5.169482421875MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 21807104 (20.796875MB)
   CompressedClassSpaceSize = 1073741824 (1024.0MB)
   MaxMetaspaceSize         = 17592186044415 MB
   G1HeapRegionSize         = 4194304 (4.0MB)
Heap Usage:
G1 Heap:
   regions  = 7672
   capacity = 32178700288 (30688.0MB)
   used     = 14680397040 (14000.317611694336MB)
   free     = 17498303248 (16687.682388305664MB)
   45.62147292653264% used
G1 Young Generation:
Eden Space:
   regions  = 1
   capacity = 11991515136 (11436.0MB)
   used     = 4194304 (4.0MB)
   free     = 11987320832 (11432.0MB)
   0.03497726477789437% used
Survivor Space:
   regions  = 1
   capacity = 4194304 (4.0MB)
   used     = 4194304 (4.0MB)
   free     = 0 (0.0MB)
   100.0% used
G1 Old Generation:
   regions  = 3500
   capacity = 20182990848 (19248.0MB)
   used     = 14672008432 (13992.317611694336MB)
   free     = 5510982416 (5255.682388305664MB)
   72.69491693523658% used

```

 

Any idea about this?

Looking at the hedged read metrics, TotalHedgedReadOpsWin is 0, but the 

  was:
In the same workload, when I disable hedged read, JVM heap is:

```bash

Heap Configuration:
   MinHeapFreeRatio         = 40
   MaxHeapFreeRatio         = 70
   MaxHeapSize              = 32178700288 (30688.0MB)
   NewSize                  = 1363144 (1.254223632812MB)
   MaxNewSize               = 19306381312 (18412.0MB)
   OldSize                  = 5452592 (5.169482421875MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 21807104 (20.796875MB)
   CompressedClassSpaceSize = 1073741824 (1024.0MB)
   MaxMetaspaceSize         = 17592186044415 MB
   G1HeapRegionSize         = 4194304 (4.0MB)
Heap Usage:
G1 Heap:
   regions  = 7672
   capacity = 32178700288 (30688.0MB)
   used     = 794118192 (757.3301239013672MB)
   free     = 31384582096 (29930.669876098633MB)
   2.467837994986207% used
G1 Young Generation:
Eden Space:
   regions  = 177
   capacity = 1732247552 (1652.0MB)
   used     = 742391808 (708.0MB)
   free     = 989855744 (944.0MB)
   42.857142857142854% used
Survivor Space:
   regions  = 6
   capacity = 25165824 (24.0MB)
   used     = 25165824 (24.0MB)
   free     = 0 (0.0MB)
   100.0% used
G1 Old Generation:
   regions  = 7
   capacity = 1035993088 (9

[jira] [Updated] (HADOOP-18835) Hdfs client will easily to oom when enable hedged read

2023-07-30 Thread Smith Cruise (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Smith Cruise updated HADOOP-18835:
--
Environment: 
I configure hedged read about below config:

```xml
<property>
  <name>dfs.client.hedged.read.threadpool.size</name>
  <value>128</value>
</property>
<property>
  <name>dfs.client.hedged.read.threshold.millis</name>
  <value>2</value>
</property>
```

threshold is a really large value

  was:
I configure hedged read about below config:

```xml
<property>
  <name>dfs.client.hedged.read.threadpool.size</name>
  <value>128</value>
</property>
<property>
  <name>dfs.client.hedged.read.threshold.millis</name>
  <value>2000</value>
</property>
```


> Hdfs client will easily to oom when enable hedged read
> --
>
> Key: HADOOP-18835
> URL: https://issues.apache.org/jira/browse/HADOOP-18835
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.3.3
> Environment: I configure hedged read about below config:
> ```xml
> <property>
>   <name>dfs.client.hedged.read.threadpool.size</name>
>   <value>128</value>
> </property>
> <property>
>   <name>dfs.client.hedged.read.threshold.millis</name>
>   <value>2</value>
> </property>
> ```
> threshold is a really large value
>Reporter: Smith Cruise
>Priority: Major
>
> In the same workload, when I disable hedged read, JVM heap is:
> ```bash
> Heap Configuration:
>    MinHeapFreeRatio         = 40
>    MaxHeapFreeRatio         = 70
>    MaxHeapSize              = 32178700288 (30688.0MB)
>    NewSize                  = 1363144 (1.254223632812MB)
>    MaxNewSize               = 19306381312 (18412.0MB)
>    OldSize                  = 5452592 (5.169482421875MB)
>    NewRatio                 = 2
>    SurvivorRatio            = 8
>    MetaspaceSize            = 21807104 (20.796875MB)
>    CompressedClassSpaceSize = 1073741824 (1024.0MB)
>    MaxMetaspaceSize         = 17592186044415 MB
>    G1HeapRegionSize         = 4194304 (4.0MB)
> Heap Usage:
> G1 Heap:
>    regions  = 7672
>    capacity = 32178700288 (30688.0MB)
>    used     = 794118192 (757.3301239013672MB)
>    free     = 31384582096 (29930.669876098633MB)
>    2.467837994986207% used
> G1 Young Generation:
> Eden Space:
>    regions  = 177
>    capacity = 1732247552 (1652.0MB)
>    used     = 742391808 (708.0MB)
>    free     = 989855744 (944.0MB)
>    42.857142857142854% used
> Survivor Space:
>    regions  = 6
>    capacity = 25165824 (24.0MB)
>    used     = 25165824 (24.0MB)
>    free     = 0 (0.0MB)
>    100.0% used
> G1 Old Generation:
>    regions  = 7
>    capacity = 1035993088 (988.0MB)
>    used     = 26560560 (25.330123901367188MB)
>    free     = 1009432528 (962.6698760986328MB)
>    2.56322810444% used
> ```
>  
> When I enable hedged read, it easily OOMs:
> ```bash
> preadDirect: FSDataInputStream#read error:
> OutOfMemoryError: Java heap spacejava.lang.OutOfMemoryError: Java heap space
> preadDirect: FSDataInputStream#read error:
> OutOfMemoryError: Java heap spacejava.lang.OutOfMemoryError: Java heap space
>     at java.base/java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:61)
>     at java.base/java.nio.ByteBuffer.allocate(ByteBuffer.java:348)
>     at 
> org.apache.hadoop.hdfs.DFSInputStream.hedgedFetchBlockByteRange(DFSInputStream.java:1292)
>     at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1493)
>     at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1705)
>     at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:259)
> ```
>  
> ```bash
> Heap Configuration:
>    MinHeapFreeRatio         = 40
>    MaxHeapFreeRatio         = 70
>    MaxHeapSize              = 32178700288 (30688.0MB)
>    NewSize                  = 1363144 (1.254223632812MB)
>    MaxNewSize               = 19306381312 (18412.0MB)
>    OldSize                  = 5452592 (5.169482421875MB)
>    NewRatio                 = 2
>    SurvivorRatio            = 8
>    MetaspaceSize            = 21807104 (20.796875MB)
>    CompressedClassSpaceSize = 1073741824 (1024.0MB)
>    MaxMetaspaceSize         = 17592186044415 MB
>    G1HeapRegionSize         = 4194304 (4.0MB)
> Heap Usage:
> G1 Heap:
>    regions  = 7672
>    capacity = 32178700288 (30688.0MB)
>    used     = 14680397040 (14000.317611694336MB)
>    free     = 17498303248 (16687.682388305664MB)
>    45.62147292653264% used
> G1 Young Generation:
> Eden Space:
>    regions  = 1
>    capacity = 11991515136 (11436.0MB)
>    used     = 4194304 (4.0MB)
>    free     = 11987320832 (11432.0MB)
>    0.03497726477789437% used
> Survivor Space:
>    regions  = 1
>    capacity = 4194304 (4.0MB)
>    used     = 4194304 (4.0MB)
>    free     = 0 (0.0MB)
>    100.0% used
> G1 Old Generation:
>    regions  = 3500
>    capacity = 20182990848 (19248.0MB)
>    used     = 14672008432 (13992.317611694336MB)
>    free     = 5510982416 (5255.682388305664MB)
>    72.69491693523658% used
> ```
>  
> Any idea about this?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---

[jira] [Created] (HADOOP-18835) Hdfs client will easily to oom when enable hedged read

2023-07-30 Thread Smith Cruise (Jira)
Smith Cruise created HADOOP-18835:
-

 Summary: Hdfs client will easily to oom when enable hedged read
 Key: HADOOP-18835
 URL: https://issues.apache.org/jira/browse/HADOOP-18835
 Project: Hadoop Common
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 3.3.3
 Environment: I configure hedged read about below config:

```xml
<property>
  <name>dfs.client.hedged.read.threadpool.size</name>
  <value>128</value>
</property>
<property>
  <name>dfs.client.hedged.read.threshold.millis</name>
  <value>2000</value>
</property>
```
Reporter: Smith Cruise


In the same workload, when I disable hedged read, JVM heap is:

```bash

Heap Configuration:
   MinHeapFreeRatio         = 40
   MaxHeapFreeRatio         = 70
   MaxHeapSize              = 32178700288 (30688.0MB)
   NewSize                  = 1363144 (1.254223632812MB)
   MaxNewSize               = 19306381312 (18412.0MB)
   OldSize                  = 5452592 (5.169482421875MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 21807104 (20.796875MB)
   CompressedClassSpaceSize = 1073741824 (1024.0MB)
   MaxMetaspaceSize         = 17592186044415 MB
   G1HeapRegionSize         = 4194304 (4.0MB)
Heap Usage:
G1 Heap:
   regions  = 7672
   capacity = 32178700288 (30688.0MB)
   used     = 794118192 (757.3301239013672MB)
   free     = 31384582096 (29930.669876098633MB)
   2.467837994986207% used
G1 Young Generation:
Eden Space:
   regions  = 177
   capacity = 1732247552 (1652.0MB)
   used     = 742391808 (708.0MB)
   free     = 989855744 (944.0MB)
   42.857142857142854% used
Survivor Space:
   regions  = 6
   capacity = 25165824 (24.0MB)
   used     = 25165824 (24.0MB)
   free     = 0 (0.0MB)
   100.0% used
G1 Old Generation:
   regions  = 7
   capacity = 1035993088 (988.0MB)
   used     = 26560560 (25.330123901367188MB)
   free     = 1009432528 (962.6698760986328MB)
   2.56322810444% used

```

 

When I enable hedged read, it easily OOMs:

```bash

preadDirect: FSDataInputStream#read error:
OutOfMemoryError: Java heap spacejava.lang.OutOfMemoryError: Java heap space
preadDirect: FSDataInputStream#read error:
OutOfMemoryError: Java heap spacejava.lang.OutOfMemoryError: Java heap space
    at java.base/java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:61)
    at java.base/java.nio.ByteBuffer.allocate(ByteBuffer.java:348)
    at 
org.apache.hadoop.hdfs.DFSInputStream.hedgedFetchBlockByteRange(DFSInputStream.java:1292)
    at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1493)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1705)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:259)

```

 

```bash

Heap Configuration:
   MinHeapFreeRatio         = 40
   MaxHeapFreeRatio         = 70
   MaxHeapSize              = 32178700288 (30688.0MB)
   NewSize                  = 1363144 (1.254223632812MB)
   MaxNewSize               = 19306381312 (18412.0MB)
   OldSize                  = 5452592 (5.169482421875MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 21807104 (20.796875MB)
   CompressedClassSpaceSize = 1073741824 (1024.0MB)
   MaxMetaspaceSize         = 17592186044415 MB
   G1HeapRegionSize         = 4194304 (4.0MB)
Heap Usage:
G1 Heap:
   regions  = 7672
   capacity = 32178700288 (30688.0MB)
   used     = 14680397040 (14000.317611694336MB)
   free     = 17498303248 (16687.682388305664MB)
   45.62147292653264% used
G1 Young Generation:
Eden Space:
   regions  = 1
   capacity = 11991515136 (11436.0MB)
   used     = 4194304 (4.0MB)
   free     = 11987320832 (11432.0MB)
   0.03497726477789437% used
Survivor Space:
   regions  = 1
   capacity = 4194304 (4.0MB)
   used     = 4194304 (4.0MB)
   free     = 0 (0.0MB)
   100.0% used
G1 Old Generation:
   regions  = 3500
   capacity = 20182990848 (19248.0MB)
   used     = 14672008432 (13992.317611694336MB)
   free     = 5510982416 (5255.682388305664MB)
   72.69491693523658% used

```

 

Any idea about this?
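
To make the memory behaviour easier to reason about, a minimal sketch of the general hedged-read pattern follows. This is not the DFSInputStream implementation; the buffer size, replica names, timings, and threshold are made-up values. The point it illustrates is that each read attempt allocates its own heap ByteBuffer (matching the ByteBuffer.allocate frame in the stack trace above), so with a low threshold and slow primary replicas roughly twice the buffer memory can be live per in-flight read; and if the hedged thread pool cannot keep up, extra work ends up running in the calling thread, which would be consistent with the large TotalHedgedReadOpsInCurThread value reported here.

```java
import java.nio.ByteBuffer;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class HedgedReadSketch {

  static final int BUFFER_SIZE = 4 * 1024 * 1024; // hypothetical chunk size

  // Each attempt allocates its own buffer, like the allocate() call in the
  // stack trace; the losing attempt's buffer stays live until it finishes.
  static ByteBuffer readOnce(String replica) throws InterruptedException {
    ByteBuffer buf = ByteBuffer.allocate(BUFFER_SIZE);
    Thread.sleep("slow".equals(replica) ? 500 : 50); // simulated I/O latency
    return buf;
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    CompletionService<ByteBuffer> cs = new ExecutorCompletionService<>(pool);
    long thresholdMillis = 100; // analogue of dfs.client.hedged.read.threshold.millis

    cs.submit(() -> readOnce("slow"));              // primary replica
    Future<ByteBuffer> first = cs.poll(thresholdMillis, TimeUnit.MILLISECONDS);
    if (first == null) {
      cs.submit(() -> readOnce("fast"));            // hedged attempt
      first = cs.take();                            // whichever finishes first
    }
    System.out.println("read " + first.get().capacity() + " bytes");
    pool.shutdownNow();
  }
}
```

With many concurrent reads and a threshold low enough that the hedge fires almost every time, the second buffer allocated per read could account for much of the extra heap seen in the second dump above.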



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5900: HDFS-17134. RBF: Fix duplicate results of getListing through Router.

2023-07-30 Thread via GitHub


hadoop-yetus commented on PR #5900:
URL: https://github.com/apache/hadoop/pull/5900#issuecomment-1657077258

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 36s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 37s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 47s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 28s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5900/3/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 2 new + 2 
unchanged - 0 fixed = 4 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | -1 :x: |  spotbugs  |   1m 23s | 
[/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5900/3/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.html)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  38m 55s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  22m 37s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 155m 35s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs-rbf |
   |  |  
org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol$GetListingComparator
 implements Comparator but not Serializable  At 
RouterClientProtocol.java:Serializable  At RouterClientProtocol.java:[lines 
827-830] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5900/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5900 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 69251673e4e9 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4beed7e9b9d06d25df5d3be7ce89e860ce82c971 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0

[jira] [Created] (HADOOP-18834) Install strings utility for git bash on Windows

2023-07-30 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HADOOP-18834:
---

 Summary: Install strings utility for git bash on Windows
 Key: HADOOP-18834
 URL: https://issues.apache.org/jira/browse/HADOOP-18834
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 3.4.0
 Environment: Windows 10
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra
 Fix For: 3.4.0


We get the following error while building Hadoop on Windows 10 -

{code}
[2023-07-28T07:16:22.389Z] 

[2023-07-28T07:16:22.389Z] 

[2023-07-28T07:16:22.389Z]  Determining needed tests
[2023-07-28T07:16:22.389Z] 

[2023-07-28T07:16:22.389Z] 

[2023-07-28T07:16:22.389Z] 
[2023-07-28T07:16:22.389Z] 
[2023-07-28T07:16:22.389Z] (Depending upon input size and number of plug-ins, 
this may take a while)
[2023-07-28T07:20:59.610Z] /c/out/precommit/plugins.d/maven.sh: line 275: 
strings: command not found
{code}

We need to install the strings utility for git bash on Windows to fix this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18832) Upgrade aws-java-sdk to 1.12.499+

2023-07-30 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-18832:
--
Description: aws sdk versions < 1.12.499 uses a vulnerable version of netty 
and hence showing up in security CVE scans (CVE-2023-34462). The safe version 
for netty is 4.1.94.Final and this is used by aws-java-sdk:1.12.499+  (was: aws 
sdk versions < 1.12.499 uses a vulnerable version of netty and hence showing up 
in security CVE scans (CVE-2023-34462). The safe version for netty is 
4.1.94.Final and this is used by aws-java-adk:1.12.499+)

> Upgrade aws-java-sdk to 1.12.499+
> -
>
> Key: HADOOP-18832
> URL: https://issues.apache.org/jira/browse/HADOOP-18832
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>
> aws sdk versions < 1.12.499 uses a vulnerable version of netty and hence 
> showing up in security CVE scans (CVE-2023-34462). The safe version for netty 
> is 4.1.94.Final and this is used by aws-java-sdk:1.12.499+



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5907: HDFS-17136. Fix annotation description and typo in BlockPlacementPolicyDefault Class

2023-07-30 Thread via GitHub


hadoop-yetus commented on PR #5907:
URL: https://github.com/apache/hadoop/pull/5907#issuecomment-1657067230

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 29s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 23s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 55s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 59s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 59s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 186m 16s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5907/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 279m 48s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileChecksum |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5907/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5907 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux af8ba7c1cddb 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4c19bb87afbd8fedc15b9b513aa58851d0c142f0 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5907/1/testReport/ |
   | Max. process+thread count | 3381 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5907/1/console |
   | versions | git=2.25.1 maven=3.6.3 

[jira] [Updated] (HADOOP-18833) Install bats for building Hadoop on Windows

2023-07-30 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra updated HADOOP-18833:

Description: 
We get the following error while building Hadoop on Windows (logs attached -  
[^archive.zip] ) -

{code}
[INFO] --- maven-antrun-plugin:1.8:run (common-test-bats-driver) @ 
hadoop-common ---
[INFO] Executing tasks

main:
 [exec] 
 [exec] 
 [exec] ERROR: bats not installed. Skipping bash tests.
 [exec] ERROR: Please install bats as soon as possible.
 [exec] 
{code}

We need to install bats to fix this.

  was:
We get the following error while building Hadoop on Windows -

{code}
[INFO] --- maven-antrun-plugin:1.8:run (common-test-bats-driver) @ 
hadoop-common ---
[INFO] Executing tasks

main:
 [exec] 
 [exec] 
 [exec] ERROR: bats not installed. Skipping bash tests.
 [exec] ERROR: Please install bats as soon as possible.
 [exec] 
{code}

We need to install bats to fix this.


> Install bats for building Hadoop on Windows
> ---
>
> Key: HADOOP-18833
> URL: https://issues.apache.org/jira/browse/HADOOP-18833
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: archive.zip
>
>
> We get the following error while building Hadoop on Windows (logs attached -  
> [^archive.zip] ) -
> {code}
> [INFO] --- maven-antrun-plugin:1.8:run (common-test-bats-driver) @ 
> hadoop-common ---
> [INFO] Executing tasks
> main:
>  [exec] 
>  [exec] 
>  [exec] ERROR: bats not installed. Skipping bash tests.
>  [exec] ERROR: Please install bats as soon as possible.
>  [exec] 
> {code}
> We need to install bats to fix this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18833) Install bats for building Hadoop on Windows

2023-07-30 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra updated HADOOP-18833:

Attachment: archive.zip

> Install bats for building Hadoop on Windows
> ---
>
> Key: HADOOP-18833
> URL: https://issues.apache.org/jira/browse/HADOOP-18833
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: archive.zip
>
>
> We get the following error while building Hadoop on Windows -
> {code}
> [INFO] --- maven-antrun-plugin:1.8:run (common-test-bats-driver) @ 
> hadoop-common ---
> [INFO] Executing tasks
> main:
>  [exec] 
>  [exec] 
>  [exec] ERROR: bats not installed. Skipping bash tests.
>  [exec] ERROR: Please install bats as soon as possible.
>  [exec] 
> {code}
> We need to install bats to fix this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18833) Install bats for building Hadoop on Windows

2023-07-30 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HADOOP-18833:
---

 Summary: Install bats for building Hadoop on Windows
 Key: HADOOP-18833
 URL: https://issues.apache.org/jira/browse/HADOOP-18833
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 3.4.0
 Environment: Windows 10
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra
 Fix For: 3.4.0


We get the following error while building Hadoop on Windows -

{code}
[INFO] --- maven-antrun-plugin:1.8:run (common-test-bats-driver) @ 
hadoop-common ---
[INFO] Executing tasks

main:
 [exec] 
 [exec] 
 [exec] ERROR: bats not installed. Skipping bash tests.
 [exec] ERROR: Please install bats as soon as possible.
 [exec] 
{code}

We need to install bats to fix this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-18832) Upgrade aws-java-sdk to 1.12.499+

2023-07-30 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HADOOP-18832:
-

Assignee: Viraj Jasani

> Upgrade aws-java-sdk to 1.12.499+
> -
>
> Key: HADOOP-18832
> URL: https://issues.apache.org/jira/browse/HADOOP-18832
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>
> aws sdk versions < 1.12.499 uses a vulnerable version of netty and hence 
> showing up in security CVE scans (CVE-2023-34462). The safe version for netty 
> is 4.1.94.Final and this is used by aws-java-adk:1.12.499+



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18832) Upgrade aws-java-sdk to 1.12.499+

2023-07-30 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-18832:
-

 Summary: Upgrade aws-java-sdk to 1.12.499+
 Key: HADOOP-18832
 URL: https://issues.apache.org/jira/browse/HADOOP-18832
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Viraj Jasani


aws sdk versions < 1.12.499 uses a vulnerable version of netty and hence 
showing up in security CVE scans (CVE-2023-34462). The safe version for netty 
is 4.1.94.Final and this is used by aws-java-adk:1.12.499+



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org