Re: [PR] HDFS-17278. Fix order dependent flakiness in TestViewfsWithNfs3.java under hadoop-hdfs-nfs module [hadoop]
hadoop-yetus commented on PR #6329: URL: https://github.com/apache/hadoop/pull/6329#issuecomment-1844822834

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:----------|--------:|:--------|:--------|
| +0 :ok: | reexec | 0m 21s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 32m 23s | | trunk passed |
| +1 :green_heart: | compile | 0m 15s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 13s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 17s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 21s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 20s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 19s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 0m 34s | | trunk passed |
| +1 :green_heart: | shadedclient | 19m 16s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| -1 :x: | mvninstall | 0m 12s | [/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-nfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/3/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-nfs.txt) | hadoop-hdfs-nfs in the patch failed. |
| -1 :x: | compile | 0m 12s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-nfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/3/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-nfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs-nfs in the patch failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04. |
| -1 :x: | javac | 0m 12s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-nfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/3/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-nfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs-nfs in the patch failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04. |
| -1 :x: | compile | 0m 10s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-nfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/3/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-nfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) | hadoop-hdfs-nfs in the patch failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08. |
| -1 :x: | javac | 0m 10s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-nfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/3/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-nfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) | hadoop-hdfs-nfs in the patch failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08. |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 10s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-nfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/3/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-nfs.txt) | hadoop-hdfs-project/hadoop-hdfs-nfs: The patch generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) |
| -1 :x: | mvnsite | 0m 12s | [/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-nfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/3/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-nfs.txt) | hadoop-hdfs-nfs in the patch failed. |
| +1 :green_heart: | javadoc | 0m 13s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 12s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| -1 :x: | spotbugs | 0m 12s | [/patch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-nfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/3/artifact/out/patch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-nfs.txt) | hadoop-hdfs-nfs in the patch failed. |
Re: [PR] YARN-11621: Fix intermittently failing unit test: TestAMRMProxy.testAMRMProxyTokenRenewal [hadoop]
hadoop-yetus commented on PR #6330: URL: https://github.com/apache/hadoop/pull/6330#issuecomment-1844810695

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:----------|--------:|:--------|:--------|
| +0 :ok: | reexec | 4m 7s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| | | | | _ branch-3.3 Compile Tests _ |
| +1 :green_heart: | mvninstall | 33m 22s | | branch-3.3 passed |
| +1 :green_heart: | compile | 0m 23s | | branch-3.3 passed |
| +1 :green_heart: | checkstyle | 0m 20s | | branch-3.3 passed |
| +1 :green_heart: | mvnsite | 0m 26s | | branch-3.3 passed |
| +1 :green_heart: | javadoc | 0m 24s | | branch-3.3 passed |
| +1 :green_heart: | spotbugs | 0m 41s | | branch-3.3 passed |
| +1 :green_heart: | shadedclient | 21m 8s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 20s | | the patch passed |
| +1 :green_heart: | compile | 0m 17s | | the patch passed |
| +1 :green_heart: | javac | 0m 17s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 10s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 19s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 14s | | the patch passed |
| +1 :green_heart: | spotbugs | 0m 40s | | the patch passed |
| +1 :green_heart: | shadedclient | 20m 47s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| +1 :green_heart: | unit | 24m 50s | | hadoop-yarn-client in the patch passed. |
| +1 :green_heart: | asflicense | 0m 26s | | The patch does not generate ASF License warnings. |
| | | 111m 7s | | |

| Subsystem | Report/Notes |
|:----------|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6330/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6330 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux f2c1efcf5527 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-3.3 / 3c1b3cf26fb4750b02efe0115e426dc0c230e930 |
| Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~18.04-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6330/1/testReport/ |
| Max. process+thread count | 571 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6330/1/console |
| versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HADOOP-19004) S3A: Support Authentication through HttpSigner API
[ https://issues.apache.org/jira/browse/HADOOP-19004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17794055#comment-17794055 ]

ASF GitHub Bot commented on HADOOP-19004:
-----------------------------------------

ahmarsuhail commented on code in PR #6324: URL: https://github.com/apache/hadoop/pull/6324#discussion_r1418453963

## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:

```java
@@ -5751,6 +5753,10 @@ public StoreContext createStoreContext() {
         .build();
   }

+  public CreateSessionResponse createSessionInternal(CreateSessionRequest createSessionRequest) {
```

Review Comment: This is unused, consider removing. If it is for internal use only, move to S3AInternals? There's also no corresponding method for it in RequestFactory.

> S3A: Support Authentication through HttpSigner API
> --------------------------------------------------
>
>                 Key: HADOOP-19004
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19004
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Steve Loughran
>            Assignee: Harshit Gupta
>            Priority: Major
>              Labels: pull-request-available
>
> The latest AWS SDK changes how signing works, and for signing S3Express
> signatures the new {{software.amazon.awssdk.http.auth}} auth mechanism is
> needed

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
Re: [PR] HADOOP-19004. S3A: Support Authentication through HttpSigner API [hadoop]
ahmarsuhail commented on code in PR #6324: URL: https://github.com/apache/hadoop/pull/6324#discussion_r1418453963

## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:

```java
@@ -5751,6 +5753,10 @@ public StoreContext createStoreContext() {
         .build();
   }

+  public CreateSessionResponse createSessionInternal(CreateSessionRequest createSessionRequest) {
```

Review Comment: This is unused, consider removing. If it is for internal use only, move to S3AInternals? There's also no corresponding method for it in RequestFactory.

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
Re: [PR] HDFS-17278. Fix order dependent flakiness in TestViewfsWithNfs3.java under hadoop-hdfs-nfs module [hadoop]
yijut2 commented on PR #6329: URL: https://github.com/apache/hadoop/pull/6329#issuecomment-1844768197

> Thanks for fixing this bug!

Thanks for the quick response too!

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
Re: [PR] HDFS-17278. Fix order dependent flakiness in TestViewfsWithNfs3.java under hadoop-hdfs-nfs module [hadoop]
xinglin commented on code in PR #6329: URL: https://github.com/apache/hadoop/pull/6329#discussion_r1418445324

## hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestDFSClientCache.java:

```java
@@ -31,8 +31,14 @@
 import org.apache.hadoop.hdfs.nfs.conf.NfsConfiguration;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.junit.Test;
+import org.junit.AfterClass;
```

Review Comment: Please fix this as well. Otherwise, LGTM. thanks,

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
Re: [PR] HDFS-17278. Fix order dependent flakiness in TestViewfsWithNfs3.java under hadoop-hdfs-nfs module [hadoop]
yijut2 commented on code in PR #6329: URL: https://github.com/apache/hadoop/pull/6329#discussion_r1418443416

## hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestDFSClientCache.java:

```java
@@ -31,8 +31,14 @@
 import org.apache.hadoop.hdfs.nfs.conf.NfsConfiguration;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.junit.Test;
+import org.junit.AfterClass;

 public class TestDFSClientCache {
+  @AfterClass
```

Review Comment: Agreed, I think that would be better! Just updated the change, thank you.

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
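The fix discussed in this thread follows a common pattern for order-dependent test flakiness: a test class that mutates JVM-wide static state registers an `@AfterClass` hook that restores it, so later test classes start from a clean slate. A minimal plain-Java sketch of the pattern (illustrative names only; this is not the Hadoop test code, and the real fix uses JUnit's `@AfterClass` rather than an explicit call):

```java
/**
 * Sketch of the "reset static state after the test class" pattern.
 * `sharedConfig` stands in for JVM-global state such as static
 * UserGroupInformation configuration that one test class can leak
 * into the next, causing order-dependent failures.
 */
public class StaticStateResetSketch {
    static String sharedConfig = "default";   // global state shared across test classes

    static void runTestThatMutatesState() {
        sharedConfig = "test-override";       // a test changes the global state
    }

    // In JUnit this would be annotated @AfterClass; shown as a plain
    // static method here so the sketch runs without JUnit on the classpath.
    static void afterClassCleanup() {
        sharedConfig = "default";             // restore, so other test classes are unaffected
    }

    public static void main(String[] args) {
        runTestThatMutatesState();
        afterClassCleanup();
        System.out.println(sharedConfig);     // prints "default"
    }
}
```

Without the cleanup hook, whichever test class runs second observes `"test-override"`, which is exactly the kind of ordering dependence the PR removes.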
Re: [PR] HADOOP-18989. Use thread pool to improve the speed of creating control files in TestDFSIO [hadoop]
hadoop-yetus commented on PR #6294: URL: https://github.com/apache/hadoop/pull/6294#issuecomment-1844690261

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:----------|--------:|:--------|:--------|
| +0 :ok: | reexec | 0m 22s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 31m 43s | | trunk passed |
| +1 :green_heart: | compile | 0m 21s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 19s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 24s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 24s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 22s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 18s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 0m 36s | | trunk passed |
| +1 :green_heart: | shadedclient | 19m 11s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 16s | | the patch passed |
| +1 :green_heart: | compile | 0m 17s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 17s | | the patch passed |
| +1 :green_heart: | compile | 0m 13s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 0m 13s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 14s | [/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6294/10/artifact/out/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient: The patch generated 3 new + 42 unchanged - 0 fixed = 45 total (was 42) |
| +1 :green_heart: | mvnsite | 0m 19s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 12s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 12s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 0m 35s | | the patch passed |
| +1 :green_heart: | shadedclient | 19m 6s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| +1 :green_heart: | unit | 113m 51s | | hadoop-mapreduce-client-jobclient in the patch passed. |
| +1 :green_heart: | asflicense | 0m 21s | | The patch does not generate ASF License warnings. |
| | | 191m 46s | | |

| Subsystem | Report/Notes |
|:----------|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6294/10/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6294 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 5db7c3887b36 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 03de9a17755adfe52a1923aeb168251330e34a37 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6294/10/testReport/ |
| Max. process+thread count | 1250 (vs. ulimit of 5500) |
| modules | C: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient U: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient |
| Console output |
[jira] [Commented] (HADOOP-18989) Use thread pool to improve the speed of creating control files in TestDFSIO
[ https://issues.apache.org/jira/browse/HADOOP-18989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17794046#comment-17794046 ]

ASF GitHub Bot commented on HADOOP-18989:
-----------------------------------------

hadoop-yetus commented on PR #6294: URL: https://github.com/apache/hadoop/pull/6294#issuecomment-1844690261

:confetti_ball: **+1 overall**
Re: [PR] HADOOP-18989. Use thread pool to improve the speed of creating control files in TestDFSIO [hadoop]
hadoop-yetus commented on PR #6294: URL: https://github.com/apache/hadoop/pull/6294#issuecomment-1844679751

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:----------|--------:|:--------|:--------|
| +0 :ok: | reexec | 0m 21s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 31m 29s | | trunk passed |
| +1 :green_heart: | compile | 0m 22s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 18s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 24s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 25s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 24s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 18s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 0m 37s | | trunk passed |
| +1 :green_heart: | shadedclient | 19m 14s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 18s | | the patch passed |
| +1 :green_heart: | compile | 0m 16s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 16s | | the patch passed |
| +1 :green_heart: | compile | 0m 14s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 0m 14s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 16s | [/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6294/9/artifact/out/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient: The patch generated 3 new + 42 unchanged - 0 fixed = 45 total (was 42) |
| +1 :green_heart: | mvnsite | 0m 17s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 13s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 13s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 0m 35s | | the patch passed |
| +1 :green_heart: | shadedclient | 19m 14s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| -1 :x: | unit | 112m 10s | [/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6294/9/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt) | hadoop-mapreduce-client-jobclient in the patch passed. |
| +1 :green_heart: | asflicense | 0m 26s | | The patch does not generate ASF License warnings. |
| | | 190m 13s | | |

| Reason | Tests |
|:-------|:------|
| Failed junit tests | hadoop.fs.TestDFSIO |

| Subsystem | Report/Notes |
|:----------|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6294/9/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6294 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux fd9056ed12f2 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 3cae974a0a4226056fc43d4cc45c202309bd4762 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results |
[jira] [Commented] (HADOOP-18989) Use thread pool to improve the speed of creating control files in TestDFSIO
[ https://issues.apache.org/jira/browse/HADOOP-18989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17794041#comment-17794041 ]

ASF GitHub Bot commented on HADOOP-18989:
-----------------------------------------

hadoop-yetus commented on PR #6294: URL: https://github.com/apache/hadoop/pull/6294#issuecomment-1844679751

:broken_heart: **-1 overall**
[jira] [Updated] (HADOOP-18888) S3A. createS3AsyncClient() always enables multipart
[ https://issues.apache.org/jira/browse/HADOOP-18888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmar Suhail updated HADOOP-18888: -- Fix Version/s: 3.3.7-aws (was: 3.3.6-aws) > S3A. createS3AsyncClient() always enables multipart > --- > > Key: HADOOP-18888 > URL: https://issues.apache.org/jira/browse/HADOOP-18888 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.7-aws > > > DefaultS3ClientFactory.createS3AsyncClient() always creates clients with > multipart enabled; if it is disabled in s3a config it should be disabled here > and in the transfer manager -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
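For context, multipart uploads are toggled in S3A by a filesystem option; the issue is that the async client ignored it. A hedged sketch of the setting involved (property name taken from the S3A third-party-store documentation; treat it as an assumption, and check the docs for your release):

```xml
<!-- Illustrative: disable multipart uploads, e.g. for stores that do not
     support them. After this fix, createS3AsyncClient() and the transfer
     manager should honour this setting too. -->
<property>
  <name>fs.s3a.multipart.uploads.enabled</name>
  <value>false</value>
</property>
```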
[jira] [Updated] (HADOOP-18908) Improve s3a region handling, including determining from endpoint
[ https://issues.apache.org/jira/browse/HADOOP-18908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmar Suhail updated HADOOP-18908: -- Fix Version/s: 3.3.7-aws (was: 3.3.6-aws) > Improve s3a region handling, including determining from endpoint > > > Key: HADOOP-18908 > URL: https://issues.apache.org/jira/browse/HADOOP-18908 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: Ahmar Suhail >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.7-aws > > > S3A region logic improved for better inference and > to be compatible with previous releases > 1. If you are using an AWS S3 AccessPoint, its region is determined >from the ARN itself. > 2. If fs.s3a.endpoint.region is set and non-empty, it is used. > 3. If fs.s3a.endpoint is an s3.*.amazonaws.com url, >the region is determined by parsing the URL. >Note: vpce endpoints are not handled by this. > 4. If fs.s3a.endpoint.region==null, and none could be determined >from the endpoint, use us-east-2 as default. > 5. If fs.s3a.endpoint.region=="" then it is handed off to >the default AWS SDK resolution process. > Consult the AWS SDK documentation for the details on its resolution > process, knowing that it is complicated and may use environment variables, > entries in ~/.aws/config, IAM instance information within > EC2 deployments and possibly even JSON resources on the classpath. > Put differently: it is somewhat brittle across deployments. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
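The endpoint-parsing step (rule 3 above) can be sketched roughly as follows. This is a standalone illustration, not the actual S3A code: the class name, helper name, and regex are hypothetical, and, as the issue notes, vpce endpoints are deliberately left unresolved.

```java
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: derive a region from an s3.*.amazonaws.com endpoint.
public class RegionFromEndpoint {

  // Matches "s3.eu-west-2.amazonaws.com" and the legacy
  // "s3-eu-west-2.amazonaws.com" form. Anything else (vpce endpoints,
  // third-party stores, the global "s3.amazonaws.com") yields no region,
  // so the caller falls through to the next resolution rule.
  private static final Pattern S3_ENDPOINT =
      Pattern.compile("s3[.-]([a-z0-9-]+)\\.amazonaws\\.com");

  static Optional<String> parseRegion(String endpoint) {
    if (endpoint == null) {
      return Optional.empty();
    }
    Matcher m = S3_ENDPOINT.matcher(endpoint);
    return m.matches() ? Optional.of(m.group(1)) : Optional.empty();
  }

  public static void main(String[] args) {
    System.out.println(parseRegion("s3.eu-west-2.amazonaws.com").orElse("none"));
    System.out.println(parseRegion("s3.amazonaws.com").orElse("none"));
  }
}
```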
[jira] [Updated] (HADOOP-18930) S3A: make fs.s3a.create.performance an option you can set for the entire bucket
[ https://issues.apache.org/jira/browse/HADOOP-18930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmar Suhail updated HADOOP-18930: -- Fix Version/s: 3.3.7-aws (was: 3.3.6-aws) > S3A: make fs.s3a.create.performance an option you can set for the entire > bucket > --- > > Key: HADOOP-18930 > URL: https://issues.apache.org/jira/browse/HADOOP-18930 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Affects Versions: 3.3.9 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.7-aws > > > make the fs.s3a.create.performance option something you can set everywhere, > rather than just in an openFile() option or under a magic path. > this improves performance on apps like iceberg where filenames are generated > with UUIDs in them, so we know there are no overwrites -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
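The change above means the option can be set globally or per bucket. A hedged sketch of what that configuration might look like (the per-bucket `fs.s3a.bucket.<name>.` override form follows the documented S3A convention; the bucket name "mybucket" is illustrative):

```xml
<!-- Illustrative: enable performance-mode create for every bucket... -->
<property>
  <name>fs.s3a.create.performance</name>
  <value>true</value>
</property>
<!-- ...or only for one bucket, via S3A's per-bucket override convention. -->
<property>
  <name>fs.s3a.bucket.mybucket.create.performance</name>
  <value>true</value>
</property>
```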
[jira] [Updated] (HADOOP-18915) Tune/extend S3A http connection and thread pool settings
[ https://issues.apache.org/jira/browse/HADOOP-18915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmar Suhail updated HADOOP-18915: -- Fix Version/s: 3.3.7-aws (was: 3.3.6-aws) > Tune/extend S3A http connection and thread pool settings > > > Key: HADOOP-18915 > URL: https://issues.apache.org/jira/browse/HADOOP-18915 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Ahmar Suhail >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.7-aws > > > Increases existing pool sizes, as with server scale and vector > IO, larger pools are needed > fs.s3a.connection.maximum 200 > fs.s3a.threads.max 96 > Adds new configuration options for v2 sdk internal timeouts, > both with default of 60s: > fs.s3a.connection.acquisition.timeout > fs.s3a.connection.idle.time > All the pool/timeout options are covered in performance.md > Moves all timeout/duration options in the s3a FS to taking > temporal units (h, m, s, ms,...); retaining the previous default > unit (normally millisecond) > Adds a minimum duration for most of these, in order to recover from > deployments where a timeout has been set on the assumption the unit > was seconds, not millis. > Uses java.time.Duration throughout the codebase; > retaining the older numeric constants in > org.apache.hadoop.fs.s3a.Constants for backwards compatibility; > these are now deprecated. > Adds new class AWSApiCallTimeoutException to be raised on > sdk-related methods and also gateway timeouts. This is a subclass > of org.apache.hadoop.net.ConnectTimeoutException to support > existing retry logic. > + reverted default value of fs.s3a.create.performance to false; > inadvertently set to true during testing. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
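Pulled together, the pool sizes and new timeout options listed above might look like this in a site configuration. Property names and defaults are quoted from the issue text; the use of suffixed temporal units ("60s") is the new behaviour this change introduces, so this is a sketch rather than a tested config:

```xml
<!-- Larger pools for server scale and vector IO (new defaults per the issue). -->
<property>
  <name>fs.s3a.connection.maximum</name>
  <value>200</value>
</property>
<property>
  <name>fs.s3a.threads.max</name>
  <value>96</value>
</property>
<!-- New v2-SDK internal timeouts, now accepting temporal units (h, m, s, ms). -->
<property>
  <name>fs.s3a.connection.acquisition.timeout</name>
  <value>60s</value>
</property>
<property>
  <name>fs.s3a.connection.idle.time</name>
  <value>60s</value>
</property>
```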
[jira] [Updated] (HADOOP-18932) Upgrade AWS v2 SDK to 2.20.160 and v1 to 1.12.565
[ https://issues.apache.org/jira/browse/HADOOP-18932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmar Suhail updated HADOOP-18932: -- Fix Version/s: 3.3.7-aws (was: 3.3.6-aws) > Upgrade AWS v2 SDK to 2.20.160 and v1 to 1.12.565 > - > > Key: HADOOP-18932 > URL: https://issues.apache.org/jira/browse/HADOOP-18932 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.7-aws > > > Bump up the sdk versions for both...even if we don't ship v1 it helps us > qualify releases with newer versions, and means that an upgrade of that alone > to branch-3.3 will be in sync. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18995) S3A: Upgrade AWS SDK version to 2.21.33 for Amazon S3 Express One Zone support
[ https://issues.apache.org/jira/browse/HADOOP-18995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmar Suhail updated HADOOP-18995: -- Fix Version/s: 3.3.7-aws (was: 3.3.6-aws) > S3A: Upgrade AWS SDK version to 2.21.33 for Amazon S3 Express One Zone support > -- > > Key: HADOOP-18995 > URL: https://issues.apache.org/jira/browse/HADOOP-18995 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Ahmar Suhail >Assignee: Ahmar Suhail >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.7-aws > > > Upgrade SDK version to 2.21.33, which adds S3 Express One Zone support. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18939) NPE in AWS v2 SDK RetryOnErrorCodeCondition.shouldRetry()
[ https://issues.apache.org/jira/browse/HADOOP-18939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmar Suhail updated HADOOP-18939: -- Fix Version/s: 3.3.7-aws (was: 3.3.6-aws) > NPE in AWS v2 SDK RetryOnErrorCodeCondition.shouldRetry() > - > > Key: HADOOP-18939 > URL: https://issues.apache.org/jira/browse/HADOOP-18939 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Critical > Labels: pull-request-available > Fix For: 3.4.0, 3.3.7-aws > > > NPE in error handling code of RetryOnErrorCodeCondition.shouldRetry(); in > bundle-2.20.128.jar > This is AWS SDK code; fix needs to go there. > {code} > Caused by: java.lang.NullPointerException > at > software.amazon.awssdk.awscore.retry.conditions.RetryOnErrorCodeCondition.shouldRetry(RetryOnErrorCodeCondition.java:45) > ~[bundle-2.20.128.jar:?] > at > software.amazon.awssdk.core.retry.conditions.OrRetryCondition.lambda$shouldRetry$0(OrRetryCondition.java:46) > ~[bundle-2.20.128.jar:?] > at java.util.stream.MatchOps$1MatchSink.accept(MatchOps.java:90) > ~[?:1.8.0_382] > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18945) S3A: IAMInstanceCredentialsProvider failing: Failed to load credentials from IMDS
[ https://issues.apache.org/jira/browse/HADOOP-18945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmar Suhail updated HADOOP-18945: -- Fix Version/s: 3.3.7-aws (was: 3.3.6-aws) > S3A: IAMInstanceCredentialsProvider failing: Failed to load credentials from > IMDS > - > > Key: HADOOP-18945 > URL: https://issues.apache.org/jira/browse/HADOOP-18945 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 7.2.18.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Blocker > Labels: pull-request-available > Fix For: 3.4.0, 3.3.7-aws > > > Failures in impala test VMs using IAM for auth > {code} > Failed to open file as a parquet file: java.net.SocketTimeoutException: > re-open > s3a://impala-test-uswest2-1/test-warehouse/test_pre_gregorian_date_parquet_2e80ae30.db/hive2_pre_gregorian.parquet > at 84 on > s3a://impala-test-uswest2-1/test-warehouse/test_pre_gregorian_date_parquet_2e80ae30.db/hive2_pre_gregorian.parquet: > org.apache.hadoop.fs.s3a.auth.NoAwsCredentialsException: +: Failed to load > credentials from IMDS > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18996) S3A to provide full support for S3 Express One Zone
[ https://issues.apache.org/jira/browse/HADOOP-18996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmar Suhail updated HADOOP-18996: -- Fix Version/s: 3.3.7-aws (was: 3.3.6-aws) > S3A to provide full support for S3 Express One Zone > --- > > Key: HADOOP-18996 > URL: https://issues.apache.org/jira/browse/HADOOP-18996 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Ahmar Suhail >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.7-aws > > > HADOOP-18995 upgrades the SDK version, which adds support for connecting to > S3 Express One Zone buckets. > Complete support needs to be added to address tests that fail with s3 express > one zone, additional tests, documentation etc. > * hadoop-common path capability to indicate that treewalking may encounter > missing dirs > * use this in treewalking code in shell, mapreduce FileInputFormat etc to not > fail during treewalks > * extra path capability for s3express too. > * tests for this > * anything else -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18946) S3A: testMultiObjectExceptionFilledIn() assertion error
[ https://issues.apache.org/jira/browse/HADOOP-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmar Suhail updated HADOOP-18946: -- Fix Version/s: 3.3.7-aws (was: 3.3.6-aws) > S3A: testMultiObjectExceptionFilledIn() assertion error > --- > > Key: HADOOP-18946 > URL: https://issues.apache.org/jira/browse/HADOOP-18946 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0, 3.3.7-aws > > > Failure in the new test of HADOOP-18939. > I've been fiddling with the sdk upgrade, and only merged HADOOP-18932 after > submitting the new pr, so maybe, just maybe, the SDK changed some defaults. > anyway, > {code} > [ERROR] > testMultiObjectExceptionFilledIn(org.apache.hadoop.fs.s3a.impl.TestErrorTranslation) > Time elapsed: 0.026 s <<< FAILURE! > java.lang.AssertionError: retry policy of MultiObjectException > at org.junit.Assert.fail(Assert.java:89) > at org.junit.Assert.assertTrue(Assert.java:42) > at > {code} > easily fixed -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18948) S3A. Add option fs.s3a.directory.operations.purge.uploads to purge on rename/delete
[ https://issues.apache.org/jira/browse/HADOOP-18948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmar Suhail updated HADOOP-18948: -- Fix Version/s: 3.3.7-aws (was: 3.3.6-aws) > S3A. Add option fs.s3a.directory.operations.purge.uploads to purge on > rename/delete > --- > > Key: HADOOP-18948 > URL: https://issues.apache.org/jira/browse/HADOOP-18948 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0, 3.3.7-aws > > > On third-party stores without lifecycle rules its possible to accrue many GB > of pending multipart uploads, including from > * magic committer jobs where spark driver/MR AM failed before commit/abort > * distcp jobs which timeout and get aborted > * any client code writing datasets which are interrupted before close. > Although there's a purge pending uploads option, that's dangerous because if > any fs is instantiated with it, it can destroy in-flight work > otherwise, the "hadoop s3guard uploads" command does work but needs > scheduling/manual execution > proposed: add a new property {{fs.s3a.directory.operations.purge.uploads}} > which will automatically cancel all pending uploads under a path > * delete: everything under the dir > * rename: all under the source dir > This will be done in parallel to the normal operation, but no attempt to post > abortMultipartUploads in different threads. The assumption here is that this > is rare. And it'll be off by default as in AWS people should have rules for > these things. > + doc (third_party?) > + add new counter/metric for abort operations, count and duration > + test to include cost assertions -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
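The proposed option above would be enabled like any other S3A setting; the property name comes from the issue itself, and the default is off, so this is a sketch of the opt-in:

```xml
<!-- Illustrative: abort pending multipart uploads under a directory when it
     is deleted or renamed. Off by default; intended for third-party stores
     without lifecycle rules. -->
<property>
  <name>fs.s3a.directory.operations.purge.uploads</name>
  <value>true</value>
</property>
```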
Re: [PR] HDFS-17278. Fix order dependent flakiness in TestViewfsWithNfs3.java under hadoop-hdfs-nfs module [hadoop]
xinglin commented on code in PR #6329: URL: https://github.com/apache/hadoop/pull/6329#discussion_r1418413021 ## hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestDFSClientCache.java: ## @@ -31,8 +31,14 @@ import org.apache.hadoop.hdfs.nfs.conf.NfsConfiguration; import org.apache.hadoop.security.UserGroupInformation; import org.junit.Test; +import org.junit.AfterClass; public class TestDFSClientCache { + @AfterClass Review Comment: nit: maybe @After? Basically reset/clean all side-effects after each test. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HDFS-17262 Fixed the verbose log.warn in DFSUtil.addTransferRateMetric(). [hadoop]
xinglin commented on PR #6290: URL: https://github.com/apache/hadoop/pull/6290#issuecomment-1844557563 Thanks @Hexiaoqiao for merging! Checked out the commit from trunk branch and saw "Contributed by" was changed from "Ravindra Dingankar [rdingan...@linkedin.com](mailto:rdingan...@linkedin.com)." to myself, which was unexpected. I intentionally put "Contributed by Rav" in the commit message. I should have communicated this to @Hexiaoqiao before he merges the PR. The change was originally created by Rav and I just helped contribute it back to open-source while he was on a vacation. ``` commit 607c98104284fd6364509bf0d5a62f23abef2a52 (HEAD -> trunk, origin/trunk, origin/HEAD) Author: Xing Lin Date: Wed Dec 6 18:16:23 2023 -0800 HDFS-17262. Fixed the verbose log.warn in DFSUtil.addTransferRateMetric(). (#6290). Contributed by Xing Lin. ``` cc @rdingankar -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[PR] YARN-11621: Fix intermittently failing unit test: TestAMRMProxy.testAMRMProxyTokenRenewal [hadoop]
susheelgupta7 opened a new pull request, #6330: URL: https://github.com/apache/hadoop/pull/6330 …MRMProxyTokenRenewal (#6310) ### Description of PR ### How was this patch tested? ### For code changes: - [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HDFS-15413. add dfs.client.read.striped.datanode.max.attempts to fix read ecfile timeout [hadoop]
Neilxzn commented on PR #5829: URL: https://github.com/apache/hadoop/pull/5829#issuecomment-1844334814 I can pass the unit test hadoop.hdfs.TestDFSStripedInputStreamWithTimeout in my local development environment, but it fails on GitHub Jenkins. ![image](https://github.com/apache/hadoop/assets/10757009/a511b4e1-8413-44bb-9136-5e7cc1f3ff17) Checking the test log from the development environment against the assumption: when the client reads the file for the first time and then pauses for 10 seconds, the connection between the client and the datanode is automatically disconnected, so the client's subsequent read fails. @ayushtkn Any other suggestions? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18989) Use thread pool to improve the speed of creating control files in TestDFSIO
[ https://issues.apache.org/jira/browse/HADOOP-18989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17794008#comment-17794008 ]

ASF GitHub Bot commented on HADOOP-18989:
-----------------------------------------

hadoop-yetus commented on PR #6294:
URL: https://github.com/apache/hadoop/pull/6294#issuecomment-1844310864

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 48s | | Docker mode activated. |
| _ Prechecks _ | | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ | | | | |
| +1 :green_heart: | mvninstall | 47m 59s | | trunk passed |
| +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 32s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 42s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 37s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 33s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 27s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 0m 51s | | trunk passed |
| +1 :green_heart: | shadedclient | 37m 26s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ | | | | |
| +1 :green_heart: | mvninstall | 0m 26s | | the patch passed |
| +1 :green_heart: | compile | 0m 29s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 29s | | the patch passed |
| +1 :green_heart: | compile | 0m 25s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 0m 25s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 29s | [/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6294/8/artifact/out/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient: The patch generated 1 new + 42 unchanged - 0 fixed = 43 total (was 42) |
| +1 :green_heart: | mvnsite | 0m 28s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 0m 49s | | the patch passed |
| +1 :green_heart: | shadedclient | 37m 2s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ | | | | |
| -1 :x: | unit | 127m 53s | [/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6294/8/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt) | hadoop-mapreduce-client-jobclient in the patch passed. |
| +1 :green_heart: | asflicense | 0m 41s | | The patch does not generate ASF License warnings. |
| | | | 263m 47s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.fs.TestDFSIO |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6294/8/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6294 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 487191db92cb 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 827af775157edd499e4da43684fe93ce7ddaa49b |
| Default Java | Private
[jira] [Commented] (HADOOP-18989) Use thread pool to improve the speed of creating control files in TestDFSIO
[ https://issues.apache.org/jira/browse/HADOOP-18989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17794003#comment-17794003 ]

ASF GitHub Bot commented on HADOOP-18989:
-----------------------------------------

hadoop-yetus commented on PR #6294:
URL: https://github.com/apache/hadoop/pull/6294#issuecomment-1844265451

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 33s | | Docker mode activated. |
| _ Prechecks _ | | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ | | | | |
| +1 :green_heart: | mvninstall | 43m 12s | | trunk passed |
| +1 :green_heart: | compile | 0m 38s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 33s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 41s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 38s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 35s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 28s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 0m 54s | | trunk passed |
| +1 :green_heart: | shadedclient | 31m 58s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ | | | | |
| +1 :green_heart: | mvninstall | 0m 29s | | the patch passed |
| +1 :green_heart: | compile | 0m 27s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 27s | | the patch passed |
| +1 :green_heart: | compile | 0m 25s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 0m 25s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 29s | [/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6294/7/artifact/out/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient: The patch generated 1 new + 42 unchanged - 0 fixed = 43 total (was 42) |
| +1 :green_heart: | mvnsite | 0m 29s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 0m 51s | | the patch passed |
| +1 :green_heart: | shadedclient | 33m 10s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ | | | | |
| -1 :x: | unit | 131m 43s | [/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6294/7/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt) | hadoop-mapreduce-client-jobclient in the patch passed. |
| +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. |
| | | | 253m 11s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.fs.TestDFSIO |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6294/7/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6294 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 02d4a32696d8 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 827af775157edd499e4da43684fe93ce7ddaa49b |
| Default Java | Private
Re: [PR] HADOOP-18989. Use thread pool to improve the speed of creating control files in TestDFSIO [hadoop]
hadoop-yetus commented on PR #6294: URL: https://github.com/apache/hadoop/pull/6294#issuecomment-1844265451 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 33s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 43m 12s | | trunk passed | | +1 :green_heart: | compile | 0m 38s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 33s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 41s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 38s | | trunk passed | | +1 :green_heart: | javadoc | 0m 35s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 28s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 54s | | trunk passed | | +1 :green_heart: | shadedclient | 31m 58s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 0m 27s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 27s | | the patch passed | | +1 :green_heart: | compile | 0m 25s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 25s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 29s | [/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6294/7/artifact/out/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient: The patch generated 1 new + 42 unchanged - 0 fixed = 43 total (was 42) | | +1 :green_heart: | mvnsite | 0m 29s | | the patch passed | | +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 51s | | the patch passed | | +1 :green_heart: | shadedclient | 33m 10s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 131m 43s | [/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6294/7/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt) | hadoop-mapreduce-client-jobclient in the patch passed. | | +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. 
| | | | 253m 11s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.TestDFSIO | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6294/7/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6294 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 02d4a32696d8 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 827af775157edd499e4da43684fe93ce7ddaa49b | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results |
Re: [PR] HDFS-17278. Fix order dependent flakiness in TestViewfsWithNfs3.java under hadoop-hdfs-nfs module [hadoop]
hadoop-yetus commented on PR #6329: URL: https://github.com/apache/hadoop/pull/6329#issuecomment-1844178741 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 21s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 27s | | trunk passed | | +1 :green_heart: | compile | 0m 16s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 16s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 18s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 20s | | trunk passed | | +1 :green_heart: | javadoc | 0m 22s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 19s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 34s | | trunk passed | | +1 :green_heart: | shadedclient | 19m 24s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 11s | | the patch passed | | +1 :green_heart: | compile | 0m 12s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 12s | | the patch passed | | +1 :green_heart: | compile | 0m 10s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 10s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 9s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 12s | | the patch passed | | +1 :green_heart: | javadoc | 0m 12s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 13s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 34s | | the patch passed | | +1 :green_heart: | shadedclient | 19m 6s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 9s | | hadoop-hdfs-nfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 23s | | The patch does not generate ASF License warnings. 
| | | | 80m 55s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6329 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 7570b4fcffe0 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / fe8553ef2922a26cb218b13c148ec282b510fb1a | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/2/testReport/ | | Max. process+thread count | 634 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-nfs U: hadoop-hdfs-project/hadoop-hdfs-nfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HADOOP-18989. Use thread pool to improve the speed of creating control files in TestDFSIO [hadoop]
hfutatzhanghb commented on code in PR #6294: URL: https://github.com/apache/hadoop/pull/6294#discussion_r1418253986

## hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java:
## @@ -308,25 +353,49 @@ private void createControlFile(FileSystem fs,
     fs.delete(controlDir, true);
-    for(int i=0; i < nrFiles; i++) {
+    List<Future<String>> futureList = new ArrayList<>();
+    for (int i = 0; i < nrFiles; i++) {
       String name = getFileName(i);
       Path controlFile = new Path(controlDir, "in_file_" + name);
       SequenceFile.Writer writer = null;
       try {
         writer = SequenceFile.createWriter(fs, config, controlFile, Text.class, LongWritable.class, CompressionType.NONE);
-        writer.append(new Text(name), new LongWritable(nrBytes));
+        Runnable controlFileCreateTask = new ControlFileCreateTask(writer, name, nrBytes);
+        Future<String> createFuture = completionService.submit(controlFileCreateTask, "success");
+        futureList.add(createFuture);

Review Comment: omg~ Sir, it's test code, I forgot to delete it. It has been deleted.
Re: [PR] HDFS-17265. RBF: Throwing an exception prevents the permit from being released when using FairnessPolicyController [hadoop]
KeeProMise commented on PR #6298: URL: https://github.com/apache/hadoop/pull/6298#issuecomment-1844143655 @Hexiaoqiao @goiri @slfan1989 Hi, if there are no more comments here, please help merge it. Thanks!
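For context, the bug class the PR title describes — an exception on the RPC path that skips releasing a fairness permit — is conventionally fixed by releasing in a finally block. A minimal sketch, with a plain java.util.concurrent.Semaphore standing in for the router's permit controller (the actual RBF FairnessPolicyController API is not shown in this thread, so the names here are illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.Semaphore;

public class PermitReleaseSketch {
  private final Semaphore permits = new Semaphore(2); // illustrative pool size

  <T> T invokeWithPermit(Callable<T> call) throws Exception {
    permits.acquire();
    try {
      return call.call();
    } finally {
      // Runs on both the normal and the exception path, so a throwing
      // downstream call can no longer leak the permit.
      permits.release();
    }
  }

  int available() {
    return permits.availablePermits();
  }

  public static void main(String[] args) throws Exception {
    PermitReleaseSketch router = new PermitReleaseSketch();
    try {
      router.invokeWithPermit(() -> {
        throw new IllegalStateException("downstream failure");
      });
    } catch (IllegalStateException expected) {
      // the call failed, but the permit must still be back in the pool
    }
    System.out.println("permits available: " + router.available());
  }
}
```

Without the finally, each failed invocation would permanently shrink the pool until the router could accept no more calls for that namespace.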
[jira] [Commented] (HADOOP-18981) Move oncrpc/portmap from hadoop-nfs to hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-18981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17793987#comment-17793987 ] ASF GitHub Bot commented on HADOOP-18981: - xinglin commented on PR #6280: URL: https://github.com/apache/hadoop/pull/6280#issuecomment-1844121350 @simbadzina / @goiri / @ZanderXu / @Hexiaoqiao Can I get a review? What do you guys think of this PR? Thanks!

> Move oncrpc/portmap from hadoop-nfs to hadoop-common
>
> Key: HADOOP-18981
> URL: https://issues.apache.org/jira/browse/HADOOP-18981
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 3.4.0
> Reporter: Xing Lin
> Assignee: Xing Lin
> Priority: Major
> Labels: pull-request-available
>
> We want to use udpserver/client for other use cases, rather than only for
> NFS. One such use case is to export NameNodeHAState for NameNodes via a UDP
> server.

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
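The Jira description above motivates a generic UDP server for answering state queries. As a rough illustration only — none of this reflects the actual hadoop-nfs oncrpc/portmap code, and `queryOnce` plus the "ACTIVE" reply string are invented for the sketch — a loopback round trip with plain java.net datagram sockets looks like:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpStateServerSketch {
  // Answers a single datagram probe with the given (hypothetical) HA state string.
  public static String queryOnce(String state) throws Exception {
    try (DatagramSocket server = new DatagramSocket(0);       // ephemeral port
         DatagramSocket client = new DatagramSocket()) {
      int port = server.getLocalPort();

      // Client sends an empty probe; UDP send does not block, so one thread suffices.
      client.send(new DatagramPacket(new byte[0], 0,
          InetAddress.getLoopbackAddress(), port));

      // Server receives the probe and replies with the state to the sender.
      DatagramPacket probe = new DatagramPacket(new byte[64], 64);
      server.receive(probe);
      byte[] reply = state.getBytes(StandardCharsets.UTF_8);
      server.send(new DatagramPacket(reply, reply.length,
          probe.getAddress(), probe.getPort()));

      // Client reads the answer back.
      DatagramPacket answer = new DatagramPacket(new byte[64], 64);
      client.receive(answer);
      return new String(answer.getData(), 0, answer.getLength(), StandardCharsets.UTF_8);
    }
  }

  public static void main(String[] args) throws Exception {
    System.out.println(queryOnce("ACTIVE"));
  }
}
```

Moving the reusable UDP plumbing out of hadoop-nfs into hadoop-common is what lets a use case like this avoid a dependency on the NFS module.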
[jira] [Commented] (HADOOP-18989) Use thread pool to improve the speed of creating control files in TestDFSIO
[ https://issues.apache.org/jira/browse/HADOOP-18989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17793985#comment-17793985 ] ASF GitHub Bot commented on HADOOP-18989: - zhangshuyan0 commented on code in PR #6294: URL: https://github.com/apache/hadoop/pull/6294#discussion_r1418227210

## hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java:
## @@ -308,25 +353,49 @@ private void createControlFile(FileSystem fs,
     fs.delete(controlDir, true);
-    for(int i=0; i < nrFiles; i++) {
+    List<Future<String>> futureList = new ArrayList<>();
+    for (int i = 0; i < nrFiles; i++) {
       String name = getFileName(i);
       Path controlFile = new Path(controlDir, "in_file_" + name);
       SequenceFile.Writer writer = null;
       try {
         writer = SequenceFile.createWriter(fs, config, controlFile, Text.class, LongWritable.class, CompressionType.NONE);
-        writer.append(new Text(name), new LongWritable(nrBytes));
+        Runnable controlFileCreateTask = new ControlFileCreateTask(writer, name, nrBytes);
+        Future<String> createFuture = completionService.submit(controlFileCreateTask, "success");
+        futureList.add(createFuture);

Review Comment: What is this `futureList` used for?
## hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java:
## @@ -308,25 +353,49 @@ private void createControlFile(FileSystem fs,
     fs.delete(controlDir, true);
-    for(int i=0; i < nrFiles; i++) {
+    List<Future<String>> futureList = new ArrayList<>();
+    for (int i = 0; i < nrFiles; i++) {
       String name = getFileName(i);
       Path controlFile = new Path(controlDir, "in_file_" + name);
       SequenceFile.Writer writer = null;
       try {
         writer = SequenceFile.createWriter(fs, config, controlFile, Text.class, LongWritable.class, CompressionType.NONE);
-        writer.append(new Text(name), new LongWritable(nrBytes));
+        Runnable controlFileCreateTask = new ControlFileCreateTask(writer, name, nrBytes);
+        Future<String> createFuture = completionService.submit(controlFileCreateTask, "success");
+        futureList.add(createFuture);
       } catch(Exception e) {
         throw new IOException(e.getLocalizedMessage());
-      } finally {
-        if (writer != null) {
-          writer.close();
+      }
+    }
+
+    boolean isSuccess = false;
+    int count = 0;
+    for (int i = 0; i < nrFiles; i++) {
+      try {
+        // Since control file is quiet small, we use 3 minutes here.
+        Future<String> future = completionService.poll(3, TimeUnit.MINUTES);
+        if (future != null) {
+          future.get(3, TimeUnit.MINUTES);
+          count++;
+        } else {
+          break;
        }
-        writer = null;
+      } catch (ExecutionException | InterruptedException | TimeoutException e) {
+        throw new IOException(e);
+      }
+
+      if (count == nrFiles) {

Review Comment: Should lines 389-397 be outside the for loop?
> Use thread pool to improve the speed of creating control files in TestDFSIO
> ---
>
> Key: HADOOP-18989
> URL: https://issues.apache.org/jira/browse/HADOOP-18989
> Project: Hadoop Common
> Issue Type: Improvement
> Components: benchmarks, common
> Affects Versions: 3.3.6
> Reporter: farmmamba
> Assignee: farmmamba
> Priority: Major
> Labels: pull-request-available
>
> When we use the TestDFSIO tool to test the throughput of HDFS clusters, we found
> it is so slow in the control-file creation stage.
> After referring to the source code, we found that the method createControlFile tries
> to create control files serially. This can be improved by using a thread pool.
> After this optimization, the TestDFSIO tool runs quicker than before.
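The review questions above hinge on how the submitted futures are drained. A minimal, self-contained sketch of the pattern under discussion — plain java.util.concurrent, with the patch's Hadoop-specific ControlFileCreateTask and SequenceFile writing replaced by a stand-in task, and the all-done check placed after the polling loop as the reviewer suggests — might look like:

```java
import java.io.IOException;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class ControlFileSketch {
  // Submits one task per control file and returns how many completed.
  static int createAll(int nrFiles) throws IOException {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    CompletionService<String> cs = new ExecutorCompletionService<>(pool);
    try {
      for (int i = 0; i < nrFiles; i++) {
        // Stand-in for writing the SequenceFile entry for in_file_<i>.
        cs.submit(() -> { /* create one control file */ }, "success");
      }
      int count = 0;
      for (int i = 0; i < nrFiles; i++) {
        try {
          // Bounded wait so a stuck writer cannot hang the benchmark forever.
          Future<String> f = cs.poll(3, TimeUnit.MINUTES);
          if (f == null) {
            break;   // timed out waiting for the next task
          }
          f.get();   // rethrows any failure from the task
          count++;
        } catch (Exception e) {
          throw new IOException(e);
        }
      }
      // The all-done check belongs here, after every future has been
      // drained, not inside the polling loop.
      return count;
    } finally {
      pool.shutdown();
    }
  }

  public static void main(String[] args) throws IOException {
    System.out.println(createAll(8) == 8 ? "all control files created" : "incomplete");
  }
}
```

With a CompletionService draining results, the separate futureList the reviewer asks about is indeed unnecessary; the poll loop already observes every submitted task.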
Re: [PR] YARN-11619. [Federation] Router CLI Supports List SubClusters. [hadoop]
hadoop-yetus commented on PR #6304: URL: https://github.com/apache/hadoop/pull/6304#issuecomment-1844103418 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 21s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | buf | 0m 1s | | buf was not available. | | +0 :ok: | buf | 0m 1s | | buf was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 5 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 13m 44s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 19m 10s | | trunk passed | | +1 :green_heart: | compile | 3m 34s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 3m 10s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 55s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 3s | | trunk passed | | +1 :green_heart: | javadoc | 3m 7s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 3m 2s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 5m 39s | | trunk passed | | +1 :green_heart: | shadedclient | 19m 36s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 19m 49s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 20s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 3s | | the patch passed | | +1 :green_heart: | compile | 3m 13s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | cc | 3m 13s | | the patch passed | | +1 :green_heart: | javac | 3m 13s | | the patch passed | | +1 :green_heart: | compile | 3m 12s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | cc | 3m 12s | | the patch passed | | +1 :green_heart: | javac | 3m 12s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 52s | | the patch passed | | +1 :green_heart: | mvnsite | 2m 54s | | the patch passed | | +1 :green_heart: | javadoc | 2m 38s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 2m 49s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 6m 27s | | the patch passed | | +1 :green_heart: | shadedclient | 19m 49s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 42s | | hadoop-yarn-api in the patch passed. | | +1 :green_heart: | unit | 4m 36s | | hadoop-yarn-common in the patch passed. | | +1 :green_heart: | unit | 2m 43s | | hadoop-yarn-server-common in the patch passed. | | +1 :green_heart: | unit | 86m 18s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | unit | 25m 19s | | hadoop-yarn-client in the patch passed. | | +1 :green_heart: | unit | 0m 26s | | hadoop-yarn-server-router in the patch passed. | | +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. 
| | | | 243m 59s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6304/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6304 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets cc buflint bufcompat | | uname | Linux 54d294a6c842 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 6b3681bad8f1fbe178dcf39bd7e36050a42320fa |
Re: [PR] HDFS-17262 Fixed the verbose log.warn in DFSUtil.addTransferRateMetric(). [hadoop]
Hexiaoqiao commented on PR #6290: URL: https://github.com/apache/hadoop/pull/6290#issuecomment-1844090791 Committed to trunk. Thanks all for your contributions!
Re: [PR] HDFS-17262 Fixed the verbose log.warn in DFSUtil.addTransferRateMetric(). [hadoop]
Hexiaoqiao merged PR #6290: URL: https://github.com/apache/hadoop/pull/6290
Re: [PR] HDFS-17278. Fix order dependent flakiness in TestViewfsWithNfs3.java under hadoop-hdfs-nfs module [hadoop]
hadoop-yetus commented on PR #6329: URL: https://github.com/apache/hadoop/pull/6329#issuecomment-1843977545 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 20s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 45s | | trunk passed | | +1 :green_heart: | compile | 0m 17s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 16s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 17s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 19s | | trunk passed | | +1 :green_heart: | javadoc | 0m 20s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 19s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 33s | | trunk passed | | +1 :green_heart: | shadedclient | 19m 1s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 11s | | the patch passed | | +1 :green_heart: | compile | 0m 11s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 11s | | the patch passed | | +1 :green_heart: | compile | 0m 12s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 12s | | the patch passed | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/1/artifact/out/blanks-eol.txt) | The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 :green_heart: | checkstyle | 0m 8s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 12s | | the patch passed | | +1 :green_heart: | javadoc | 0m 13s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 13s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 32s | | the patch passed | | +1 :green_heart: | shadedclient | 19m 6s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 11s | | hadoop-hdfs-nfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 21s | | The patch does not generate ASF License warnings. 
| | | | 79m 50s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6329 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux a3f434c3f0b0 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 6fd7ea59d2942de7fa519a128b5303a4babd905f | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/1/testReport/ | | Max. process+thread count | 682 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-nfs U: hadoop-hdfs-project/hadoop-hdfs-nfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above
Re: [PR] HADOOP-18613. Upgrade ZooKeeper to version 3.8.2 and Curator to versi… [hadoop]
hadoop-yetus commented on PR #6327: URL: https://github.com/apache/hadoop/pull/6327#issuecomment-1843903381 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 7m 1s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ branch-3.3 Compile Tests _ | | +0 :ok: | mvndep | 0m 20s | | Maven dependency ordering for branch | | -1 :x: | mvninstall | 0m 23s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6327/1/artifact/out/branch-mvninstall-root.txt) | root in branch-3.3 failed. | | -1 :x: | compile | 0m 23s | [/branch-compile-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6327/1/artifact/out/branch-compile-root.txt) | root in branch-3.3 failed. | | -1 :x: | mvnsite | 0m 22s | [/branch-mvnsite-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6327/1/artifact/out/branch-mvnsite-root.txt) | root in branch-3.3 failed. | | -1 :x: | javadoc | 0m 23s | [/branch-javadoc-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6327/1/artifact/out/branch-javadoc-root.txt) | root in branch-3.3 failed. | | -1 :x: | shadedclient | 1m 34s | | branch has errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 15s | | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 22s | [/patch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6327/1/artifact/out/patch-mvninstall-root.txt) | root in the patch failed. | | -1 :x: | mvninstall | 0m 23s | [/patch-mvninstall-hadoop-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6327/1/artifact/out/patch-mvninstall-hadoop-project.txt) | hadoop-project in the patch failed. | | -1 :x: | compile | 1m 31s | [/patch-compile-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6327/1/artifact/out/patch-compile-root.txt) | root in the patch failed. | | -1 :x: | javac | 1m 31s | [/patch-compile-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6327/1/artifact/out/patch-compile-root.txt) | root in the patch failed. | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -1 :x: | mvnsite | 0m 35s | [/patch-mvnsite-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6327/1/artifact/out/patch-mvnsite-root.txt) | root in the patch failed. | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | -1 :x: | javadoc | 0m 22s | [/patch-javadoc-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6327/1/artifact/out/patch-javadoc-root.txt) | root in the patch failed. | | +1 :green_heart: | shadedclient | 1m 44s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 489m 18s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6327/1/artifact/out/patch-unit-root.txt) | root in the patch failed. | | +0 :ok: | asflicense | 0m 39s | | ASF License check generated no output? 
| | | | 510m 2s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.web.TestWebHDFS | | | hadoop.hdfs.TestDecommissionWithBackoffMonitor | | | hadoop.cli.TestHDFSCLI | | | hadoop.hdfs.server.namenode.TestCheckpoint | | | hadoop.hdfs.server.namenode.TestFSNamesystemLock | | | hadoop.hdfs.TestQuota | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetCache | | | hadoop.hdfs.tools.TestViewFileSystemOverloadSchemeWithDFSAdmin | | | hadoop.hdfs.tools.TestViewFileSystemOverloadSchemeWithFSCommands | | | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer | | | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA | | | hadoop.hdfs.server.namenode.ha.TestObserverReadProxyProvider | | |
[PR] HDFS-17278. Fix order dependent flakiness in TestViewfsWithNfs3.java under hadoop-hdfs-nfs module [hadoop]
yijut2 opened a new pull request, #6329: URL: https://github.com/apache/hadoop/pull/6329

### Description of PR

Order-dependent flakiness occurs when the test class `TestDFSClientCache` runs before `TestRpcProgramNfs3`. The failures look like this:

```
[ERROR] Failures:
[ERROR] TestRpcProgramNfs3.testAccess:279 Incorrect return code expected:<0> but was:<13>
[ERROR] TestRpcProgramNfs3.testCommit:764 Incorrect return code: expected:<13> but was:<5>
[ERROR] TestRpcProgramNfs3.testCreate:493 Incorrect return code: expected:<13> but was:<5>
[ERROR] TestRpcProgramNfs3.testEncryptedReadWrite:359->createFileUsingNfs:393 Incorrect response: expected: but was:
[ERROR] TestRpcProgramNfs3.testFsinfo:714 Incorrect return code: expected:<13> but was:<5>
[ERROR] TestRpcProgramNfs3.testFsstat:696 Incorrect return code: expected:<0> but was:<13>
[ERROR] TestRpcProgramNfs3.testGetattr:205 Incorrect return code expected:<0> but was:<13>
[ERROR] TestRpcProgramNfs3.testLookup:249 Incorrect return code expected:<13> but was:<5>
[ERROR] TestRpcProgramNfs3.testMkdir:517 Incorrect return code: expected:<13> but was:<5>
[ERROR] TestRpcProgramNfs3.testPathconf:738 Incorrect return code: expected:<13> but was:<5>
[ERROR] TestRpcProgramNfs3.testRead:341 Incorrect return code: expected:<0> but was:<13>
[ERROR] TestRpcProgramNfs3.testReaddir:642 Incorrect return code: expected:<13> but was:<5>
[ERROR] TestRpcProgramNfs3.testReaddirplus:666 Incorrect return code: expected:<13> but was:<5>
[ERROR] TestRpcProgramNfs3.testReadlink:297 Incorrect return code: expected:<0> but was:<5>
[ERROR] TestRpcProgramNfs3.testRemove:570 Incorrect return code: expected:<13> but was:<5>
[ERROR] TestRpcProgramNfs3.testRename:618 Incorrect return code: expected:<13> but was:<5>
[ERROR] TestRpcProgramNfs3.testRmdir:594 Incorrect return code: expected:<13> but was:<5>
[ERROR] TestRpcProgramNfs3.testSetattr:225 Incorrect return code expected:<13> but was:<5>
[ERROR] TestRpcProgramNfs3.testSymlink:546 Incorrect return code: expected:<13> but was:<5>
[ERROR] TestRpcProgramNfs3.testWrite:468 Incorrect return code: expected:<13> but was:<5>
[INFO]
[ERROR] Tests run: 25, Failures: 20, Errors: 0, Skipped: 0
[INFO]
[ERROR] There are test failures.
```

The polluter is the test method `testGetUserGroupInformationSecure()` in `TestDFSClientCache.java`: its call to `UserGroupInformation.setLoginUser(currentUserUgi);` modifies static state shared across tests, effectively pre-configuring the login user for whatever runs next. To fix this, I added a cleanup method to `TestDFSClientCache.java` that resets `UserGroupInformation`, ensuring isolation between test classes:

```
@AfterClass
public static void cleanup() {
  UserGroupInformation.reset();
}
```

Among other things, `reset()` sets:

```
authenticationMethod = null;
conf = null;          // reset the configuration
setLoginUser(null);   // reset the login user to the default (null)
```

See `reset()` in `hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java` for the full details.

After the fix, the errors no longer occur and the run succeeds:

```
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.hdfs.nfs.nfs3.CustomTest
[INFO] Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.457 s - in org.apache.hadoop.hdfs.nfs.nfs3.CustomTest
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 25, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO]
[INFO] BUILD SUCCESS
[INFO]
```

Here is the `CustomTest.java` suite I used to run the two test classes in order; the failures can be reproduced by running it:

```
package org.apache.hadoop.hdfs.nfs.nfs3;

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

@RunWith(Suite.class)
@Suite.SuiteClasses({ TestDFSClientCache.class, TestRpcProgramNfs3.class })
public class CustomTest {}
```

### How was this patch tested?
The patch was tested with OpenJDK 17.0.9 and Apache Maven 3.9.5.

### For code changes:

- [x] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [x] If adding new dependencies to the code, are these dependencies
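The pollution pattern described above can be shown in a minimal, self-contained sketch. The `LoginState` class here is a hypothetical stand-in for the static state inside `UserGroupInformation`, not Hadoop's actual API; it only illustrates why a teardown-time `reset()` restores isolation between test classes:

```java
// Sketch (hypothetical classes, not Hadoop's UserGroupInformation) of how
// static state set by one test class leaks into the next, and how a
// teardown-style reset restores isolation.
public class StaticStatePollution {
    // Stand-in for UserGroupInformation's static login-user state.
    static class LoginState {
        private static String loginUser = null;
        static void setLoginUser(String u) { loginUser = u; }
        static String getLoginUser() { return loginUser == null ? "default" : loginUser; }
        static void reset() { loginUser = null; }  // analogous to UserGroupInformation.reset()
    }

    // Simulates the first test class; the polluting call sets shared state,
    // and the optional cleanup mimics an @AfterClass method.
    static String runFirstTestClass(boolean cleanup) {
        LoginState.setLoginUser("secure-user");  // the "polluter"
        if (cleanup) {
            LoginState.reset();                  // @AfterClass-style cleanup
        }
        return LoginState.getLoginUser();
    }

    public static void main(String[] args) {
        // Without cleanup, the next "test class" observes the polluted state.
        runFirstTestClass(false);
        System.out.println(LoginState.getLoginUser()); // prints "secure-user"

        // With cleanup, isolation is restored.
        runFirstTestClass(true);
        System.out.println(LoginState.getLoginUser()); // prints "default"
    }
}
```

This is why the fix belongs in `TestDFSClientCache` (the polluter) rather than in the victim test class: resetting at the source guarantees every later class starts from the default state regardless of execution order.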
Re: [PR] Hadoop 18860: Upgrade mockito version to 4.11.0 [hadoop]
hadoop-yetus commented on PR #6275: URL: https://github.com/apache/hadoop/pull/6275#issuecomment-1843718075 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 21s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 17 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 18s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 37m 39s | | trunk passed | | +1 :green_heart: | compile | 19m 44s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 17m 26s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 4m 53s | | trunk passed | | +1 :green_heart: | mvnsite | 9m 37s | | trunk passed | | +1 :green_heart: | javadoc | 8m 4s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 7m 48s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +0 :ok: | spotbugs | 0m 38s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +0 :ok: | spotbugs | 0m 33s | | branch/hadoop-client-modules/hadoop-client-check-invariants no spotbugs output file (spotbugsXml.xml) | | +0 :ok: | spotbugs | 0m 33s | | branch/hadoop-client-modules/hadoop-client-minicluster no spotbugs output file (spotbugsXml.xml) | | +0 :ok: | spotbugs | 0m 35s | | 
branch/hadoop-client-modules/hadoop-client-check-test-invariants no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 36m 24s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 36m 50s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 35s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 19m 10s | | the patch passed | | +1 :green_heart: | compile | 17m 28s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 17m 28s | | the patch passed | | +1 :green_heart: | compile | 14m 59s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 14m 59s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 4m 5s | | the patch passed | | +1 :green_heart: | mvnsite | 9m 30s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 1s | | No new issues. | | +1 :green_heart: | javadoc | 8m 3s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 8m 11s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +0 :ok: | spotbugs | 0m 36s | | hadoop-project has no data from spotbugs | | +0 :ok: | spotbugs | 0m 37s | | hadoop-client-modules/hadoop-client-check-invariants has no data from spotbugs | | +0 :ok: | spotbugs | 0m 37s | | hadoop-client-modules/hadoop-client-minicluster has no data from spotbugs | | +0 :ok: | spotbugs | 0m 37s | | hadoop-client-modules/hadoop-client-check-test-invariants has no data from spotbugs | | +1 :green_heart: | shadedclient | 32m 22s | | patch has no errors when building and testing our client artifacts. 
| _ Other Tests _ | | +1 :green_heart: | unit | 0m 36s | | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 19m 56s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 264m 50s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | unit | 105m 57s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | unit | 24m 51s | | hadoop-yarn-server-nodemanager in the patch passed. | | +1 :green_heart: | unit | 24m 32s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart:
Re: [PR] HADOOP-18989. Use thread pool to improve the speed of creating control files in TestDFSIO [hadoop]
hadoop-yetus commented on PR #6294: URL: https://github.com/apache/hadoop/pull/6294#issuecomment-1843534746 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 47s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 47m 48s | | trunk passed | | +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 31s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 41s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 36s | | trunk passed | | +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 27s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 50s | | trunk passed | | +1 :green_heart: | shadedclient | 37m 22s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 26s | | the patch passed | | +1 :green_heart: | compile | 0m 28s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 28s | | the patch passed | | +1 :green_heart: | compile | 0m 25s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 25s | | the patch passed | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6294/6/artifact/out/blanks-eol.txt) | The patch has 2 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | -0 :warning: | checkstyle | 0m 30s | [/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6294/6/artifact/out/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient: The patch generated 4 new + 42 unchanged - 0 fixed = 46 total (was 42) | | +1 :green_heart: | mvnsite | 0m 27s | | the patch passed | | +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 50s | | the patch passed | | +1 :green_heart: | shadedclient | 37m 18s | | patch has no errors when building and testing our client artifacts. 
| _ Other Tests _ | | -1 :x: | unit | 126m 34s | [/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6294/6/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt) | hadoop-mapreduce-client-jobclient in the patch passed. | | +1 :green_heart: | asflicense | 0m 40s | | The patch does not generate ASF License warnings. | | | | 262m 26s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.TestDFSIO | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6294/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6294 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux ac6db93f7884 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 5c06b0b03130b2c60df590d955a5dd8c528969bf | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions |
Re: [PR] HADOOP-18997. S3A: make createSession optional when working with S3Express buckets [hadoop]
hadoop-yetus commented on PR #6316: URL: https://github.com/apache/hadoop/pull/6316#issuecomment-1843469394 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 48s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 13 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 48m 31s | | trunk passed | | +1 :green_heart: | compile | 0m 41s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 33s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 31s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 40s | | trunk passed | | +1 :green_heart: | javadoc | 0m 26s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 33s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 7s | | trunk passed | | +1 :green_heart: | shadedclient | 37m 45s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 35s | | the patch passed | | +1 :green_heart: | compile | 0m 26s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 26s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 20s | [/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6316/4/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt) | hadoop-tools/hadoop-aws: The patch generated 2 new + 10 unchanged - 0 fixed = 12 total (was 10) | | +1 :green_heart: | mvnsite | 0m 31s | | the patch passed | | +1 :green_heart: | javadoc | 0m 15s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 24s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 6s | | the patch passed | | +1 :green_heart: | shadedclient | 37m 21s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 43s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. 
| | | | 139m 49s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6316/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6316 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux b18de2a4c09f 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / d5bacf4c03bfc5e3172105f0e36dae63d8a8d28c | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6316/4/testReport/ | | Max. process+thread count | 606 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6316/4/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log
[jira] [Commented] (HADOOP-18997) S3A: Add option fs.s3a.s3express.create.session to enable/disable CreateSession
[ https://issues.apache.org/jira/browse/HADOOP-18997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17793878#comment-17793878 ] ASF GitHub Bot commented on HADOOP-18997: - hadoop-yetus commented on PR #6316: URL: https://github.com/apache/hadoop/pull/6316#issuecomment-1843469394 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 48s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 13 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 48m 31s | | trunk passed | | +1 :green_heart: | compile | 0m 41s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 33s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 31s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 40s | | trunk passed | | +1 :green_heart: | javadoc | 0m 26s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 33s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 7s | | trunk passed | | +1 :green_heart: | shadedclient | 37m 45s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 35s | | the patch passed | | +1 :green_heart: | compile | 0m 26s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 26s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 20s | [/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6316/4/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt) | hadoop-tools/hadoop-aws: The patch generated 2 new + 10 unchanged - 0 fixed = 12 total (was 10) | | +1 :green_heart: | mvnsite | 0m 31s | | the patch passed | | +1 :green_heart: | javadoc | 0m 15s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 24s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 6s | | the patch passed | | +1 :green_heart: | shadedclient | 37m 21s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 43s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. 
| | | | 139m 49s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6316/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6316 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux b18de2a4c09f 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / d5bacf4c03bfc5e3172105f0e36dae63d8a8d28c | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6316/4/testReport/ | | Max. process+thread count | 606 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6316/4/console | | versions | git=2.25.1 maven=3.6.3
Re: [PR] The script to generate put test running results with and without cartesian [hadoop]
hadoop-yetus commented on PR #6328: URL: https://github.com/apache/hadoop/pull/6328#issuecomment-1843448990 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 32s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | _ trunk Compile Tests _ | | +1 :green_heart: | shadedclient | 40m 41s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -1 :x: | pylint | 0m 4s | [/results-pylint.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6328/1/artifact/out/results-pylint.txt) | The patch generated 60 new + 0 unchanged - 0 fixed = 60 total (was 0) | | +1 :green_heart: | shadedclient | 31m 16s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | asflicense | 0m 37s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6328/1/artifact/out/results-asflicense.txt) | The patch generated 3 ASF License warnings. | | | | 77m 1s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6328/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6328 | | Optional Tests | dupname asflicense codespell detsecrets pylint | | uname | Linux 110d7c982479 5.15.0-86-generic #96-Ubuntu SMP Wed Sep 20 08:23:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 25a8cf4a22da3e14cab16c1123af61b57ee19f0b | | Max. 
process+thread count | 560 (vs. ulimit of 5500) | | modules | C: . U: . | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6328/1/console | | versions | git=2.25.1 maven=3.6.3 pylint=2.6.0 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] The script to generate put test running results with and without cartesian [hadoop]
hadoop-yetus commented on PR #6328: URL: https://github.com/apache/hadoop/pull/6328#issuecomment-1843404076 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 21s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | _ trunk Compile Tests _ | | +1 :green_heart: | shadedclient | 28m 31s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -1 :x: | pylint | 0m 3s | [/results-pylint.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6328/2/artifact/out/results-pylint.txt) | The patch generated 60 new + 0 unchanged - 0 fixed = 60 total (was 0) | | +1 :green_heart: | shadedclient | 18m 39s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | asflicense | 0m 26s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6328/2/artifact/out/results-asflicense.txt) | The patch generated 3 ASF License warnings. | | | | 50m 22s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6328/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6328 | | Optional Tests | dupname asflicense codespell detsecrets pylint | | uname | Linux 79a068d41794 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 25a8cf4a22da3e14cab16c1123af61b57ee19f0b | | Max. 
process+thread count | 630 (vs. ulimit of 5500) | | modules | C: . U: . | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6328/2/console | | versions | git=2.25.1 maven=3.6.3 pylint=2.6.0 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HADOOP-19003) S3A Assume role tests failing against S3Express stores
[ https://issues.apache.org/jira/browse/HADOOP-19003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17793861#comment-17793861 ] Ahmar Suhail commented on HADOOP-19003: --- Checked, even if we disable createSession, any roles still need to use the s3Express name space and CreateSession action. I can work on this once I'm back from holiday, need to see if we should create new roles or skip failing tests, as you can only restrict on a bucket level and not by prefix. > S3A Assume role tests failing against S3Express stores > -- > > Key: HADOOP-19003 > URL: https://issues.apache.org/jira/browse/HADOOP-19003 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Priority: Minor > > The test suits which assume roles with restricted permissions down paths > still fail on S3Express, even after disabling createSession. > This is with a role which *should* work. > Either the role setup is wrong, or there's something special about role > configuration for S3Express buckets -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HDFS-17276. Fix the nn fetch editlog forbidden in kerberos environment [hadoop]
hadoop-yetus commented on PR #6326: URL: https://github.com/apache/hadoop/pull/6326#issuecomment-1843372230 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 21s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 7s | | trunk passed | | +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 35s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 35s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 45s | | trunk passed | | +1 :green_heart: | javadoc | 0m 42s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 5s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 43s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 17s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 33s | | the patch passed | | +1 :green_heart: | compile | 0m 40s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 40s | | the patch passed | | +1 :green_heart: | compile | 0m 33s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 33s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 29s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 37s | | the patch passed | | +1 :green_heart: | javadoc | 0m 31s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 0s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 41s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 35s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 192m 18s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6326/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 29s | | The patch does not generate ASF License warnings. 
| | | | 279m 13s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.web.TestWebHdfsTokens | | | hadoop.hdfs.qjournal.server.TestGetJournalEditServlet | | | hadoop.hdfs.server.common.TestJspHelper | | | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6326/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6326 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 2e1184921843 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / a9147ac02d063880895857f8e1062e3a0b54823a | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6326/1/testReport/ | | Max. process+thread count | 5421 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output |
Re: [PR] The script to generate put test running results with and without cartesian [hadoop]
Ellen99 closed pull request #6328: The script to generate put test running results with and without cartesian URL: https://github.com/apache/hadoop/pull/6328
[jira] [Commented] (HADOOP-18997) S3A: Add option fs.s3a.s3express.create.session to enable/disable CreateSession
[ https://issues.apache.org/jira/browse/HADOOP-18997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17793824#comment-17793824 ] ASF GitHub Bot commented on HADOOP-18997: - steveloughran commented on code in PR #6316: URL: https://github.com/apache/hadoop/pull/6316#discussion_r1417606113 ## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/tools/ITestBucketTool.java: ## @@ -118,7 +121,10 @@ public void testRecreateTestBucketS3Express() throws Throwable { fsURI)); if (ex instanceof AWSBadRequestException) { // owned error - assertExceptionContains(OWNED, ex); + if (!ex.getMessage().contains(OWNED) + && !ex.getMessage().contains(INVALID_LOCATION)) { Review Comment: there's some hardcoded expectations about region and if you test somewhere else it blows up. > S3A: Add option fs.s3a.s3express.create.session to enable/disable > CreateSession > --- > > Key: HADOOP-18997 > URL: https://issues.apache.org/jira/browse/HADOOP-18997 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Labels: pull-request-available > > add a way to disable the need to use the createsession call, so as to allow > for > * simplifying our role test runs > * benchmarking the performance hit > * troubleshooting IAM permissions > this can also be disabled from the sysprop "aws.disableS3ExpressAuth" -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-18997. S3A: make createSession optional when working with S3Express buckets [hadoop]
steveloughran commented on code in PR #6316: URL: https://github.com/apache/hadoop/pull/6316#discussion_r1417606113 ## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/tools/ITestBucketTool.java: ## @@ -118,7 +121,10 @@ public void testRecreateTestBucketS3Express() throws Throwable { fsURI)); if (ex instanceof AWSBadRequestException) { // owned error - assertExceptionContains(OWNED, ex); + if (!ex.getMessage().contains(OWNED) + && !ex.getMessage().contains(INVALID_LOCATION)) { Review Comment: there's some hardcoded expectations about region and if you test somewhere else it blows up.
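The diff in the review above replaces a single-substring assertion with a check that tolerates either of two expected error messages. A minimal, self-contained sketch of that matching pattern is below; the constant values are placeholders, not the real `OWNED`/`INVALID_LOCATION` strings defined in the S3A test code, and `matchesAny` is a hypothetical helper, not a Hadoop API:

```java
// Sketch of the tolerant exception-message check discussed in the review.
// Constant values below are hypothetical stand-ins for the S3A test constants.
public class MessageMatchSketch {
    static final String OWNED = "bucket is already owned";    // hypothetical value
    static final String INVALID_LOCATION = "InvalidLocation"; // hypothetical value

    /** Return true if the exception message contains any expected substring. */
    static boolean matchesAny(Throwable ex, String... expected) {
        String msg = ex.getMessage();
        if (msg == null) {
            return false; // no message: nothing can match
        }
        for (String candidate : expected) {
            if (msg.contains(candidate)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Throwable owned = new RuntimeException("409: bucket is already owned by you");
        Throwable other = new RuntimeException("403: access denied");
        if (!matchesAny(owned, OWNED, INVALID_LOCATION)) {
            throw new AssertionError("expected a match");
        }
        if (matchesAny(other, OWNED, INVALID_LOCATION)) {
            throw new AssertionError("expected no match");
        }
        System.out.println("ok");
    }
}
```

Guarding on a null message first matters here: `Throwable.getMessage()` may return null, so chaining `ex.getMessage().contains(...)` directly can itself throw.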
[jira] [Commented] (HADOOP-18997) S3A: Add option fs.s3a.s3express.create.session to enable/disable CreateSession
[ https://issues.apache.org/jira/browse/HADOOP-18997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17793820#comment-17793820 ] ASF GitHub Bot commented on HADOOP-18997: - steveloughran commented on code in PR #6316: URL: https://github.com/apache/hadoop/pull/6316#discussion_r1417600890 ## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java: ## @@ -181,6 +189,10 @@ protected Configuration createValidRoleConf() throws JsonProcessingException { conf.set(ASSUMED_ROLE_ARN, roleARN); conf.set(ASSUMED_ROLE_SESSION_NAME, "valid"); conf.set(ASSUMED_ROLE_SESSION_DURATION, "45m"); +// disable create session so there's no need to +// add a role policy for it. +disableCreateSession(conf); Review Comment: without this it was failing without createsession permissions. now its failing for s3: related IAM issues. progress > S3A: Add option fs.s3a.s3express.create.session to enable/disable > CreateSession > --- > > Key: HADOOP-18997 > URL: https://issues.apache.org/jira/browse/HADOOP-18997 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Labels: pull-request-available > > add a way to disable the need to use the createsession call, so as to allow > for > * simplifying our role test runs > * benchmarking the performance hit > * troubleshooting IAM permissions > this can also be disabled from the sysprop "aws.disableS3ExpressAuth" -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18997) S3A: Add option fs.s3a.s3express.create.session to enable/disable CreateSession
[ https://issues.apache.org/jira/browse/HADOOP-18997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17793821#comment-17793821 ] ASF GitHub Bot commented on HADOOP-18997: - steveloughran commented on code in PR #6316: URL: https://github.com/apache/hadoop/pull/6316#discussion_r1417601215 ## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/ITestSessionDelegationInFilesystem.java: ## @@ -147,40 +150,51 @@ protected Configuration createConfiguration() { // disable if assume role opts are off assumeSessionTestsEnabled(conf); disableFilesystemCaching(conf); -String s3EncryptionMethod; -try { - s3EncryptionMethod = - getEncryptionAlgorithm(getTestBucketName(conf), conf).getMethod(); -} catch (IOException e) { - throw new UncheckedIOException("Failed to lookup encryption algorithm.", - e); -} -String s3EncryptionKey = getS3EncryptionKey(getTestBucketName(conf), conf); +final String bucket = getTestBucketName(conf); +final boolean isS3Express = isS3ExpressTestBucket(conf); + removeBaseAndBucketOverrides(conf, DELEGATION_TOKEN_BINDING, Constants.S3_ENCRYPTION_ALGORITHM, Constants.S3_ENCRYPTION_KEY, SERVER_SIDE_ENCRYPTION_ALGORITHM, -SERVER_SIDE_ENCRYPTION_KEY); +SERVER_SIDE_ENCRYPTION_KEY, +S3EXPRESS_CREATE_SESSION); conf.set(HADOOP_SECURITY_AUTHENTICATION, UserGroupInformation.AuthenticationMethod.KERBEROS.name()); enableDelegationTokens(conf, getDelegationBinding()); conf.set(AWS_CREDENTIALS_PROVIDER, " "); // switch to CSE-KMS(if specified) else SSE-KMS. 
-if (conf.getBoolean(KEY_ENCRYPTION_TESTS, true)) { +if (!isS3Express && conf.getBoolean(KEY_ENCRYPTION_TESTS, true)) { + String s3EncryptionMethod; + try { +s3EncryptionMethod = +getEncryptionAlgorithm(bucket, conf).getMethod(); + } catch (IOException e) { +throw new UncheckedIOException("Failed to lookup encryption algorithm.", +e); + } + String s3EncryptionKey = getS3EncryptionKey(bucket, conf); + conf.set(Constants.S3_ENCRYPTION_ALGORITHM, s3EncryptionMethod); // KMS key ID a must if CSE-KMS is being tested. conf.set(Constants.S3_ENCRYPTION_KEY, s3EncryptionKey); } // set the YARN RM up for YARN tests. conf.set(YarnConfiguration.RM_PRINCIPAL, YARN_RM); -// turn on ACLs so as to verify role DT permissions include -// write access. -conf.set(CANNED_ACL, LOG_DELIVERY_WRITE); + +if (conf.getBoolean(KEY_ACL_TESTS_ENABLED, false) + && !isS3Express) { + // turn on ACLs so as to verify role DT permissions include + // write access. + conf.set(CANNED_ACL, LOG_DELIVERY_WRITE); +} +// disable create session so there's no need to +// add a role policy for it. +disableCreateSession(conf); Review Comment: you should have got further > S3A: Add option fs.s3a.s3express.create.session to enable/disable > CreateSession > --- > > Key: HADOOP-18997 > URL: https://issues.apache.org/jira/browse/HADOOP-18997 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Labels: pull-request-available > > add a way to disable the need to use the createsession call, so as to allow > for > * simplifying our role test runs > * benchmarking the performance hit > * troubleshooting IAM permissions > this can also be disabled from the sysprop "aws.disableS3ExpressAuth" -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-18997. S3A: make createSession optional when working with S3Express buckets [hadoop]
steveloughran commented on code in PR #6316: URL: https://github.com/apache/hadoop/pull/6316#discussion_r1417601215 ## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/ITestSessionDelegationInFilesystem.java: ## @@ -147,40 +150,51 @@ protected Configuration createConfiguration() { // disable if assume role opts are off assumeSessionTestsEnabled(conf); disableFilesystemCaching(conf); -String s3EncryptionMethod; -try { - s3EncryptionMethod = - getEncryptionAlgorithm(getTestBucketName(conf), conf).getMethod(); -} catch (IOException e) { - throw new UncheckedIOException("Failed to lookup encryption algorithm.", - e); -} -String s3EncryptionKey = getS3EncryptionKey(getTestBucketName(conf), conf); +final String bucket = getTestBucketName(conf); +final boolean isS3Express = isS3ExpressTestBucket(conf); + removeBaseAndBucketOverrides(conf, DELEGATION_TOKEN_BINDING, Constants.S3_ENCRYPTION_ALGORITHM, Constants.S3_ENCRYPTION_KEY, SERVER_SIDE_ENCRYPTION_ALGORITHM, -SERVER_SIDE_ENCRYPTION_KEY); +SERVER_SIDE_ENCRYPTION_KEY, +S3EXPRESS_CREATE_SESSION); conf.set(HADOOP_SECURITY_AUTHENTICATION, UserGroupInformation.AuthenticationMethod.KERBEROS.name()); enableDelegationTokens(conf, getDelegationBinding()); conf.set(AWS_CREDENTIALS_PROVIDER, " "); // switch to CSE-KMS(if specified) else SSE-KMS. -if (conf.getBoolean(KEY_ENCRYPTION_TESTS, true)) { +if (!isS3Express && conf.getBoolean(KEY_ENCRYPTION_TESTS, true)) { + String s3EncryptionMethod; + try { +s3EncryptionMethod = +getEncryptionAlgorithm(bucket, conf).getMethod(); + } catch (IOException e) { +throw new UncheckedIOException("Failed to lookup encryption algorithm.", +e); + } + String s3EncryptionKey = getS3EncryptionKey(bucket, conf); + conf.set(Constants.S3_ENCRYPTION_ALGORITHM, s3EncryptionMethod); // KMS key ID a must if CSE-KMS is being tested. conf.set(Constants.S3_ENCRYPTION_KEY, s3EncryptionKey); } // set the YARN RM up for YARN tests. 
conf.set(YarnConfiguration.RM_PRINCIPAL, YARN_RM); -// turn on ACLs so as to verify role DT permissions include -// write access. -conf.set(CANNED_ACL, LOG_DELIVERY_WRITE); + +if (conf.getBoolean(KEY_ACL_TESTS_ENABLED, false) + && !isS3Express) { + // turn on ACLs so as to verify role DT permissions include + // write access. + conf.set(CANNED_ACL, LOG_DELIVERY_WRITE); +} +// disable create session so there's no need to +// add a role policy for it. +disableCreateSession(conf); Review Comment: you should have got further
Re: [PR] HADOOP-18997. S3A: make createSession optional when working with S3Express buckets [hadoop]
steveloughran commented on code in PR #6316: URL: https://github.com/apache/hadoop/pull/6316#discussion_r1417600890 ## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java: ## @@ -181,6 +189,10 @@ protected Configuration createValidRoleConf() throws JsonProcessingException { conf.set(ASSUMED_ROLE_ARN, roleARN); conf.set(ASSUMED_ROLE_SESSION_NAME, "valid"); conf.set(ASSUMED_ROLE_SESSION_DURATION, "45m"); +// disable create session so there's no need to +// add a role policy for it. +disableCreateSession(conf); Review Comment: without this it was failing without createsession permissions. now its failing for s3: related IAM issues. progress
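The `disableCreateSession(conf)` call added in the diffs above amounts to setting the new option from the JIRA title, `fs.s3a.s3express.create.session`, to false. A self-contained sketch under stated assumptions: `java.util.Properties` stands in for `org.apache.hadoop.conf.Configuration` so the example runs without Hadoop on the classpath, and the helper is modeled on, not copied from, the test utility:

```java
import java.util.Properties;

// Illustrative sketch only: Properties stands in for Hadoop's Configuration.
// The option name comes from the HADOOP-18997 title; the helper mirrors the
// disableCreateSession(conf) call seen in the review diffs.
public class CreateSessionToggle {
    static final String S3EXPRESS_CREATE_SESSION = "fs.s3a.s3express.create.session";

    /** Disable the CreateSession call so role policies need no entry for it. */
    static void disableCreateSession(Properties conf) {
        conf.setProperty(S3EXPRESS_CREATE_SESSION, "false");
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        disableCreateSession(conf);
        // prints: fs.s3a.s3express.create.session=false
        System.out.println(S3EXPRESS_CREATE_SESSION + "="
            + conf.getProperty(S3EXPRESS_CREATE_SESSION));
    }
}
```

As the comments note, the point of flipping this in role tests is that an assumed role then needs no IAM policy statement granting the CreateSession action.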
[jira] [Commented] (HADOOP-19004) S3A: Support Authentication through HttpSigner API
[ https://issues.apache.org/jira/browse/HADOOP-19004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17793784#comment-17793784 ] ASF GitHub Bot commented on HADOOP-19004: - hadoop-yetus commented on PR #6324: URL: https://github.com/apache/hadoop/pull/6324#issuecomment-1843116435 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 31s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 42m 57s | | trunk passed | | +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 34s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 32s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 41s | | trunk passed | | +1 :green_heart: | javadoc | 0m 27s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 9s | | trunk passed | | +1 :green_heart: | shadedclient | 32m 19s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 0m 33s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 33s | | the patch passed | | +1 :green_heart: | compile | 0m 26s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 26s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 21s | [/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6324/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt) | hadoop-tools/hadoop-aws: The patch generated 1 new + 7 unchanged - 0 fixed = 8 total (was 7) | | +1 :green_heart: | mvnsite | 0m 33s | | the patch passed | | +1 :green_heart: | javadoc | 0m 16s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 25s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 7s | | the patch passed | | +1 :green_heart: | shadedclient | 32m 18s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 48s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 36s | | The patch does not generate ASF License warnings. 
| | | | 123m 47s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6324/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6324 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux caa8401a7fd7 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / dc08ab7e371dce34502832954f64c3f48dcb | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6324/2/testReport/ | | Max. process+thread count | 619 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6324/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated.
[PR] HADOOP-18613. Upgrade ZooKeeper to version 3.8.2 and Curator to versi… [hadoop]
BilwaST opened a new pull request, #6327: URL: https://github.com/apache/hadoop/pull/6327 …on 5.4.0 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18613) Upgrade ZooKeeper to version 3.8.2
[ https://issues.apache.org/jira/browse/HADOOP-18613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17793777#comment-17793777 ] ASF GitHub Bot commented on HADOOP-18613: - BilwaST opened a new pull request, #6327: URL: https://github.com/apache/hadoop/pull/6327 …on 5.4.0 > Upgrade ZooKeeper to version 3.8.2 > -- > > Key: HADOOP-18613 > URL: https://issues.apache.org/jira/browse/HADOOP-18613 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.3.4 >Reporter: Tamas Penzes >Assignee: Bilwa S T >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18989) Use thread pool to improve the speed of creating control files in TestDFSIO
[ https://issues.apache.org/jira/browse/HADOOP-18989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17793760#comment-17793760 ] ASF GitHub Bot commented on HADOOP-18989: - hfutatzhanghb commented on code in PR #6294: URL: https://github.com/apache/hadoop/pull/6294#discussion_r1417448922

## hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java:
## @@ -289,12 +297,43 @@ public void testTruncate() throws Exception { bench.analyzeResult(fs, TestType.TEST_TYPE_TRUNCATE, execTime); } + private class ControlFileCreateTask implements Callable {

Review Comment: @zhangshuyan0 Thanks sir, have updated here.

> Use thread pool to improve the speed of creating control files in TestDFSIO
> ---------------------------------------------------------------------------
>
> Key: HADOOP-18989
> URL: https://issues.apache.org/jira/browse/HADOOP-18989
> Project: Hadoop Common
> Issue Type: Improvement
> Components: benchmarks, common
> Affects Versions: 3.3.6
> Reporter: farmmamba
> Assignee: farmmamba
> Priority: Major
> Labels: pull-request-available
>
> When we use the TestDFSIO tool to test the throughput of HDFS clusters, we found that the control file creation stage is very slow.
> After referring to the source code, we found that the method createControlFile tries to create control files serially. It can be improved by using a thread pool.
> After this optimization, the TestDFSIO tool runs quicker than before.
>
-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18989) Use thread pool to improve the speed of creating control files in TestDFSIO
[ https://issues.apache.org/jira/browse/HADOOP-18989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17793761#comment-17793761 ] ASF GitHub Bot commented on HADOOP-18989: - hfutatzhanghb commented on code in PR #6294: URL: https://github.com/apache/hadoop/pull/6294#discussion_r1417449454

## hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java:
## @@ -116,6 +122,8 @@ public class TestDFSIO implements Tool { "test.io.block.storage.policy"; private static final String ERASURE_CODE_POLICY_NAME_KEY = "test.io.erasure.code.policy"; + private ExecutorService excutorService = Executors.newFixedThreadPool(

Review Comment: Thanks sir, very nice suggestion. Have updated it.

> Use thread pool to improve the speed of creating control files in TestDFSIO
> ---------------------------------------------------------------------------
>
> Key: HADOOP-18989
> URL: https://issues.apache.org/jira/browse/HADOOP-18989
> Project: Hadoop Common
> Issue Type: Improvement
> Components: benchmarks, common
> Affects Versions: 3.3.6
> Reporter: farmmamba
> Assignee: farmmamba
> Priority: Major
> Labels: pull-request-available
>
> When we use the TestDFSIO tool to test the throughput of HDFS clusters, we found that the control file creation stage is very slow.
> After referring to the source code, we found that the method createControlFile tries to create control files serially. It can be improved by using a thread pool.
> After this optimization, the TestDFSIO tool runs quicker than before.
>
-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
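The optimization discussed above (replacing the serial createControlFile loop with a thread pool of Callables) can be sketched outside Hadoop with plain java.util.concurrent. The class name, file-name pattern, and pool size below are illustrative only, not the actual TestDFSIO patch:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelControlFiles {

    // Create one empty "control file" per task; the point is that a fixed
    // thread pool lets the per-file creations overlap instead of running
    // one after another in a single loop.
    static List<Path> createControlFiles(Path dir, int nrFiles, int threads)
            throws IOException, InterruptedException, ExecutionException {
        Files.createDirectories(dir);
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<Path>> futures = new ArrayList<>();
            for (int i = 0; i < nrFiles; i++) {
                final int id = i;
                // Each Callable creates one control file.
                futures.add(pool.submit(() ->
                        Files.createFile(dir.resolve("in_file_test_io_" + id))));
            }
            List<Path> created = new ArrayList<>();
            for (Future<Path> f : futures) {
                created.add(f.get()); // rethrows any IOException from a task
            }
            return created;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("control");
        List<Path> files = createControlFiles(dir, 8, 4);
        System.out.println("created " + files.size() + " control files");
    }
}
```

Collecting every Future before returning keeps the original failure semantics: if any single creation throws, the caller still sees the exception rather than a silently incomplete set of control files.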
[jira] [Updated] (HADOOP-13816) Ambiguous plugin version warning from maven build.
[ https://issues.apache.org/jira/browse/HADOOP-13816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki updated HADOOP-13816: -- Fix Version/s: 2.10.3

> Ambiguous plugin version warning from maven build.
> --------------------------------------------------
>
> Key: HADOOP-13816
> URL: https://issues.apache.org/jira/browse/HADOOP-13816
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 3.0.0-alpha2
> Reporter: Kai
> Assignee: Kai
> Priority: Minor
> Fix For: 3.0.0-alpha2, 2.10.3
>
> Attachments: HADOOP-13816.01.patch
>
> When we try to build Hadoop with maven, the below warning is shown.
> {code}
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model for org.apache.hadoop:hadoop-rumen:jar:3.0.0-alpha2-SNAPSHOT
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must be unique: com.fasterxml.jackson.core:jackson-databind:jar -> duplicate declaration of version (?) @ line 102, column 17
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model for org.apache.hadoop:hadoop-build-tools:jar:3.0.0-alpha2-SNAPSHOT
> [WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-remote-resources-plugin is missing. @ org.apache.hadoop:hadoop-build-tools:[unknown-version], /Users/sasakikai/dev/hadoop/hadoop-build-tools/pom.xml, line 80, column 15
> [WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-resources-plugin is missing. @ org.apache.hadoop:hadoop-build-tools:[unknown-version], /Users/sasakikai/dev/hadoop/hadoop-build-tools/pom.xml, line 54, column 15
> {code}
> It is required to
> - remove the duplicated declaration of {{jackson-databind}}
> - specify the versions of {{maven-resources-plugin}} and {{maven-remote-resources-plugin}}.
>
-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13816) Ambiguous plugin version warning from maven build.
[ https://issues.apache.org/jira/browse/HADOOP-13816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17793758#comment-17793758 ] Masatake Iwasaki commented on HADOOP-13816: --- cherry-picked this to branch-2.10 in order to fix the error on running the dev-support/bin/create-release script.

{noformat}
$ less patchprocess/mvn_install_maven_plugins.log
...
[WARNING] Some problems were encountered while building the effective model for org.apache.hadoop:hadoop-build-tools:jar:2.10.3-SNAPSHOT
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-resources-plugin is missing. @ org.apache.hadoop:hadoop-build-tools:[unknown-version], /build/source/hadoop-build-tools/pom.xml, line 54, column 15
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-remote-resources-plugin is missing. @ org.apache.hadoop:hadoop-build-tools:[unknown-version], /build/source/hadoop-build-tools/pom.xml, line 80, column 15
...
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-remote-resources-plugin:3.1.0:bundle (default) on project hadoop-build-tools: Execution default of goal org.apache.maven.plugins:maven-remote-resources-plugin:3.1.0:bundle failed: Unable to load the mojo 'bundle' in the plugin 'org.apache.maven.plugins:maven-remote-resources-plugin:3.1.0' due to an API incompatibility: org.codehaus.plexus.component.repository.exception.ComponentLookupException: org/apache/maven/plugin/resources/remote/BundleRemoteResourcesMojo : Unsupported major.minor version 52.0
{noformat}

> Ambiguous plugin version warning from maven build.
> --------------------------------------------------
>
> Key: HADOOP-13816
> URL: https://issues.apache.org/jira/browse/HADOOP-13816
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 3.0.0-alpha2
> Reporter: Kai
> Assignee: Kai
> Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13816.01.patch
>
> When we try to build Hadoop with maven, the below warning is shown.
> {code}
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model for org.apache.hadoop:hadoop-rumen:jar:3.0.0-alpha2-SNAPSHOT
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must be unique: com.fasterxml.jackson.core:jackson-databind:jar -> duplicate declaration of version (?) @ line 102, column 17
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model for org.apache.hadoop:hadoop-build-tools:jar:3.0.0-alpha2-SNAPSHOT
> [WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-remote-resources-plugin is missing. @ org.apache.hadoop:hadoop-build-tools:[unknown-version], /Users/sasakikai/dev/hadoop/hadoop-build-tools/pom.xml, line 80, column 15
> [WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-resources-plugin is missing. @ org.apache.hadoop:hadoop-build-tools:[unknown-version], /Users/sasakikai/dev/hadoop/hadoop-build-tools/pom.xml, line 54, column 15
> {code}
> It is required to
> - remove the duplicated declaration of {{jackson-databind}}
> - specify the versions of {{maven-resources-plugin}} and {{maven-remote-resources-plugin}}.
>
-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
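Some context on the error quoted above: "Unsupported major.minor version 52.0" means the plugin's class files target Java 8 (class-file major version 52), so an older JVM refuses to load them. For Java 5 and later, the class-file major version maps to the JDK release by a fixed offset of 44; the method name in this sketch is illustrative:

```java
public class ClassFileVersion {

    // Class-file major version -> minimum JDK release able to load it.
    // The offset of 44 holds for Java 5 and later:
    // 49 -> Java 5, 52 -> Java 8, 55 -> Java 11, 61 -> Java 17.
    static int requiredJava(int classFileMajor) {
        return classFileMajor - 44;
    }

    public static void main(String[] args) {
        // The failing maven-remote-resources-plugin 3.1.0 was compiled
        // to class-file major version 52:
        System.out.println("major 52 requires Java " + requiredJava(52));
    }
}
```

This is why the fix was to pin plugin versions compatible with the JVM used by the branch-2.10 release build, rather than anything in the Hadoop code itself.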
[jira] [Commented] (HADOOP-18982) Fix doc about loading native libraries
[ https://issues.apache.org/jira/browse/HADOOP-18982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17793747#comment-17793747 ] ASF GitHub Bot commented on HADOOP-18982: - hadoop-yetus commented on PR #6281: URL: https://github.com/apache/hadoop/pull/6281#issuecomment-1843000769 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 18m 37s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 48m 24s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 25s | | trunk passed | | +1 :green_heart: | shadedclient | 87m 5s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 55s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 1m 17s | | the patch passed | | +1 :green_heart: | shadedclient | 38m 16s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. 
| | | | 149m 23s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6281/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6281 | | Optional Tests | dupname asflicense mvnsite codespell detsecrets | | uname | Linux e89ff0e81a8e 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 0b6693ff85cb9b31d4dbf868b93b27c3048c48f7 | | Max. process+thread count | 536 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6281/2/console | | versions | git=2.25.1 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. > Fix doc about loading native libraries > -- > > Key: HADOOP-18982 > URL: https://issues.apache.org/jira/browse/HADOOP-18982 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Reporter: Shuyan Zhang >Assignee: Shuyan Zhang >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > > When we want load a native library libmyexample.so, the right way is to call > System.loadLibrary("myexample") rather than > System.loadLibrary("libmyexample.so"). -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HDFS-17270. RBF: Fix ZKDelegationTokenSecretManagerImpl use closed zookeeper client to get token in some case. [hadoop]
Hexiaoqiao commented on PR #6315: URL: https://github.com/apache/hadoop/pull/6315#issuecomment-1842896978 Committed to trunk. Thanks @ThinkerLei and @zhangshuyan0 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HDFS-17270. RBF: Fix ZKDelegationTokenSecretManagerImpl use closed zookeeper client to get token in some case. [hadoop]
Hexiaoqiao merged PR #6315: URL: https://github.com/apache/hadoop/pull/6315 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-18982) Fix doc about loading native libraries
[ https://issues.apache.org/jira/browse/HADOOP-18982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoqiao He resolved HADOOP-18982. -- Fix Version/s: 3.4.0 Hadoop Flags: Reviewed Resolution: Fixed > Fix doc about loading native libraries > -- > > Key: HADOOP-18982 > URL: https://issues.apache.org/jira/browse/HADOOP-18982 > Project: Hadoop Common > Issue Type: Bug >Reporter: Shuyan Zhang >Assignee: Shuyan Zhang >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > > When we want load a native library libmyexample.so, the right way is to call > System.loadLibrary("myexample") rather than > System.loadLibrary("libmyexample.so"). -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18982) Fix doc about loading native libraries
[ https://issues.apache.org/jira/browse/HADOOP-18982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoqiao He updated HADOOP-18982: - Issue Type: Improvement (was: Bug) > Fix doc about loading native libraries > -- > > Key: HADOOP-18982 > URL: https://issues.apache.org/jira/browse/HADOOP-18982 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Shuyan Zhang >Assignee: Shuyan Zhang >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > > When we want load a native library libmyexample.so, the right way is to call > System.loadLibrary("myexample") rather than > System.loadLibrary("libmyexample.so"). -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18982) Fix doc about loading native libraries
[ https://issues.apache.org/jira/browse/HADOOP-18982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoqiao He updated HADOOP-18982: - Component/s: documentation > Fix doc about loading native libraries > -- > > Key: HADOOP-18982 > URL: https://issues.apache.org/jira/browse/HADOOP-18982 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Reporter: Shuyan Zhang >Assignee: Shuyan Zhang >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > > When we want load a native library libmyexample.so, the right way is to call > System.loadLibrary("myexample") rather than > System.loadLibrary("libmyexample.so"). -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18982) Fix doc about loading native libraries
[ https://issues.apache.org/jira/browse/HADOOP-18982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17793714#comment-17793714 ] ASF GitHub Bot commented on HADOOP-18982: - Hexiaoqiao merged PR #6281: URL: https://github.com/apache/hadoop/pull/6281 > Fix doc about loading native libraries > -- > > Key: HADOOP-18982 > URL: https://issues.apache.org/jira/browse/HADOOP-18982 > Project: Hadoop Common > Issue Type: Bug >Reporter: Shuyan Zhang >Priority: Major > Labels: pull-request-available > > When we want load a native library libmyexample.so, the right way is to call > System.loadLibrary("myexample") rather than > System.loadLibrary("libmyexample.so"). -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18982) Fix doc about loading native libraries
[ https://issues.apache.org/jira/browse/HADOOP-18982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17793715#comment-17793715 ] ASF GitHub Bot commented on HADOOP-18982: - Hexiaoqiao commented on PR #6281: URL: https://github.com/apache/hadoop/pull/6281#issuecomment-1842872502 Committed to trunk. Thanks @zhangshuyan0 > Fix doc about loading native libraries > -- > > Key: HADOOP-18982 > URL: https://issues.apache.org/jira/browse/HADOOP-18982 > Project: Hadoop Common > Issue Type: Bug >Reporter: Shuyan Zhang >Priority: Major > Labels: pull-request-available > > When we want load a native library libmyexample.so, the right way is to call > System.loadLibrary("myexample") rather than > System.loadLibrary("libmyexample.so"). -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-18982) Fix doc about loading native libraries
[ https://issues.apache.org/jira/browse/HADOOP-18982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoqiao He reassigned HADOOP-18982: Assignee: Shuyan Zhang > Fix doc about loading native libraries > -- > > Key: HADOOP-18982 > URL: https://issues.apache.org/jira/browse/HADOOP-18982 > Project: Hadoop Common > Issue Type: Bug >Reporter: Shuyan Zhang >Assignee: Shuyan Zhang >Priority: Major > Labels: pull-request-available > > When we want load a native library libmyexample.so, the right way is to call > System.loadLibrary("myexample") rather than > System.loadLibrary("libmyexample.so"). -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
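The doc fix tracked above can be demonstrated with System.mapLibraryName, which shows the file name the JVM actually searches for; this is a small standalone illustration, not the patched Hadoop documentation text:

```java
public class NativeLibName {
    public static void main(String[] args) {
        // System.loadLibrary takes the bare library name and expands it to
        // the platform-specific file name (e.g. libmyexample.so on Linux,
        // myexample.dll on Windows). mapLibraryName shows that expansion.
        String right = System.mapLibraryName("myexample");
        System.out.println("loadLibrary(\"myexample\") searches for: " + right);

        // Passing the full file name gets decorated a second time
        // (e.g. liblibmyexample.so.so on Linux), which never exists,
        // so loadLibrary("libmyexample.so") fails.
        String wrong = System.mapLibraryName("libmyexample.so");
        System.out.println("loadLibrary(\"libmyexample.so\") searches for: " + wrong);
    }
}
```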
Re: [PR] HADOOP-18982. Fix doc about loading native libraries. [hadoop]
Hexiaoqiao commented on PR #6281: URL: https://github.com/apache/hadoop/pull/6281#issuecomment-1842872502 Committed to trunk. Thanks @zhangshuyan0 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-18982. Fix doc about loading native libraries. [hadoop]
Hexiaoqiao merged PR #6281: URL: https://github.com/apache/hadoop/pull/6281
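The naming rule the merged doc fix describes comes from JNI's platform mapping: `System.loadLibrary` takes the logical library name, and the JVM expands it to the platform file name via `System.mapLibraryName`. A minimal sketch (the library name is illustrative):

```java
public class NativeNameDemo {
    public static void main(String[] args) {
        // The JVM maps the logical name to the platform file name:
        // "libmyexample.so" on Linux, "myexample.dll" on Windows.
        System.out.println(System.mapLibraryName("myexample"));
        // Correct:   System.loadLibrary("myexample");
        // Incorrect: System.loadLibrary("libmyexample.so");  // UnsatisfiedLinkError
    }
}
```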
[jira] [Updated] (HADOOP-19004) S3A: Support Authentication through HttpSigner API
[ https://issues.apache.org/jira/browse/HADOOP-19004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-19004: Summary: S3A: Support Authentication through HttpSigner API (was: S3A: Move to a new HttpSigner for S3Express) > S3A: Support Authentication through HttpSigner API > --- > > Key: HADOOP-19004 > URL: https://issues.apache.org/jira/browse/HADOOP-19004 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: Harshit Gupta >Priority: Major > Labels: pull-request-available > > The latest AWS SDK changes how signing works, and for signing S3Express > signatures the new {{software.amazon.awssdk.http.auth}} auth mechanism is > needed -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[PR] HDFS-17276. Fix the nn fetch editlog forbidden in kerberos environment [hadoop]
gp1314 opened a new pull request, #6326: URL: https://github.com/apache/hadoop/pull/6326

### Description of PR

- In a Kerberos environment, the NameNode cannot fetch the editlog from the JournalNode because the request is rejected (403). ![image-2023-12-05-20-59-33-728](https://github.com/apache/hadoop/assets/22268305/f19c2518-3fa9-4ceb-8570-63b0b38f682a)
- GetJournalEditServlet checks whether the request's username meets the requirements through the isValidRequestor function. After [HDFS-16686](https://issues.apache.org/jira/browse/HDFS-16686) was merged, remotePrincipal became ugi.getUserName().
- In a Kerberos environment, ugi.getUserName() obtains the username from request.getRemoteUser() via DfsServlet's getUGI, and this username is not a full principal name.
- Therefore, the obtained username is similar to namenode01 instead of namenode01/host01@REALM.TLD, which means it fails to pass the isValidRequestor check. ![image-2023-12-05-21-05-49-180](https://github.com/apache/hadoop/assets/22268305/1a50c620-c8a3-4499-bdfe-2b064b709d9f)

**Reproduction**

- Add a testSecurityRequestNameNode case to TestGetJournalEditServlet:

```java
@Test
public void testSecurityRequestNameNode() throws IOException, ServletException {
  // Test: make a request from a namenode
  CONF.set(HADOOP_SECURITY_AUTHENTICATION, "kerberos");
  UserGroupInformation.setConfiguration(CONF);
  HttpServletRequest request = mock(HttpServletRequest.class);
  when(request.getParameter(UserParam.NAME)).thenReturn("nn/localh...@realm.tld");
  when(request.getRemoteUser()).thenReturn("jn");
  boolean isValid = SERVLET.isValidRequestor(request, CONF);
  assertThat(isValid).isTrue();
}
```

### How was this patch tested?

### For code changes:

- [x] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
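The failure mode described in this PR can be distilled into a tiny sketch: the servlet compares the short name returned by `request.getRemoteUser()` against full Kerberos principals, so the check can never pass. The check below is a simplified stand-in for the real `isValidRequestor`, with illustrative names, not the actual servlet code:

```java
public class PrincipalCheckDemo {
    /** Simplified stand-in for isValidRequestor: only full principals match. */
    static boolean isValidRequestor(String remoteUser, String validPrincipal) {
        return validPrincipal.equals(remoteUser);
    }

    public static void main(String[] args) {
        String fullPrincipal = "nn/host01@REALM.TLD"; // what the check expects
        String shortName = "nn";                      // what getRemoteUser() returns
        // The short name never equals the full principal, so a legitimate
        // NameNode request is rejected with 403.
        System.out.println(isValidRequestor(shortName, fullPrincipal));     // false
        System.out.println(isValidRequestor(fullPrincipal, fullPrincipal)); // true
    }
}
```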
[jira] [Commented] (HADOOP-18989) Use thread pool to improve the speed of creating control files in TestDFSIO
[ https://issues.apache.org/jira/browse/HADOOP-18989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17793681#comment-17793681 ] ASF GitHub Bot commented on HADOOP-18989: - zhangshuyan0 commented on code in PR #6294: URL: https://github.com/apache/hadoop/pull/6294#discussion_r1417197306 ## hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java: ## @@ -289,12 +297,43 @@ public void testTruncate() throws Exception { bench.analyzeResult(fs, TestType.TEST_TYPE_TRUNCATE, execTime); } + private class ControlFileCreateTask implements Callable { Review Comment: There is no return value here, so Runnable is more suitable than Callable. ## hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java: ## @@ -116,6 +122,8 @@ public class TestDFSIO implements Tool { "test.io.block.storage.policy"; private static final String ERASURE_CODE_POLICY_NAME_KEY = "test.io.erasure.code.policy"; + private ExecutorService excutorService = Executors.newFixedThreadPool( Review Comment: I suggest you use `CompletionService` here. Then we will not need a `CountDownLatch`. > Use thread pool to improve the speed of creating control files in TestDFSIO > --- > > Key: HADOOP-18989 > URL: https://issues.apache.org/jira/browse/HADOOP-18989 > Project: Hadoop Common > Issue Type: Improvement > Components: benchmarks, common >Affects Versions: 3.3.6 >Reporter: farmmamba >Assignee: farmmamba >Priority: Major > Labels: pull-request-available > > When we use the TestDFSIO tool to test the throughput of HDFS clusters, we found > it is very slow in the control-file creation stage. > After referring to the source code, we found that the method createControlFile tries > to create control files serially. This can be improved by using a thread pool. > After this optimization, the TestDFSIO tool runs quicker than before.
Re: [PR] HADOOP-18989. Use thread pool to improve the speed of creating control files in TestDFSIO [hadoop]
zhangshuyan0 commented on code in PR #6294: URL: https://github.com/apache/hadoop/pull/6294#discussion_r1417197306 ## hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java: ## @@ -289,12 +297,43 @@ public void testTruncate() throws Exception { bench.analyzeResult(fs, TestType.TEST_TYPE_TRUNCATE, execTime); } + private class ControlFileCreateTask implements Callable { Review Comment: There is no return value here, it is more suitable to use Runnable. ## hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java: ## @@ -116,6 +122,8 @@ public class TestDFSIO implements Tool { "test.io.block.storage.policy"; private static final String ERASURE_CODE_POLICY_NAME_KEY = "test.io.erasure.code.policy"; + private ExecutorService excutorService = Executors.newFixedThreadPool( Review Comment: I suggest you use `CompletionService` here. Then we will not need to use a `CountDownLatch`. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
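The reviewer's `CompletionService` suggestion can be sketched as follows. This is not the TestDFSIO patch itself, just an illustration (task bodies and names are placeholders) of how `ExecutorCompletionService` removes the need for a `CountDownLatch`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ControlFileDemo {
    /** Creates nFiles "control files" in parallel; returns names in completion order. */
    static List<String> createControlFiles(int nFiles, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        // A CompletionService hands back futures as tasks finish, so no
        // CountDownLatch is needed to wait for all of them.
        CompletionService<String> cs = new ExecutorCompletionService<>(pool);
        for (int i = 0; i < nFiles; i++) {
            final int id = i;
            cs.submit(() -> "control_file_" + id); // stand-in for writing one file
        }
        List<String> created = new ArrayList<>();
        try {
            for (int i = 0; i < nFiles; i++) {
                created.add(cs.take().get()); // take() blocks until the next task completes
            }
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
        return created;
    }

    public static void main(String[] args) {
        System.out.println(createControlFiles(8, 4).size()); // 8
    }
}
```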
Re: [PR] HDFS-17269. RBF: Listing trash directory should return subdirs from all subclusters. [hadoop]
hadoop-yetus commented on PR #6312: URL: https://github.com/apache/hadoop/pull/6312#issuecomment-1842774757 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 3m 6s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 22s | | trunk passed | | +1 :green_heart: | compile | 0m 24s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 19s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 17s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 27s | | trunk passed | | +1 :green_heart: | javadoc | 0m 27s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 19s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 57s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 43s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 18s | | the patch passed | | +1 :green_heart: | compile | 0m 19s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 19s | | the patch passed | | +1 :green_heart: | compile | 0m 17s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 17s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 10s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 20s | | the patch passed | | +1 :green_heart: | javadoc | 0m 16s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 17s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 50s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 12s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 18m 51s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 23s | | The patch does not generate ASF License warnings. 
| | | | 104m 10s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6312/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6312 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 3aa177a5fab8 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 6deca602b618ed759badba4e6024d490f3b18110 | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6312/3/testReport/ | | Max. process+thread count | 2311 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6312/3/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HDFS-17269. RBF: Listing trash directory should return subdirs from all subclusters. [hadoop]
hadoop-yetus commented on PR #6312: URL: https://github.com/apache/hadoop/pull/6312#issuecomment-1842774062 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 6m 57s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 26s | | trunk passed | | +1 :green_heart: | compile | 0m 22s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 20s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 17s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 25s | | trunk passed | | +1 :green_heart: | javadoc | 0m 26s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 19s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 55s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 16s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 18s | | the patch passed | | +1 :green_heart: | compile | 0m 19s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 19s | | the patch passed | | +1 :green_heart: | compile | 0m 16s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 16s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 9s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 19s | | the patch passed | | +1 :green_heart: | javadoc | 0m 16s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 13s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 50s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 19s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 18m 45s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 25s | | The patch does not generate ASF License warnings. 
| | | | 107m 13s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6312/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6312 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 067f0218bc0c 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 6deca602b618ed759badba4e6024d490f3b18110 | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6312/2/testReport/ | | Max. process+thread count | 2305 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6312/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HADOOP-18997) S3A: Add option fs.s3a.s3express.create.session to enable/disable CreateSession
[ https://issues.apache.org/jira/browse/HADOOP-18997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17793672#comment-17793672 ] ASF GitHub Bot commented on HADOOP-18997: - ahmarsuhail commented on code in PR #6316: URL: https://github.com/apache/hadoop/pull/6316#discussion_r1417123542 ## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java: ## @@ -181,6 +189,10 @@ protected Configuration createValidRoleConf() throws JsonProcessingException { conf.set(ASSUMED_ROLE_ARN, roleARN); conf.set(ASSUMED_ROLE_SESSION_NAME, "valid"); conf.set(ASSUMED_ROLE_SESSION_DURATION, "45m"); +// disable create session so there's no need to +// add a role policy for it. +disableCreateSession(conf); Review Comment: what was happening without this? I am seeing the same failure on trunk and on this branch. for eg `testPartialDelete` fails on list for `/test/testPartialDelete/file-1/` ## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java: ## @@ -493,23 +507,72 @@ public static void skipIfNotEnabled(final Configuration configuration, } /** - * Skip a test if storage class tests are disabled. + * Skip a test if storage class tests are disabled, + * or the bucket is an S3Express bucket. * @param configuration configuration to probe */ public static void skipIfStorageClassTestsDisabled( Configuration configuration) { skipIfNotEnabled(configuration, KEY_STORAGE_CLASS_TESTS_ENABLED, "Skipping storage class tests"); +skipIfS3ExpressBucket(configuration); } /** - * Skip a test if ACL class tests are disabled. + * Skip a test if ACL class tests are disabled, + * or the bucket is an S3Express bucket. 
* @param configuration configuration to probe */ public static void skipIfACLTestsDisabled( Configuration configuration) { skipIfNotEnabled(configuration, KEY_ACL_TESTS_ENABLED, "Skipping storage class ACL tests"); +skipIfS3ExpressBucket(configuration); + } + + /** + * Skip a test if the test bucket is an S3Express bucket. + * @param configuration configuration to probe + */ + public static void skipIfS3ExpressBucket( + Configuration configuration) { +assume("Skipping test as bucket is an S3Express bucket", +!isS3ExpressTestBucket(configuration)); + } + + /** + * Is the test bucket an S3Express bucket? + * @param conf configuration + * @return true if the bucket is an S3Express bucket. + */ + public static boolean isS3ExpressTestBucket(final Configuration conf) { +return S3ExpressStorage.isS3ExpressStore(getTestBucketName(conf), ""); + } + + /** + * Skip a test if the filesystem lacks a required capability. Review Comment: nit: update javadoc, i think this will skip if it has a capability ## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/ITestSessionDelegationInFilesystem.java: ## @@ -147,40 +150,51 @@ protected Configuration createConfiguration() { // disable if assume role opts are off assumeSessionTestsEnabled(conf); disableFilesystemCaching(conf); -String s3EncryptionMethod; -try { - s3EncryptionMethod = - getEncryptionAlgorithm(getTestBucketName(conf), conf).getMethod(); -} catch (IOException e) { - throw new UncheckedIOException("Failed to lookup encryption algorithm.", - e); -} -String s3EncryptionKey = getS3EncryptionKey(getTestBucketName(conf), conf); +final String bucket = getTestBucketName(conf); +final boolean isS3Express = isS3ExpressTestBucket(conf); + removeBaseAndBucketOverrides(conf, DELEGATION_TOKEN_BINDING, Constants.S3_ENCRYPTION_ALGORITHM, Constants.S3_ENCRYPTION_KEY, SERVER_SIDE_ENCRYPTION_ALGORITHM, -SERVER_SIDE_ENCRYPTION_KEY); +SERVER_SIDE_ENCRYPTION_KEY, +S3EXPRESS_CREATE_SESSION); 
conf.set(HADOOP_SECURITY_AUTHENTICATION, UserGroupInformation.AuthenticationMethod.KERBEROS.name()); enableDelegationTokens(conf, getDelegationBinding()); conf.set(AWS_CREDENTIALS_PROVIDER, " "); // switch to CSE-KMS(if specified) else SSE-KMS. -if (conf.getBoolean(KEY_ENCRYPTION_TESTS, true)) { +if (!isS3Express && conf.getBoolean(KEY_ENCRYPTION_TESTS, true)) { + String s3EncryptionMethod; + try { +s3EncryptionMethod = +getEncryptionAlgorithm(bucket, conf).getMethod(); + } catch (IOException e) { +throw new UncheckedIOException("Failed to lookup encryption algorithm.", +e); + } + String s3EncryptionKey = getS3EncryptionKey(bucket, conf); + conf.set(Constants.S3_ENCRYPTION_ALGORITHM, s3EncryptionMethod); // KMS key ID a must if CSE-KMS is being tested.
Re: [PR] HADOOP-18997. S3A: make createSession optional when working with S3Express buckets [hadoop]
ahmarsuhail commented on code in PR #6316: URL: https://github.com/apache/hadoop/pull/6316#discussion_r1417123542 ## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java: ## @@ -181,6 +189,10 @@ protected Configuration createValidRoleConf() throws JsonProcessingException { conf.set(ASSUMED_ROLE_ARN, roleARN); conf.set(ASSUMED_ROLE_SESSION_NAME, "valid"); conf.set(ASSUMED_ROLE_SESSION_DURATION, "45m"); +// disable create session so there's no need to +// add a role policy for it. +disableCreateSession(conf); Review Comment: what was happening without this? I am seeing the same failure on trunk and on this branch. for eg `testPartialDelete` fails on list for `/test/testPartialDelete/file-1/` ## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java: ## @@ -493,23 +507,72 @@ public static void skipIfNotEnabled(final Configuration configuration, } /** - * Skip a test if storage class tests are disabled. + * Skip a test if storage class tests are disabled, + * or the bucket is an S3Express bucket. * @param configuration configuration to probe */ public static void skipIfStorageClassTestsDisabled( Configuration configuration) { skipIfNotEnabled(configuration, KEY_STORAGE_CLASS_TESTS_ENABLED, "Skipping storage class tests"); +skipIfS3ExpressBucket(configuration); } /** - * Skip a test if ACL class tests are disabled. + * Skip a test if ACL class tests are disabled, + * or the bucket is an S3Express bucket. * @param configuration configuration to probe */ public static void skipIfACLTestsDisabled( Configuration configuration) { skipIfNotEnabled(configuration, KEY_ACL_TESTS_ENABLED, "Skipping storage class ACL tests"); +skipIfS3ExpressBucket(configuration); + } + + /** + * Skip a test if the test bucket is an S3Express bucket. 
+ * @param configuration configuration to probe + */ + public static void skipIfS3ExpressBucket( + Configuration configuration) { +assume("Skipping test as bucket is an S3Express bucket", +!isS3ExpressTestBucket(configuration)); + } + + /** + * Is the test bucket an S3Express bucket? + * @param conf configuration + * @return true if the bucket is an S3Express bucket. + */ + public static boolean isS3ExpressTestBucket(final Configuration conf) { +return S3ExpressStorage.isS3ExpressStore(getTestBucketName(conf), ""); + } + + /** + * Skip a test if the filesystem lacks a required capability. Review Comment: nit: update javadoc, i think this will skip if it has a capability ## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/delegation/ITestSessionDelegationInFilesystem.java: ## @@ -147,40 +150,51 @@ protected Configuration createConfiguration() { // disable if assume role opts are off assumeSessionTestsEnabled(conf); disableFilesystemCaching(conf); -String s3EncryptionMethod; -try { - s3EncryptionMethod = - getEncryptionAlgorithm(getTestBucketName(conf), conf).getMethod(); -} catch (IOException e) { - throw new UncheckedIOException("Failed to lookup encryption algorithm.", - e); -} -String s3EncryptionKey = getS3EncryptionKey(getTestBucketName(conf), conf); +final String bucket = getTestBucketName(conf); +final boolean isS3Express = isS3ExpressTestBucket(conf); + removeBaseAndBucketOverrides(conf, DELEGATION_TOKEN_BINDING, Constants.S3_ENCRYPTION_ALGORITHM, Constants.S3_ENCRYPTION_KEY, SERVER_SIDE_ENCRYPTION_ALGORITHM, -SERVER_SIDE_ENCRYPTION_KEY); +SERVER_SIDE_ENCRYPTION_KEY, +S3EXPRESS_CREATE_SESSION); conf.set(HADOOP_SECURITY_AUTHENTICATION, UserGroupInformation.AuthenticationMethod.KERBEROS.name()); enableDelegationTokens(conf, getDelegationBinding()); conf.set(AWS_CREDENTIALS_PROVIDER, " "); // switch to CSE-KMS(if specified) else SSE-KMS. 
-if (conf.getBoolean(KEY_ENCRYPTION_TESTS, true)) { +if (!isS3Express && conf.getBoolean(KEY_ENCRYPTION_TESTS, true)) { + String s3EncryptionMethod; + try { +s3EncryptionMethod = +getEncryptionAlgorithm(bucket, conf).getMethod(); + } catch (IOException e) { +throw new UncheckedIOException("Failed to lookup encryption algorithm.", +e); + } + String s3EncryptionKey = getS3EncryptionKey(bucket, conf); + conf.set(Constants.S3_ENCRYPTION_ALGORITHM, s3EncryptionMethod); // KMS key ID a must if CSE-KMS is being tested. conf.set(Constants.S3_ENCRYPTION_KEY, s3EncryptionKey); } // set the YARN RM up for YARN tests. conf.set(YarnConfiguration.RM_PRINCIPAL, YARN_RM); -// turn on ACLs so as to verify role DT permissions include -// write access. -conf.set(CANNED_ACL,
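For readers unfamiliar with the bucket probe behind `skipIfS3ExpressBucket` in the diff above: S3 Express One Zone directory buckets follow a distinctive naming convention, so a suffix check on the bucket name is enough to classify it. A self-contained sketch (the helper below is this example's own, not the actual `S3ExpressStorage` code):

```java
public class S3ExpressProbeDemo {
    // Hypothetical probe: S3 Express One Zone directory bucket names
    // end with the "--x-s3" suffix.
    static boolean isS3ExpressBucket(String bucket) {
        return bucket != null && bucket.endsWith("--x-s3");
    }

    public static void main(String[] args) {
        System.out.println(isS3ExpressBucket("demo-bucket--usw2-az1--x-s3")); // true
        System.out.println(isS3ExpressBucket("demo-bucket"));                 // false
    }
}
```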
[jira] [Commented] (HADOOP-18982) Fix doc about loading native libraries
[ https://issues.apache.org/jira/browse/HADOOP-18982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17793666#comment-17793666 ] ASF GitHub Bot commented on HADOOP-18982: - zhangshuyan0 commented on code in PR #6281: URL: https://github.com/apache/hadoop/pull/6281#discussion_r1417172435 ## hadoop-common-project/hadoop-common/src/site/markdown/NativeLibraries.md.vm: ## @@ -128,8 +128,8 @@ You can load any native shared library using DistributedCache for distributing a This example shows you how to distribute a shared library, mylib.so, and load it from a MapReduce task. -1. First copy the library to the HDFS: `bin/hadoop fs -copyFromLocal mylib.so.1 /libraries/mylib.so.1` -2. The job launching program should contain the following: `DistributedCache.createSymlink(conf);` `DistributedCache.addCacheFile("hdfs://host:port/libraries/mylib.so. 1#mylib.so", conf);` -3. The MapReduce task can contain: `System.loadLibrary("mylib.so");` +1. First copy the library to the HDFS: `bin/hadoop fs -copyFromLocal libmyexample.so.1 /libraries/libmyexample.so.1` +2. The job launching program should contain the following: `DistributedCache.createSymlink(conf);` `DistributedCache.addCacheFile("hdfs://host:port/libraries/libmyexample.so.1#libmyexample.so", conf);` +3. The MapReduce task can contain: `System.loadLibrary("myexample");` Review Comment: @Hexiaoqiao Thanks for your review. This PR is updated. > Fix doc about loading native libraries > -- > > Key: HADOOP-18982 > URL: https://issues.apache.org/jira/browse/HADOOP-18982 > Project: Hadoop Common > Issue Type: Bug >Reporter: Shuyan Zhang >Priority: Major > Labels: pull-request-available > > When we want to load a native library libmyexample.so, the right way is to call > System.loadLibrary("myexample") rather than > System.loadLibrary("libmyexample.so").
Re: [PR] HADOOP-18982. Fix doc about loading native libraries. [hadoop]
zhangshuyan0 commented on code in PR #6281: URL: https://github.com/apache/hadoop/pull/6281#discussion_r1417172435 ## hadoop-common-project/hadoop-common/src/site/markdown/NativeLibraries.md.vm: ## @@ -128,8 +128,8 @@ You can load any native shared library using DistributedCache for distributing a This example shows you how to distribute a shared library, mylib.so, and load it from a MapReduce task. -1. First copy the library to the HDFS: `bin/hadoop fs -copyFromLocal mylib.so.1 /libraries/mylib.so.1` -2. The job launching program should contain the following: `DistributedCache.createSymlink(conf);` `DistributedCache.addCacheFile("hdfs://host:port/libraries/mylib.so. 1#mylib.so", conf);` -3. The MapReduce task can contain: `System.loadLibrary("mylib.so");` +1. First copy the library to the HDFS: `bin/hadoop fs -copyFromLocal libmyexample.so.1 /libraries/libmyexample.so.1` +2. The job launching program should contain the following: `DistributedCache.createSymlink(conf);` `DistributedCache.addCacheFile("hdfs://host:port/libraries/libmyexample.so.1#libmyexample.so", conf);` +3. The MapReduce task can contain: `System.loadLibrary("myexample");` Review Comment: @Hexiaoqiao Thanks for your review. This PR is updated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HDFS-17272. NNThroughputBenchmark should support specifying the base directory for multi-client test [hadoop]
hadoop-yetus commented on PR #6319: URL: https://github.com/apache/hadoop/pull/6319#issuecomment-1842732133 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 53s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 30s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 31m 4s | | trunk passed | | +1 :green_heart: | compile | 16m 19s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 14m 47s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 4m 11s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 12s | | trunk passed | | +1 :green_heart: | javadoc | 2m 24s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 2m 48s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 5m 54s | | trunk passed | | +1 :green_heart: | shadedclient | 34m 53s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 31s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 4s | | the patch passed | | +1 :green_heart: | compile | 15m 39s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 15m 39s | | the patch passed | | +1 :green_heart: | compile | 14m 50s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 14m 50s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 4m 7s | | root: The patch generated 0 new + 117 unchanged - 9 fixed = 117 total (was 126) | | +1 :green_heart: | mvnsite | 3m 5s | | the patch passed | | +1 :green_heart: | javadoc | 2m 19s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 2m 38s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 6m 15s | | the patch passed | | +1 :green_heart: | shadedclient | 35m 22s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 19m 14s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 264m 5s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 1m 2s | | The patch does not generate ASF License warnings. 
| | | | 503m 43s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6319/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6319 | | Optional Tests | dupname asflicense mvnsite codespell detsecrets markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs checkstyle | | uname | Linux 5f859dc6a525 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 14566b74d274a286f28f1efa751c3f2941e74d61 | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6319/5/testReport/ | | Max. process+thread count | 3476 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6319/5/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0
Re: [PR] HDFS-15413. add dfs.client.read.striped.datanode.max.attempts to fix read ecfile timeout [hadoop]
hadoop-yetus commented on PR #5829: URL: https://github.com/apache/hadoop/pull/5829#issuecomment-1842713669 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 19s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 13m 24s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 19m 14s | | trunk passed | | +1 :green_heart: | compile | 2m 51s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 2m 47s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 44s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 16s | | trunk passed | | +1 :green_heart: | javadoc | 1m 3s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 29s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 3s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 9s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 2s | | the patch passed | | +1 :green_heart: | compile | 2m 47s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 2m 47s | | the patch passed | | +1 :green_heart: | compile | 2m 42s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 2m 42s | | the patch passed | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5829/4/artifact/out/blanks-eol.txt) | The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | -0 :warning: | checkstyle | 0m 35s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5829/4/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 1 new + 45 unchanged - 0 fixed = 46 total (was 45) | | +1 :green_heart: | mvnsite | 1m 6s | | the patch passed | | +1 :green_heart: | javadoc | 0m 50s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 20s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 5s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 7s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 49s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 189m 58s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5829/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. | | +1 :green_heart: | asflicense | 0m 27s | | The patch does not generate ASF License warnings. 
| | | | 293m 19s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestDFSStripedInputStreamWithTimeout | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5829/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5829 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint | | uname | Linux 807603bf2dcf 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / bb46dbd471d96878492fe660b2af03a8384f8123 | | Default Java | Private
Re: [PR] HDFS-17269. RBF: Listing trash directory should return subdirs from all subclusters. [hadoop]
LiuGuH commented on code in PR #6312: URL: https://github.com/apache/hadoop/pull/6312#discussion_r1417093166 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterTrash.java: ## @@ -282,6 +282,13 @@ public void testMultipleMountPoint() throws IOException, fileStatuses = fs.listStatus(new Path("/user/test-trash/.Trash/Current/" + MOUNT_POINT2)); assertEquals(0, fileStatuses.length); +// In ns1, make a trash path with timestamp to simulate a trash path. +String trashPath = "/user/test-trash/.Trash/" + System.currentTimeMillis(); +client1.mkdirs(trashPath, new FsPermission("770"), +true); +fileStatuses = fs.listStatus(new Path("/user/test-trash/.Trash")); Review Comment: Thanks for the review. For this case it is enough; other approaches would be more complicated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HDFS-17262 Fixed the verbose log.warn in DFSUtil.addTransferRateMetric(). [hadoop]
hadoop-yetus commented on PR #6290: URL: https://github.com/apache/hadoop/pull/6290#issuecomment-1842500384 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 21s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 42s | | trunk passed | | +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 37s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 39s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 41s | | trunk passed | | +1 :green_heart: | javadoc | 0m 41s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 0s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 43s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 19s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 35s | | the patch passed | | +1 :green_heart: | compile | 0m 36s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 36s | | the patch passed | | +1 :green_heart: | compile | 0m 34s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 34s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 26s | | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 126 unchanged - 2 fixed = 126 total (was 128) | | +1 :green_heart: | mvnsite | 0m 36s | | the patch passed | | +1 :green_heart: | javadoc | 0m 30s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 1s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 40s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 13s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 185m 8s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 28s | | The patch does not generate ASF License warnings. 
| | | | 270m 46s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6290/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6290 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 571c7939a280 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / c18693adfb9f77959f4732bda9b0b65c2af160b1 | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6290/5/testReport/ | | Max. process+thread count | 4277 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6290/5/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For
[jira] [Updated] (HADOOP-18996) S3A to provide full support for S3 Express One Zone
[ https://issues.apache.org/jira/browse/HADOOP-18996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmar Suhail updated HADOOP-18996: -- Fix Version/s: 3.3.6-aws > S3A to provide full support for S3 Express One Zone > --- > > Key: HADOOP-18996 > URL: https://issues.apache.org/jira/browse/HADOOP-18996 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Ahmar Suhail >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.6-aws > > > HADOOP-18995 upgrades the SDK version which allows connecting to a s3 express > one zone support. > Complete support needs to be added to address tests that fail with s3 express > one zone, additional tests, documentation etc. > * hadoop-common path capability to indicate that treewalking may encounter > missing dirs > * use this in treewalking code in shell, mapreduce FileInputFormat etc to not > fail during treewalks > * extra path capability for s3express too. > * tests for this > * anything else -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18995) S3A: Upgrade AWS SDK version to 2.21.33 for Amazon S3 Express One Zone support
[ https://issues.apache.org/jira/browse/HADOOP-18995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmar Suhail updated HADOOP-18995: -- Fix Version/s: 3.3.6-aws > S3A: Upgrade AWS SDK version to 2.21.33 for Amazon S3 Express One Zone support > -- > > Key: HADOOP-18995 > URL: https://issues.apache.org/jira/browse/HADOOP-18995 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Ahmar Suhail >Assignee: Ahmar Suhail >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.6-aws > > > Upgrade SDK version to 2.21.33, which adds S3 Express One Zone support. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18915) Tune/extend S3A http connection and thread pool settings
[ https://issues.apache.org/jira/browse/HADOOP-18915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmar Suhail updated HADOOP-18915: -- Fix Version/s: 3.3.6-aws > Tune/extend S3A http connection and thread pool settings > > > Key: HADOOP-18915 > URL: https://issues.apache.org/jira/browse/HADOOP-18915 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Ahmar Suhail >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.6-aws > > > Increases existing pool sizes, as with server scale and vector > IO, larger pools are needed > fs.s3a.connection.maximum 200 > fs.s3a.threads.max 96 > Adds new configuration options for v2 sdk internal timeouts, > both with default of 60s: > fs.s3a.connection.acquisition.timeout > fs.s3a.connection.idle.time > All the pool/timoeut options are covered in performance.md > Moves all timeout/duration options in the s3a FS to taking > temporal units (h, m, s, ms,...); retaining the previous default > unit (normally millisecond) > Adds a minimum duration for most of these, in order to recover from > deployments where a timeout has been set on the assumption the unit > was seconds, not millis. > Uses java.time.Duration throughout the codebase; > retaining the older numeric constants in > org.apache.hadoop.fs.s3a.Constants for backwards compatibility; > these are now deprecated. > Adds new class AWSApiCallTimeoutException to be raised on > sdk-related methods and also gateway timeouts. This is a subclass > of org.apache.hadoop.net.ConnectTimeoutException to support > existing retry logic. > + reverted default value of fs.s3a.create.performance to false; > inadvertently set to true during testing. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
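The HADOOP-18915 description above says the S3A timeout options now take temporal units (`h`, `m`, `s`, `ms`, ...) while a bare number keeps the previous default unit (normally milliseconds), and that a minimum duration guards deployments that had assumed the unit was seconds. A rough, self-contained sketch of those parsing semantics — this is an illustration only, not Hadoop's actual `Configuration.getTimeDuration` implementation, and the helper name is hypothetical:

```java
import java.util.concurrent.TimeUnit;

// Sketch of suffix-aware duration parsing: "60s" -> 60_000 ms, "5m" -> 300_000 ms.
// A bare "60" keeps the caller's default unit (milliseconds here), which is why a
// value written on the assumption of seconds silently becomes 60 ms — the failure
// mode the jira's minimum-duration floor is meant to recover from.
public class DurationParseDemo {
    static long toMillis(String value, TimeUnit defaultUnit) {
        String v = value.trim().toLowerCase();
        TimeUnit unit = defaultUnit;
        String digits = v;
        // Check "ms" before "s" so "500ms" is not read as 500 seconds.
        if (v.endsWith("ms"))     { unit = TimeUnit.MILLISECONDS; digits = v.substring(0, v.length() - 2); }
        else if (v.endsWith("s")) { unit = TimeUnit.SECONDS;      digits = v.substring(0, v.length() - 1); }
        else if (v.endsWith("m")) { unit = TimeUnit.MINUTES;      digits = v.substring(0, v.length() - 1); }
        else if (v.endsWith("h")) { unit = TimeUnit.HOURS;        digits = v.substring(0, v.length() - 1); }
        return unit.toMillis(Long.parseLong(digits.trim()));
    }

    public static void main(String[] args) {
        System.out.println(toMillis("60s", TimeUnit.MILLISECONDS)); // 60000
        System.out.println(toMillis("5m", TimeUnit.MILLISECONDS));  // 300000
        System.out.println(toMillis("60", TimeUnit.MILLISECONDS));  // 60 — a 60 ms timeout, not 60 s
    }
}
```

The last line shows the migration hazard: the same string that once meant "60 seconds" under a seconds default now means 60 milliseconds under a milliseconds default, hence the minimum-duration clamp described in the jira.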