[jira] [Commented] (HADOOP-18429) MutableGaugeFloat#incr(float) gets stuck in an infinite loop
[ https://issues.apache.org/jira/browse/HADOOP-18429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617487#comment-17617487 ]

ASF GitHub Bot commented on HADOOP-18429:
-----------------------------------------

huxinqiu commented on PR #4823:
URL: https://github.com/apache/hadoop/pull/4823#issuecomment-1278521635

@ZanderXu @ashutoshcipher Thanks for helping to review the code. Could you merge this PR into the trunk branch?

> MutableGaugeFloat#incr(float) gets stuck in an infinite loop
> ------------------------------------------------------------
>
>                 Key: HADOOP-18429
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18429
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: metrics
>            Reporter: xinqiu.hu
>            Assignee: Ashutosh Gupta
>            Priority: Major
>              Labels: pull-request-available
>
> The current implementation reads the backing AtomicInteger's raw bits as if
> they were the float value, so compareAndSet never matches and incr() spins
> forever.
> {code:java}
> private final boolean compareAndSet(float expect, float update) {
>   return value.compareAndSet(Float.floatToIntBits(expect),
>       Float.floatToIntBits(update));
> }
>
> private void incr(float delta) {
>   while (true) {
>     float current = value.get();
>     float next = current + delta;
>     if (compareAndSet(current, next)) {
>       setChanged();
>       return;
>     }
>   }
> }{code}
> Perhaps it could be:
> {code:java}
> private void incr(float delta) {
>   while (true) {
>     float current = Float.intBitsToFloat(value.get());
>     float next = current + delta;
>     if (compareAndSet(current, next)) {
>       setChanged();
>       return;
>     }
>   }
> }{code}
> The unit test looks like this:
> {code:java}
> MutableGaugeFloat mgf = new MutableGaugeFloat(Context, 3.2f);
> assertEquals(3.2f, mgf.value(), 0.0);
> mgf.incr();
> assertEquals(4.2f, mgf.value(), 0.0);{code}

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
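The root cause above can be reproduced outside Hadoop's metrics classes. Below is a minimal standalone sketch (the class name `FloatBitsGauge` is illustrative, not Hadoop's) of a float gauge backed by an `AtomicInteger` that stores the raw IEEE 754 bits: every read must decode with `Float.intBitsToFloat`, and every CAS must encode with `Float.floatToIntBits`. The buggy version skipped the decode, so the value handed to `compareAndSet` never matched the stored bits.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of the fixed logic: the AtomicInteger holds the raw
// IEEE 754 bits of the float, so reads decode and the CAS encodes.
class FloatBitsGauge {
    private final AtomicInteger value;

    FloatBitsGauge(float initial) {
        value = new AtomicInteger(Float.floatToIntBits(initial));
    }

    float get() {
        return Float.intBitsToFloat(value.get());
    }

    void incr(float delta) {
        while (true) {
            // Decode the stored bits before doing float arithmetic. The
            // original bug did `float current = value.get()`, which widens
            // the raw bits to a float, so floatToIntBits(current) could
            // never equal the stored value and the loop spun forever.
            float current = Float.intBitsToFloat(value.get());
            float next = current + delta;
            if (value.compareAndSet(Float.floatToIntBits(current),
                                    Float.floatToIntBits(next))) {
                return;
            }
        }
    }

    public static void main(String[] args) {
        FloatBitsGauge g = new FloatBitsGauge(3.2f);
        g.incr(1.0f); // terminates: the CAS now compares like with like
        if (Math.abs(g.get() - 4.2f) > 1e-6f) {
            throw new AssertionError("unexpected gauge value: " + g.get());
        }
        System.out.println(g.get());
    }
}
```

With the buggy read, the first `incr` call never returns; with the decode in place it completes on the first CAS attempt under no contention.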
[jira] [Commented] (HADOOP-18395) Performance improvement in org.apache.hadoop.io.Text#find
[ https://issues.apache.org/jira/browse/HADOOP-18395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617488#comment-17617488 ]

ASF GitHub Bot commented on HADOOP-18395:
-----------------------------------------

huxinqiu commented on PR #4714:
URL: https://github.com/apache/hadoop/pull/4714#issuecomment-1278521721

@ZanderXu Thanks for helping to review the code. Could you merge this PR into the trunk branch?

> Performance improvement in org.apache.hadoop.io.Text#find
> ----------------------------------------------------------
>
>                 Key: HADOOP-18395
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18395
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: io
>            Reporter: xinqiu.hu
>            Priority: Trivial
>              Labels: pull-request-available
>         Attachments: 0001-add-UT-with-timeout-for-Text-find-and-fix-comments.patch
>
> When tgt still has bytes to match but src expired first, the current
> implementation resets src and tgt to the mark and continues searching,
> which is not necessary.
> {code:java}
> public int find(String what, int start) {
>   try {
>     ByteBuffer src = ByteBuffer.wrap(this.bytes, 0, this.length);
>     ByteBuffer tgt = encode(what);
>     byte b = tgt.get();
>     src.position(start);
>     while (src.hasRemaining()) {
>       if (b == src.get()) { // matching first byte
>         src.mark();  // save position in loop
>         tgt.mark();  // save position in target
>         boolean found = true;
>         int pos = src.position() - 1;
>         while (tgt.hasRemaining()) {
>           if (!src.hasRemaining()) { // src expired first
>             tgt.reset();
>             src.reset();
>             found = false;
>             break;
>           }
>           if (!(tgt.get() == src.get())) {
>             tgt.reset();
>             src.reset();
>             found = false;
>             break; // no match
>           }
>         }
>         if (found) return pos;
>       }
>     }
>     return -1; // not found
>   } catch (CharacterCodingException e) {
>     throw new RuntimeException("Should not have happened", e);
>   }
> }{code}
> For example, when the match reaches q, src has no bytes remaining and is
> reset to d to continue searching. But from then on the remaining length of
> src is always smaller than that of tgt, so at that point we can return -1
> directly.
> {code:java}
> @Test
> public void testFind() throws Exception {
>   Text text = new Text("abcd\u20acbdcd\u20ac");
>   assertThat(text.find("cd\u20acq")).isEqualTo(-1);
> }{code}
> Perhaps it could be:
> {code:java}
> public int find(String what, int start) {
>   try {
>     ByteBuffer src = ByteBuffer.wrap(this.bytes, 0, this.length);
>     ByteBuffer tgt = encode(what);
>     byte b = tgt.get();
>     src.position(start);
>     while (src.hasRemaining()) {
>       if (b == src.get()) { // matching first byte
>         src.mark();  // save position in loop
>         tgt.mark();  // save position in target
>         boolean found = true;
>         int pos = src.position() - 1;
>         while (tgt.hasRemaining()) {
>           if (!src.hasRemaining()) { // src expired first
>             return -1;
>           }
>           if (!(tgt.get() == src.get())) {
>             tgt.reset();
>             src.reset();
>             found = false;
>             break; // no match
>           }
>         }
>         if (found) return pos;
>       }
>     }
>     return -1; // not found
>   } catch (CharacterCodingException e) {
>     throw new RuntimeException("Should not have happened", e);
>   }
> }{code}
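The early-exit argument generalizes to any naive byte-substring search: if the source runs out in the middle of a match attempt, every later start position leaves even fewer source bytes, so no match is possible and the scan can stop. A minimal standalone sketch over plain byte arrays (illustrative names, not the Hadoop `Text`/`ByteBuffer` implementation):

```java
import java.nio.charset.StandardCharsets;

// Naive byte-substring search with the early exit proposed in HADOOP-18395:
// once src expires mid-match, the remaining source can only shrink for later
// start positions, so -1 is returned immediately instead of rescanning.
class ByteFind {
    static int find(byte[] src, byte[] tgt, int start) {
        if (tgt.length == 0) return start;
        outer:
        for (int i = start; i < src.length; i++) {
            if (src[i] != tgt[0]) continue;   // first byte must match
            for (int j = 1; j < tgt.length; j++) {
                if (i + j >= src.length) {
                    return -1;                // src expired first: give up
                }
                if (src[i + j] != tgt[j]) {
                    continue outer;           // mismatch: try next start
                }
            }
            return i;                          // full match
        }
        return -1;
    }

    public static void main(String[] args) {
        // Same data as the Jira unit test: "abcd€bdcd€" in UTF-8.
        byte[] src = "abcd\u20acbdcd\u20ac".getBytes(StandardCharsets.UTF_8);
        // Target ends in 'q'; src expires mid-match at the trailing "cd€".
        System.out.println(ByteFind.find(src, "cd\u20acq".getBytes(StandardCharsets.UTF_8), 0)); // -1
        System.out.println(ByteFind.find(src, "cd".getBytes(StandardCharsets.UTF_8), 0));        // 2
    }
}
```

The early return is safe because a match starting at position i needs `i + tgt.length <= src.length`; if src expired during the attempt at i, that inequality already fails for i and therefore for every larger start.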
[GitHub] [hadoop] huxinqiu commented on pull request #4714: HADOOP-18395. Performance improvement in hadoop-common Text#find
huxinqiu commented on PR #4714:
URL: https://github.com/apache/hadoop/pull/4714#issuecomment-1278521721

@ZanderXu Thanks for helping to review the code. Could you merge this PR into the trunk branch?

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[GitHub] [hadoop] huxinqiu commented on pull request #4823: HADOOP-18429. fix infinite loop in MutableGaugeFloat#incr(float)
huxinqiu commented on PR #4823:
URL: https://github.com/apache/hadoop/pull/4823#issuecomment-1278521635

@ZanderXu @ashutoshcipher Thanks for helping to review the code. Could you merge this PR into the trunk branch?
[GitHub] [hadoop] slfan1989 opened a new pull request, #5030: YARN-11345. [Federation] Refactoring Yarn Router's Application Web Page.
slfan1989 opened a new pull request, #5030:
URL: https://github.com/apache/hadoop/pull/5030

JIRA: YARN-11345. [Federation] Refactoring Yarn Router's Application Web Page.
[GitHub] [hadoop] hadoop-yetus commented on pull request #5027: MAPREDUCE-7420. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-core
hadoop-yetus commented on PR #5027:
URL: https://github.com/apache/hadoop/pull/5027#issuecomment-1278497197

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------|:-------:|
| +0 :ok: | reexec | 0m 51s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 73 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 41m 34s | | trunk passed |
| +1 :green_heart: | compile | 0m 58s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 54s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 0m 59s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 0s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 46s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 35s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 1m 45s | | trunk passed |
| +1 :green_heart: | shadedclient | 21m 24s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 45s | | the patch passed |
| +1 :green_heart: | compile | 0m 41s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| -1 :x: | javac | 0m 41s | [/results-compile-javac-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5027/1/artifact/out/results-compile-javac-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt) | hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 generated 3 new + 99 unchanged - 3 fixed = 102 total (was 102) |
| +1 :green_heart: | compile | 0m 36s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| -1 :x: | javac | 0m 36s | [/results-compile-javac-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5027/1/artifact/out/results-compile-javac-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt) | hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 generated 3 new + 93 unchanged - 3 fixed = 96 total (was 96) |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5027/1/artifact/out/blanks-eol.txt) | The patch has 3 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -0 :warning: | checkstyle | 0m 33s | [/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5027/1/artifact/out/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core: The patch generated 22 new + 484 unchanged - 39 fixed = 506 total (was 523) |
| +1 :green_heart: | mvnsite | 0m 41s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 26s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 25s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 1m 26s | | the patch passed |
| +1 :green_heart: | shadedclient | 20m 31s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| -1 :x: | unit | 7m 23s |
[GitHub] [hadoop] hadoop-yetus commented on pull request #5020: MAPREDUCE-7416. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-shuffle
hadoop-yetus commented on PR #5020:
URL: https://github.com/apache/hadoop/pull/5020#issuecomment-1278492201

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------|:-------:|
| +0 :ok: | reexec | 0m 54s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
| _ trunk Compile Tests _ |
| -1 :x: | mvninstall | 4m 4s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5020/2/artifact/out/branch-mvninstall-root.txt) | root in trunk failed. |
| +1 :green_heart: | compile | 3m 40s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 25s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 0m 27s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 40s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 38s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 25s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 0m 56s | | trunk passed |
| +1 :green_heart: | shadedclient | 25m 24s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 26s | | the patch passed |
| +1 :green_heart: | compile | 0m 25s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 25s | | the patch passed |
| +1 :green_heart: | compile | 0m 23s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 0m 23s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 17s | [/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5020/2/artifact/out/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle.txt) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle: The patch generated 1 new + 13 unchanged - 10 fixed = 14 total (was 23) |
| +1 :green_heart: | mvnsite | 0m 26s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 26s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 0m 54s | | the patch passed |
| +1 :green_heart: | shadedclient | 20m 31s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 44s | | hadoop-mapreduce-client-shuffle in the patch passed. |
| +1 :green_heart: | asflicense | 0m 46s | | The patch does not generate ASF License warnings. |
| | | | 65m 41s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5020/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5020 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
| uname | Linux ee3ff78a04af 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / a2e58c72777c6773e5260d4e87d9f25f1279f5b7 |
| Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5020/2/testReport/ |
| Max. process+thread count | 556 (vs. ulimit of
[GitHub] [hadoop] hadoop-yetus commented on pull request #5028: MAPREDUCE-7419. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-common
hadoop-yetus commented on PR #5028:
URL: https://github.com/apache/hadoop/pull/5028#issuecomment-1278491899

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------|:-------:|
| +0 :ok: | reexec | 0m 52s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 12 new or modified test files. |
| _ trunk Compile Tests _ |
| -1 :x: | mvninstall | 7m 45s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5028/1/artifact/out/branch-mvninstall-root.txt) | root in trunk failed. |
| +1 :green_heart: | compile | 1m 49s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 27s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 0m 27s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 38s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 39s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 24s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 1m 12s | | trunk passed |
| +1 :green_heart: | shadedclient | 26m 33s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 29s | | the patch passed |
| +1 :green_heart: | compile | 0m 31s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 31s | | the patch passed |
| +1 :green_heart: | compile | 0m 27s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 0m 27s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 18s | [/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5028/1/artifact/out/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-common.txt) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common: The patch generated 23 new + 52 unchanged - 30 fixed = 75 total (was 82) |
| +1 :green_heart: | mvnsite | 0m 30s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 35s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 28s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 1m 16s | | the patch passed |
| +1 :green_heart: | shadedclient | 23m 22s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 12s | | hadoop-mapreduce-client-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 40s | | The patch does not generate ASF License warnings. |
| | | | 71m 54s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5028/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5028 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
| uname | Linux 6df1d78f8fa6 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 93e161c1ea010d8a1fbbcf67a03e4e926388ba2a |
| Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5028/1/testReport/ |
| Max. process+thread count | 590 (vs. ulimit of
[GitHub] [hadoop] ZanderXu commented on a diff in pull request #5013: HDFS-16802. Print options when accessing ClientProtocol#rename2()
ZanderXu commented on code in PR #5013:
URL: https://github.com/apache/hadoop/pull/5013#discussion_r995344427

##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java:
##

@@ -249,7 +249,7 @@ static RenameResult renameToInt(
     String dst = dstArg;
     if (NameNode.stateChangeLog.isDebugEnabled()) {
       NameNode.stateChangeLog.debug("DIR* NameSystem.renameTo: with options -" +
-          " " + src + " to " + dst);
+          " {} to {}, options={}", src, dst, Arrays.toString(options));

Review Comment:
   Can remove `if (NameNode.stateChangeLog.isDebugEnabled()) {`
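For context, the suggestion relies on SLF4J-style parameterized logging: with `{}` placeholders, the message string is only assembled after the level check inside `debug()`, so the explicit `isDebugEnabled()` guard becomes redundant for cheap arguments. (Argument expressions such as `Arrays.toString(options)` are still evaluated eagerly at the call site, so a guard can still pay off when an argument is expensive to compute.) The tiny hypothetical logger below, not SLF4J itself, makes the deferred formatting visible:

```java
// Hypothetical minimal logger illustrating why an explicit isDebugEnabled()
// guard is redundant with {}-style placeholders: the message is only
// assembled after the level check inside debug().
class TinyLogger {
    boolean debugEnabled;
    int formatCalls; // counts how often a message was actually built

    void debug(String pattern, Object... args) {
        if (!debugEnabled) {
            return; // level check happens inside the logger, before formatting
        }
        formatCalls++;
        for (Object a : args) {
            // Substitute the next {} placeholder (simplified; real SLF4J
            // also handles escaping and arrays).
            pattern = pattern.replaceFirst("\\{\\}", String.valueOf(a));
        }
        System.out.println(pattern);
    }

    public static void main(String[] args) {
        TinyLogger log = new TinyLogger();
        log.debugEnabled = false;
        log.debug("DIR* NameSystem.renameTo: {} to {}", "/a", "/b"); // no formatting work
        log.debugEnabled = true;
        log.debug("DIR* NameSystem.renameTo: {} to {}", "/a", "/b"); // formats and prints
        System.out.println(log.formatCalls);
    }
}
```

With debug disabled, `formatCalls` stays at 0, which is exactly the work the removed guard was duplicating.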
[GitHub] [hadoop] hadoop-yetus commented on pull request #5026: [WIP] MAPREDUCE-7421. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-jobclient
hadoop-yetus commented on PR #5026:
URL: https://github.com/apache/hadoop/pull/5026#issuecomment-1278469187

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------|:-------:|
| +0 :ok: | reexec | 0m 53s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 5s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 187 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 39m 33s | | trunk passed |
| +1 :green_heart: | compile | 0m 49s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 49s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 16s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 57s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 47s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 35s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 1m 12s | | trunk passed |
| +1 :green_heart: | shadedclient | 20m 46s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 33s | | the patch passed |
| +1 :green_heart: | compile | 0m 39s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| -1 :x: | javac | 0m 39s | [/results-compile-javac-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5026/1/artifact/out/results-compile-javac-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt) | hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 generated 10 new + 130 unchanged - 10 fixed = 140 total (was 140) |
| +1 :green_heart: | compile | 0m 33s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| -1 :x: | javac | 0m 33s | [/results-compile-javac-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5026/1/artifact/out/results-compile-javac-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt) | hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 generated 10 new + 120 unchanged - 10 fixed = 130 total (was 130) |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5026/1/artifact/out/blanks-eol.txt) | The patch has 27 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -1 :x: | blanks | 0m 0s | [/blanks-tabs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5026/1/artifact/out/blanks-tabs.txt) | The patch 5 line(s) with tabs. |
| -0 :warning: | checkstyle | 0m 47s | [/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5026/1/artifact/out/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient: The patch generated 177 new + 1875 unchanged - 857 fixed = 2052 total (was 2732) |
| +1 :green_heart: | mvnsite | 0m 34s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 27s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 25s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 0m 57s | | the patch
[GitHub] [hadoop] ashutoshcipher opened a new pull request, #5029: MAPREDUCE-7422. Upgrade Junit 4 to 5 in hadoop-mapreduce-examples
ashutoshcipher opened a new pull request, #5029:
URL: https://github.com/apache/hadoop/pull/5029

### Description of PR

Upgrade Junit 4 to 5 in hadoop-mapreduce-examples

JIRA - MAPREDUCE-7422

### For code changes:

- [X] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[GitHub] [hadoop] ZanderXu commented on pull request #4660: HDFS-16703. Enable RPC Timeout for some protocols of NameNode.
ZanderXu commented on PR #4660:
URL: https://github.com/apache/hadoop/pull/4660#issuecomment-1278453491

@slfan1989 Sir, about this PR: if you think it is difficult to configure, how about just enabling a configurable timeout for NamenodeProtocolPB? We have run into this problem many times in our production environment: RBF cannot sense a crashed namenode in time, because the `NamenodeHeartbeatService` is blocked waiting for a response from the namenode for a long time.
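The failure mode described here, a health-check thread blocked indefinitely on a dead peer, is what a per-call timeout prevents. The sketch below is illustrative only: it uses a plain `ExecutorService`, not Hadoop's RPC layer or any real `NamenodeProtocolPB` API, and the names are invented. The idea is simply that the monitoring thread bounds its wait and can mark the peer unhealthy instead of hanging.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Illustrative sketch (not Hadoop's RPC implementation): bound a blocking
// "heartbeat" call with a timeout so the caller can treat a silent peer as
// unhealthy instead of waiting forever.
class BoundedCall {
    // Returns the call's result, or null if it did not answer within timeoutMs.
    static <T> T callWithTimeout(Callable<T> call, long timeoutMs) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<T> future = executor.submit(call);
            try {
                return future.get(timeoutMs, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                future.cancel(true); // interrupt the stuck call
                return null;         // caller marks the peer unhealthy
            }
        } finally {
            executor.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // A responsive peer answers within the bound...
        String ok = callWithTimeout(() -> "alive", 1000);
        // ...while a hung peer trips the timeout and yields null.
        String hung = callWithTimeout(() -> { Thread.sleep(60_000); return "alive"; }, 100);
        System.out.println(ok + " / " + hung);
    }
}
```

A real RPC-level timeout (the PR's approach) is preferable to wrapping every call in an executor, since it bounds the socket wait itself; this sketch only demonstrates the behavioral contract the heartbeat service needs.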
[GitHub] [hadoop] ashutoshcipher opened a new pull request, #5028: MAPREDUCE-7419. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-common
ashutoshcipher opened a new pull request, #5028:
URL: https://github.com/apache/hadoop/pull/5028

### Description of PR

Upgrade Junit 4 to 5 in hadoop-mapreduce-client-common

JIRA - MAPREDUCE-7419

### For code changes:

- [X] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[GitHub] [hadoop] ashutoshcipher commented on pull request #5019: MAPREDUCE-7417. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-uploader
ashutoshcipher commented on PR #5019:
URL: https://github.com/apache/hadoop/pull/5019#issuecomment-1278446195

> LGTM.

Thanks @slfan1989 for the review :)
[GitHub] [hadoop] ashutoshcipher commented on pull request #5027: MAPREDUCE-7420. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-core
ashutoshcipher commented on PR #5027: URL: https://github.com/apache/hadoop/pull/5027#issuecomment-1278442001 Thanks @slfan1989, I still need to make a few changes :)
[GitHub] [hadoop] slfan1989 commented on pull request #5027: MAPREDUCE-7420. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-core
slfan1989 commented on PR #5027: URL: https://github.com/apache/hadoop/pull/5027#issuecomment-1278438674 @ashutoshcipher Thanks for the contribution, it looks like the code is cleaner.
[GitHub] [hadoop] ZanderXu commented on a diff in pull request #5013: HDFS-16802.Print options when accessing ClientProtocol#rename2()
ZanderXu commented on code in PR #5013: URL: https://github.com/apache/hadoop/pull/5013#discussion_r995309524 ## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java: ## @@ -249,7 +249,7 @@ static RenameResult renameToInt( String dst = dstArg; if (NameNode.stateChangeLog.isDebugEnabled()) { NameNode.stateChangeLog.debug("DIR* NameSystem.renameTo: with options -" + - " " + src + " to " + dst); + " {} to {}, options={}", src, dst, Arrays.toString(options)); Review Comment: ok
[GitHub] [hadoop] ashutoshcipher opened a new pull request, #5027: MAPREDUCE-7420. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-core
ashutoshcipher opened a new pull request, #5027: URL: https://github.com/apache/hadoop/pull/5027 ### Description of PR [WIP] Upgrade Junit 4 to 5 in hadoop-mapreduce-client-core JIRA - MAPREDUCE-7420 ### For code changes: - [X] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[GitHub] [hadoop] hadoop-yetus commented on pull request #4963: YARN-11326. [Federation] Add RM FederationStateStoreService Metrics.
hadoop-yetus commented on PR #4963: URL: https://github.com/apache/hadoop/pull/4963#issuecomment-1278429561 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 59s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 14s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 31m 59s | | trunk passed | | +1 :green_heart: | compile | 6m 29s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 4m 56s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 1m 27s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 58s | | trunk passed | | +1 :green_heart: | javadoc | 1m 45s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 28s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 3m 45s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 18s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 24m 40s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 35s | [/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4963/10/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt) | hadoop-yarn-server-resourcemanager in the patch failed. | | -1 :x: | compile | 2m 42s | [/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4963/10/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt) | hadoop-yarn-server in the patch failed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04. | | -1 :x: | javac | 2m 42s | [/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4963/10/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt) | hadoop-yarn-server in the patch failed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04. | | -1 :x: | compile | 2m 21s | [/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4963/10/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt) | hadoop-yarn-server in the patch failed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07. 
| | -1 :x: | javac | 2m 21s | [/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4963/10/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt) | hadoop-yarn-server in the patch failed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07. | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 11s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4963/10/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | -1 :x: | mvnsite | 0m 36s |
[GitHub] [hadoop] jianghuazhu commented on a diff in pull request #5013: HDFS-16802.Print options when accessing ClientProtocol#rename2()
jianghuazhu commented on code in PR #5013: URL: https://github.com/apache/hadoop/pull/5013#discussion_r995305422 ## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java: ## @@ -249,7 +249,7 @@ static RenameResult renameToInt( String dst = dstArg; if (NameNode.stateChangeLog.isDebugEnabled()) { NameNode.stateChangeLog.debug("DIR* NameSystem.renameTo: with options -" + - " " + src + " to " + dst); + " {} to {}, options={}", src, dst, Arrays.toString(options)); Review Comment: I agree. Here are some audit logs: ` 2022-10-14 11:16:30,825 [Listener at localhost/51113] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(8853)) - allowed=true ugi=hdfs (auth:SIMPLE) ip=null cmd=rename ( options=[NONE]) src=/testNamenodeRetryCache/testRename2/src dst=/testNamenodeRetryCache/testRename2/target perm=hdfs:supergroup:rwxrwxrwx proto=null ` Should we keep a consistent norm? New format: ` 2022-10-14 11:20:18,813 [Listener at localhost/58086] DEBUG hdfs.StateChange(FSDirRenameOp.java:renameToInt(256)) - DIR* NameSystem.renameTo: with options=[NONE] /testNamenodeRetryCache/testRename2/src to /testNamenodeRetryCache /testRename2/target `
[jira] [Commented] (HADOOP-18360) Update commons-csv from 1.0 to 1.9.0.
[ https://issues.apache.org/jira/browse/HADOOP-18360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617421#comment-17617421 ] Ayush Saxena commented on HADOOP-18360: --- Nopes, [~slfan1989] can you raise a backport PR to trigger the tests? > Update commons-csv from 1.0 to 1.9.0. > - > > Key: HADOOP-18360 > URL: https://issues.apache.org/jira/browse/HADOOP-18360 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.4.0 >Reporter: fanshilun >Assignee: fanshilun >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0 > > > commons-csv 1.0 is a very old jar, mvnrepository shows this jar as the 2014 > version, I have compiled and tested locally, I think this jar can be upgraded > to commons-csv 1.9 version. > The link to the release note is as follows: > [https://commons.apache.org/proper/commons-csv/changes-report.html] > We can see that the new version fixes some issues. > I read the code used, we use header related methods. We found that many > header-related methods have been upgraded. > *Release 1.1 – 2014-11-16* > CSVFormat#withHeader doesn't work well with #printComment, add > withHeaderComments(String...). > CSVFormat.EXCEL should ignore empty header names. > *Release 1.2 – 2015-08-24* > CSVFormat.with* methods clear the header comments. > *Release 1.3 – 2016-05-09* > Add shortcut method for using first record as header to CSVFormat. > Add withHeader(Class) to CSVFormat. > CSVPrinter doesn't skip creation of header record if skipHeaderRecord is set > to true. > Add IgnoreCase option for accessing header names. > *Release 1.5 – 2017-09-03* > Fix incorrect method name 'withFirstRowAsHeader' in user guide. > *Release 1.7 – 2019-06-01* > Cannot get headers in column order from CSVRecord. > *Release 1.8 – 2020-02-01* > CSVFormat#validate() does not account for allowDuplicateHeaderNames. > A single empty header is allowed when not allowing empty column headers. 
> *Release 1.9.0 – 2020-07-24* > Add possibility to use ResultSet header meta data as CSV header. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ZanderXu commented on a diff in pull request #5013: HDFS-16802.Print options when accessing ClientProtocol#rename2()
ZanderXu commented on code in PR #5013: URL: https://github.com/apache/hadoop/pull/5013#discussion_r995286333 ## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java: ## @@ -249,7 +249,7 @@ static RenameResult renameToInt( String dst = dstArg; if (NameNode.stateChangeLog.isDebugEnabled()) { NameNode.stateChangeLog.debug("DIR* NameSystem.renameTo: with options -" + - " " + src + " to " + dst); + " {} to {}, options={}", src, dst, Arrays.toString(options)); Review Comment: How about changing the code as below? ``` NameNode.stateChangeLog.debug("DIR* NameSystem.renameTo: with options {} - {} to {}", Arrays.toString(options), src, dst); ```
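The suggestion above relies on SLF4J-style `{}` placeholders, which defer message construction until the logger has decided the statement will actually be emitted. As a simplified model (this is not SLF4J's actual `MessageFormatter` implementation, which also handles escaped braces and arrays), placeholder substitution works like this:

```java
// Simplified model of SLF4J-style "{}" placeholder formatting.
// It shows why parameterized logging avoids eager string concatenation:
// the template and arguments are only combined when formatting is requested.
public class PlaceholderFormat {
    // Substitute each "{}" in the template with the next argument, in order.
    public static String format(String template, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIdx = 0, from = 0, at;
        while ((at = template.indexOf("{}", from)) >= 0 && argIdx < args.length) {
            sb.append(template, from, at).append(args[argIdx++]);
            from = at + 2;
        }
        return sb.append(template.substring(from)).toString();
    }

    public static void main(String[] args) {
        // Mirrors the suggested debug line: options first, then src and dst.
        String msg = format("DIR* NameSystem.renameTo: with options {} - {} to {}",
                "[OVERWRITE]", "/src", "/dst");
        // prints: DIR* NameSystem.renameTo: with options [OVERWRITE] - /src to /dst
        System.out.println(msg);
    }
}
```

With real SLF4J, the surrounding `isDebugEnabled()` guard in the diff is what keeps even the varargs call cheap when debug logging is off.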
[jira] [Commented] (HADOOP-18490) The check logic for erasedIndexes in XORRawDecoder is buggy
[ https://issues.apache.org/jira/browse/HADOOP-18490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617415#comment-17617415 ] ASF GitHub Bot commented on HADOOP-18490: - ZanderXu commented on PR #5001: URL: https://github.com/apache/hadoop/pull/5001#issuecomment-1278395341 > One thing that surprises me is how the test method testValidate() passes on the newly added value set (numParityUnits = 3). The passing of the method means that XORRawDecoder could decode multiple erased indexes, which contradicts the actual statement. @FuzzingTeam Can you look into the related code to find the root cause? > The check logic for erasedIndexes in XORRawDecoder is buggy > --- > > Key: HADOOP-18490 > URL: https://issues.apache.org/jira/browse/HADOOP-18490 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.3.4 >Reporter: FuzzingTeam >Priority: Major > Labels: pull-request-available > > In the method _doDecode_ of class {_}XORRawDecoder{_}, the code does not > handle all the erased and null marked locations in the array ({_}inputs{_}) > but only skips the first erased location ({_}erasedIndexes[0]{_}). The > missing handling results in an unhandled NullPointerException.
[GitHub] [hadoop] ZanderXu commented on pull request #5001: HADOOP-18490. Fixed the check logic for erasedIndexes in XORRawDecoder
ZanderXu commented on PR #5001: URL: https://github.com/apache/hadoop/pull/5001#issuecomment-1278395341 > One thing that surprises me is how the test method testValidate() passes on the newly added value set (numParityUnits = 3). The passing of the method means that XORRawDecoder could decode multiple erased indexes, which contradicts the actual statement. @FuzzingTeam Can you look into the related code to find the root cause?
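The NullPointerException described in HADOOP-18490 comes from XOR-ing over the inputs array while skipping only `erasedIndexes[0]`, so any other null-marked slot is dereferenced. The sketch below is illustrative (it is not the actual `XORRawDecoder` code): XOR coding stores one parity unit equal to the XOR of all data units, so a single erased unit is recovered by XOR-ing every remaining non-null unit, and every null slot must be skipped, not just the first erased index.

```java
// Illustrative XOR erasure recovery (simplified; not the actual
// org.apache.hadoop.io.erasurecode XORRawDecoder implementation).
// XOR can recover at most ONE erased unit: parity = d0 ^ d1 ^ ... ^ dn,
// so the missing unit is the XOR of every remaining (non-null) unit.
public class XorDecode {
    public static byte[] decode(byte[][] inputs, int erasedIndex, int unitLen) {
        byte[] out = new byte[unitLen];
        for (int i = 0; i < inputs.length; i++) {
            // Skip the erased slot AND any other null-marked slot instead of
            // dereferencing it -- the reported NPE came from only skipping
            // the first erased index.
            if (i == erasedIndex || inputs[i] == null) {
                continue;
            }
            for (int j = 0; j < unitLen; j++) {
                out[j] ^= inputs[i][j];
            }
        }
        return out;
    }
}
```

This also makes the reviewer's point concrete: with only one parity unit, two or more erased data units cannot all be recovered, so a validate step that accepts multiple erased indexes is suspect.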
[GitHub] [hadoop] jianghuazhu commented on pull request #5013: HDFS-16802.Print options when accessing ClientProtocol#rename2()
jianghuazhu commented on PR #5013: URL: https://github.com/apache/hadoop/pull/5013#issuecomment-1278366433 @ZanderXu , can you help review this pr again? Thanks.
[GitHub] [hadoop] hadoop-yetus commented on pull request #5025: MAPREDUCE-7418. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-app
hadoop-yetus commented on PR #5025: URL: https://github.com/apache/hadoop/pull/5025#issuecomment-1278355588 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 59s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 2s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 52 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 39m 19s | | trunk passed | | +1 :green_heart: | compile | 0m 57s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 45s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 1m 2s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 48s | | trunk passed | | +1 :green_heart: | javadoc | 0m 50s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 44s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 26s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 23s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 39s | | the patch passed | | +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 35s | | the patch passed | | +1 :green_heart: | compile | 0m 34s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 0m 34s | | the patch passed | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5025/1/artifact/out/blanks-eol.txt) | The patch has 4 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | -0 :warning: | checkstyle | 0m 33s | [/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5025/1/artifact/out/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app: The patch generated 163 new + 639 unchanged - 314 fixed = 802 total (was 953) | | +1 :green_heart: | mvnsite | 0m 33s | | the patch passed | | +1 :green_heart: | javadoc | 0m 28s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 25s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 11s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 44s | | patch has no errors when building and testing our client artifacts. 
| _ Other Tests _ | | -1 :x: | unit | 9m 31s | [/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5025/1/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt) | hadoop-mapreduce-client-app in the patch passed. | | +1 :green_heart: | asflicense | 0m 45s | | The patch does not generate ASF License warnings. | | | | 106m 22s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.mapreduce.v2.app.webapp.TestAMWebApp | | | hadoop.mapreduce.v2.app.job.impl.TestTaskAttemptContainerRequest | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5025/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5025 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle | | uname | Linux 6c7563feb637 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4963: YARN-11326. [Federation] Add RM FederationStateStoreService Metrics.
slfan1989 commented on code in PR #4963: URL: https://github.com/apache/hadoop/pull/4963#discussion_r995248647 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/federation/FederationStateStoreService.java: ## @@ -259,378 +261,190 @@ public Version loadVersion() { @Override public GetSubClusterPolicyConfigurationResponse getPolicyConfiguration( GetSubClusterPolicyConfigurationRequest request) throws YarnException { -try { - long startTime = clock.getTime(); - GetSubClusterPolicyConfigurationResponse response = - stateStoreClient.getPolicyConfiguration(request); - long stopTime = clock.getTime(); - FederationStateStoreServiceMetrics.succeededStateStoreServiceCall( - stopTime - startTime); - return response; -} catch (YarnException e) { - LOG.error("getPolicyConfiguration error.", e); - FederationStateStoreServiceMetrics.failedStateStoreServiceCall(); - throw e; -} +FederationClientMethod clientMethod = new FederationClientMethod( +"getPolicyConfiguration", GetSubClusterPolicyConfigurationResponse.class, request); +return invoke(clientMethod, GetSubClusterPolicyConfigurationResponse.class); } @Override public SetSubClusterPolicyConfigurationResponse setPolicyConfiguration( SetSubClusterPolicyConfigurationRequest request) throws YarnException { -try { - long startTime = clock.getTime(); - SetSubClusterPolicyConfigurationResponse response = - stateStoreClient.setPolicyConfiguration(request); - long stopTime = clock.getTime(); - FederationStateStoreServiceMetrics.succeededStateStoreServiceCall( - stopTime - startTime); - return response; -} catch (YarnException e) { - LOG.error("setPolicyConfiguration error.", e); - FederationStateStoreServiceMetrics.failedStateStoreServiceCall(); - throw e; -} +FederationClientMethod clientMethod = new FederationClientMethod( +"setPolicyConfiguration", SetSubClusterPolicyConfigurationResponse.class, request); +return invoke(clientMethod, 
SetSubClusterPolicyConfigurationResponse.class); } @Override public GetSubClusterPoliciesConfigurationsResponse getPoliciesConfigurations( GetSubClusterPoliciesConfigurationsRequest request) throws YarnException { -try { - long startTime = clock.getTime(); - GetSubClusterPoliciesConfigurationsResponse response = - stateStoreClient.getPoliciesConfigurations(request); - long stopTime = clock.getTime(); - FederationStateStoreServiceMetrics.succeededStateStoreServiceCall( - stopTime - startTime); - return response; -} catch (YarnException e) { - LOG.error("getPoliciesConfigurations error.", e); - FederationStateStoreServiceMetrics.failedStateStoreServiceCall(); - throw e; -} +FederationClientMethod clientMethod = new FederationClientMethod( +"getPoliciesConfigurations", GetSubClusterPoliciesConfigurationsResponse.class, request); +return invoke(clientMethod, GetSubClusterPoliciesConfigurationsResponse.class); Review Comment: I haven't finished the test yet, I will test the interface in junit Test.
[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4963: YARN-11326. [Federation] Add RM FederationStateStoreService Metrics.
slfan1989 commented on code in PR #4963: URL: https://github.com/apache/hadoop/pull/4963#discussion_r995248231 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/federation/FederationStateStoreService.java: ## @@ -259,378 +261,190 @@ public Version loadVersion() { @Override public GetSubClusterPolicyConfigurationResponse getPolicyConfiguration( GetSubClusterPolicyConfigurationRequest request) throws YarnException { -try { - long startTime = clock.getTime(); - GetSubClusterPolicyConfigurationResponse response = - stateStoreClient.getPolicyConfiguration(request); - long stopTime = clock.getTime(); - FederationStateStoreServiceMetrics.succeededStateStoreServiceCall( - stopTime - startTime); - return response; -} catch (YarnException e) { - LOG.error("getPolicyConfiguration error.", e); - FederationStateStoreServiceMetrics.failedStateStoreServiceCall(); - throw e; -} +FederationClientMethod clientMethod = new FederationClientMethod( +"getPolicyConfiguration", GetSubClusterPolicyConfigurationResponse.class, request); +return invoke(clientMethod, GetSubClusterPolicyConfigurationResponse.class); } @Override public SetSubClusterPolicyConfigurationResponse setPolicyConfiguration( SetSubClusterPolicyConfigurationRequest request) throws YarnException { -try { - long startTime = clock.getTime(); - SetSubClusterPolicyConfigurationResponse response = - stateStoreClient.setPolicyConfiguration(request); - long stopTime = clock.getTime(); - FederationStateStoreServiceMetrics.succeededStateStoreServiceCall( - stopTime - startTime); - return response; -} catch (YarnException e) { - LOG.error("setPolicyConfiguration error.", e); - FederationStateStoreServiceMetrics.failedStateStoreServiceCall(); - throw e; -} +FederationClientMethod clientMethod = new FederationClientMethod( +"setPolicyConfiguration", SetSubClusterPolicyConfigurationResponse.class, request); +return invoke(clientMethod, 
SetSubClusterPolicyConfigurationResponse.class); } @Override public GetSubClusterPoliciesConfigurationsResponse getPoliciesConfigurations( GetSubClusterPoliciesConfigurationsRequest request) throws YarnException { -try { - long startTime = clock.getTime(); - GetSubClusterPoliciesConfigurationsResponse response = - stateStoreClient.getPoliciesConfigurations(request); - long stopTime = clock.getTime(); - FederationStateStoreServiceMetrics.succeededStateStoreServiceCall( - stopTime - startTime); - return response; -} catch (YarnException e) { - LOG.error("getPoliciesConfigurations error.", e); - FederationStateStoreServiceMetrics.failedStateStoreServiceCall(); - throw e; -} +FederationClientMethod clientMethod = new FederationClientMethod( +"getPoliciesConfigurations", GetSubClusterPoliciesConfigurationsResponse.class, request); +return invoke(clientMethod, GetSubClusterPoliciesConfigurationsResponse.class); } @Override public SubClusterRegisterResponse registerSubCluster( SubClusterRegisterRequest registerSubClusterRequest) throws YarnException { -try { - long startTime = clock.getTime(); - SubClusterRegisterResponse response = - stateStoreClient.registerSubCluster(registerSubClusterRequest); - long stopTime = clock.getTime(); - FederationStateStoreServiceMetrics.succeededStateStoreServiceCall( - stopTime - startTime); - return response; -} catch (YarnException e) { - LOG.error("registerSubCluster error.", e); - FederationStateStoreServiceMetrics.failedStateStoreServiceCall(); - throw e; -} +FederationClientMethod clientMethod = new FederationClientMethod( +"registerSubCluster", SubClusterRegisterResponse.class, registerSubClusterRequest); +return invoke(clientMethod, SubClusterRegisterResponse.class); } @Override public SubClusterDeregisterResponse deregisterSubCluster( SubClusterDeregisterRequest subClusterDeregisterRequest) throws YarnException { -try { - long startTime = clock.getTime(); - SubClusterDeregisterResponse response = - 
stateStoreClient.deregisterSubCluster(subClusterDeregisterRequest); - long stopTime = clock.getTime(); - FederationStateStoreServiceMetrics.succeededStateStoreServiceCall( - stopTime - startTime); - return response; -} catch (YarnException e) { - LOG.error("deregisterSubCluster error.", e); - FederationStateStoreServiceMetrics.failedStateStoreServiceCall(); - throw e; -} +FederationClientMethod clientMethod = new FederationClientMethod( +"deregisterSubCluster",
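The diffs above all apply the same refactoring: the repeated time-the-call / count-success / count-failure boilerplate in every state-store method is replaced by a single generic `invoke` helper. A minimal sketch of that pattern in plain Java (the names `MetricsInvoker` and `invoke` are illustrative, not the actual `FederationStateStoreService` or `FederationClientMethod` API; the real code rethrows `YarnException` rather than wrapping in `RuntimeException`):

```java
import java.util.concurrent.Callable;

// Sketch of the refactoring pattern: centralize per-call timing and
// success/failure metrics in one generic helper instead of repeating the
// try/clock/catch block in every method.
public class MetricsInvoker {
    static long succeeded = 0, failed = 0, totalNanos = 0;

    static <T> T invoke(String methodName, Callable<T> call) {
        long start = System.nanoTime();
        try {
            T response = call.call();
            totalNanos += System.nanoTime() - start;
            succeeded++;                 // one success counter for all methods
            return response;
        } catch (Exception e) {
            failed++;                    // one failure counter for all methods
            System.err.println(methodName + " error: " + e);
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Each former method body collapses to a one-line delegation.
        Integer result = invoke("getPolicyConfiguration", () -> 42);
        System.out.println(result + " succeeded=" + succeeded);
    }
}
```

The payoff, as the diff shows, is that adding metrics to a new state-store method no longer means copying a dozen lines of bookkeeping.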
[jira] [Commented] (HADOOP-18233) Possible race condition with TemporaryAWSCredentialsProvider
[ https://issues.apache.org/jira/browse/HADOOP-18233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617386#comment-17617386 ] ASF GitHub Bot commented on HADOOP-18233: - hadoop-yetus commented on PR #5024: URL: https://github.com/apache/hadoop/pull/5024#issuecomment-1278348867 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 21s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 42m 33s | | trunk passed | | +1 :green_heart: | compile | 1m 12s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 47s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 43s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 51s | | trunk passed | | +1 :green_heart: | javadoc | 0m 35s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 39s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 27s | | trunk passed | | +1 :green_heart: | shadedclient | 23m 56s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 39s | | the patch passed | | +1 :green_heart: | compile | 0m 41s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | -1 :x: | javac | 0m 41s | [/results-compile-javac-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5024/1/artifact/out/results-compile-javac-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt) | hadoop-tools_hadoop-aws-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 generated 3 new + 50 unchanged - 0 fixed = 53 total (was 50) | | +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | -1 :x: | javac | 0m 35s | [/results-compile-javac-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5024/1/artifact/out/results-compile-javac-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt) | hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 generated 3 new + 49 unchanged - 0 fixed = 52 total (was 49) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 22s | [/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5024/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt) | hadoop-tools/hadoop-aws: The patch generated 11 new + 3 unchanged - 0 fixed = 14 total (was 3) | | +1 :green_heart: | mvnsite | 0m 39s | | the patch passed | | +1 :green_heart: | javadoc | 0m 18s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 27s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 12s | | the patch passed | | +1 :green_heart: | shadedclient | 23m 35s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 50s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 41s | | The patch does not generate ASF License warnings. | | | | 107m 20s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5024/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5024 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient
[GitHub] [hadoop] hadoop-yetus commented on pull request #5024: HADOOP-18233. Possible race condition with TemporaryAWSCredentialsPro…
hadoop-yetus commented on PR #5024: URL: https://github.com/apache/hadoop/pull/5024#issuecomment-1278348867 :broken_heart: **-1 overall**
[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4963: YARN-11326. [Federation] Add RM FederationStateStoreService Metrics.
slfan1989 commented on code in PR #4963: URL: https://github.com/apache/hadoop/pull/4963#discussion_r995248010 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/federation/FederationStateStoreService.java: ## @@ -259,378 +261,190 @@ public Version loadVersion() { @Override public GetSubClusterPolicyConfigurationResponse getPolicyConfiguration( GetSubClusterPolicyConfigurationRequest request) throws YarnException { -try { - long startTime = clock.getTime(); - GetSubClusterPolicyConfigurationResponse response = - stateStoreClient.getPolicyConfiguration(request); - long stopTime = clock.getTime(); - FederationStateStoreServiceMetrics.succeededStateStoreServiceCall( - stopTime - startTime); - return response; -} catch (YarnException e) { - LOG.error("getPolicyConfiguration error.", e); - FederationStateStoreServiceMetrics.failedStateStoreServiceCall(); - throw e; -} +FederationClientMethod clientMethod = new FederationClientMethod( +"getPolicyConfiguration", GetSubClusterPolicyConfigurationResponse.class, request); +return invoke(clientMethod, GetSubClusterPolicyConfigurationResponse.class); } @Override public SetSubClusterPolicyConfigurationResponse setPolicyConfiguration( SetSubClusterPolicyConfigurationRequest request) throws YarnException { -try { - long startTime = clock.getTime(); - SetSubClusterPolicyConfigurationResponse response = - stateStoreClient.setPolicyConfiguration(request); - long stopTime = clock.getTime(); - FederationStateStoreServiceMetrics.succeededStateStoreServiceCall( - stopTime - startTime); - return response; -} catch (YarnException e) { - LOG.error("setPolicyConfiguration error.", e); - FederationStateStoreServiceMetrics.failedStateStoreServiceCall(); - throw e; -} +FederationClientMethod clientMethod = new FederationClientMethod( +"setPolicyConfiguration", SetSubClusterPolicyConfigurationResponse.class, request); +return invoke(clientMethod, 
SetSubClusterPolicyConfigurationResponse.class); } @Override public GetSubClusterPoliciesConfigurationsResponse getPoliciesConfigurations( GetSubClusterPoliciesConfigurationsRequest request) throws YarnException { -try { - long startTime = clock.getTime(); - GetSubClusterPoliciesConfigurationsResponse response = - stateStoreClient.getPoliciesConfigurations(request); - long stopTime = clock.getTime(); - FederationStateStoreServiceMetrics.succeededStateStoreServiceCall( - stopTime - startTime); - return response; -} catch (YarnException e) { - LOG.error("getPoliciesConfigurations error.", e); - FederationStateStoreServiceMetrics.failedStateStoreServiceCall(); - throw e; -} +FederationClientMethod clientMethod = new FederationClientMethod( +"getPoliciesConfigurations", GetSubClusterPoliciesConfigurationsResponse.class, request); +return invoke(clientMethod, GetSubClusterPoliciesConfigurationsResponse.class); } @Override public SubClusterRegisterResponse registerSubCluster( SubClusterRegisterRequest registerSubClusterRequest) throws YarnException { -try { - long startTime = clock.getTime(); - SubClusterRegisterResponse response = - stateStoreClient.registerSubCluster(registerSubClusterRequest); - long stopTime = clock.getTime(); - FederationStateStoreServiceMetrics.succeededStateStoreServiceCall( - stopTime - startTime); - return response; -} catch (YarnException e) { - LOG.error("registerSubCluster error.", e); - FederationStateStoreServiceMetrics.failedStateStoreServiceCall(); - throw e; -} +FederationClientMethod clientMethod = new FederationClientMethod( +"registerSubCluster", SubClusterRegisterResponse.class, registerSubClusterRequest); +return invoke(clientMethod, SubClusterRegisterResponse.class); } @Override public SubClusterDeregisterResponse deregisterSubCluster( SubClusterDeregisterRequest subClusterDeregisterRequest) throws YarnException { -try { - long startTime = clock.getTime(); - SubClusterDeregisterResponse response = - 
stateStoreClient.deregisterSubCluster(subClusterDeregisterRequest); - long stopTime = clock.getTime(); - FederationStateStoreServiceMetrics.succeededStateStoreServiceCall( - stopTime - startTime); - return response; -} catch (YarnException e) { - LOG.error("deregisterSubCluster error.", e); - FederationStateStoreServiceMetrics.failedStateStoreServiceCall(); - throw e; -} +FederationClientMethod clientMethod = new FederationClientMethod( +"deregisterSubCluster",
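The refactoring quoted in this hunk collapses each hand-written try/catch-with-metrics block into a single generic `invoke` helper driven by a `FederationClientMethod` descriptor. A minimal sketch of that pattern follows; note that `ClientMethod`, the `AtomicLong` counters, and the `Callable`-based dispatch here are illustrative stand-ins for the real `FederationClientMethod`, `FederationStateStoreServiceMetrics`, and stateStoreClient dispatch, which are not reproduced.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the generic invoke pattern from the diff: one helper times the
// state-store call, records success/failure metrics, logs, and rethrows,
// replacing the per-method try/catch blocks that the patch deletes.
public class InvokeSketch {
  static final AtomicLong SUCCEEDED = new AtomicLong();
  static final AtomicLong FAILED = new AtomicLong();

  /** Name plus body of one state-store client call (stand-in descriptor). */
  static final class ClientMethod<T> {
    final String name;
    final Callable<T> call;
    ClientMethod(String name, Callable<T> call) {
      this.name = name;
      this.call = call;
    }
  }

  /** Generic replacement for the repeated timing/metrics/error blocks. */
  static <T> T invoke(ClientMethod<T> method) throws Exception {
    long start = System.nanoTime();
    try {
      T response = method.call.call();
      long durationNs = System.nanoTime() - start;
      // real code: FederationStateStoreServiceMetrics
      //            .succeededStateStoreServiceCall(durationNs)
      SUCCEEDED.incrementAndGet();
      return response;
    } catch (Exception e) {
      // real code: failedStateStoreServiceCall()
      FAILED.incrementAndGet();
      System.err.println(method.name + " error: " + e);
      throw e;
    }
  }

  public static void main(String[] args) throws Exception {
    String policy = invoke(new ClientMethod<>("getPolicyConfiguration", () -> "policy-config"));
    System.out.println(policy + ", succeeded=" + SUCCEEDED.get());
  }
}
```

The payoff of the pattern is that every state-store call now gets identical timing, metrics, and error-logging behavior from one code path instead of one hand-copied block per method.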
[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4963: YARN-11326. [Federation] Add RM FederationStateStoreService Metrics.
slfan1989 commented on code in PR #4963: URL: https://github.com/apache/hadoop/pull/4963#discussion_r995247659 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/federation/FederationStateStoreService.java: ## @@ -259,378 +261,190 @@ (quotes the same hunk as the preceding comment)
[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4982: YARN-11332. [Federation] Improve FederationClientInterceptor#ThreadPool thread pool configuration.
slfan1989 commented on code in PR #4982: URL: https://github.com/apache/hadoop/pull/4982#discussion_r995247359 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java: ## @@ -4128,11 +4128,87 @@ public static boolean isAclEnabled(Configuration conf) { public static final String ROUTER_WEBAPP_PREFIX = ROUTER_PREFIX + "webapp."; + /** + * This configurable that controls the thread pool size of the threadpool of the interceptor. + * The corePoolSize(minimumPoolSize) and maximumPoolSize of the thread pool + * are controlled by this configurable. + * In order to control the thread pool more accurately, this parameter is deprecated. + * + * corePoolSize(minimumPoolSize) use + * {@link YarnConfiguration#ROUTER_USER_CLIENT_THREAD_POOL_MINIMUM_POOL_SIZE} + * + * maximumPoolSize use + * {@link YarnConfiguration#ROUTER_USER_CLIENT_THREAD_POOL_MAXIMUM_POOL_SIZE} + * + * This configurable will be deprecated. + */ public static final String ROUTER_USER_CLIENT_THREADS_SIZE = ROUTER_PREFIX + "interceptor.user.threadpool-size"; + /** + * The default value is 5. + * which means that the corePoolSize(minimumPoolSize) and maximumPoolSize + * of the thread pool are both 5s. + * + * corePoolSize(minimumPoolSize) default value use + * {@link YarnConfiguration#DEFAULT_ROUTER_USER_CLIENT_THREAD_POOL_MINIMUM_POOL_SIZE} + * + * maximumPoolSize default value use + * {@link YarnConfiguration#DEFAULT_ROUTER_USER_CLIENT_THREAD_POOL_MAXIMUM_POOL_SIZE} + */ public static final int DEFAULT_ROUTER_USER_CLIENT_THREADS_SIZE = 5; + /** + * This configurable is used to set the corePoolSize(minimumPoolSize) + * of the thread pool of the interceptor. + * + * corePoolSize the number of threads to keep in the pool, even if they are idle. 
+ */ + public static final String ROUTER_USER_CLIENT_THREAD_POOL_MINIMUM_POOL_SIZE = + ROUTER_PREFIX + "interceptor.user-thread-pool.minimum-pool-size"; + + /** + * This configuration is used to set the default value of corePoolSize (minimumPoolSize) + * of the thread pool of the interceptor. + * + * Default is 5. + */ + public static final int DEFAULT_ROUTER_USER_CLIENT_THREAD_POOL_MINIMUM_POOL_SIZE = 5; + + /** + * This configurable is used to set the maximumPoolSize of the thread pool of the interceptor. + * + * maximumPoolSize the maximum number of threads to allow in the pool. + */ + public static final String ROUTER_USER_CLIENT_THREAD_POOL_MAXIMUM_POOL_SIZE = + ROUTER_PREFIX + "interceptor.user-thread-pool.maximum-pool-size"; + + /** + * This configuration is used to set the default value of maximumPoolSize + * of the thread pool of the interceptor. + * + * Default is 5. + */ + public static final int DEFAULT_ROUTER_USER_CLIENT_THREAD_POOL_MAXIMUM_POOL_SIZE = 5; + + /** + * This configurable is used to set the keepAliveTime of the thread pool of the interceptor. + * + * keepAliveTime when the number of threads is greater than the core, + * this is the maximum time that excess idle threads will wait for new tasks before terminating. + */ + public static final String ROUTER_USER_CLIENT_THREAD_POOL_KEEP_ALIVE_TIME = + ROUTER_PREFIX + "interceptor.user-thread-pool.keep-alive-time"; + + /** + * This configurable is used to set the default time of keepAliveTime + * of the thread pool of the interceptor. + * + * the default value is 10s. + */ + public static final long DEFAULT_ROUTER_USER_CLIENT_THREAD_POOL_KEEP_ALIVE_TIME = Review Comment: Thanks for your suggestion, I will modify the code. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
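The three configurables introduced in the patch above (minimum-pool-size, maximum-pool-size, keep-alive-time) map one-to-one onto the `java.util.concurrent.ThreadPoolExecutor` constructor parameters. The sketch below wires them up with the defaults from the diff (5, 5, and 10 seconds) hard-coded in place of a real `YarnConfiguration` lookup; the method name `buildPool` and the `allowCoreThreadTimeOut` call are illustrative assumptions, not part of the patch.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch: building the interceptor's pool from the three configurables in the
// patch. The defaults (core=5, max=5, keepAlive=10s) mirror the
// DEFAULT_ROUTER_USER_CLIENT_THREAD_POOL_* constants in the diff; reading them
// from a YarnConfiguration object is elided here.
public class InterceptorPoolSketch {
  public static ThreadPoolExecutor buildPool(int corePoolSize, int maximumPoolSize,
      long keepAliveTimeSecs) {
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        corePoolSize,                        // threads kept alive even when idle
        maximumPoolSize,                     // hard upper bound on pool threads
        keepAliveTimeSecs, TimeUnit.SECONDS, // idle time before excess threads exit
        new LinkedBlockingQueue<>());
    // Assumption for illustration: let core threads time out too, so an idle
    // interceptor eventually releases all of its threads after keepAliveTime.
    pool.allowCoreThreadTimeOut(true);
    return pool;
  }

  public static void main(String[] args) {
    ThreadPoolExecutor pool = buildPool(5, 5, 10L);
    // prints: core=5 max=5 keepAliveSecs=10
    System.out.println("core=" + pool.getCorePoolSize()
        + " max=" + pool.getMaximumPoolSize()
        + " keepAliveSecs=" + pool.getKeepAliveTime(TimeUnit.SECONDS));
    pool.shutdown();
  }
}
```

Splitting the single threadpool-size knob into separate core and maximum sizes, as the patch does, is what makes the pool tunable: with core < max the executor can grow under load and shrink back after keepAliveTime.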
[GitHub] [hadoop] hadoop-yetus commented on pull request #5022: MAPREDUCE-7414. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-hs
hadoop-yetus commented on PR #5022: URL: https://github.com/apache/hadoop/pull/5022#issuecomment-1278337623 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 56s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 28 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 41m 46s | | trunk passed | | +1 :green_heart: | compile | 0m 31s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 29s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 29s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 33s | | trunk passed | | +1 :green_heart: | javadoc | 0m 36s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 24s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 3s | | trunk passed | | +1 :green_heart: | shadedclient | 23m 9s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 33s | | the patch passed | | +1 :green_heart: | compile | 0m 25s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 25s | | the patch passed | | +1 :green_heart: | compile | 0m 22s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 0m 22s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 18s | [/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5022/1/artifact/out/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs.txt) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs: The patch generated 43 new + 76 unchanged - 127 fixed = 119 total (was 203) | | +1 :green_heart: | mvnsite | 0m 25s | | the patch passed | | +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 0m 53s | | the patch passed | | +1 :green_heart: | shadedclient | 22m 55s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 5m 36s | | hadoop-mapreduce-client-hs in the patch passed. | | +1 :green_heart: | asflicense | 0m 32s | | The patch does not generate ASF License warnings. 
| | | | 104m 8s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5022/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5022 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle | | uname | Linux 6df17b228a77 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 45e4a90a45aeee21952ebc566e132e8ae704fdf1 | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5022/1/testReport/ | | Max. process+thread count | 1220 (vs. ulimit of 5500) | | modules | C: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs U:
[GitHub] [hadoop] slfan1989 commented on pull request #4915: YARN-11294. [Federation] Router Support DelegationToken store/update/remove Token With MemoryStateStore.
slfan1989 commented on PR #4915: URL: https://github.com/apache/hadoop/pull/4915#issuecomment-1278334063 @goiri Thank you very much for helping to review the code!
[GitHub] [hadoop] slfan1989 commented on pull request #4938: YARN-8041. [Router] Federation: Improve Router REST API Metrics.
slfan1989 commented on PR #4938: URL: https://github.com/apache/hadoop/pull/4938#issuecomment-1278333942 @goiri Thank you very much for helping to review the code!
[GitHub] [hadoop] hadoop-yetus commented on pull request #5023: MAPREDUCE-7413. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-hs-plugins
hadoop-yetus commented on PR #5023: URL: https://github.com/apache/hadoop/pull/5023#issuecomment-1278331798 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 37s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 39m 4s | | trunk passed | | +1 :green_heart: | compile | 0m 35s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 33s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 37s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 38s | | trunk passed | | +1 :green_heart: | javadoc | 0m 49s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 32s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 7s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 33s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 30s | | the patch passed | | +1 :green_heart: | compile | 0m 23s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 23s | | the patch passed | | +1 :green_heart: | compile | 0m 25s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 0m 25s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 19s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 25s | | the patch passed | | +1 :green_heart: | javadoc | 0m 21s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 24s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 0m 42s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 4s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 34s | | hadoop-mapreduce-client-hs-plugins in the patch passed. | | +1 :green_heart: | asflicense | 0m 42s | | The patch does not generate ASF License warnings. 
| | | | 92m 6s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5023/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5023 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle | | uname | Linux 99e6f685f2a6 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / b24c60ddbb22bdea811afa77f8ffc4209e1b828f | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5023/1/testReport/ | | Max. process+thread count | 629 (vs. ulimit of 5500) | | modules | C: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-plugins U: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs-plugins | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5023/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to
[GitHub] [hadoop] ashutoshcipher opened a new pull request, #5026: [WIP] MAPREDUCE-7421. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-jobclient
ashutoshcipher opened a new pull request, #5026: URL: https://github.com/apache/hadoop/pull/5026 ### Description of PR [WIP] Upgrade Junit 4 to 5 in hadoop-mapreduce-client-jobclient JIRA - MAPREDUCE-7421 ### For code changes: - [X] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[GitHub] [hadoop] hadoop-yetus commented on pull request #5021: MAPREDUCE-7415. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-nativetask
hadoop-yetus commented on PR #5021: URL: https://github.com/apache/hadoop/pull/5021#issuecomment-1278320748 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 49s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 20 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 38m 51s | | trunk passed | | +1 :green_heart: | compile | 1m 10s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 1m 11s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 46s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 41s | | trunk passed | | +1 :green_heart: | javadoc | 0m 54s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 37s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 7s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 30s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 35s | | the patch passed | | +1 :green_heart: | compile | 0m 59s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 59s | | the patch passed | | +1 :green_heart: | compile | 0m 57s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 0m 57s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 22s | [/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5021/1/artifact/out/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask: The patch generated 4 new + 26 unchanged - 35 fixed = 30 total (was 61) | | +1 :green_heart: | mvnsite | 0m 27s | | the patch passed | | +1 :green_heart: | javadoc | 0m 24s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 0m 52s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 7s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 3m 19s | | hadoop-mapreduce-client-nativetask in the patch passed. | | +1 :green_heart: | asflicense | 0m 44s | | The patch does not generate ASF License warnings. 
| | | | 97m 52s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5021/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5021 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle | | uname | Linux 84221004a9f4 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / c612792cd38014a76ca42b6d4f1623875909fde4 | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5021/1/testReport/ | | Max. process+thread count | 695 (vs. ulimit of 5500) | | modules | C: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask U:
[GitHub] [hadoop] hadoop-yetus commented on pull request #5019: MAPREDUCE-7417. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-uploader
hadoop-yetus commented on PR #5019: URL: https://github.com/apache/hadoop/pull/5019#issuecomment-1278318943 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 0s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 42m 1s | | trunk passed | | +1 :green_heart: | compile | 0m 32s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 30s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 32s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 35s | | trunk passed | | +1 :green_heart: | javadoc | 0m 40s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 28s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 0m 59s | | trunk passed | | +1 :green_heart: | shadedclient | 23m 32s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 25s | | the patch passed | | +1 :green_heart: | compile | 0m 22s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 22s | | the patch passed | | +1 :green_heart: | compile | 0m 20s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 0m 20s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 17s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 23s | | the patch passed | | +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 0m 44s | | the patch passed | | +1 :green_heart: | shadedclient | 23m 8s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 39s | | hadoop-mapreduce-client-uploader in the patch passed. | | +1 :green_heart: | asflicense | 0m 40s | | The patch does not generate ASF License warnings. 
| | | | 100m 20s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5019/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5019 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle | | uname | Linux b1fc041bba23 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / f595f56ab6d75f399d2c4642522fa25bfb396569 | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5019/1/testReport/ | | Max. process+thread count | 560 (vs. ulimit of 5500) | | modules | C: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader U: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5019/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the
[GitHub] [hadoop] hadoop-yetus commented on pull request #5020: MAPREDUCE-7416. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-shuffle
hadoop-yetus commented on PR #5020: URL: https://github.com/apache/hadoop/pull/5020#issuecomment-1278316298 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 50s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 39m 25s | | trunk passed | | +1 :green_heart: | compile | 0m 37s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 44s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 48s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 47s | | trunk passed | | +1 :green_heart: | javadoc | 0m 45s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 43s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 7s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 41s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 30s | | the patch passed | | +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 30s | | the patch passed | | +1 :green_heart: | compile | 0m 27s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 0m 27s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 28s | [/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5020/1/artifact/out/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle.txt) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle: The patch generated 3 new + 13 unchanged - 10 fixed = 16 total (was 23) | | +1 :green_heart: | mvnsite | 0m 30s | | the patch passed | | +1 :green_heart: | javadoc | 0m 26s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 0m 49s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 0s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 48s | | hadoop-mapreduce-client-shuffle in the patch passed. | | +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. 
| | | | 94m 28s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5020/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5020 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle | | uname | Linux 8b56beb1682a 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 72f0be56aae7e21c5fcb9d8d5fade32cecb48d1e | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5020/1/testReport/ | | Max. process+thread count | 750 (vs. ulimit of 5500) | | modules | C: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle U:
[GitHub] [hadoop] goiri merged pull request #4938: YARN-8041. [Router] Federation: Improve Router REST API Metrics.
goiri merged PR #4938: URL: https://github.com/apache/hadoop/pull/4938 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hadoop] goiri commented on a diff in pull request #4982: YARN-11332. [Federation] Improve FederationClientInterceptor#ThreadPool thread pool configuration.
goiri commented on code in PR #4982: URL: https://github.com/apache/hadoop/pull/4982#discussion_r995215183 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java: ## @@ -4128,11 +4128,87 @@ public static boolean isAclEnabled(Configuration conf) { public static final String ROUTER_WEBAPP_PREFIX = ROUTER_PREFIX + "webapp."; + /** + * This configurable that controls the thread pool size of the threadpool of the interceptor. + * The corePoolSize(minimumPoolSize) and maximumPoolSize of the thread pool + * are controlled by this configurable. + * In order to control the thread pool more accurately, this parameter is deprecated. + * + * corePoolSize(minimumPoolSize) use + * {@link YarnConfiguration#ROUTER_USER_CLIENT_THREAD_POOL_MINIMUM_POOL_SIZE} + * + * maximumPoolSize use + * {@link YarnConfiguration#ROUTER_USER_CLIENT_THREAD_POOL_MAXIMUM_POOL_SIZE} + * + * This configurable will be deprecated. + */ public static final String ROUTER_USER_CLIENT_THREADS_SIZE = ROUTER_PREFIX + "interceptor.user.threadpool-size"; + /** + * The default value is 5. + * which means that the corePoolSize(minimumPoolSize) and maximumPoolSize + * of the thread pool are both 5s. + * + * corePoolSize(minimumPoolSize) default value use + * {@link YarnConfiguration#DEFAULT_ROUTER_USER_CLIENT_THREAD_POOL_MINIMUM_POOL_SIZE} + * + * maximumPoolSize default value use + * {@link YarnConfiguration#DEFAULT_ROUTER_USER_CLIENT_THREAD_POOL_MAXIMUM_POOL_SIZE} + */ public static final int DEFAULT_ROUTER_USER_CLIENT_THREADS_SIZE = 5; + /** + * This configurable is used to set the corePoolSize(minimumPoolSize) + * of the thread pool of the interceptor. + * + * corePoolSize the number of threads to keep in the pool, even if they are idle. 
+ */ + public static final String ROUTER_USER_CLIENT_THREAD_POOL_MINIMUM_POOL_SIZE = + ROUTER_PREFIX + "interceptor.user-thread-pool.minimum-pool-size"; + + /** + * This configuration is used to set the default value of corePoolSize (minimumPoolSize) + * of the thread pool of the interceptor. + * + * Default is 5. + */ + public static final int DEFAULT_ROUTER_USER_CLIENT_THREAD_POOL_MINIMUM_POOL_SIZE = 5; + + /** + * This configurable is used to set the maximumPoolSize of the thread pool of the interceptor. + * + * maximumPoolSize the maximum number of threads to allow in the pool. + */ + public static final String ROUTER_USER_CLIENT_THREAD_POOL_MAXIMUM_POOL_SIZE = + ROUTER_PREFIX + "interceptor.user-thread-pool.maximum-pool-size"; + + /** + * This configuration is used to set the default value of maximumPoolSize + * of the thread pool of the interceptor. + * + * Default is 5. + */ + public static final int DEFAULT_ROUTER_USER_CLIENT_THREAD_POOL_MAXIMUM_POOL_SIZE = 5; + + /** + * This configurable is used to set the keepAliveTime of the thread pool of the interceptor. + * + * keepAliveTime when the number of threads is greater than the core, + * this is the maximum time that excess idle threads will wait for new tasks before terminating. + */ + public static final String ROUTER_USER_CLIENT_THREAD_POOL_KEEP_ALIVE_TIME = + ROUTER_PREFIX + "interceptor.user-thread-pool.keep-alive-time"; + + /** + * This configurable is used to set the default time of keepAliveTime + * of the thread pool of the interceptor. + * + * the default value is 10s. + */ + public static final long DEFAULT_ROUTER_USER_CLIENT_THREAD_POOL_KEEP_ALIVE_TIME = Review Comment: TimeUnit.SECONDS.toMillis(10); -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
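The YARN-11332 diff above splits the old single `threadpool-size` setting into separate minimum-pool-size, maximum-pool-size and keep-alive-time settings. As a rough illustration of how those three values map onto a `java.util.concurrent.ThreadPoolExecutor` (the class and method names below are invented for this sketch and are not the actual Router code):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only: hypothetical helper, not the YARN Router implementation.
public class InterceptorPoolSketch {
  public static ThreadPoolExecutor build(int minimumPoolSize, int maximumPoolSize,
      long keepAliveTimeMs) {
    // corePoolSize: threads kept in the pool even when idle.
    // maximumPoolSize: hard cap on the number of threads.
    // keepAliveTime: how long threads above the core count wait for new
    // tasks before terminating.
    return new ThreadPoolExecutor(minimumPoolSize, maximumPoolSize,
        keepAliveTimeMs, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
  }

  public static void main(String[] args) {
    // Defaults discussed in the review: core = max = 5, keep-alive = 10s.
    ThreadPoolExecutor pool = build(5, 5, TimeUnit.SECONDS.toMillis(10));
    System.out.println(pool.getCorePoolSize()); // 5
    pool.shutdown();
  }
}
```

Note that with an unbounded `LinkedBlockingQueue` a `ThreadPoolExecutor` never grows past `corePoolSize`, which is one reason to expose the two sizes as separate settings rather than a single `threadpool-size`.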
[GitHub] [hadoop] goiri merged pull request #4915: YARN-11294. [Federation] Router Support DelegationToken store/update/remove Token With MemoryStateStore.
goiri merged PR #4915: URL: https://github.com/apache/hadoop/pull/4915 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hadoop] goiri commented on a diff in pull request #4963: YARN-11326. [Federation] Add RM FederationStateStoreService Metrics.
goiri commented on code in PR #4963: URL: https://github.com/apache/hadoop/pull/4963#discussion_r995199228 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/federation/FederationStateStoreService.java: ## @@ -259,378 +261,190 @@ public Version loadVersion() { @Override public GetSubClusterPolicyConfigurationResponse getPolicyConfiguration( GetSubClusterPolicyConfigurationRequest request) throws YarnException { -try { - long startTime = clock.getTime(); - GetSubClusterPolicyConfigurationResponse response = - stateStoreClient.getPolicyConfiguration(request); - long stopTime = clock.getTime(); - FederationStateStoreServiceMetrics.succeededStateStoreServiceCall( - stopTime - startTime); - return response; -} catch (YarnException e) { - LOG.error("getPolicyConfiguration error.", e); - FederationStateStoreServiceMetrics.failedStateStoreServiceCall(); - throw e; -} +FederationClientMethod clientMethod = new FederationClientMethod( +"getPolicyConfiguration", GetSubClusterPolicyConfigurationResponse.class, request); +return invoke(clientMethod, GetSubClusterPolicyConfigurationResponse.class); } @Override public SetSubClusterPolicyConfigurationResponse setPolicyConfiguration( SetSubClusterPolicyConfigurationRequest request) throws YarnException { -try { - long startTime = clock.getTime(); - SetSubClusterPolicyConfigurationResponse response = - stateStoreClient.setPolicyConfiguration(request); - long stopTime = clock.getTime(); - FederationStateStoreServiceMetrics.succeededStateStoreServiceCall( - stopTime - startTime); - return response; -} catch (YarnException e) { - LOG.error("setPolicyConfiguration error.", e); - FederationStateStoreServiceMetrics.failedStateStoreServiceCall(); - throw e; -} +FederationClientMethod clientMethod = new FederationClientMethod( +"setPolicyConfiguration", SetSubClusterPolicyConfigurationResponse.class, request); +return invoke(clientMethod, 
SetSubClusterPolicyConfigurationResponse.class); } @Override public GetSubClusterPoliciesConfigurationsResponse getPoliciesConfigurations( GetSubClusterPoliciesConfigurationsRequest request) throws YarnException { -try { - long startTime = clock.getTime(); - GetSubClusterPoliciesConfigurationsResponse response = - stateStoreClient.getPoliciesConfigurations(request); - long stopTime = clock.getTime(); - FederationStateStoreServiceMetrics.succeededStateStoreServiceCall( - stopTime - startTime); - return response; -} catch (YarnException e) { - LOG.error("getPoliciesConfigurations error.", e); - FederationStateStoreServiceMetrics.failedStateStoreServiceCall(); - throw e; -} +FederationClientMethod clientMethod = new FederationClientMethod( +"getPoliciesConfigurations", GetSubClusterPoliciesConfigurationsResponse.class, request); +return invoke(clientMethod, GetSubClusterPoliciesConfigurationsResponse.class); } @Override public SubClusterRegisterResponse registerSubCluster( SubClusterRegisterRequest registerSubClusterRequest) throws YarnException { -try { - long startTime = clock.getTime(); - SubClusterRegisterResponse response = - stateStoreClient.registerSubCluster(registerSubClusterRequest); - long stopTime = clock.getTime(); - FederationStateStoreServiceMetrics.succeededStateStoreServiceCall( - stopTime - startTime); - return response; -} catch (YarnException e) { - LOG.error("registerSubCluster error.", e); - FederationStateStoreServiceMetrics.failedStateStoreServiceCall(); - throw e; -} +FederationClientMethod clientMethod = new FederationClientMethod( +"registerSubCluster", SubClusterRegisterResponse.class, registerSubClusterRequest); +return invoke(clientMethod, SubClusterRegisterResponse.class); } @Override public SubClusterDeregisterResponse deregisterSubCluster( SubClusterDeregisterRequest subClusterDeregisterRequest) throws YarnException { -try { - long startTime = clock.getTime(); - SubClusterDeregisterResponse response = - 
stateStoreClient.deregisterSubCluster(subClusterDeregisterRequest); - long stopTime = clock.getTime(); - FederationStateStoreServiceMetrics.succeededStateStoreServiceCall( - stopTime - startTime); - return response; -} catch (YarnException e) { - LOG.error("deregisterSubCluster error.", e); - FederationStateStoreServiceMetrics.failedStateStoreServiceCall(); - throw e; -} +FederationClientMethod clientMethod = new FederationClientMethod( +"deregisterSubCluster",
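The YARN-11326 diff above replaces each repeated time-the-call / record-success / record-failure block with one generic `invoke` helper. A minimal sketch of that pattern (simplified, with stand-in counters instead of `FederationStateStoreServiceMetrics` and a plain `Supplier` instead of the `FederationClientMethod` reflection wrapper) could look like:

```java
import java.util.function.Supplier;

// Illustrative sketch only: stand-in counters, not the Hadoop metrics classes.
public class TimedInvokeSketch {
  static long succeededCalls = 0;
  static long failedCalls = 0;

  // One generic helper replaces the per-method try/time/record boilerplate.
  public static <R> R invoke(String methodName, Supplier<R> call) {
    long start = System.currentTimeMillis();
    try {
      R response = call.get();
      long duration = System.currentTimeMillis() - start;
      succeededCalls++; // stand-in for succeededStateStoreServiceCall(duration)
      return response;
    } catch (RuntimeException e) {
      failedCalls++; // stand-in for failedStateStoreServiceCall()
      System.err.println(methodName + " error: " + e.getMessage());
      throw e;
    }
  }

  public static void main(String[] args) {
    String ok = invoke("getPolicyConfiguration", () -> "ok");
    System.out.println(ok);
  }
}
```

Each state-store method then shrinks to one line that names the call and delegates, which is the shape of the refactored `getPolicyConfiguration`, `registerSubCluster`, etc. in the diff.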
[GitHub] [hadoop] ashutoshcipher opened a new pull request, #5025: MAPREDUCE-7418. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-app
ashutoshcipher opened a new pull request, #5025: URL: https://github.com/apache/hadoop/pull/5025 ### Description of PR Upgrade Junit 4 to 5 in hadoop-mapreduce-client-app JIRA - MAPREDUCE-7418 ### For code changes: - [X] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HADOOP-18233) Possible race condition with TemporaryAWSCredentialsProvider
[ https://issues.apache.org/jira/browse/HADOOP-18233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617350#comment-17617350 ] ASF GitHub Bot commented on HADOOP-18233: - ashutoshcipher commented on PR #5024: URL: https://github.com/apache/hadoop/pull/5024#issuecomment-1278284700 This doc can be followed - https://hadoop.apache.org/docs/current2/hadoop-aws/tools/hadoop-aws/testing.html > Possible race condition with TemporaryAWSCredentialsProvider > > > Key: HADOOP-18233 > URL: https://issues.apache.org/jira/browse/HADOOP-18233 > Project: Hadoop Common > Issue Type: Bug > Components: auth, fs/s3 >Affects Versions: 3.3.1 > Environment: spark v3.2.0 > hadoop-aws v3.3.1 > java version 1.8.0_265 via zulu-8 >Reporter: Jason Sleight >Priority: Major > Labels: pull-request-available > > I'm in the process of upgrading spark+hadoop versions for my workflows and > observing a weird behavior regression. I'm setting > {code:java} > spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider > spark.hadoop.fs.s3.impl=org.apache.hadoop.fs.s3a.S3AFileSystem > spark.sql.catalogImplementation=hive > spark.hadoop.aws.region=us-west-2 > ...many other things, I think these might be the relevant ones though...{code} > in Spark config and I'm observing some non-fatal warnings/exceptions (see > below for some examples). The warnings/exceptions randomly appear for some > tasks, which causes them to fail, but then when Spark retries the task it > will succeed. The initial tasks don't always fail either, just sometimes. > I also found that if I switch to a SimpleAWSCredentials and use static keys, > then I don't see any issues. > My old setup was spark v3.0.2 with hadoop-aws v3.2.1 which also does not have > these warnings/exceptions. > From reading some other tickets I thought perhaps adding > {code:java} > spark.sql.hive.metastore.sharedPrefixes=com.amazonaws {code} > would help, but it did not. 
> Appreciate any suggestions for how to proceed or debug further :) > > Example stack traces: > First one for an s3 read > {code:java} > WARN TaskSetManager: Lost task 27.0 in stage 4.0 (TID 29) ( executor > 13): java.nio.file.AccessDeniedException: > s3a://bucket/path/to/part.snappy.parquet: > org.apache.hadoop.fs.s3a.CredentialInitializationException: Provider > TemporaryAWSCredentialsProvider has no credentials > at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:206) > at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:170) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3289) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3185) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:3053) > at > org.apache.parquet.hadoop.util.HadoopInputFile.fromPath(HadoopInputFile.java:39) > at > org.apache.spark.sql.execution.datasources.parquet.ParquetFooterReader.readFooter(ParquetFooterReader.java:39) > at > org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.footerFileMetaData$lzycompute$1(ParquetFileFormat.scala:268) > at > org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.footerFileMetaData$1(ParquetFileFormat.scala:267) > at > org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.$anonfun$buildReaderWithPartitionValues$2(ParquetFileFormat.scala:270) > at > org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116) > at > org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:164) > at > org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93) > at > org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:522) > at > 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage7.columnartorow_nextBatch_0$(Unknown > Source) > at > org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage7.processNext(Unknown > Source) > at > org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) > at > org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759) > at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) > at > org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140) > at >
[GitHub] [hadoop] ashutoshcipher commented on pull request #5024: HADOOP-18233. Possible race condition with TemporaryAWSCredentialsPro…
ashutoshcipher commented on PR #5024: URL: https://github.com/apache/hadoop/pull/5024#issuecomment-1278284700 This doc can be followed - https://hadoop.apache.org/docs/current2/hadoop-aws/tools/hadoop-aws/testing.html -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HADOOP-18233) Possible race condition with TemporaryAWSCredentialsProvider
[ https://issues.apache.org/jira/browse/HADOOP-18233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-18233: Labels: pull-request-available (was: ) > Possible race condition with TemporaryAWSCredentialsProvider > > > Key: HADOOP-18233 > URL: https://issues.apache.org/jira/browse/HADOOP-18233 > Project: Hadoop Common > Issue Type: Bug > Components: auth, fs/s3 >Affects Versions: 3.3.1 > Environment: spark v3.2.0 > hadoop-aws v3.3.1 > java version 1.8.0_265 via zulu-8 >Reporter: Jason Sleight >Priority: Major > Labels: pull-request-available > > I'm in the process of upgrading spark+hadoop versions for my workflows and > observing a weird behavior regression. I'm setting > {code:java} > spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider > spark.hadoop.fs.s3.impl=org.apache.hadoop.fs.s3a.S3AFileSystem > spark.sql.catalogImplementation=hive > spark.hadoop.aws.region=us-west-2 > ...many other things, I think these might be the relevant ones though...{code} > in Spark config and I'm observing some non-fatal warnings/exceptions (see > below for some examples). The warnings/exceptions randomly appear for some > tasks, which causes them to fail, but then when Spark retries the task it > will succeed. The initial tasks don't always fail either, just sometimes. > I also found that if I switch to a SimpleAWSCredentials and use static keys, > then I don't see any issues. > My old setup was spark v3.0.2 with hadoop-aws v3.2.1 which also does not have > these warnings/exceptions. > From reading some other tickets I thought perhaps adding > {code:java} > spark.sql.hive.metastore.sharedPrefixes=com.amazonaws {code} > would help, but it did not. 
[jira] [Commented] (HADOOP-18233) Possible race condition with TemporaryAWSCredentialsProvider
[ https://issues.apache.org/jira/browse/HADOOP-18233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617348#comment-17617348 ] ASF GitHub Bot commented on HADOOP-18233: - sabertiger opened a new pull request, #5024: URL: https://github.com/apache/hadoop/pull/5024 …vider ### Description of PR Fix race condition during concurrent S3 authentication calls. ### How was this patch tested? Setup a spark job reading from s3a object with multiple partitions. ### For code changes: - [ x ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[GitHub] [hadoop] sabertiger opened a new pull request, #5024: HADOOP-18233. Possible race condition with TemporaryAWSCredentialsPro…
sabertiger opened a new pull request, #5024: URL: https://github.com/apache/hadoop/pull/5024 …vider ### Description of PR Fix race condition during concurrent S3 authentication calls. ### How was this patch tested? Setup a spark job reading from s3a object with multiple partitions. ### For code changes: - [ x ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
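The symptom described in HADOOP-18233 — a provider that intermittently reports "has no credentials" yet succeeds when Spark retries the task — is the classic shape of an unsynchronized lazy initialization. As a hedged sketch only (the class, field, and fix below are illustrative; this is not the actual hadoop-aws code or the patch in PR #5024), the race and the usual double-checked-locking repair look like:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: tasks sharing a lazily initialized credentials
// object can race on the check-then-create step, so one thread briefly
// observes "no credentials" while another is still building them.
public class LazyCredentials {
    private volatile String credentials;            // null until first use
    private final AtomicInteger creations = new AtomicInteger();

    // Racy variant: two threads can both see null and both call create().
    public String getRacy() {
        if (credentials == null) {
            credentials = create();
        }
        return credentials;
    }

    // Safe variant: double-checked locking. The volatile field plus the
    // re-check under the lock guarantees exactly one create() call, and
    // later readers take the fast path without locking.
    public String getSafe() {
        String c = credentials;
        if (c == null) {
            synchronized (this) {
                c = credentials;
                if (c == null) {
                    credentials = c = create();
                }
            }
        }
        return c;
    }

    public int creationCount() {
        return creations.get();
    }

    private String create() {
        creations.incrementAndGet();
        return "session-token";                     // stand-in for STS creds
    }
}
```

Under this reading, switching to SimpleAWSCredentials hides the problem because static keys need no per-use initialization step to race on.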
[GitHub] [hadoop] slfan1989 commented on pull request #5009: YARN-11327. [Federation] Refactoring Yarn Router's Node Web Page.
slfan1989 commented on PR #5009: URL: https://github.com/apache/hadoop/pull/5009#issuecomment-1278275519 @goiri Thank you very much for helping to review the code!
[GitHub] [hadoop] slfan1989 commented on pull request #5019: MAPREDUCE-7417. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-uploader
slfan1989 commented on PR #5019: URL: https://github.com/apache/hadoop/pull/5019#issuecomment-1278272735 LGTM.
[GitHub] [hadoop] ashutoshcipher opened a new pull request, #5023: MAPREDUCE-7413. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-hs-plugins
ashutoshcipher opened a new pull request, #5023: URL: https://github.com/apache/hadoop/pull/5023 ### Description of PR Upgrade Junit 4 to 5 in hadoop-mapreduce-client-hs-plugins JIRA - MAPREDUCE-7413 ### For code changes: - [X] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[GitHub] [hadoop] ashutoshcipher opened a new pull request, #5022: MAPREDUCE-7414. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-hs
ashutoshcipher opened a new pull request, #5022: URL: https://github.com/apache/hadoop/pull/5022 ### Description of PR Upgrade Junit 4 to 5 in hadoop-mapreduce-client-hs JIRA - MAPREDUCE-7414 ### For code changes: - [X] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[GitHub] [hadoop] ashutoshcipher commented on pull request #5021: MAPREDUCE-7415. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-nativetask
ashutoshcipher commented on PR #5021: URL: https://github.com/apache/hadoop/pull/5021#issuecomment-1278260908 @aajisaka - Please help in reviewing in your time. Thanks.
[GitHub] [hadoop] ashutoshcipher commented on pull request #5020: MAPREDUCE-7416. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-shuffle
ashutoshcipher commented on PR #5020: URL: https://github.com/apache/hadoop/pull/5020#issuecomment-1278260819 @aajisaka - Please help in reviewing in your time. Thanks.
[GitHub] [hadoop] ashutoshcipher commented on pull request #5019: MAPREDUCE-7417. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-uploader
ashutoshcipher commented on PR #5019: URL: https://github.com/apache/hadoop/pull/5019#issuecomment-1278260776 @aajisaka - Please help in reviewing in your time. Thanks.
[GitHub] [hadoop] ashutoshcipher opened a new pull request, #5021: MAPREDUCE-7415. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-nativetask
ashutoshcipher opened a new pull request, #5021: URL: https://github.com/apache/hadoop/pull/5021 ### Description of PR Upgrade Junit 4 to 5 in hadoop-mapreduce-client-nativetask JIRA - MAPREDUCE-7415 ### For code changes: - [X] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[GitHub] [hadoop] ashutoshcipher opened a new pull request, #5020: MAPREDUCE-7416. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-shuffle
ashutoshcipher opened a new pull request, #5020: URL: https://github.com/apache/hadoop/pull/5020 ### Description of PR Upgrade Junit 4 to 5 in hadoop-mapreduce-client-shuffle JIRA - MAPREDUCE-7416 ### For code changes: - [X] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[GitHub] [hadoop] ashutoshcipher opened a new pull request, #5019: MAPREDUCE-7417. Upgrade Junit 4 to 5 in hadoop-mapreduce-client-uploader
ashutoshcipher opened a new pull request, #5019: URL: https://github.com/apache/hadoop/pull/5019 ### Description of PR Upgrade Junit 4 to 5 in hadoop-mapreduce-client-uploader JIRA - MAPREDUCE-7417 ### For code changes: - [X] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[GitHub] [hadoop] hadoop-yetus commented on pull request #5014: MAPREDUCE-5608. Replace and deprecate mapred.tasktracker.indexcache.mb
hadoop-yetus commented on PR #5014: URL: https://github.com/apache/hadoop/pull/5014#issuecomment-1278189000 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 51s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 16m 0s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 29m 57s | | trunk passed | | +1 :green_heart: | compile | 27m 51s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 23m 29s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 4m 39s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 28s | | trunk passed | | +1 :green_heart: | javadoc | 2m 38s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 2m 5s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 5m 21s | | trunk passed | | +1 :green_heart: | shadedclient | 26m 1s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 57s | | the patch passed | | +1 :green_heart: | compile | 26m 39s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 26m 39s | | the patch passed | | +1 :green_heart: | compile | 23m 27s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 23m 27s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 4m 34s | | root: The patch generated 0 new + 671 unchanged - 1 fixed = 671 total (was 672) | | +1 :green_heart: | mvnsite | 3m 27s | | the patch passed | | +1 :green_heart: | javadoc | 2m 25s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 2m 4s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 5m 32s | | the patch passed | | +1 :green_heart: | shadedclient | 25m 55s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 19m 23s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 7m 45s | | hadoop-mapreduce-client-core in the patch passed. | | +1 :green_heart: | asflicense | 1m 19s | | The patch does not generate ASF License warnings. 
| | | | 270m 56s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5014/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5014 | | Optional Tests | dupname asflicense mvnsite codespell detsecrets markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs checkstyle | | uname | Linux 5beb15af47f9 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 9cec92045901c1a2ac6a7248b09de2c9f2be10be | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5014/2/testReport/ | | Max. process+thread count | 3144 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core U: . | | Console output |
[GitHub] [hadoop] goiri merged pull request #5009: YARN-11327. [Federation] Refactoring Yarn Router's Node Web Page.
goiri merged PR #5009: URL: https://github.com/apache/hadoop/pull/5009
[GitHub] [hadoop] hadoop-yetus commented on pull request #4655: YARN-11216. Avoid unnecessary reconstruction of ConfigurationProperties
hadoop-yetus commented on PR #4655: URL: https://github.com/apache/hadoop/pull/4655#issuecomment-1278136548 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 8s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 25s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 28m 52s | | trunk passed | | +1 :green_heart: | compile | 26m 36s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 24m 6s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 4m 33s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 54s | | trunk passed | | +1 :green_heart: | javadoc | 2m 46s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 2m 7s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 6m 14s | | trunk passed | | +1 :green_heart: | shadedclient | 27m 8s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 9s | | the patch passed | | +1 :green_heart: | compile | 26m 51s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 26m 51s | | the patch passed | | +1 :green_heart: | compile | 24m 43s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 24m 43s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 4m 39s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4655/9/artifact/out/results-checkstyle-root.txt) | root: The patch generated 5 new + 150 unchanged - 0 fixed = 155 total (was 150) | | +1 :green_heart: | mvnsite | 3m 37s | | the patch passed | | +1 :green_heart: | javadoc | 2m 50s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 2m 14s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | -1 :x: | spotbugs | 3m 9s | [/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4655/9/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html) | hadoop-common-project/hadoop-common generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) | | +1 :green_heart: | shadedclient | 26m 13s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 19m 1s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 104m 15s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | asflicense | 1m 9s | | The patch does not generate ASF License warnings. 
| | | | 369m 4s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-common-project/hadoop-common | | | Inconsistent synchronization of org.apache.hadoop.conf.Configuration.propAddListener; locked 66% of time Unsynchronized access at Configuration.java:66% of time Unsynchronized access at Configuration.java:[line 4084] | | | Inconsistent synchronization of org.apache.hadoop.conf.Configuration.propRemoveListener; locked 66% of time Unsynchronized access at Configuration.java:66% of time Unsynchronized access at Configuration.java:[line 4089] | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4655/9/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4655 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux cad2cdca595d 4.15.0-191-generic
[GitHub] [hadoop] hadoop-yetus commented on pull request #4655: YARN-11216. Avoid unnecessary reconstruction of ConfigurationProperties
hadoop-yetus commented on PR #4655: URL: https://github.com/apache/hadoop/pull/4655#issuecomment-1278129187 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 53s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 16m 18s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 25m 58s | | trunk passed | | +1 :green_heart: | compile | 23m 17s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 20m 44s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 4m 11s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 27s | | trunk passed | | +1 :green_heart: | javadoc | 2m 44s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 2m 19s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 5m 25s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 42s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 59s | | the patch passed | | +1 :green_heart: | compile | 22m 37s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 22m 37s | | the patch passed | | +1 :green_heart: | compile | 21m 0s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 21m 0s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 4m 5s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4655/10/artifact/out/results-checkstyle-root.txt) | root: The patch generated 7 new + 150 unchanged - 0 fixed = 157 total (was 150) | | +1 :green_heart: | mvnsite | 3m 45s | | the patch passed | | +1 :green_heart: | javadoc | 2m 37s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 2m 18s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | -1 :x: | spotbugs | 2m 58s | [/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4655/10/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html) | hadoop-common-project/hadoop-common generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) | | +1 :green_heart: | shadedclient | 24m 42s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 19m 20s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 100m 25s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | asflicense | 1m 20s | | The patch does not generate ASF License warnings. 
| | | | 341m 29s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-common-project/hadoop-common | | | Inconsistent synchronization of org.apache.hadoop.conf.Configuration.propAddListener; locked 66% of time Unsynchronized access at Configuration.java:66% of time Unsynchronized access at Configuration.java:[line 4084] | | | Inconsistent synchronization of org.apache.hadoop.conf.Configuration.propRemoveListener; locked 66% of time Unsynchronized access at Configuration.java:66% of time Unsynchronized access at Configuration.java:[line 4089] | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4655/10/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4655 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 1fefbcd2c2fb
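The two SpotBugs findings above (IS2_INCONSISTENT_SYNC on `propAddListener` and `propRemoveListener`) flag a field that is written under `synchronized` in some methods but read without any lock in others — "locked 66% of time" means one access in three is bare. A minimal illustration of the pattern with hypothetical names (this is not the Hadoop `Configuration` code):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the IS2_INCONSISTENT_SYNC pattern: 'listeners'
// is mutated under the lock in addListener() but read without it in
// fireChanged(), so SpotBugs reports inconsistent synchronization.
public class ListenerRegistry {
    private List<Runnable> listeners = new ArrayList<>();

    public synchronized void addListener(Runnable l) {
        // Copy-on-write so an unsynchronized reader never iterates a
        // list that is being mutated underneath it.
        List<Runnable> copy = new ArrayList<>(listeners);
        copy.add(l);
        listeners = copy;   // still flagged unless the field is volatile
    }

    public void fireChanged() {
        // Unsynchronized read: the access that drags the lock ratio
        // below 100% and triggers the warning.
        for (Runnable l : listeners) {
            l.run();
        }
    }
}
```

The usual remedies are to declare the field `volatile`, to synchronize every access, or to switch to a thread-safe container such as `java.util.concurrent.CopyOnWriteArrayList`, which makes the intent explicit and silences the finding.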
[GitHub] [hadoop] hadoop-yetus commented on pull request #4834: HDFS-16753. WebHDFSHandler should reject non-compliant requests
hadoop-yetus commented on PR #4834: URL: https://github.com/apache/hadoop/pull/4834#issuecomment-1278127660 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 40s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 39m 21s | | trunk passed | | +1 :green_heart: | compile | 1m 35s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 1m 40s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 1m 14s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 40s | | trunk passed | | +1 :green_heart: | javadoc | 1m 23s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 39s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 3m 43s | | trunk passed | | +1 :green_heart: | shadedclient | 23m 27s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 23s | | the patch passed | | +1 :green_heart: | compile | 1m 23s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 1m 23s | | the patch passed | | +1 :green_heart: | compile | 1m 18s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 1m 18s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 0s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 25s | | the patch passed | | +1 :green_heart: | javadoc | 0m 58s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 36s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 3m 21s | | the patch passed | | +1 :green_heart: | shadedclient | 22m 38s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 243m 51s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 56s | | The patch does not generate ASF License warnings. 
| | | | 354m 22s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4834/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4834 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 366deeb46b5e 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 75b7e3155c86023126b17644b4924cd4863ed206 | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4834/3/testReport/ | | Max. process+thread count | 3311 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4834/3/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific
[GitHub] [hadoop] hadoop-yetus commented on pull request #5013: HDFS-16802.Print options when accessing ClientProtocol#rename2()
hadoop-yetus commented on PR #5013: URL: https://github.com/apache/hadoop/pull/5013#issuecomment-1278124975 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 2m 21s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 41m 56s | | trunk passed | | +1 :green_heart: | compile | 1m 42s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 1m 30s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 1m 16s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 40s | | trunk passed | | +1 :green_heart: | javadoc | 1m 17s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 42s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 3m 47s | | trunk passed | | +1 :green_heart: | shadedclient | 26m 36s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 25s | | the patch passed | | +1 :green_heart: | compile | 1m 29s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 1m 29s | | the patch passed | | +1 :green_heart: | compile | 1m 18s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 1m 18s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 0s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 26s | | the patch passed | | +1 :green_heart: | javadoc | 0m 58s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 30s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 3m 32s | | the patch passed | | +1 :green_heart: | shadedclient | 25m 36s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 337m 28s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 1m 0s | | The patch does not generate ASF License warnings. 
| | | | 458m 7s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5013/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5013 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 77f3c1f97e20 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 327a6bf0b2b9ebccfa63ae0fea35445015194f2a | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5013/2/testReport/ | | Max. process+thread count | 1936 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5013/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific
[GitHub] [hadoop] hadoop-yetus commented on pull request #5009: YARN-11327. [Federation] Refactoring Yarn Router's Node Web Page.
hadoop-yetus commented on PR #5009: URL: https://github.com/apache/hadoop/pull/5009#issuecomment-1278121835 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 10s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 16m 5s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 32m 45s | | trunk passed | | +1 :green_heart: | compile | 11m 43s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 10m 8s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 2m 4s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 29s | | trunk passed | | +1 :green_heart: | javadoc | 3m 14s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 2m 56s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 6m 4s | | trunk passed | | +1 :green_heart: | shadedclient | 25m 48s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 27s | | the patch passed | | +1 :green_heart: | compile | 11m 58s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 11m 58s | | the patch passed | | +1 :green_heart: | compile | 10m 6s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 10m 6s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 56s | | the patch passed | | +1 :green_heart: | mvnsite | 3m 35s | | the patch passed | | +1 :green_heart: | javadoc | 2m 53s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 2m 56s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 6m 40s | | the patch passed | | +1 :green_heart: | shadedclient | 24m 44s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 5m 9s | | hadoop-yarn-common in the patch passed. | | +1 :green_heart: | unit | 105m 3s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | unit | 5m 29s | | hadoop-yarn-server-router in the patch passed. | | +1 :green_heart: | asflicense | 1m 5s | | The patch does not generate ASF License warnings. 
| | | | 303m 45s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5009/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5009 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 7e9ac6800012 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 7d3132ae033a36515e36e8550d5e54deeca5f42e | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5009/4/testReport/ | | Max. process+thread count | 901 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: hadoop-yarn-project/hadoop-yarn | |
[jira] [Commented] (HADOOP-17705) S3A to add option fs.s3a.endpoint.region to set AWS region
[ https://issues.apache.org/jira/browse/HADOOP-17705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617276#comment-17617276 ] Greg Senia commented on HADOOP-17705: - [~ste...@apache.org] is it possible to accept a backport I put together for Hadoop 3.2? I have a few folks who cannot upgrade to the Hadoop 3.3.x branch and need the ability to access S3 using V4 signing via VPC endpoints without having to mess with overrides and regexes, as we have multiple endpoints in different regions, etc. > S3A to add option fs.s3a.endpoint.region to set AWS region > -- > > Key: HADOOP-17705 > URL: https://issues.apache.org/jira/browse/HADOOP-17705 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Mehakmeet Singh >Assignee: Mehakmeet Singh >Priority: Major > Labels: pull-request-available > Fix For: 3.3.2 > > Time Spent: 3h > Remaining Estimate: 0h > > Currently, the AWS region is constructed from the endpoint URL, by assuming > that the 2nd component after the "." delimiter is the region, which doesn't > work for private links and falls back to the default of us-east-1, causing > authorization issues w.r.t. the private link. > The option fs.s3a.endpoint.region allows the region to be set explicitly. > h2. how to set the s3 region on older hadoop releases > For anyone who needs to set the signing region on older versions of the s3a > client: *you do not need this feature*. Instead, just provide a custom endpoint > to region mapping json file: > # Download the default region mapping file > [awssdk_config_default.json|https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-core/src/main/resources/com/amazonaws/internal/config/awssdk_config_default.json] > # Add a new regular expression to map the endpoint/hostname to the target > region > # Save the file as {{/etc/hadoop/conf/awssdk_config_override.json}} > # verify basic hadoop fs -ls commands work > # copy to the rest of the cluster. 
> # There should be no need to restart any services -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
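The steps quoted above hinge on adding one host-to-region mapping entry to the override file. A minimal sketch of what {{/etc/hadoop/conf/awssdk_config_override.json}} could contain, assuming the key names used in the default file ({{hostRegexToRegionMappings}}, {{hostNameRegex}}, {{regionName}}) and an invented VPC-endpoint hostname pattern; verify both against the downloaded awssdk_config_default.json before relying on this:

```json
{
  "hostRegexToRegionMappings": [
    {
      "hostNameRegex": "bucket\\.vpce-.+\\.s3\\.us-west-2\\.vpce\\.amazonaws\\.com",
      "regionName": "us-west-2"
    }
  ]
}
```

The regex must match the full endpoint hostname; the SDK picks the first mapping that matches, so more specific patterns should come first.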
[GitHub] [hadoop] hadoop-yetus commented on pull request #5005: YARN-11342. [Federation] Refactor submitApplication Use FederationActionRetry.
hadoop-yetus commented on PR #5005: URL: https://github.com/apache/hadoop/pull/5005#issuecomment-1278066375 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 0s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 46s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 28m 19s | | trunk passed | | +1 :green_heart: | compile | 4m 4s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 3m 22s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 1m 15s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 9s | | trunk passed | | +1 :green_heart: | javadoc | 1m 53s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 38s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 4m 13s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 51s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 22s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 46s | | the patch passed | | +1 :green_heart: | compile | 4m 0s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 4m 0s | | the patch passed | | +1 :green_heart: | compile | 3m 14s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 3m 14s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 5s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5005/3/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) | | +1 :green_heart: | mvnsite | 1m 51s | | the patch passed | | +1 :green_heart: | javadoc | 1m 30s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 21s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 4m 23s | | the patch passed | | +1 :green_heart: | shadedclient | 22m 37s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 53s | | hadoop-yarn-server-common in the patch passed. | | +1 :green_heart: | unit | 102m 24s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | unit | 4m 45s | | hadoop-yarn-server-router in the patch passed. | | +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. 
| | | | 241m 29s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5005/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5005 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 43f3b87b5261 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / f5c2d31562fa06f4b273036b2251d9b3284a90c6 | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5005/3/testReport/ |
[jira] [Commented] (HADOOP-18490) The check logic for erasedIndexes in XORRawDecoder is buggy
[ https://issues.apache.org/jira/browse/HADOOP-18490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617273#comment-17617273 ] ASF GitHub Bot commented on HADOOP-18490: - FuzzingTeam commented on PR #5001: URL: https://github.com/apache/hadoop/pull/5001#issuecomment-1278063976 Thanks, @ZanderXu, for the review. I researched whether the XORRawDecoder can decode multiple erased indexes, and found that XORRawDecoder generates only 1 parity unit, implying that the number of erased indexes must be either 1 or 0. One thing that surprises me is how the test method testValidate() passes on the newly added value set (numParityUnits = 3). The method passing would mean that XORRawDecoder can decode multiple erased indexes, which contradicts the finding above. I propose that we either investigate and fix the testValidate() method or add the below check to run before each test: if (encoderFactoryClass == XORRawErasureCoderFactory.class) { Assume.assumeTrue(numParityUnits == 1); } > The check logic for erasedIndexes in XORRawDecoder is buggy > --- > > Key: HADOOP-18490 > URL: https://issues.apache.org/jira/browse/HADOOP-18490 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.3.4 >Reporter: FuzzingTeam >Priority: Major > Labels: pull-request-available > > In the method _doDecode_ of class {_}XORRawDecoder{_}, the code does not > handle all the erased and null-marked locations in the array ({_}inputs{_}), > but only skips the first erased location ({_}erasedIndexes[0]{_}). The > missing handling results in an unhandled NullPointerException.
[GitHub] [hadoop] FuzzingTeam commented on pull request #5001: HADOOP-18490. Fixed the check logic for erasedIndexes in XORRawDecoder
FuzzingTeam commented on PR #5001: URL: https://github.com/apache/hadoop/pull/5001#issuecomment-1278063976 Thanks, @ZanderXu, for the review. I researched whether the XORRawDecoder can decode multiple erased indexes, and found that XORRawDecoder generates only 1 parity unit, implying that the number of erased indexes must be either 1 or 0. One thing that surprises me is how the test method testValidate() passes on the newly added value set (numParityUnits = 3). The method passing would mean that XORRawDecoder can decode multiple erased indexes, which contradicts the finding above. I propose that we either investigate and fix the testValidate() method or add the below check to run before each test: if (encoderFactoryClass == XORRawErasureCoderFactory.class) { Assume.assumeTrue(numParityUnits == 1); }
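The single-erasure limit described in the comment follows directly from how a single-parity XOR code works: with one parity unit equal to the XOR of all data units, any one erased unit can be rebuilt by XOR-ing the survivors, but two or more erasures are unrecoverable. A toy sketch of that arithmetic (not the Hadoop XORRawDecoder itself, which operates on byte buffers):

```java
// Toy single-parity XOR code over int "units": parity = d0 ^ d1 ^ ... ^ dk-1.
// Exactly one erased unit can be rebuilt, which is why a decoder for this
// code must reject erasedIndexes of length > 1.
class XorParityDemo {
    // Compute the single parity unit over the data units.
    static int parity(int[] data) {
        int p = 0;
        for (int d : data) p ^= d;
        return p;
    }

    // Rebuild the one erased data unit by XOR-ing the parity with all
    // surviving data units.
    static int recover(int[] data, int erasedIndex, int parity) {
        int p = parity;
        for (int i = 0; i < data.length; i++) {
            if (i != erasedIndex) p ^= data[i];
        }
        return p;
    }

    public static void main(String[] args) {
        int[] data = {0x3A, 0x5C, 0x07};
        int p = parity(data);                 // 0x3A ^ 0x5C ^ 0x07
        int rebuilt = recover(data, 1, p);    // pretend data[1] was erased
        System.out.println(rebuilt == data[1]); // the one erasure is recovered
    }
}
```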
[jira] [Created] (HADOOP-18496) upgrade kotlin-stdlib due to CVEs
PJ Fanning created HADOOP-18496: --- Summary: upgrade kotlin-stdlib due to CVEs Key: HADOOP-18496 URL: https://issues.apache.org/jira/browse/HADOOP-18496 Project: Hadoop Common Issue Type: Improvement Reporter: PJ Fanning I'm not an expert on Kotlin, but dependabot shows these 2 CVEs with the version of kotlin-stdlib used in Hadoop. * [https://github.com/advisories/GHSA-cqj8-47ch-rvvq] * [https://github.com/advisories/GHSA-2qp4-g3q3-f92w] kotlin-stdlib 1.6.0 is the minimum version needed to fix both. It might be better to use the latest v1.6 jar (currently 1.6.21) or even the latest jar altogether (currently 1.7.20).
[jira] [Commented] (HADOOP-18493) update jackson-databind 2.12.7.1 due to CVE fixes
[ https://issues.apache.org/jira/browse/HADOOP-18493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617250#comment-17617250 ] ASF GitHub Bot commented on HADOOP-18493: - pjfanning commented on PR #5011: URL: https://github.com/apache/hadoop/pull/5011#issuecomment-1278002769 @ayushtkn I created #5018 > update jackson-databind 2.12.7.1 due to CVE fixes > - > > Key: HADOOP-18493 > URL: https://issues.apache.org/jira/browse/HADOOP-18493 > Project: Hadoop Common > Issue Type: Improvement >Reporter: PJ Fanning >Priority: Major > Labels: pull-request-available > > * [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-42003] > * [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-42004] > * both fixes have been backported (the CVEs themselves need to be updated to > reflect this) > * [https://github.com/FasterXML/jackson-databind/pull/3622]
[GitHub] [hadoop] pjfanning commented on pull request #5011: HADOOP-18493: upgrade jackson-databind to 2.12.7.1
pjfanning commented on PR #5011: URL: https://github.com/apache/hadoop/pull/5011#issuecomment-1278002769 @ayushtkn I created #5018
[jira] [Commented] (HADOOP-18493) update jackson-databind 2.12.7.1 due to CVE fixes
[ https://issues.apache.org/jira/browse/HADOOP-18493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617249#comment-17617249 ] ASF GitHub Bot commented on HADOOP-18493: - pjfanning opened a new pull request, #5018: URL: https://github.com/apache/hadoop/pull/5018 ### Description of PR ### How was this patch tested? ### For code changes: - [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? > update jackson-databind 2.12.7.1 due to CVE fixes > - > > Key: HADOOP-18493 > URL: https://issues.apache.org/jira/browse/HADOOP-18493 > Project: Hadoop Common > Issue Type: Improvement >Reporter: PJ Fanning >Priority: Major > Labels: pull-request-available > > * [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-42003] > * [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-42004] > * both fixes have been backported (the CVEs themselves need to be updated to > reflect this) > * [https://github.com/FasterXML/jackson-databind/pull/3622] -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] pjfanning opened a new pull request, #5018: HADOOP-18493: upgrade jackson-databind to 2.12.7.1 (3.3 branch)
pjfanning opened a new pull request, #5018: URL: https://github.com/apache/hadoop/pull/5018 ### Description of PR ### How was this patch tested? ### For code changes: - [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] pjfanning commented on a diff in pull request #4980: MAPREDUCE-7411: use secure XML parsers
pjfanning commented on code in PR #4980: URL: https://github.com/apache/hadoop/pull/4980#discussion_r994969518 ## hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestQueueConfigurationParser.java: ## @@ -28,12 +28,13 @@ import javax.xml.transform.dom.DOMSource; import javax.xml.transform.stream.StreamResult; -import org.w3c.dom.Document; Review Comment: reverted some of the import changes
[GitHub] [hadoop] pjfanning commented on a diff in pull request #4980: MAPREDUCE-7411: use secure XML parsers
pjfanning commented on code in PR #4980: URL: https://github.com/apache/hadoop/pull/4980#discussion_r994969138 ## hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueConfigurationParser.java: ## @@ -88,7 +91,7 @@ class QueueConfigurationParser { static final String VALUE_TAG = "value"; /** - * Default constructor for DeperacatedQueueConfigurationParser + * Default constructor for QueueConfigurationParser Review Comment: updated
[GitHub] [hadoop] steveloughran opened a new pull request, #5017: YARN-11330. use secure XML parsers (#4981)
steveloughran opened a new pull request, #5017: URL: https://github.com/apache/hadoop/pull/5017 Move construction of XML parsers in YARN modules to using the locked-down parser factory of HADOOP-18469. One exception: GpuDeviceInformationParser still supports DTD resolution; all other features are disabled. Contributed by P J Fanning Change-Id: Id219ce1c6484f5853eeae18798c4ecf9b4ce3520 ### Description of PR ### How was this patch tested? ### For code changes: - [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
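The PR description refers to the "locked-down parser factory" of HADOOP-18469 without showing it. A generic sketch of such a hardened factory — an assumed shape, not the actual Hadoop class — disables DOCTYPE declarations and external entity resolution so the parser is not exposed to XXE or entity-expansion attacks:

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

// Sketch of a locked-down XML parser factory in the spirit of HADOOP-18469
// (illustrative, not the Hadoop implementation).
class SecureXmlFactory {
    static DocumentBuilder newSecureDocumentBuilder()
            throws ParserConfigurationException {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Reject any document that carries a DOCTYPE at all.
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        // Belt and braces: also disable external entity resolution.
        dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
        dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);
        return dbf.newDocumentBuilder();
    }

    // Convenience helper: parse a string and return the root tag name,
    // or a "rejected: ..." marker when the hardened parser refuses it.
    static String rootTag(String xml) {
        try {
            return newSecureDocumentBuilder()
                .parse(new java.io.ByteArrayInputStream(
                    xml.getBytes(java.nio.charset.StandardCharsets.UTF_8)))
                .getDocumentElement().getTagName();
        } catch (Exception e) {
            return "rejected: " + e.getMessage();
        }
    }
}
```

With this factory, a plain document parses normally while anything containing a DOCTYPE (the vehicle for XXE payloads) is rejected up front — which matches the PR's note that GpuDeviceInformationParser, which still needs DTD resolution, has to be treated as an exception.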
[GitHub] [hadoop] steveloughran commented on a diff in pull request #4980: MAPREDUCE-7411: use secure XML parsers
steveloughran commented on code in PR #4980: URL: https://github.com/apache/hadoop/pull/4980#discussion_r994940937 ## hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestQueueConfigurationParser.java: ## @@ -28,12 +28,13 @@ import javax.xml.transform.dom.DOMSource; import javax.xml.transform.stream.StreamResult; -import org.w3c.dom.Document; Review Comment: just leave these imports alone; it makes backporting harder than it needs to be ## hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueConfigurationParser.java: ## @@ -88,7 +91,7 @@ class QueueConfigurationParser { static final String VALUE_TAG = "value"; /** - * Default constructor for DeperacatedQueueConfigurationParser + * Default constructor for QueueConfigurationParser Review Comment: can you add a . at the end if you are going near this line
[GitHub] [hadoop] steveloughran opened a new pull request, #5016: HDFS-16795. Use secure XML parsers (#4979)
steveloughran opened a new pull request, #5016: URL: https://github.com/apache/hadoop/pull/5016 Move construction of XML parsers in HDFS modules to using the locked-down parser factory of HADOOP-18469. Contributed by P J Fanning Change-Id: I9e21228eeebff699ebd22f46a99722cb9efb0cf4 ### Description of PR ### How was this patch tested? ### For code changes: - [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[GitHub] [hadoop] steveloughran merged pull request #4981: YARN-11330: use secure XML parsers
steveloughran merged PR #4981: URL: https://github.com/apache/hadoop/pull/4981
[jira] [Updated] (HADOOP-18469) Add XMLUtils methods to centralise code that creates secure XML parsers
[ https://issues.apache.org/jira/browse/HADOOP-18469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-18469: Description: Relates to HDFS-16766 There are other places in the code where DocumentBuilderFactory instances are created that could benefit from the same changes as HDFS-16766 h3. sonatype-2022-5820 If anyone is landing on this page following the sonatype-2022-5820 alert, know that there is no known issue here, just a centralisation of all construction of XML parsers with lockdown of all the features. was: Relates to HDFS-16766 There are other places in the code where DocumentBuilderFactory instances are created that could benefit from the same changes as HDFS-16766 h3. sonatype-2022-5820 If anyone is landing on this page following the sonatype-2022-5820 alert, know that there > Add XMLUtils methods to centralise code that creates secure XML parsers > --- > > Key: HADOOP-18469 > URL: https://issues.apache.org/jira/browse/HADOOP-18469 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.3.4 >Reporter: PJ Fanning >Assignee: PJ Fanning >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.5 > > > Relates to HDFS-16766 > There are other places in the code where DocumentBuilderFactory instances are > created that could benefit from the same changes as HDFS-16766 > h3. sonatype-2022-5820 > If anyone is landing on this page following the sonatype-2022-5820 alert, > know that there is no known issue here, just a centralisation of all > construction of XML parsers with lockdown of all the features. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18469) Add XMLUtils methods to centralise code that creates secure XML parsers
[ https://issues.apache.org/jira/browse/HADOOP-18469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-18469: Description: Relates to HDFS-16766 There are other places in the code where DocumentBuilderFactory instances are created that could benefit from the same changes as HDFS-16766 h3. sonatype-2022-5820 If anyone is landing on this page following the sonatype-2022-5820 alert, know that there was: Relates to HDFS-16766 There are other places in the code where DocumentBuilderFactory instances are created that could benefit from the same changes as HDFS-16766 > Add XMLUtils methods to centralise code that creates secure XML parsers > --- > > Key: HADOOP-18469 > URL: https://issues.apache.org/jira/browse/HADOOP-18469 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.3.4 >Reporter: PJ Fanning >Assignee: PJ Fanning >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.5 > > > Relates to HDFS-16766 > There are other places in the code where DocumentBuilderFactory instances are > created that could benefit from the same changes as HDFS-16766 > h3. sonatype-2022-5820 > If anyone is landing on this page following the sonatype-2022-5820 alert, > know that there -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17563) Update Bouncy Castle to 1.68 or later
[ https://issues.apache.org/jira/browse/HADOOP-17563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-17563: Hadoop Flags: Incompatible change Release Note: bouncy castle 1.68+ is a multirelease JAR containing java classes compiled for different target JREs. older versions of asm.jar and maven shade plugin may have problems with these. fix: upgrade > Update Bouncy Castle to 1.68 or later > - > > Key: HADOOP-17563 > URL: https://issues.apache.org/jira/browse/HADOOP-17563 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.3.1 >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 3h > Remaining Estimate: 0h > > -Bouncy Castle 1.60 has Hash Collision Vulnerability. Let's update to 1.68.- > Bouncy Castle 1.60 has the following vulnerabilities. Let's update to 1.68. > * [https://nvd.nist.gov/vuln/detail/CVE-2020-26939] > * [https://nvd.nist.gov/vuln/detail/CVE-2020-28052] > * [https://nvd.nist.gov/vuln/detail/CVE-2020-15522] > for anyone backporting this, note that recent bouncy castle jars are > incompatible with older versions of asm.jar, and so older versions of spark. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
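For anyone backporting, the dependency bump itself is a one-line version change. A hypothetical pom.xml fragment (Bouncy Castle coordinates as commonly used at the time; in Hadoop the version is normally managed centrally in hadoop-project/pom.xml, so check there before applying):

```xml
<!-- Illustrative fragment: pin Bouncy Castle at 1.68+.
     Verify the artifactId against the authoritative hadoop-project pom. -->
<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcprov-jdk15on</artifactId>
  <version>1.68</version>
</dependency>
```

Because 1.68+ ships classes under `META-INF/versions`, the build's asm and maven-shade-plugin versions must also be new enough to read multi-release JARs, as the release note above warns.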
[GitHub] [hadoop] hadoop-yetus commented on pull request #5014: MAPREDUCE-5608. Replace and deprecate mapred.tasktracker.indexcache.mb
hadoop-yetus commented on PR #5014: URL: https://github.com/apache/hadoop/pull/5014#issuecomment-1277896213

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 55s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 15m 55s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 29m 35s | | trunk passed |
| +1 :green_heart: | compile | 27m 50s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 23m 47s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 4m 45s | | trunk passed |
| +1 :green_heart: | mvnsite | 3m 28s | | trunk passed |
| +1 :green_heart: | javadoc | 2m 30s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 2m 7s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 5m 21s | | trunk passed |
| +1 :green_heart: | shadedclient | 25m 10s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 50s | | the patch passed |
| +1 :green_heart: | compile | 27m 24s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 27m 24s | | the patch passed |
| +1 :green_heart: | compile | 23m 37s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 23m 37s | | the patch passed |
| +1 :green_heart: | blanks | 0m 1s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 4m 37s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5014/1/artifact/out/results-checkstyle-root.txt) | root: The patch generated 1 new + 671 unchanged - 1 fixed = 672 total (was 672) |
| +1 :green_heart: | mvnsite | 3m 29s | | the patch passed |
| +1 :green_heart: | javadoc | 2m 28s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 2m 2s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 5m 36s | | the patch passed |
| +1 :green_heart: | shadedclient | 26m 39s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| +1 :green_heart: | unit | 19m 26s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 7m 41s | | hadoop-mapreduce-client-core in the patch passed. |
| +1 :green_heart: | asflicense | 1m 22s | | The patch does not generate ASF License warnings. |
| | | 271m 32s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5014/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5014 |
| Optional Tests | dupname asflicense mvnsite codespell detsecrets markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs checkstyle |
| uname | Linux 9da8043d20c6 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 2cec8a370f7e45cbdc91ca696d8dc71d4eceb859 |
| Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5014/1/testReport/ |
| Max. process+thread count | 2991 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common
[jira] [Commented] (HADOOP-18481) AWS v2 SDK warning to skip warning of EnvironmentVariableCredentialsProvider
[ https://issues.apache.org/jira/browse/HADOOP-18481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617182#comment-17617182 ] ASF GitHub Bot commented on HADOOP-18481: - steveloughran commented on PR #4973: URL: https://github.com/apache/hadoop/pull/4973#issuecomment-1277893256 and you tested it against an s3 store again, right? > AWS v2 SDK warning to skip warning of EnvironmentVariableCredentialsProvider > > > Key: HADOOP-18481 > URL: https://issues.apache.org/jira/browse/HADOOP-18481 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: Ahmar Suhail >Priority: Major > Labels: pull-request-available > > looking at test output with the sdk warnings enabled, it is now always > warning of a v1 provider reference, even if the user hasn't set any > fs.s3a.credential.provider option > {code} > 2022-10-05 14:09:09,733 [setup] DEBUG s3a.S3AUtils > (S3AUtils.java:createAWSCredentialProvider(691)) - Credential provider class > is org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider > 2022-10-05 14:09:09,733 [setup] DEBUG s3a.S3AUtils > (S3AUtils.java:createAWSCredentialProvider(691)) - Credential provider class > is org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider > 2022-10-05 14:09:09,734 [setup] WARN s3a.SDKV2Upgrade > (LogExactlyOnce.java:warn(39)) - Directly referencing AWS SDK V1 credential > provider com.amazonaws.auth.EnvironmentVariableCredentialsProvider. 
AWS SDK > V1 credential providers will be removed once S3A is upgraded to SDK V2 > 2022-10-05 14:09:09,734 [setup] DEBUG s3a.S3AUtils > (S3AUtils.java:createAWSCredentialProvider(691)) - Credential provider class > is com.amazonaws.auth.EnvironmentVariableCredentialsProvider > 2022-10-05 14:09:09,734 [setup] DEBUG s3a.S3AUtils > (S3AUtils.java:createAWSCredentialProvider(691)) - Credential provider class > is org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider > {code} > This is because the EnvironmentVariableCredentialsProvider provider is on the > default list of providers. > Everybody who is using the S3 a connector and who has not explicitly declared > a new set of providers excluding this one will be seeing the error message. > Proposed: > Don't warn on this provider. Instead with the v2 move the classname can be > patched to switch to a modified one. > The alternative would be to provide an s3a specific env var provider subclass > of this; and while that is potentially good in future it is a bit more effort > for the forthcoming 3.3.5 release. > And especially because and it will not be in previous versions people cannot > explicitly switch to it in their configs and be confident it will always be > there, -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
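Until the change proposed above lands, one workaround implied by the log is to configure an explicit provider chain that omits the v1 environment-variable provider. A hypothetical core-site.xml fragment (the property name is the standard S3A one; the provider list shown is an example drawn from the log above, not a recommendation for any particular deployment):

```xml
<!-- Illustrative core-site.xml fragment: an explicit S3A credential provider
     chain without com.amazonaws.auth.EnvironmentVariableCredentialsProvider,
     so the SDK V1 deprecation warning is not triggered. Adjust to taste. -->
<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>
    org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider,
    org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider,
    org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider
  </value>
</property>
```

As the issue notes, though, the point of the proposed fix is precisely that most users should not have to override the default chain just to silence the warning.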
[GitHub] [hadoop] steveloughran commented on pull request #4973: HADOOP-18481. Don't warn on EnvironmentCredentialsProvider.
steveloughran commented on PR #4973: URL: https://github.com/apache/hadoop/pull/4973#issuecomment-1277893256 and you tested it against an s3 store again, right?
[jira] [Updated] (HADOOP-17563) Update Bouncy Castle to 1.68 or later
[ https://issues.apache.org/jira/browse/HADOOP-17563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-17563: Description: -Bouncy Castle 1.60 has Hash Collision Vulnerability. Let's update to 1.68.- Bouncy Castle 1.60 has the following vulnerabilities. Let's update to 1.68. * [https://nvd.nist.gov/vuln/detail/CVE-2020-26939] * [https://nvd.nist.gov/vuln/detail/CVE-2020-28052] * [https://nvd.nist.gov/vuln/detail/CVE-2020-15522] for anyone backporting this, note that recent bouncy castle jars are incompatible with older versions of asm.jar, and so older versions of spark. was: -Bouncy Castle 1.60 has Hash Collision Vulnerability. Let's update to 1.68.- Bouncy Castle 1.60 has the following vulnerabilities. Let's update to 1.68. * [https://nvd.nist.gov/vuln/detail/CVE-2020-26939] * [https://nvd.nist.gov/vuln/detail/CVE-2020-28052] * [https://nvd.nist.gov/vuln/detail/CVE-2020-15522] > Update Bouncy Castle to 1.68 or later > - > > Key: HADOOP-17563 > URL: https://issues.apache.org/jira/browse/HADOOP-17563 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.3.1 >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 3h > Remaining Estimate: 0h > > -Bouncy Castle 1.60 has Hash Collision Vulnerability. Let's update to 1.68.- > Bouncy Castle 1.60 has the following vulnerabilities. Let's update to 1.68. > * [https://nvd.nist.gov/vuln/detail/CVE-2020-26939] > * [https://nvd.nist.gov/vuln/detail/CVE-2020-28052] > * [https://nvd.nist.gov/vuln/detail/CVE-2020-15522] > for anyone backporting this, note that recent bouncy castle jars are > incompatible with older versions of asm.jar, and so older versions of spark. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17563) Update Bouncy Castle to 1.68 or later
[ https://issues.apache.org/jira/browse/HADOOP-17563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617166#comment-17617166 ] ASF GitHub Bot commented on HADOOP-17563: - steveloughran opened a new pull request, #5015: URL: https://github.com/apache/hadoop/pull/5015 Contributed by PJ Fanning ### Description of PR ### How was this patch tested? ### For code changes: - [X] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [X] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [X] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? > Update Bouncy Castle to 1.68 or later > - > > Key: HADOOP-17563 > URL: https://issues.apache.org/jira/browse/HADOOP-17563 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.3.1 >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 3h > Remaining Estimate: 0h > > -Bouncy Castle 1.60 has Hash Collision Vulnerability. Let's update to 1.68.- > Bouncy Castle 1.60 has the following vulnerabilities. Let's update to 1.68. > * [https://nvd.nist.gov/vuln/detail/CVE-2020-26939] > * [https://nvd.nist.gov/vuln/detail/CVE-2020-28052] > * [https://nvd.nist.gov/vuln/detail/CVE-2020-15522] -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran opened a new pull request, #5015: HADOOP-17563. Upgrade BouncyCastle to 1.68 (#3980)
steveloughran opened a new pull request, #5015: URL: https://github.com/apache/hadoop/pull/5015 Contributed by PJ Fanning ### Description of PR ### How was this patch tested? ### For code changes: - [X] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [X] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [X] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[jira] [Commented] (HADOOP-17563) Update Bouncy Castle to 1.68 or later
[ https://issues.apache.org/jira/browse/HADOOP-17563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617164#comment-17617164 ] Steve Loughran commented on HADOOP-17563: - trunk is at 1.68. given spark is updated, should we reapply this patch to 3.3. and 3.3.5? > Update Bouncy Castle to 1.68 or later > - > > Key: HADOOP-17563 > URL: https://issues.apache.org/jira/browse/HADOOP-17563 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.3.1 >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 3h > Remaining Estimate: 0h > > -Bouncy Castle 1.60 has Hash Collision Vulnerability. Let's update to 1.68.- > Bouncy Castle 1.60 has the following vulnerabilities. Let's update to 1.68. > * [https://nvd.nist.gov/vuln/detail/CVE-2020-26939] > * [https://nvd.nist.gov/vuln/detail/CVE-2020-28052] > * [https://nvd.nist.gov/vuln/detail/CVE-2020-15522] -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17563) Update Bouncy Castle to 1.68 or later
[ https://issues.apache.org/jira/browse/HADOOP-17563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-17563: Fix Version/s: 3.4.0 > Update Bouncy Castle to 1.68 or later > - > > Key: HADOOP-17563 > URL: https://issues.apache.org/jira/browse/HADOOP-17563 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.3.1 >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 3h > Remaining Estimate: 0h > > -Bouncy Castle 1.60 has Hash Collision Vulnerability. Let's update to 1.68.- > Bouncy Castle 1.60 has the following vulnerabilities. Let's update to 1.68. > * [https://nvd.nist.gov/vuln/detail/CVE-2020-26939] > * [https://nvd.nist.gov/vuln/detail/CVE-2020-28052] > * [https://nvd.nist.gov/vuln/detail/CVE-2020-15522]
[jira] [Commented] (HADOOP-18493) update jackson-databind 2.12.7.1 due to CVE fixes
[ https://issues.apache.org/jira/browse/HADOOP-18493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617154#comment-17617154 ] ASF GitHub Bot commented on HADOOP-18493: - ayushtkn commented on PR #5011: URL: https://github.com/apache/hadoop/pull/5011#issuecomment-1277829605 Should be good then, can you have a PR for 3.3 branch as well, feels good enough to include in the upcoming 3.3.x release > update jackson-databind 2.12.7.1 due to CVE fixes > - > > Key: HADOOP-18493 > URL: https://issues.apache.org/jira/browse/HADOOP-18493 > Project: Hadoop Common > Issue Type: Improvement >Reporter: PJ Fanning >Priority: Major > Labels: pull-request-available > > * [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-42003] > * [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-42004] > * both fixes have been backported (the CVEs themselves need to be updated to > reflect this) > * [https://github.com/FasterXML/jackson-databind/pull/3622] -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
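The backport being requested amounts to pinning the jackson-databind micro-release that carries the CVE-2022-42003/42004 fixes. A hypothetical pom.xml fragment (in Hadoop itself the version is managed centrally in hadoop-project/pom.xml rather than per-module, so the real change is a property bump):

```xml
<!-- Illustrative fragment: move jackson-databind to the 2.12.7.1 micro-patch
     containing the backported deep-nesting CVE fixes. -->
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-databind</artifactId>
  <version>2.12.7.1</version>
</dependency>
```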
[GitHub] [hadoop] ayushtkn commented on pull request #5011: HADOOP-18493: upgrade jackson-databind to 2.12.7.1
ayushtkn commented on PR #5011: URL: https://github.com/apache/hadoop/pull/5011#issuecomment-1277829605 Should be good then, can you have a PR for 3.3 branch as well, feels good enough to include in the upcoming 3.3.x release
[GitHub] [hadoop] slfan1989 commented on pull request #4915: YARN-11294. [Federation] Router Support DelegationToken store/update/remove Token With MemoryStateStore.
slfan1989 commented on PR #4915: URL: https://github.com/apache/hadoop/pull/4915#issuecomment-1277800019 @goiri Can you help to merge this pr into trunk branch? I will follow up with [YARN-11295](https://issues.apache.org/jira/browse/YARN-11295), thank you very much!
[GitHub] [hadoop] slfan1989 commented on pull request #5005: YARN-11342. [Federation] Refactor submitApplication Use FederationActionRetry.
slfan1989 commented on PR #5005: URL: https://github.com/apache/hadoop/pull/5005#issuecomment-1277798802 @goiri Can you help review this pr? Thank you very much!
[GitHub] [hadoop] slfan1989 commented on pull request #5009: YARN-11327. [Federation] Refactoring Yarn Router's Node Web Page.
slfan1989 commented on PR #5009: URL: https://github.com/apache/hadoop/pull/5009#issuecomment-1277795345 @goiri Can you help review the code again? Thank you very much! The code modification is as follows: 1. Remove the `not enable` prompt on the Federation page and the Node page, and only keep the `not enable` prompt on the About page. 2. The Title of SubCluster's Metric will be displayed. (screenshot: https://user-images.githubusercontent.com/55643692/195638982-7facb1c8-fbf7-4818-9d97-5915b6768093.png)
[GitHub] [hadoop] hadoop-yetus commented on pull request #5005: YARN-11342. [Federation] Refactor submitApplication Use FederationActionRetry.
hadoop-yetus commented on PR #5005: URL: https://github.com/apache/hadoop/pull/5005#issuecomment-1277729746

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 1m 0s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 15m 15s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 28m 31s | | trunk passed |
| +1 :green_heart: | compile | 4m 6s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 3m 19s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 14s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 9s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 51s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 35s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 4m 16s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 9s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 22s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 47s | | the patch passed |
| +1 :green_heart: | compile | 3m 56s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 3m 56s | | the patch passed |
| +1 :green_heart: | compile | 3m 14s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 3m 14s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 4s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5005/2/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) |
| +1 :green_heart: | mvnsite | 1m 54s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 31s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 22s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 4m 23s | | the patch passed |
| +1 :green_heart: | shadedclient | 23m 0s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| +1 :green_heart: | unit | 2m 54s | | hadoop-yarn-server-common in the patch passed. |
| +1 :green_heart: | unit | 102m 1s | | hadoop-yarn-server-resourcemanager in the patch passed. |
| -1 :x: | unit | 4m 48s | [/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5005/2/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt) | hadoop-yarn-server-router in the patch passed. |
| +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. |
| | | 241m 32s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.yarn.server.router.clientrm.TestFederationClientInterceptorRetry |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5005/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5005 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux d14b4a3b5f49 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk /