[GitHub] [hadoop] slfan1989 opened a new pull request, #4594: YARN-6572. Refactoring Router services to use common util classes for pipeline creations.
slfan1989 opened a new pull request, #4594:
URL: https://github.com/apache/hadoop/pull/4594

Jira: YARN-6572. Refactoring Router services to use common util classes for pipeline creations.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-12007) GzipCodec native CodecPool leaks memory
[ https://issues.apache.org/jira/browse/HADOOP-12007?focusedWorklogId=792993&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792993 ]

ASF GitHub Bot logged work on HADOOP-12007:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 20/Jul/22 02:05
Start Date: 20/Jul/22 02:05
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on PR #4585:
URL: https://github.com/apache/hadoop/pull/4585#issuecomment-1189722684

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 57s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 38m 2s | | trunk passed |
| +1 :green_heart: | compile | 23m 14s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | compile | 20m 48s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 47s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 12s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 49s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javadoc | 1m 24s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 12s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 32s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 7s | | the patch passed |
| +1 :green_heart: | compile | 22m 29s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javac | 22m 29s | | the patch passed |
| +1 :green_heart: | compile | 20m 42s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 20m 42s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 43s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4585/3/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 6 new + 12 unchanged - 0 fixed = 18 total (was 12) |
| +1 :green_heart: | mvnsite | 2m 13s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 36s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javadoc | 1m 22s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 8s | | the patch passed |
| +1 :green_heart: | shadedclient | 23m 30s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 18m 41s | | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 1m 35s | | The patch does not generate ASF License warnings. |
| | | 216m 49s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4585/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4585 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux b1c429718345 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / a70860806c05030116508002cd80d551e29379c7 |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4585/3/testReport/ |
| Max. process+thread count | 1299 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4585/3/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Work logged] (HADOOP-18079) Upgrade Netty to 4.1.77.Final
[ https://issues.apache.org/jira/browse/HADOOP-18079?focusedWorklogId=792962&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792962 ]

ASF GitHub Bot logged work on HADOOP-18079:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 19/Jul/22 23:48
Start Date: 19/Jul/22 23:48
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on PR #4593:
URL: https://github.com/apache/hadoop/pull/4593#issuecomment-1189656061

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 16m 21s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ branch-3.2 Compile Tests _ |
| +1 :green_heart: | mvninstall | 32m 0s | | branch-3.2 passed |
| +1 :green_heart: | compile | 0m 25s | | branch-3.2 passed |
| +1 :green_heart: | mvnsite | 0m 30s | | branch-3.2 passed |
| +1 :green_heart: | javadoc | 0m 36s | | branch-3.2 passed |
| +1 :green_heart: | shadedclient | 48m 27s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 17s | | the patch passed |
| +1 :green_heart: | compile | 0m 15s | | the patch passed |
| +1 :green_heart: | javac | 0m 15s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 0m 18s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 16s | | the patch passed |
| +1 :green_heart: | shadedclient | 17m 29s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 21s | | hadoop-project in the patch passed. |
| +1 :green_heart: | asflicense | 0m 39s | | The patch does not generate ASF License warnings. |
| | | 85m 58s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4593/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4593 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint |
| uname | Linux b91fd4ea249c 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-3.2 / 0372229e1cabf283cb2dde623162c02fee77fcaa |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4593/1/testReport/ |
| Max. process+thread count | 340 (vs. ulimit of 5500) |
| modules | C: hadoop-project U: hadoop-project |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4593/1/console |
| versions | git=2.17.1 maven=3.6.0 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

Issue Time Tracking
-------------------
Worklog Id: (was: 792962)
Time Spent: 4h 50m (was: 4h 40m)

> Upgrade Netty to 4.1.77.Final
> -----------------------------
>
> Key: HADOOP-18079
> URL: https://issues.apache.org/jira/browse/HADOOP-18079
> Project: Hadoop Common
> Issue Type: Bug
> Components: build
> Affects Versions: 3.3.3
> Reporter: Renukaprasad C
> Assignee: Wei-Chiu Chuang
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0
>
> Time Spent: 4h 50m
> Remaining Estimate: 0h
>
> h4. Netty version 4.1.71 fixed several CVEs:
> CVE-2019-20444,
> CVE-2019-20445,
> CVE-2022-24823.
> Upgrade to the latest version.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
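For context, a dependency upgrade like this is typically a one-line version bump in the `hadoop-project/pom.xml` dependency management. A hypothetical sketch of the shape of such a change (the property name is an assumption for illustration, not taken from the actual patch):

```xml
<!-- hadoop-project/pom.xml (sketch; property name assumed for illustration) -->
<properties>
  <!-- Bump the managed Netty 4 version to pick up the CVE fixes. -->
  <netty4.version>4.1.77.Final</netty4.version>
</properties>
```

All modules that inherit the managed dependency then build against the new Netty release without further per-module changes.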
[jira] [Work logged] (HADOOP-18079) Upgrade Netty to 4.1.77.Final
[ https://issues.apache.org/jira/browse/HADOOP-18079?focusedWorklogId=792961&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792961 ]

ASF GitHub Bot logged work on HADOOP-18079:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 19/Jul/22 23:46
Start Date: 19/Jul/22 23:46
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on PR #4592:
URL: https://github.com/apache/hadoop/pull/4592#issuecomment-1189655458

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 7m 59s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ branch-3.3 Compile Tests _ |
| +0 :ok: | mvndep | 14m 44s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 23m 46s | | branch-3.3 passed |
| +1 :green_heart: | compile | 17m 54s | | branch-3.3 passed |
| +1 :green_heart: | mvnsite | 20m 19s | | branch-3.3 passed |
| +1 :green_heart: | javadoc | 7m 21s | | branch-3.3 passed |
| +1 :green_heart: | shadedclient | 30m 39s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 1m 9s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 23m 13s | | the patch passed |
| +1 :green_heart: | compile | 22m 21s | | the patch passed |
| +1 :green_heart: | javac | 22m 21s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 28m 15s | | the patch passed |
| +1 :green_heart: | shellcheck | 0m 0s | | No new issues. |
| +1 :green_heart: | javadoc | 8m 31s | | the patch passed |
| -1 :x: | shadedclient | 29m 46s | | patch has errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 0m 44s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4592/1/artifact/out/patch-unit-root.txt) | root in the patch failed. |
| +0 :ok: | asflicense | 0m 46s | | ASF License check generated no output? |
| | | 224m 32s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4592/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4592 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs |
| uname | Linux 8b2ba4fc14f8 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-3.3 / c545341785186a9a3419396c0c1d843f19830b81 |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4592/1/testReport/ |
| Max. process+thread count | 560 (vs. ulimit of 5500) |
| modules | C: hadoop-project . U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4592/1/console |
| versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

Issue Time Tracking
-------------------
Worklog Id: (was: 792961)
Time Spent: 4h 40m (was: 4.5h)

> Upgrade Netty to 4.1.77.Final
> -----------------------------
>
> Key: HADOOP-18079
> URL: https://issues.apache.org/jira/browse/HADOOP-18079
> Project: Hadoop Common
> Issue Type: Bug
> Components: build
> Affects Versions: 3.3.3
> Reporter: Renukaprasad C
> Assignee: Wei-Chiu Chuang
> Priority: Major
> Labels: pull-request-available
[jira] [Work logged] (HADOOP-12007) GzipCodec native CodecPool leaks memory
[ https://issues.apache.org/jira/browse/HADOOP-12007?focusedWorklogId=792954&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792954 ]

ASF GitHub Bot logged work on HADOOP-12007:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 19/Jul/22 23:20
Start Date: 19/Jul/22 23:20
Worklog Time Spent: 10m

Work Description: goiri commented on code in PR #4585:
URL: https://github.com/apache/hadoop/pull/4585#discussion_r925033436

##########
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodecPool.java:
##########
@@ -17,20 +17,22 @@
  */
 package org.apache.hadoop.io.compress;

-import static org.junit.Assert.assertEquals;
-
-import java.util.concurrent.Callable;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Executors;
-import java.util.concurrent.LinkedBlockingDeque;
-import java.util.concurrent.TimeUnit;
-
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.compress.zlib.BuiltInGzipCompressor;
+import org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor;
+import org.apache.hadoop.test.LambdaTestUtils;
 import org.junit.Before;
 import org.junit.Test;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.OutputStream;
 import java.util.HashSet;
+import java.util.Random;
 import java.util.Set;
+import java.util.concurrent.*;

Review Comment:
   Avoid *

Issue Time Tracking
-------------------
Worklog Id: (was: 792954)
Time Spent: 1h 40m (was: 1.5h)

> GzipCodec native CodecPool leaks memory
> ---------------------------------------
>
> Key: HADOOP-12007
> URL: https://issues.apache.org/jira/browse/HADOOP-12007
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.7.0
> Reporter: Yejun Yang
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h 40m
> Remaining Estimate: 0h
>
> org/apache/hadoop/io/compress/GzipCodec.java calls
> CompressionCodec.Util.createOutputStreamWithCodecPool to use CodecPool. But
> compressor objects are actually never returned to the pool, which causes a
> memory leak.
> HADOOP-10591 uses CompressionOutputStream.close() to return the Compressor
> object to the pool. But CompressionCodec.Util.createOutputStreamWithCodecPool
> actually returns a CompressorStream which overrides close().
> This causes CodecPool.returnCompressor to never be called. In my log file I
> can see lots of "Got brand-new compressor [.gz]" but no "Got recycled
> compressor".

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
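The leak mechanism described in the issue — a pooled compressor is only recycled if close() returns it, and a subclass that overrides close() skips that step — can be sketched with a self-contained toy model. These are illustrative classes, not Hadoop's actual CodecPool, CompressorStream, or GzipCodec:

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of the leak: a pooled resource is recycled only when close()
// pushes it back, so a subclass overriding close() without doing so leaks it.
public class CodecPoolLeakSketch {
    // Stand-in for CodecPool's free list.
    static final Deque<Object> POOL = new ArrayDeque<>();

    // Stream that returns its "compressor" to the pool on close()
    // (the behavior HADOOP-10591 intended).
    static class ReturningStream extends FilterOutputStream {
        final Object compressor;
        ReturningStream(Object compressor) {
            super(new ByteArrayOutputStream());
            this.compressor = compressor;
        }
        @Override public void close() throws IOException {
            super.close();
            POOL.push(compressor); // recycle the compressor
        }
    }

    // Subclass that overrides close() and forgets to recycle — the analogue
    // of the CompressorStream override described in the bug report.
    static class LeakyStream extends ReturningStream {
        LeakyStream(Object compressor) { super(compressor); }
        @Override public void close() throws IOException {
            out.close(); // closes the stream but never returns the compressor
        }
    }

    public static void main(String[] args) throws IOException {
        new ReturningStream(new Object()).close();
        System.out.println("after ReturningStream.close(): pool size = " + POOL.size()); // 1
        POOL.clear();
        new LeakyStream(new Object()).close();
        System.out.println("after LeakyStream.close(): pool size = " + POOL.size()); // 0 — leaked
    }
}
```

In the real codebase the symptom is exactly what the reporter's logs show: every request allocates a "brand-new compressor" and none are ever recycled.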
[jira] [Work logged] (HADOOP-12007) GzipCodec native CodecPool leaks memory
[ https://issues.apache.org/jira/browse/HADOOP-12007?focusedWorklogId=792953&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792953 ]

ASF GitHub Bot logged work on HADOOP-12007:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 19/Jul/22 23:20
Start Date: 19/Jul/22 23:20
Worklog Time Spent: 10m

Work Description: goiri commented on code in PR #4585:
URL: https://github.com/apache/hadoop/pull/4585#discussion_r925033253

##########
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodecPool.java:
##########
@@ -189,4 +198,54 @@ public void testDecompressorNotReturnSameInstance() {
       CodecPool.returnDecompressor(decompressor);
     }
   }
+
+  @Test(timeout = 1)
+  public void testDoNotPoolCompressorNotUseableAfterReturn() throws IOException {
+
+    final GzipCodec gzipCodec = new GzipCodec();
+    gzipCodec.setConf(new Configuration());
+
+    // BuiltInGzipCompressor is an explicit example of a Compressor with the @DoNotPool annotation
+    final Compressor compressor = new BuiltInGzipCompressor(new Configuration());
+    CodecPool.returnCompressor(compressor);
+
+    try (CompressionOutputStream outputStream =
+        gzipCodec.createOutputStream(new ByteArrayOutputStream(), compressor)) {
+      outputStream.write(1);
+      fail("Compressor from Codec with @DoNotPool should not be useable after returning to CodecPool");
+    } catch (NullPointerException exception) {

Review Comment:
   Could we add a check in the place where we would encounter the null and trigger a more friendly exception from there? Something like an already closed exception?

Issue Time Tracking
-------------------
Worklog Id: (was: 792953)
Time Spent: 1.5h (was: 1h 20m)

> GzipCodec native CodecPool leaks memory
> ---------------------------------------
>
> Key: HADOOP-12007
> URL: https://issues.apache.org/jira/browse/HADOOP-12007
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.7.0
> Reporter: Yejun Yang
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1.5h
> Remaining Estimate: 0h

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
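The reviewer's suggestion — fail fast with a descriptive exception instead of a NullPointerException when a returned @DoNotPool compressor is reused — could look roughly like the following sketch. The names and structure are assumptions for illustration, not the actual Hadoop patch:

```java
import java.io.IOException;

// Sketch of the "already closed" check a reviewer suggested: track a closed
// flag when the pool discards a @DoNotPool compressor, and throw a
// descriptive exception on later use instead of an NPE.
public class ClosedCheckSketch {
    static class Compressor {
        private boolean closed = false;

        // Called when the pool discards the compressor (illustrative stand-in
        // for what CodecPool.returnCompressor does to @DoNotPool compressors).
        void end() {
            closed = true;
        }

        int compress(byte[] b) throws IOException {
            if (closed) {
                // Friendlier than the NullPointerException the test currently observes.
                throw new IOException("Compressor was returned to the pool and is already closed");
            }
            return b.length; // placeholder for real compression work
        }
    }

    public static void main(String[] args) {
        Compressor c = new Compressor();
        c.end(); // simulate returning a @DoNotPool compressor
        try {
            c.compress(new byte[4]);
        } catch (IOException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```

The test in the patch could then expect this specific exception type and message rather than catching a bare NullPointerException.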
[GitHub] [hadoop] goiri commented on a diff in pull request #4585: HADOOP-12007. GzipCodec native CodecPool leaks memory
goiri commented on code in PR #4585: URL: https://github.com/apache/hadoop/pull/4585#discussion_r925033436 ## hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodecPool.java: ## @@ -17,20 +17,22 @@ */ package org.apache.hadoop.io.compress; -import static org.junit.Assert.assertEquals; - -import java.util.concurrent.Callable; -import java.util.concurrent.ExecutorService; -import java.util.concurrent.Executors; -import java.util.concurrent.LinkedBlockingDeque; -import java.util.concurrent.TimeUnit; - import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.io.compress.zlib.BuiltInGzipCompressor; +import org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor; +import org.apache.hadoop.test.LambdaTestUtils; import org.junit.Before; import org.junit.Test; +import java.io.ByteArrayInputStream; +import java.io.ByteArrayOutputStream; +import java.io.OutputStream; import java.util.HashSet; +import java.util.Random; import java.util.Set; +import java.util.concurrent.*; Review Comment: Avoid * -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] goiri commented on a diff in pull request #4585: HADOOP-12007. GzipCodec native CodecPool leaks memory
goiri commented on code in PR #4585:
URL: https://github.com/apache/hadoop/pull/4585#discussion_r925033253

## hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodecPool.java:
@@ -189,4 +198,54 @@ public void testDecompressorNotReturnSameInstance() {
       CodecPool.returnDecompressor(decompressor);
     }
   }
+
+  @Test(timeout = 1)
+  public void testDoNotPoolCompressorNotUseableAfterReturn() throws IOException {
+
+    final GzipCodec gzipCodec = new GzipCodec();
+    gzipCodec.setConf(new Configuration());
+
+    // BuiltInGzipCompressor is an explicit example of a Compressor with the @DoNotPool annotation
+    final Compressor compressor = new BuiltInGzipCompressor(new Configuration());
+    CodecPool.returnCompressor(compressor);
+
+    try (CompressionOutputStream outputStream =
+        gzipCodec.createOutputStream(new ByteArrayOutputStream(), compressor)) {
+      outputStream.write(1);
+      fail("Compressor from Codec with @DoNotPool should not be useable after returning to CodecPool");
+    } catch (NullPointerException exception) {

Review Comment:
  Could we add a check in the place where we would encounter the null and trigger a more friendly exception from there? Something like an already closed exception?
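One way to realize the "already closed exception" suggestion above is an explicit state flag checked before each operation. The class and message below are a hypothetical sketch, not the actual Hadoop patch:

```java
// Hypothetical sketch of the reviewer's suggestion: track the returned/closed
// state explicitly and fail with a friendlier exception than the bare
// NullPointerException the current test has to assert on.
public class ClosedGuardSketch {
    static class PooledCompressor {
        private boolean returnedToPool = false;

        void returnToPool() { returnedToPool = true; }

        int compress(byte[] buf) {
            if (returnedToPool) {
                // Clearer than NullPointerException("Deflater has been closed")
                throw new IllegalStateException("Compressor already returned to CodecPool");
            }
            return buf.length; // placeholder for real deflate work
        }
    }

    public static void main(String[] args) {
        PooledCompressor c = new PooledCompressor();
        c.returnToPool();
        try {
            c.compress(new byte[8]);
            System.out.println("no exception");
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

With this kind of guard the test could assert on a deliberate IllegalStateException instead of an incidental NullPointerException.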
[jira] [Work logged] (HADOOP-12007) GzipCodec native CodecPool leaks memory
[ https://issues.apache.org/jira/browse/HADOOP-12007?focusedWorklogId=792946&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792946 ]

ASF GitHub Bot logged work on HADOOP-12007:
---
Author: ASF GitHub Bot
Created on: 19/Jul/22 22:29
Start Date: 19/Jul/22 22:29
Worklog Time Spent: 10m

Work Description: kevins-29 commented on code in PR #4585:
URL: https://github.com/apache/hadoop/pull/4585#discussion_r925006154

## hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodecPool.java:
@@ -189,4 +198,54 @@ public void testDecompressorNotReturnSameInstance() {
       CodecPool.returnDecompressor(decompressor);
     }
   }
+
+  @Test(timeout = 1)
+  public void testDoNotPoolCompressorNotUseableAfterReturn() throws IOException {
+
+    final GzipCodec gzipCodec = new GzipCodec();
+    gzipCodec.setConf(new Configuration());
+
+    // BuiltInGzipCompressor is an explicit example of a Compressor with the @DoNotPool annotation
+    final Compressor compressor = new BuiltInGzipCompressor(new Configuration());
+    CodecPool.returnCompressor(compressor);
+
+    try (CompressionOutputStream outputStream =
+        gzipCodec.createOutputStream(new ByteArrayOutputStream(), compressor)) {
+      outputStream.write(1);
+      fail("Compressor from Codec with @DoNotPool should not be useable after returning to CodecPool");
+    } catch (NullPointerException exception) {

Review Comment:
  Unfortunately I couldn't find another way to test that the underlying Compressor/Decompressor has been closed. There is `finished` but that is set by `reset()` and has different semantics.
## hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodecPool.java:
@@ -189,4 +198,54 @@ public void testDecompressorNotReturnSameInstance() {
       CodecPool.returnDecompressor(decompressor);
     }
   }
+
+  @Test(timeout = 1)
+  public void testDoNotPoolCompressorNotUseableAfterReturn() throws IOException {
+
+    final GzipCodec gzipCodec = new GzipCodec();
+    gzipCodec.setConf(new Configuration());
+
+    // BuiltInGzipCompressor is an explicit example of a Compressor with the @DoNotPool annotation
+    final Compressor compressor = new BuiltInGzipCompressor(new Configuration());
+    CodecPool.returnCompressor(compressor);
+
+    try (CompressionOutputStream outputStream =
+        gzipCodec.createOutputStream(new ByteArrayOutputStream(), compressor)) {
+      outputStream.write(1);

Review Comment:
  Thank you.

Issue Time Tracking
---
Worklog Id: (was: 792946) Time Spent: 1h 10m (was: 1h)
[jira] [Work logged] (HADOOP-12007) GzipCodec native CodecPool leaks memory
[ https://issues.apache.org/jira/browse/HADOOP-12007?focusedWorklogId=792947&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792947 ]

ASF GitHub Bot logged work on HADOOP-12007:
---
Author: ASF GitHub Bot
Created on: 19/Jul/22 22:29
Start Date: 19/Jul/22 22:29
Worklog Time Spent: 10m

Work Description: kevins-29 commented on code in PR #4585:
URL: https://github.com/apache/hadoop/pull/4585#discussion_r925006321

## hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodecPool.java:
@@ -189,4 +198,54 @@ public void testDecompressorNotReturnSameInstance() {
       CodecPool.returnDecompressor(decompressor);
     }
   }
+
+  @Test(timeout = 1)
+  public void testDoNotPoolCompressorNotUseableAfterReturn() throws IOException {
+
+    final GzipCodec gzipCodec = new GzipCodec();
+    gzipCodec.setConf(new Configuration());
+
+    // BuiltInGzipCompressor is an explicit example of a Compressor with the @DoNotPool annotation
+    final Compressor compressor = new BuiltInGzipCompressor(new Configuration());
+    CodecPool.returnCompressor(compressor);
+
+    try (CompressionOutputStream outputStream =
+        gzipCodec.createOutputStream(new ByteArrayOutputStream(), compressor)) {
+      outputStream.write(1);
+      fail("Compressor from Codec with @DoNotPool should not be useable after returning to CodecPool");
+    } catch (NullPointerException exception) {
+      Assert.assertEquals("Deflater has been closed", exception.getMessage());
+    }
+  }
+
+  @Test(timeout = 1)
+  public void testDoNotPoolDecompressorNotUseableAfterReturn() throws IOException {
+
+    final GzipCodec gzipCodec = new GzipCodec();
+    gzipCodec.setConf(new Configuration());
+
+    final Random random = new Random();
+    final byte[] bytes = new byte[1024];
+    random.nextBytes(bytes);
+
+    ByteArrayOutputStream baos = new ByteArrayOutputStream();
+    try (OutputStream outputStream = gzipCodec.createOutputStream(baos)) {
+      outputStream.write(bytes);
+    }
+
+    final byte[] gzipBytes = baos.toByteArray();
+    final ByteArrayInputStream bais = new ByteArrayInputStream(gzipBytes);
+
+    // BuiltInGzipDecompressor is an explicit example of a Decompressor with the @DoNotPool annotation
+    final Decompressor decompressor = new BuiltInGzipDecompressor();
+    CodecPool.returnDecompressor(decompressor);
+
+    try (CompressionInputStream inputStream =

Review Comment:
  Thank you

Issue Time Tracking
---
Worklog Id: (was: 792947) Time Spent: 1h 20m (was: 1h 10m)
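The "Deflater has been closed" message asserted in these tests is produced by java.util.zip itself: on common JDKs, using a Deflater after end() has released its native state fails inside Deflater.deflate(). A pure-JDK demonstration, with no Hadoop classes involved:

```java
import java.util.zip.Deflater;

// Shows where the message asserted in TestCodecPool originates: calling
// deflate() on a java.util.zip.Deflater after end() has freed its native
// resources throws NullPointerException("Deflater has been closed").
public class DeflaterClosedDemo {
    public static void main(String[] args) {
        Deflater deflater = new Deflater();
        deflater.end(); // analogous to the compressor being closed on return to the pool
        try {
            deflater.setInput(new byte[]{1, 2, 3});
            deflater.finish();
            deflater.deflate(new byte[64]);
            System.out.println("no exception");
        } catch (NullPointerException e) {
            System.out.println("NullPointerException: " + e.getMessage());
        }
    }
}
```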
[jira] [Work logged] (HADOOP-18079) Upgrade Netty to 4.1.77.Final
[ https://issues.apache.org/jira/browse/HADOOP-18079?focusedWorklogId=792945&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792945 ]

ASF GitHub Bot logged work on HADOOP-18079:
---
Author: ASF GitHub Bot
Created on: 19/Jul/22 22:20
Start Date: 19/Jul/22 22:20
Worklog Time Spent: 10m

Work Description: jojochuang opened a new pull request, #4593:
URL: https://github.com/apache/hadoop/pull/4593

Upgrade netty to address CVE-2019-20444, CVE-2019-20445 and CVE-2022-24823.

Contributed by Wei-Chiu Chuang

cherry-picked from #3977
(cherry picked from commit a55ace7bc0c173f609b51e46cb0d4d8bcda3d79d)
(cherry picked from commit c545341785186a9a3419396c0c1d843f19830b81)

### Description of PR

### How was this patch tested?

### For code changes:

- [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?

Issue Time Tracking
---
Worklog Id: (was: 792945) Time Spent: 4.5h (was: 4h 20m)

> Upgrade Netty to 4.1.77.Final
> ---
>
> Key: HADOOP-18079
> URL: https://issues.apache.org/jira/browse/HADOOP-18079
> Project: Hadoop Common
> Issue Type: Bug
> Components: build
> Affects Versions: 3.3.3
> Reporter: Renukaprasad C
> Assignee: Wei-Chiu Chuang
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0
>
> Time Spent: 4.5h
> Remaining Estimate: 0h
>
> h4. Netty version 4.1.71 has fixed some CVEs:
> CVE-2019-20444,
> CVE-2019-20445,
> CVE-2022-24823
> Upgrade to the latest version.
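For a downstream Maven project, pinning the whole Netty line to the patched release typically looks like the snippet below. This is an illustrative sketch only; Hadoop itself manages the Netty version through properties in its hadoop-project parent pom rather than a literal block like this:

```xml
<!-- Illustrative pin to the patched Netty line via the official netty-bom;
     not the actual Hadoop change, which edits hadoop-project/pom.xml. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.netty</groupId>
      <artifactId>netty-bom</artifactId>
      <version>4.1.77.Final</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Importing the BOM keeps every netty-* artifact on the same patched version, which is the point of a CVE-driven upgrade like this one.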
[jira] [Work logged] (HADOOP-12007) GzipCodec native CodecPool leaks memory
[ https://issues.apache.org/jira/browse/HADOOP-12007?focusedWorklogId=792927&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792927 ]

ASF GitHub Bot logged work on HADOOP-12007:
---
Author: ASF GitHub Bot
Created on: 19/Jul/22 21:39
Start Date: 19/Jul/22 21:39
Worklog Time Spent: 10m

Work Description: goiri commented on code in PR #4585:
URL: https://github.com/apache/hadoop/pull/4585#discussion_r924977086

## hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodecPool.java:
@@ -189,4 +198,54 @@ public void testDecompressorNotReturnSameInstance() {
       CodecPool.returnDecompressor(decompressor);
     }
   }
+
+  @Test(timeout = 1)
+  public void testDoNotPoolCompressorNotUseableAfterReturn() throws IOException {
+
+    final GzipCodec gzipCodec = new GzipCodec();
+    gzipCodec.setConf(new Configuration());
+
+    // BuiltInGzipCompressor is an explicit example of a Compressor with the @DoNotPool annotation
+    final Compressor compressor = new BuiltInGzipCompressor(new Configuration());
+    CodecPool.returnCompressor(compressor);
+
+    try (CompressionOutputStream outputStream =
+        gzipCodec.createOutputStream(new ByteArrayOutputStream(), compressor)) {
+      outputStream.write(1);

Review Comment:
  LambdaTestUtils#intercept

## hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodecPool.java:
@@ -189,4 +198,54 @@ public void testDecompressorNotReturnSameInstance() {
       CodecPool.returnDecompressor(decompressor);
     }
   }
+
+  @Test(timeout = 1)
+  public void testDoNotPoolCompressorNotUseableAfterReturn() throws IOException {
+
+    final GzipCodec gzipCodec = new GzipCodec();
+    gzipCodec.setConf(new Configuration());
+
+    // BuiltInGzipCompressor is an explicit example of a Compressor with the @DoNotPool annotation
+    final Compressor compressor = new BuiltInGzipCompressor(new Configuration());
+    CodecPool.returnCompressor(compressor);
+
+    try (CompressionOutputStream outputStream =
+        gzipCodec.createOutputStream(new ByteArrayOutputStream(), compressor)) {
+      outputStream.write(1);
+      fail("Compressor from Codec with @DoNotPool should not be useable after returning to CodecPool");
+    } catch (NullPointerException exception) {

Review Comment:
  NPE is the best we can do?

## hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodecPool.java:
@@ -189,4 +198,54 @@ public void testDecompressorNotReturnSameInstance() {
       CodecPool.returnDecompressor(decompressor);
     }
   }
+
+  @Test(timeout = 1)
+  public void testDoNotPoolCompressorNotUseableAfterReturn() throws IOException {
+
+    final GzipCodec gzipCodec = new GzipCodec();
+    gzipCodec.setConf(new Configuration());
+
+    // BuiltInGzipCompressor is an explicit example of a Compressor with the @DoNotPool annotation
+    final Compressor compressor = new BuiltInGzipCompressor(new Configuration());
+    CodecPool.returnCompressor(compressor);
+
+    try (CompressionOutputStream outputStream =
+        gzipCodec.createOutputStream(new ByteArrayOutputStream(), compressor)) {
+      outputStream.write(1);
+      fail("Compressor from Codec with @DoNotPool should not be useable after returning to CodecPool");
+    } catch (NullPointerException exception) {
+      Assert.assertEquals("Deflater has been closed", exception.getMessage());
+    }
+  }
+
+  @Test(timeout = 1)
+  public void testDoNotPoolDecompressorNotUseableAfterReturn() throws IOException {
+
+    final GzipCodec gzipCodec = new GzipCodec();
+    gzipCodec.setConf(new Configuration());
+
+    final Random random = new Random();
+    final byte[] bytes = new byte[1024];
+    random.nextBytes(bytes);
+
+    ByteArrayOutputStream baos = new ByteArrayOutputStream();
+    try (OutputStream outputStream = gzipCodec.createOutputStream(baos)) {
+      outputStream.write(bytes);
+    }
+
+    final byte[] gzipBytes = baos.toByteArray();
+    final ByteArrayInputStream bais = new ByteArrayInputStream(gzipBytes);
+
+    // BuiltInGzipDecompressor is an explicit example of a Decompressor with the @DoNotPool annotation
+    final Decompressor decompressor = new BuiltInGzipDecompressor();
+    CodecPool.returnDecompressor(decompressor);
+
+    try (CompressionInputStream inputStream =

Review Comment:
  ```
  try (CompressionInputStream inputStream =
      gzipCodec.createInputStream(bais, decompressor)) {
    LambdaTestUtils.intercept(
        NullPointerException.class,
        "Decompressor from Codec with @DoNotPool should not be useable after returning to CodecPool",
        () -> inputStream.read());
  }
  ```

Issue Time Tracking
---
Worklog Id:
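The intercept pattern suggested in the review runs a lambda, requires that it throws the expected exception type, and returns the exception for further assertions. The minimal stand-in below only mimics the shape of org.apache.hadoop.test.LambdaTestUtils.intercept and is not Hadoop's implementation:

```java
import java.util.concurrent.Callable;

// Minimal stand-in for the intercept(...) pattern suggested in the review:
// evaluate a lambda, fail unless it throws the expected exception type,
// and return the exception so the caller can inspect it further.
public class InterceptSketch {
    static <E extends Throwable, T> E intercept(Class<E> clazz, String message, Callable<T> eval)
            throws Exception {
        try {
            eval.call();
        } catch (Throwable t) {
            if (clazz.isInstance(t)) {
                return clazz.cast(t); // expected exception: hand it back
            }
            throw new AssertionError("wrong exception type: " + t, t);
        }
        throw new AssertionError(message); // lambda completed without throwing
    }

    public static void main(String[] args) throws Exception {
        NullPointerException npe = intercept(
                NullPointerException.class,
                "lambda should have thrown",
                () -> { throw new NullPointerException("Deflater has been closed"); });
        System.out.println("intercepted: " + npe.getMessage());
    }
}
```

Compared with a fail()-plus-catch block, this keeps the expected exception type, the failure message, and the action under test in one expression.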
[GitHub] [hadoop] hadoop-yetus commented on pull request #4576: HDFS-16667. Use malloc for buffer allocation in uriparser2
hadoop-yetus commented on PR #4576:
URL: https://github.com/apache/hadoop/pull/4576#issuecomment-1189524760

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 1m 23s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 28m 20s | | trunk passed |
| +1 :green_heart: | compile | 5m 9s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | compile | 4m 57s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | mvnsite | 0m 45s | | trunk passed |
| +1 :green_heart: | shadedclient | 64m 44s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 22s | | the patch passed |
| +1 :green_heart: | compile | 4m 41s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | cc | 4m 41s | | the patch passed |
| +1 :green_heart: | golang | 4m 41s | | the patch passed |
| +1 :green_heart: | javac | 4m 41s | | the patch passed |
| +1 :green_heart: | compile | 4m 41s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | cc | 4m 41s | | the patch passed |
| +1 :green_heart: | golang | 4m 41s | | the patch passed |
| +1 :green_heart: | javac | 4m 41s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 0m 24s | | the patch passed |
| +1 :green_heart: | shadedclient | 25m 32s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 97m 6s | | hadoop-hdfs-native-client in the patch passed. |
| +1 :green_heart: | asflicense | 0m 40s | | The patch does not generate ASF License warnings. |
| | | 202m 58s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4576/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4576 |
| Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell detsecrets golang |
| uname | Linux 232d09e8622b 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / df2323ca40fb4579c6be56bee9b8a780c20243d5 |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4576/3/testReport/ |
| Max. process+thread count | 606 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4576/3/console |
| versions | git=2.25.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Work logged] (HADOOP-18333) hadoop-client-runtime impact by CVE-2022-2047 due to shaded jetty
[ https://issues.apache.org/jira/browse/HADOOP-18333?focusedWorklogId=792898&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792898 ]

ASF GitHub Bot logged work on HADOOP-18333:
---
Author: ASF GitHub Bot
Created on: 19/Jul/22 20:18
Start Date: 19/Jul/22 20:18
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on PR #4553:
URL: https://github.com/apache/hadoop/pull/4553#issuecomment-1189511807

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 53s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +0 :ok: | shelldocs | 0m 1s | | Shelldocs was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 48s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 28m 18s | | trunk passed |
| +1 :green_heart: | compile | 25m 13s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | compile | 22m 1s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 4m 30s | | trunk passed |
| +1 :green_heart: | mvnsite | 20m 10s | | trunk passed |
| +1 :green_heart: | javadoc | 8m 32s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javadoc | 7m 23s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +0 :ok: | spotbugs | 0m 27s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) |
| +1 :green_heart: | shadedclient | 58m 21s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 56s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 25m 58s | | the patch passed |
| +1 :green_heart: | compile | 24m 52s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javac | 24m 52s | | the patch passed |
| +1 :green_heart: | compile | 22m 3s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 22m 3s | | the patch passed |
| +1 :green_heart: | blanks | 0m 1s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 4m 52s | | the patch passed |
| +1 :green_heart: | mvnsite | 19m 49s | | the patch passed |
| +1 :green_heart: | shellcheck | 0m 0s | | No new issues. |
| +1 :green_heart: | javadoc | 8m 21s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javadoc | 7m 28s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +0 :ok: | spotbugs | 0m 28s | | hadoop-project has no data from spotbugs |
| +1 :green_heart: | shadedclient | 58m 6s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 1071m 57s | | root in the patch passed. |
| +1 :green_heart: | asflicense | 2m 14s | | The patch does not generate ASF License warnings. |
| | | 1448m 35s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4553/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4553 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint shellcheck shelldocs |
| uname | Linux 7bdfd70ef618 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / dac51e7a74d40233700da64894aafb9cf4096ee6 |
| Default Java | Private
[GitHub] [hadoop] hadoop-yetus commented on pull request #4553: HADOOP-18333.Upgrade jetty version to 9.4.48.v20220622
[jira] [Work logged] (HADOOP-18330) S3AFileSystem removes Path when calling createS3Client
[ https://issues.apache.org/jira/browse/HADOOP-18330?focusedWorklogId=792897=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792897 ] ASF GitHub Bot logged work on HADOOP-18330: --- Author: ASF GitHub Bot Created on: 19/Jul/22 20:17 Start Date: 19/Jul/22 20:17 Worklog Time Spent: 10m Work Description: ashutoshpant commented on PR #4572: URL: https://github.com/apache/hadoop/pull/4572#issuecomment-1189511230 > ok, which s3 endpoint did you run the tests against? Just ran the tests against us-east-1 Issue Time Tracking --- Worklog Id: (was: 792897) Time Spent: 3h 20m (was: 3h 10m) > S3AFileSystem removes Path when calling createS3Client > -- > > Key: HADOOP-18330 > URL: https://issues.apache.org/jira/browse/HADOOP-18330 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.3.0, 3.3.1, 3.3.2, 3.3.3 >Reporter: Ashutosh Pant >Assignee: Ashutosh Pant >Priority: Minor > Labels: pull-request-available > Time Spent: 3h 20m > Remaining Estimate: 0h > > when using hadoop and spark to read/write data from an s3 bucket like -> > s3a://bucket/path and using a custom Credentials Provider, the path is > removed from the s3a URI and the credentials provider fails because the full > path is gone. > In Spark 3.2, > It was invoked as -> s3 = ReflectionUtils.newInstance(s3ClientFactoryClass, > conf) > .createS3Client(name, bucket, credentials); > But In spark 3.3.3 > It is invoked as s3 = ReflectionUtils.newInstance(s3ClientFactoryClass, > conf).createS3Client(getUri(), parameters); > the getUri() removes the path from the s3a URI -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
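The URI-stripping behaviour described in the issue above can be illustrated with plain `java.net.URI`. This is a sketch only: `fsUri` below is a hypothetical stand-in for what a FileSystem-style `getUri()` exposes (scheme plus authority), not the actual S3AFileSystem code.

```java
import java.net.URI;

// Illustration of the reported behaviour: a FileSystem-style getUri()
// exposes only scheme + authority, so the object path is lost before a
// custom credentials provider ever sees it. fsUri is a hypothetical
// stand-in, not S3AFileSystem code.
public class S3aUriDemo {
    static URI fsUri(URI full) {
        return URI.create(full.getScheme() + "://" + full.getAuthority());
    }

    public static void main(String[] args) {
        URI full = URI.create("s3a://bucket/path/to/table");
        URI stripped = fsUri(full);

        assert "/path/to/table".equals(full.getPath());
        assert stripped.getPath().isEmpty(); // the key path is gone
        System.out.println(stripped); // s3a://bucket
    }
}
```

A credentials provider that dispatches on the full `s3a://bucket/path` therefore only receives `s3a://bucket` once the client factory is handed the filesystem URI instead of the original name.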
[GitHub] [hadoop] ashutoshpant commented on pull request #4572: HADOOP-18330-S3AFileSystem removes Path when calling createS3Client
[jira] [Work logged] (HADOOP-12007) GzipCodec native CodecPool leaks memory
[ https://issues.apache.org/jira/browse/HADOOP-12007?focusedWorklogId=792894=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792894 ] ASF GitHub Bot logged work on HADOOP-12007: --- Author: ASF GitHub Bot Created on: 19/Jul/22 20:09 Start Date: 19/Jul/22 20:09 Worklog Time Spent: 10m Work Description: sunchao commented on PR #4585: URL: https://github.com/apache/hadoop/pull/4585#issuecomment-1189504924 Thanks @kevins-29 , the fix looks good to me. However, is this addressing the issue mentioned in HADOOP-12007? My understanding is that the issue there is that `CompressorStream` overrides `close` and, as a result, doesn't return the compressor to the pool. Issue Time Tracking --- Worklog Id: (was: 792894) Time Spent: 50m (was: 40m) > GzipCodec native CodecPool leaks memory > --- > > Key: HADOOP-12007 > URL: https://issues.apache.org/jira/browse/HADOOP-12007 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.7.0 >Reporter: Yejun Yang >Priority: Major > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > > org/apache/hadoop/io/compress/GzipCodec.java calls > CompressionCodec.Util.createOutputStreamWithCodecPool to use the CodecPool, but > compressor objects are never actually returned to the pool, which causes a memory > leak. > HADOOP-10591 uses CompressionOutputStream.close() to return the Compressor object > to the pool, but CompressionCodec.Util.createOutputStreamWithCodecPool actually > returns a CompressorStream, which overrides close(). > This causes CodecPool.returnCompressor never to be called. In my log file I > can see lots of "Got brand-new compressor [.gz]" but no "Got recycled > compressor".
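The leak pattern described above — a subclass overriding `close()` and never handing the pooled object back — can be sketched with stand-in classes. `Pool`, `PooledStream`, and `LeakyStream` are hypothetical models for this illustration, not Hadoop's `CodecPool`/`CompressorStream` API.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Stand-in model of the reported leak: the base stream returns its pooled
// resource on close(), but a subclass overriding close() without doing so
// leaks it -- the behaviour attributed to CompressorStream above.
public class CodecPoolLeakDemo {
    static class Pool {
        final Deque<Object> free = new ArrayDeque<>();
        Object borrow() { return free.isEmpty() ? new Object() : free.pop(); }
        void giveBack(Object r) { free.push(r); }
    }

    static class PooledStream {
        final Pool pool;
        final Object resource;
        PooledStream(Pool p) { pool = p; resource = p.borrow(); }
        public void close() { pool.giveBack(resource); } // well-behaved
    }

    static class LeakyStream extends PooledStream {
        LeakyStream(Pool p) { super(p); }
        @Override
        public void close() { /* flushes, but never calls giveBack */ }
    }

    public static void main(String[] args) {
        Pool pool = new Pool();
        PooledStream good = new PooledStream(pool);
        PooledStream bad = new LeakyStream(pool);
        good.close();
        bad.close();
        // Only the well-behaved stream's resource came back; the other leaked.
        System.out.println(pool.free.size()); // 1
    }
}
```

Every open/close cycle through the leaky subclass then allocates a brand-new resource, matching the "Got brand-new compressor" / missing "Got recycled compressor" log pattern quoted in the issue.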
[GitHub] [hadoop] sunchao commented on pull request #4585: HADOOP-12007. GzipCodec native CodecPool leaks memory
[jira] [Work logged] (HADOOP-18079) Upgrade Netty to 4.1.77.Final
[ https://issues.apache.org/jira/browse/HADOOP-18079?focusedWorklogId=792893=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792893 ] ASF GitHub Bot logged work on HADOOP-18079: --- Author: ASF GitHub Bot Created on: 19/Jul/22 20:01 Start Date: 19/Jul/22 20:01 Worklog Time Spent: 10m Work Description: jojochuang opened a new pull request, #4592: URL: https://github.com/apache/hadoop/pull/4592 Upgrade Netty to address CVE-2019-20444, CVE-2019-20445 and CVE-2022-24823. Contributed by Wei-Chiu Chuang. Cherry-picked from #3977 (cherry picked from commit a55ace7bc0c173f609b51e46cb0d4d8bcda3d79d) Issue Time Tracking --- Worklog Id: (was: 792893) Time Spent: 4h 20m (was: 4h 10m) > Upgrade Netty to 4.1.77.Final > - > > Key: HADOOP-18079 > URL: https://issues.apache.org/jira/browse/HADOOP-18079 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.3.3 >Reporter: Renukaprasad C >Assignee: Wei-Chiu Chuang >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 4h 20m > Remaining Estimate: 0h > > h4. Netty 4.1.71 has fixes for some CVEs: > CVE-2019-20444, > CVE-2019-20445, > CVE-2022-24823. > Upgrade to the latest version.
[GitHub] [hadoop] jojochuang opened a new pull request, #4592: HADOOP-18079. Upgrade Netty to 4.1.77. (#3977)
[jira] [Work logged] (HADOOP-18330) S3AFileSystem removes Path when calling createS3Client
[ https://issues.apache.org/jira/browse/HADOOP-18330?focusedWorklogId=792889=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792889 ] ASF GitHub Bot logged work on HADOOP-18330: --- Author: ASF GitHub Bot Created on: 19/Jul/22 19:51 Start Date: 19/Jul/22 19:51 Worklog Time Spent: 10m Work Description: steveloughran commented on PR #4572: URL: https://github.com/apache/hadoop/pull/4572#issuecomment-1189489423 ok, which s3 endpoint did you run the tests against? Issue Time Tracking --- Worklog Id: (was: 792889) Time Spent: 3h 10m (was: 3h) > S3AFileSystem removes Path when calling createS3Client > -- > > Key: HADOOP-18330 > URL: https://issues.apache.org/jira/browse/HADOOP-18330 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.3.0, 3.3.1, 3.3.2, 3.3.3 >Reporter: Ashutosh Pant >Assignee: Ashutosh Pant >Priority: Minor > Labels: pull-request-available > Time Spent: 3h 10m > Remaining Estimate: 0h > > when using hadoop and spark to read/write data from an s3 bucket like -> > s3a://bucket/path and using a custom Credentials Provider, the path is > removed from the s3a URI and the credentials provider fails because the full > path is gone. > In Spark 3.2, > It was invoked as -> s3 = ReflectionUtils.newInstance(s3ClientFactoryClass, > conf) > .createS3Client(name, bucket, credentials); > But In spark 3.3.3 > It is invoked as s3 = ReflectionUtils.newInstance(s3ClientFactoryClass, > conf).createS3Client(getUri(), parameters); > the getUri() removes the path from the s3a URI
[GitHub] [hadoop] steveloughran commented on pull request #4572: HADOOP-18330-S3AFileSystem removes Path when calling createS3Client
[jira] [Work logged] (HADOOP-17461) Add thread-level IOStatistics Context
[ https://issues.apache.org/jira/browse/HADOOP-17461?focusedWorklogId=792888=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792888 ] ASF GitHub Bot logged work on HADOOP-17461: --- Author: ASF GitHub Bot Created on: 19/Jul/22 19:48 Start Date: 19/Jul/22 19:48 Worklog Time Spent: 10m Work Description: steveloughran commented on code in PR #4352: URL: https://github.com/apache/hadoop/pull/4352#discussion_r924896943 ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/impl/IOStatisticsContext.java: ## @@ -31,34 +31,34 @@ * * The {@link #snapshot()} call creates a snapshot of the statistics; * - * The {@link #reset()} call resets the statistics in the current thread so + * The {@link #reset()} call resets the statistics in the context so * that later snapshots will get the incremental data. */ public interface IOStatisticsContext extends IOStatisticsSource { Review Comment: this needs to get moved to org.apache.hadoop.fs.statistics.IOStatisticsContext as it is the public API the apps need; the impl stuff is kept private for the filesystems. Issue Time Tracking --- Worklog Id: (was: 792888) Time Spent: 6h 20m (was: 6h 10m) > Add thread-level IOStatistics Context > - > > Key: HADOOP-17461 > URL: https://issues.apache.org/jira/browse/HADOOP-17461 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs, fs/azure, fs/s3 >Affects Versions: 3.3.1 >Reporter: Steve Loughran >Assignee: Mehakmeet Singh >Priority: Major > Labels: pull-request-available > Time Spent: 6h 20m > Remaining Estimate: 0h > > For effective reporting of the iostatistics of individual worker threads, we > need a thread-level context which IO components update. > * this context needs to be passed into the background threads performing work on > behalf of a task. > * IO components (streams, iterators, filesystems) need to update this context's > statistics as they perform work > * without double counting anything. > I imagine a ThreadLocal IOStatisticsContext which will be updated in the > FileSystem API calls. This context MUST be passed into the background threads > used by a task, so that IO is correctly aggregated. > I don't want streams or listIterators to do the updating, as there is more > risk of double counting. However, we need to see their statistics if we want > to know things like "bytes discarded in backwards seeks". And I don't want to > be updating a shared context object on every read() call. > If all we want is store IO (HEAD, GET, DELETE, list performance etc.) then the > FS is sufficient. > If we do want the stream-specific detail, then I propose > * caching the context in the constructor > * updating it only in close() or unbuffer() (as we do from S3AInputStream to > S3AInstrumentation) > * excluding those we know the FS already collects.
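The thread-local context sketched in the description could look roughly like this. The names (`StatsContext`, `increment`, `snapshot`) are illustrative stand-ins for this sketch, not the actual IOStatisticsContext API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch of a thread-level statistics context: each thread
// gets its own counters via ThreadLocal, so FileSystem-level calls can
// update them without cross-thread double counting.
public class StatsContextDemo {
    static class StatsContext {
        private static final ThreadLocal<StatsContext> CURRENT =
            ThreadLocal.withInitial(StatsContext::new);
        final Map<String, LongAdder> counters = new ConcurrentHashMap<>();

        static StatsContext current() { return CURRENT.get(); }

        // FileSystem API calls would update the context like this.
        void increment(String key, long delta) {
            counters.computeIfAbsent(key, k -> new LongAdder()).add(delta);
        }

        // snapshot() copies the current totals so later deltas are incremental.
        Map<String, Long> snapshot() {
            Map<String, Long> out = new ConcurrentHashMap<>();
            counters.forEach((k, v) -> out.put(k, v.sum()));
            return out;
        }

        void reset() { counters.clear(); }
    }

    public static void main(String[] args) throws Exception {
        StatsContext.current().increment("stream_read_bytes", 4096);
        // A worker thread gets its own context, so its counts don't mix in
        // unless the task explicitly propagates the context.
        Thread worker = new Thread(() ->
            StatsContext.current().increment("stream_read_bytes", 100));
        worker.start();
        worker.join();
        System.out.println(StatsContext.current().snapshot()); // {stream_read_bytes=4096}
    }
}
```

This also shows why the description insists the context MUST be handed to background threads: a plain `ThreadLocal` does not propagate to worker threads on its own.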
[GitHub] [hadoop] steveloughran commented on a diff in pull request #4352: HADOOP-17461. Thread-level IOStatistics in S3A
[jira] [Work logged] (HADOOP-17461) Add thread-level IOStatistics Context
[ https://issues.apache.org/jira/browse/HADOOP-17461?focusedWorklogId=792887=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792887 ] ASF GitHub Bot logged work on HADOOP-17461: --- Author: ASF GitHub Bot Created on: 19/Jul/22 19:42 Start Date: 19/Jul/22 19:42 Worklog Time Spent: 10m Work Description: steveloughran commented on PR #4352: URL: https://github.com/apache/hadoop/pull/4352#issuecomment-1189482342 I now have a branch of Spark set up to use this, albeit not production-ready: https://github.com/steveloughran/hadoop/tree/s3/HADOOP-17461-iostatisticsContext * TaskMetrics includes an IOStatisticsSnapshot in its serialized data * IOStatisticsContext is retrieved and reset at the start of read/write work and updated at the end, based on where the read bytes/write bytes counters were read. Having done that, I've realised that the RDDs and writers are not quite the correct place, as the IOStats collect all IO on the thread and both the readers and writers will be working on that thread. The place to do it is actually in the ResultTask, with IOStatisticsSnapshot being one of the accumulators which is sent back. The Spark driver would keep the core IOStatisticsSnapshot up to date with results from successful and failed tasks. Issue Time Tracking --- Worklog Id: (was: 792887) Time Spent: 6h 10m (was: 6h) > Add thread-level IOStatistics Context > - > > Key: HADOOP-17461 > URL: https://issues.apache.org/jira/browse/HADOOP-17461 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs, fs/azure, fs/s3 >Affects Versions: 3.3.1 >Reporter: Steve Loughran >Assignee: Mehakmeet Singh >Priority: Major > Labels: pull-request-available > Time Spent: 6h 10m > Remaining Estimate: 0h > > For effective reporting of the iostatistics of individual worker threads, we > need a thread-level context which IO components update. > * this context needs to be passed into the background threads performing work on > behalf of a task. > * IO components (streams, iterators, filesystems) need to update this context's > statistics as they perform work > * without double counting anything. > I imagine a ThreadLocal IOStatisticsContext which will be updated in the > FileSystem API calls. This context MUST be passed into the background threads > used by a task, so that IO is correctly aggregated. > I don't want streams or listIterators to do the updating, as there is more > risk of double counting. However, we need to see their statistics if we want > to know things like "bytes discarded in backwards seeks". And I don't want to > be updating a shared context object on every read() call. > If all we want is store IO (HEAD, GET, DELETE, list performance etc.) then the > FS is sufficient. > If we do want the stream-specific detail, then I propose > * caching the context in the constructor > * updating it only in close() or unbuffer() (as we do from S3AInputStream to > S3AInstrumentation) > * excluding those we know the FS already collects.
[GitHub] [hadoop] steveloughran commented on pull request #4352: HADOOP-17461. Thread-level IOStatistics in S3A
[jira] [Work logged] (HADOOP-18301) Upgrade commons-io to 2.11.0
[ https://issues.apache.org/jira/browse/HADOOP-18301?focusedWorklogId=792885=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792885 ] ASF GitHub Bot logged work on HADOOP-18301: --- Author: ASF GitHub Bot Created on: 19/Jul/22 19:26 Start Date: 19/Jul/22 19:26 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on PR #4455: URL: https://github.com/apache/hadoop/pull/4455#issuecomment-1189468822 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 58s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 51s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 34m 8s | | trunk passed | | +1 :green_heart: | compile | 27m 53s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 23m 52s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 4m 24s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 45s | | trunk passed | | +1 :green_heart: | javadoc | 2m 19s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 2m 42s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +0 :ok: | spotbugs | 1m 4s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 29m 23s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 54s | | the patch passed | | +1 :green_heart: | compile | 27m 33s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 27m 33s | | the patch passed | | +1 :green_heart: | compile | 25m 7s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 25m 7s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 4m 57s | | the patch passed | | +1 :green_heart: | mvnsite | 3m 22s | | the patch passed | | +1 :green_heart: | javadoc | 2m 52s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 3m 22s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +0 :ok: | spotbugs | 1m 3s | | hadoop-project has no data from spotbugs | | +1 :green_heart: | shadedclient | 30m 33s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 2s | | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 360m 50s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 2m 16s | | The patch does not generate ASF License warnings. | | | | 620m 39s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4455/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4455 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint | | uname | Linux 8e077ce3c8ed 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 7192b8f791308be068a64058769dde35cf928bec | | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions |
[GitHub] [hadoop] hadoop-yetus commented on pull request #4455: HADOOP-18301.Upgrade commons-io to 2.11.0
[jira] [Commented] (HADOOP-18347) Restrict vectoredIO threadpool to reduce memory pressure
[ https://issues.apache.org/jira/browse/HADOOP-18347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17568690#comment-17568690 ] Steve Loughran commented on HADOOP-18347: - makes sense. That bounded pool is fairly bounded across an fs instance, so it could become a bottleneck. Time to review the defaults? > Restrict vectoredIO threadpool to reduce memory pressure > > > Key: HADOOP-18347 > URL: https://issues.apache.org/jira/browse/HADOOP-18347 > Project: Hadoop Common > Issue Type: Sub-task > Components: common, fs, fs/adl, fs/s3 >Reporter: Rajesh Balamohan >Priority: Major > Labels: performance > > https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L964-L967 > Currently, it fetches all the ranges with an unbounded threadpool. This will not > cause memory pressure with standard benchmarks like TPCDS. However, when a > large number of ranges is present with large files, this could potentially > spike up the memory usage of the task. Limiting the threadpool size could reduce > the memory usage. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
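The fix direction sketched in the issue above — bound the pool that fetches the ranges so that many ranges on large files cannot grow memory without limit — can be illustrated with a plain java.util.concurrent executor. The class name, pool size, and queue capacity below are illustrative assumptions, not the values or API the eventual patch uses:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch: replace an unbounded pool with a bounded one so that a file with
// many ranges cannot have an arbitrary number of in-flight buffer fetches.
public class BoundedRangeFetcher {
  // Illustrative limits; real defaults would be configuration-driven.
  static final int MAX_THREADS = 4;
  static final int MAX_QUEUED = 16;

  private final ExecutorService pool = new ThreadPoolExecutor(
      MAX_THREADS, MAX_THREADS, 60L, TimeUnit.SECONDS,
      new ArrayBlockingQueue<>(MAX_QUEUED),
      // When the queue is full the submitting thread runs the fetch itself,
      // throttling submission instead of growing the queue (and memory).
      new ThreadPoolExecutor.CallerRunsPolicy());

  public List<Future<byte[]>> fetchAll(List<Callable<byte[]>> ranges) {
    List<Future<byte[]>> futures = new ArrayList<>();
    for (Callable<byte[]> range : ranges) {
      futures.add(pool.submit(range));
    }
    return futures;
  }

  public void shutdown() {
    pool.shutdown();
  }
}
```

With this shape, at most MAX_THREADS fetches run concurrently per pool instance, which is exactly the bottleneck-vs-memory trade-off the comment above raises when the pool is shared across an fs instance.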
[GitHub] [hadoop] goiri commented on a diff in pull request #4531: HDFS-13274. RBF: Extend RouterRpcClient to use multiple sockets
goiri commented on code in PR #4531: URL: https://github.com/apache/hadoop/pull/4531#discussion_r924855528 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java: ## @@ -210,24 +217,22 @@ public AtomicInteger getClientIndex() { * @return Connection context. */ protected ConnectionContext getConnection() { - this.lastActiveTime = Time.now(); - -// Get a connection from the pool following round-robin -ConnectionContext conn = null; List tmpConnections = this.connections; -int size = tmpConnections.size(); -// Inc and mask off sign bit, lookup index should be non-negative int -int threadIndex = this.clientIndex.getAndIncrement() & 0x7FFF; -for (int i=0; i 0) { + // Get a connection from the pool following round-robin + int threadIndex = this.clientIndex.getAndIncrement() & 0x7FFF; Review Comment: We should keep the old comment: ``` // Inc and mask off sign bit, lookup index should be non-negative int ``` ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java: ## @@ -210,24 +217,22 @@ public AtomicInteger getClientIndex() { * @return Connection context. */ protected ConnectionContext getConnection() { - this.lastActiveTime = Time.now(); - -// Get a connection from the pool following round-robin -ConnectionContext conn = null; List tmpConnections = this.connections; -int size = tmpConnections.size(); -// Inc and mask off sign bit, lookup index should be non-negative int -int threadIndex = this.clientIndex.getAndIncrement() & 0x7FFF; -for (int i=0; i + +dfs.federation.router.enable.multiple.socket +false + + If enable multiple downstream socket or not. 
Review Comment: https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml: ## @@ -134,6 +134,22 @@ + +dfs.federation.router.enable.multiple.socket +false + + If enable multiple downstream socket or not. Review Comment: We should explain the relation between dfs.federation.router.enable.multiple.socket and dfs.federation.router.max.concurrency.per.connection. In general it would be good to have this in some of the RBF md files explaining why doing this.
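The round-robin lookup discussed in this review (increment an AtomicInteger and mask off the sign bit so the index stays a non-negative int even after the counter wraps past Integer.MAX_VALUE) can be shown in isolation. This is a standalone sketch of the pattern, not the RBF ConnectionPool itself, and it writes the sign-bit mask out in full as 0x7FFFFFFF:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of round-robin selection over a connection list.
// Inc and mask off sign bit, so the lookup index is a non-negative int
// even once getAndIncrement() overflows to negative values.
public class RoundRobin {
  private final AtomicInteger clientIndex = new AtomicInteger();

  public <T> T next(List<T> connections) {
    if (connections.isEmpty()) {
      // Pool is empty; the caller must handle this (e.g. create a connection).
      return null;
    }
    int threadIndex = clientIndex.getAndIncrement() & 0x7FFFFFFF;
    return connections.get(threadIndex % connections.size());
  }
}
```

Keeping the comment the reviewer asks to restore matters precisely because `% size` on a negative index would throw IndexOutOfBoundsException; the mask makes the wrap-around harmless.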
[jira] [Commented] (HADOOP-18340) deleteOnExit does not work with S3AFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-18340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17568655#comment-17568655 ] Huaxiang Sun commented on HADOOP-18340: --- Stepping back, I think your way is the better one to follow: * parallelise the delete * skip existence checks * call the innerDelete method so the fs openness check is skipped. Let me think about it and come back. > deleteOnExit does not work with S3AFileSystem > - > > Key: HADOOP-18340 > URL: https://issues.apache.org/jira/browse/HADOOP-18340 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.3.3 >Reporter: Huaxiang Sun >Priority: Minor > > When deleteOnExit is set on some paths, they are not removed when the file system > object is closed. The following exception is logged when printing out the > exception in the info log. > {code:java} > 2022-07-15 19:29:12,552 [main] INFO fs.FileSystem > (FileSystem.java:processDeleteOnExit(1810)) - Ignoring failure to > deleteOnExit for path /file, exception {} > java.io.IOException: s3a://mock-bucket: FileSystem is closed! 
> at > org.apache.hadoop.fs.s3a.S3AFileSystem.checkNotClosed(S3AFileSystem.java:3887) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2333) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2355) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.exists(S3AFileSystem.java:4402) > at > org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1805) > at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2669) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.close(S3AFileSystem.java:3830) > at > org.apache.hadoop.fs.s3a.TestS3AGetFileStatus.testFile(TestS3AGetFileStatus.java:87) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at > org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > at > 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at org.junit.runners.ParentRunner.run(ParentRunner.java:413) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe,
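The stack trace above shows the ordering problem: FileSystem.close() marks the instance closed, then processDeleteOnExit() calls exists(), which is rejected by checkNotClosed(). The fix directions from the comment (skip existence checks, delete via an internal method that bypasses the openness check, guard against parallel close) can be pictured with a filesystem-agnostic sketch; all names here are illustrative, not S3AFileSystem internals:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the ordering fix: once the closed flag is claimed, process the
// deleteOnExit paths through an internal delete that does NOT consult the
// flag, so the deletes are not rejected with "FileSystem is closed!".
public class CloseOrdering {
  private final AtomicBoolean closed = new AtomicBoolean(false);
  private final Set<String> deleteOnExitPaths = ConcurrentHashMap.newKeySet();
  public final Set<String> deleted = ConcurrentHashMap.newKeySet(); // stand-in store

  public void deleteOnExit(String path) {
    deleteOnExitPaths.add(path);
  }

  public boolean isClosed() {
    return closed.get();
  }

  public void close() {
    // Atomic guard against two threads calling close() in parallel.
    if (closed.compareAndSet(false, true)) {
      for (String path : deleteOnExitPaths) {
        innerDelete(path); // no existence check, no checkNotClosed()
      }
      deleteOnExitPaths.clear();
    }
  }

  private void innerDelete(String path) {
    deleted.add(path);
  }
}
```

The key point is that the public delete/exists paths may keep their closed-check, while the close() path itself uses the internal route.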
[jira] [Work logged] (HADOOP-12007) GzipCodec native CodecPool leaks memory
[ https://issues.apache.org/jira/browse/HADOOP-12007?focusedWorklogId=792831=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792831 ] ASF GitHub Bot logged work on HADOOP-12007: --- Author: ASF GitHub Bot Created on: 19/Jul/22 17:19 Start Date: 19/Jul/22 17:19 Worklog Time Spent: 10m Work Description: dbtsai commented on PR #4585: URL: https://github.com/apache/hadoop/pull/4585#issuecomment-1189355811 cc @sunchao Issue Time Tracking --- Worklog Id: (was: 792831) Time Spent: 40m (was: 0.5h) > GzipCodec native CodecPool leaks memory > --- > > Key: HADOOP-12007 > URL: https://issues.apache.org/jira/browse/HADOOP-12007 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.7.0 >Reporter: Yejun Yang >Priority: Major > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > > org/apache/hadoop/io/compress/GzipCodec.java call > CompressionCodec.Util.createOutputStreamWithCodecPool to use CodecPool. But > compressor objects are actually never returned to pool which cause memory > leak. > HADOOP-10591 uses CompressionOutputStream.close() to return Compressor object > to pool. But CompressionCodec.Util.createOutputStreamWithCodecPool actually > returns a CompressorStream which overrides close(). > This cause CodecPool.returnCompressor never being called. In my log file I > can see lots of "Got brand-new compressor [.gz]" but no "Got recycled > compressor". -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
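The leak mechanism described above — a stream subclass overrides close(), so CodecPool.returnCompressor is never reached and every open logs "Got brand-new compressor" — reduces to a small pattern. Compressor, CodecPool, and CompressorStream below are simplified stand-ins, not Hadoop's real classes:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Simplified stand-ins showing the leak and its fix: any close() override on
// the stream must still hand the borrowed compressor back to the pool.
public class PoolReturnSketch {

  static class Compressor {}

  static class CodecPool {
    private final Deque<Compressor> pool = new ArrayDeque<>();

    Compressor getCompressor() {
      Compressor c = pool.poll();
      // A null poll is the "Got brand-new compressor" case from the log.
      return c != null ? c : new Compressor();
    }

    void returnCompressor(Compressor c) {
      pool.push(c); // "Got recycled compressor" on the next getCompressor()
    }

    int size() {
      return pool.size();
    }
  }

  static class CompressorStream implements AutoCloseable {
    private final CodecPool owner;
    private final Compressor compressor;

    CompressorStream(CodecPool owner) {
      this.owner = owner;
      this.compressor = owner.getCompressor();
    }

    @Override
    public void close() {
      // The fix: without this call, the compressor leaks on every close.
      owner.returnCompressor(compressor);
    }
  }
}
```

A subclass that overrides close() without calling this return path reproduces the reported behavior: the pool stays empty and a fresh native compressor is allocated per stream.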
[GitHub] [hadoop] dbtsai commented on pull request #4585: HADOOP-12007. GzipCodec native CodecPool leaks memory
dbtsai commented on PR #4585: URL: https://github.com/apache/hadoop/pull/4585#issuecomment-1189355811 cc @sunchao
[GitHub] [hadoop] hadoop-yetus commented on pull request #4576: HDFS-16667. Use malloc for buffer allocation in uriparser2
hadoop-yetus commented on PR #4576: URL: https://github.com/apache/hadoop/pull/4576#issuecomment-1189346335 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 27m 27s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 54s | | trunk passed | | +1 :green_heart: | compile | 4m 5s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 43s | | trunk passed | | +1 :green_heart: | shadedclient | 71m 25s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 22s | | the patch passed | | +1 :green_heart: | compile | 3m 56s | | the patch passed | | +1 :green_heart: | cc | 3m 56s | | the patch passed | | +1 :green_heart: | golang | 3m 56s | | the patch passed | | +1 :green_heart: | javac | 3m 56s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 27s | | the patch passed | | +1 :green_heart: | shadedclient | 33m 43s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 113m 38s | | hadoop-hdfs-native-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 51s | | The patch does not generate ASF License warnings. 
| | | | 255m 0s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4576/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4576 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell detsecrets golang | | uname | Linux 7eb9a1960edb 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / df2323ca40fb4579c6be56bee9b8a780c20243d5 | | Default Java | Debian-11.0.15+10-post-Debian-1deb10u1 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4576/3/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4576/3/console | | versions | git=2.20.1 maven=3.6.0 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated.
[jira] [Comment Edited] (HADOOP-18340) deleteOnExit does not work with S3AFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-18340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17568637#comment-17568637 ] Huaxiang Sun edited comment on HADOOP-18340 at 7/19/22 5:05 PM: _"what if two threads call close()"_ (Sorry, for some reason, I cannot find the quote sign). In S3AFileSystrem's close(), there is already an atomic boolean to guide against multiple parallel closes, the code I added is after this check. [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L3809] I have coded up a unitest with mock to demo the issue and the fix. For some reason, when I tried to run S3A Integration Test, I always run into the issue. I put the following into auth-keys.xml, it always give me "AWS access Key Id does not exist in our records". However, I can use the same keys from AWS cli to access the S3. Anything am I missing? Thanks [~ste...@apache.org]. fs.s3a.access.key AWS access key ID. Omit for IAM role-based or provider-based authentication. fs.s3a.secret.key AWS secret key. Omit for IAM role-based or provider-based authentication. fs.s3a.session.token Session token, when using org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider as one of the providers. was (Author: huaxiangsun): _"what if two threads call close()"_ (Sorry, for some reason, I cannot find the quote sign). In S3AFileSystrem's close(), there is already an atomic boolean to guide against multiple parallel closes. [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L3809] I have coded up a unitest with mock to demo the issue and the fix. For some reason, when I tried to run S3A Integration Test, I always run into the issue. I put the following into auth-keys.xml, it always give me "AWS access Key Id does not exist in our records". However, I can use the same keys from AWS cli to access the S3. Anything am I missing? 
Thanks [~ste...@apache.org]. fs.s3a.access.key AWS access key ID. Omit for IAM role-based or provider-based authentication. fs.s3a.secret.key AWS secret key. Omit for IAM role-based or provider-based authentication. fs.s3a.session.token Session token, when using org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider as one of the providers. > deleteOnExit does not work with S3AFileSystem > - > > Key: HADOOP-18340 > URL: https://issues.apache.org/jira/browse/HADOOP-18340 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.3.3 >Reporter: Huaxiang Sun >Priority: Minor > > When deleteOnExit is set on some paths, they are not removed when file system > object is closed. The following exception is logged when printing out the > exception in info log. > {code:java} > 2022-07-15 19:29:12,552 [main] INFO fs.FileSystem > (FileSystem.java:processDeleteOnExit(1810)) - Ignoring failure to > deleteOnExit for path /file, exception {} > java.io.IOException: s3a://mock-bucket: FileSystem is closed! 
> at > org.apache.hadoop.fs.s3a.S3AFileSystem.checkNotClosed(S3AFileSystem.java:3887) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2333) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2355) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.exists(S3AFileSystem.java:4402) > at > org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1805) > at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2669) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.close(S3AFileSystem.java:3830) > at > org.apache.hadoop.fs.s3a.TestS3AGetFileStatus.testFile(TestS3AGetFileStatus.java:87) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at >
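The auth-keys.xml properties quoted in the comment above lost their XML markup in this archive. For reference, the usual shape of such a file is the standard Hadoop configuration format; the property names and descriptions are from the comment, while the values here are placeholders:

```xml
<configuration>
  <property>
    <name>fs.s3a.access.key</name>
    <value>YOUR_ACCESS_KEY_ID</value>
    <description>AWS access key ID.
      Omit for IAM role-based or provider-based authentication.</description>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>YOUR_SECRET_KEY</value>
    <description>AWS secret key.
      Omit for IAM role-based or provider-based authentication.</description>
  </property>
  <property>
    <name>fs.s3a.session.token</name>
    <value>YOUR_SESSION_TOKEN</value>
    <description>Session token, when using
      org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider
      as one of the providers.</description>
  </property>
</configuration>
```

Note that fs.s3a.session.token only applies when temporary (STS) credentials are in use; static keys that work from the AWS CLI but fail with "access Key Id does not exist" alongside a stale session token is one plausible mismatch to rule out.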
[jira] [Commented] (HADOOP-18273) Support S3 access point alias
[ https://issues.apache.org/jira/browse/HADOOP-18273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17568641#comment-17568641 ] Daniel Carl Jones commented on HADOOP-18273: Sorry for the late reply, [~kevincong]. I imagine you won't see it for your bucket as s3.amazonaws.com is by default us-east-1. If you try with a bucket that was created in another region, I imagine this is how we'd reproduce the redirect exception when using s3.amazonaws.com. > Support S3 access point alias > - > > Key: HADOOP-18273 > URL: https://issues.apache.org/jira/browse/HADOOP-18273 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.9 >Reporter: Shen Cong >Priority: Major > > Adding support for using access point alias to access s3 bucket. > > When you create an access point, Amazon S3 automatically generates an alias > that you can use instead of an Amazon S3 bucket name for data access. You can > use this access point alias instead of an Amazon Resource Name (ARN) for any > access point data plane operation. > https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-points-alias.html
[jira] [Commented] (HADOOP-18340) deleteOnExit does not work with S3AFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-18340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17568637#comment-17568637 ] Huaxiang Sun commented on HADOOP-18340: --- _"what if two threads call close()"_ (Sorry, for some reason, I cannot find the quote sign). In S3AFileSystrem's close(), there is already an atomic boolean to guide against multiple parallel closes. [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L3809] I have coded up a unitest with mock to demo the issue and the fix. For some reason, when I tried to run S3A Integration Test, I always run into the issue. I put the following into auth-keys.xml, it always give me "AWS access Key Id does not exist in our records". However, I can use the same keys from AWS cli to access the S3. Anything am I missing? Thanks [~ste...@apache.org]. fs.s3a.access.key AWS access key ID. Omit for IAM role-based or provider-based authentication. fs.s3a.secret.key AWS secret key. Omit for IAM role-based or provider-based authentication. fs.s3a.session.token Session token, when using org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider as one of the providers. > deleteOnExit does not work with S3AFileSystem > - > > Key: HADOOP-18340 > URL: https://issues.apache.org/jira/browse/HADOOP-18340 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.3.3 >Reporter: Huaxiang Sun >Priority: Minor > > When deleteOnExit is set on some paths, they are not removed when file system > object is closed. The following exception is logged when printing out the > exception in info log. > {code:java} > 2022-07-15 19:29:12,552 [main] INFO fs.FileSystem > (FileSystem.java:processDeleteOnExit(1810)) - Ignoring failure to > deleteOnExit for path /file, exception {} > java.io.IOException: s3a://mock-bucket: FileSystem is closed! 
> at > org.apache.hadoop.fs.s3a.S3AFileSystem.checkNotClosed(S3AFileSystem.java:3887) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2333) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2355) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.exists(S3AFileSystem.java:4402) > at > org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1805) > at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2669) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.close(S3AFileSystem.java:3830) > at > org.apache.hadoop.fs.s3a.TestS3AGetFileStatus.testFile(TestS3AGetFileStatus.java:87) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at > org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > at > 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at org.junit.runners.ParentRunner.run(ParentRunner.java:413) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) >
[jira] [Work logged] (HADOOP-18190) s3a prefetching streams to collect iostats on prefetching operations
[ https://issues.apache.org/jira/browse/HADOOP-18190?focusedWorklogId=792796=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792796 ] ASF GitHub Bot logged work on HADOOP-18190: --- Author: ASF GitHub Bot Created on: 19/Jul/22 16:24 Start Date: 19/Jul/22 16:24 Worklog Time Spent: 10m Work Description: ahmarsuhail commented on PR #4458: URL: https://github.com/apache/hadoop/pull/4458#issuecomment-1189301358 @steveloughran by > can you split success/failure logging of the invocation and duration of calls do you mean to add in stats for number of failed prefetch ops & duration of this failure? for duration, I couldn't figure out how to measure failure..for example, the duration of reading from S3 is measured [here](https://github.com/apache/hadoop/pull/4458/files#diff-79d7c6565dcf3633d045b1222349326646bfa722d8441ca1e9939b72df38161cR109), if the operation fails, the duration tracker will call `tracker.failed();`. 1) What does tracker.failed() do? 2) how should this be changed to measure duration of a failure? Issue Time Tracking --- Worklog Id: (was: 792796) Time Spent: 1h 50m (was: 1h 40m) > s3a prefetching streams to collect iostats on prefetching operations > > > Key: HADOOP-18190 > URL: https://issues.apache.org/jira/browse/HADOOP-18190 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: Ahmar Suhail >Priority: Major > Labels: pull-request-available > Time Spent: 1h 50m > Remaining Estimate: 0h > > There is a lot more happening in reads, so there's a lot more data to collect > and publish in IO stats for us to view in a summary at the end of processes > as well as get from the stream while it is active. > Some useful ones would seem to be: > counters > * is in memory. using 0 or 1 here lets aggregation reports count total #of > memory cached files. 
> * prefetching operations executed > * errors during prefetching > gauges > * number of blocks in cache > * total size of blocks > * active prefetches > + active memory used > duration tracking count/min/max/ave > * time to fetch a block > * time queued before the actual fetch begins > * time a reader is blocked waiting for a block fetch to complete > and some info on cache use itself > * number of blocks discarded unread > * number of prefetched blocks later used > * number of backward seeks to a prefetched block > * number of forward seeks to a prefetched block > the key ones I care about are > # memory consumption > # can we determine if cache is working (reads with cache hit) and when it is > not (misses, wasted prefetches) > # time blocked on executors > The stats need to be accessible on a stream even when closed, and aggregated > into the FS. once we get per-thread stats contexts we can publish there too > and collect in worker threads for reporting in task commits -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
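The question in the worklog above — what does tracker.failed() do, and how is the duration of a failed operation measured — follows a common pattern: mark the tracker as failed in the catch path and finish the timing in finally, so the sample is recorded either way but aggregated into failure statistics instead of the success min/max/mean. The Tracker class below is a hypothetical stand-in, not Hadoop's IOStatistics API:

```java
import java.util.concurrent.Callable;

// Hypothetical duration tracker illustrating the success/failure split:
// failed() only flags the sample; close() records the elapsed time, so a
// failing call still gets a duration, filed under the failure statistics.
public class DurationSketch {

  static class Tracker {
    final long start = System.nanoTime();
    boolean failed;
    long durationNanos = -1;

    void failed() {
      this.failed = true;
    }

    void close() {
      this.durationNanos = System.nanoTime() - start;
    }
  }

  static <T> T trackDuration(Tracker tracker, Callable<T> operation) throws Exception {
    try {
      return operation.call();   // e.g. the prefetch read
    } catch (Exception e) {
      tracker.failed();          // flag the sample as a failure...
      throw e;
    } finally {
      tracker.close();           // ...but record the duration regardless
    }
  }
}
```

With this shape, counting failed prefetch operations and measuring their duration is the same instrumentation point: one tracker per operation, flagged on the exception path.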
[GitHub] [hadoop] ahmarsuhail commented on pull request #4458: HADOOP-18190. Adds iostats for prefetching
ahmarsuhail commented on PR #4458: URL: https://github.com/apache/hadoop/pull/4458#issuecomment-1189301358 @steveloughran by > can you split success/failure logging of the invocation and duration of calls do you mean to add in stats for number of failed prefetch ops & duration of this failure? for duration, I couldn't figure out how to measure failure..for example, the duration of reading from S3 is measured [here](https://github.com/apache/hadoop/pull/4458/files#diff-79d7c6565dcf3633d045b1222349326646bfa722d8441ca1e9939b72df38161cR109), if the operation fails, the duration tracker will call `tracker.failed();`. 1) What does tracker.failed() do? 2) how should this be changed to measure duration of a failure?
[jira] [Work logged] (HADOOP-18190) s3a prefetching streams to collect iostats on prefetching operations
[ https://issues.apache.org/jira/browse/HADOOP-18190?focusedWorklogId=792794=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792794 ] ASF GitHub Bot logged work on HADOOP-18190: --- Author: ASF GitHub Bot Created on: 19/Jul/22 16:18 Start Date: 19/Jul/22 16:18 Worklog Time Spent: 10m Work Description: ahmarsuhail commented on code in PR #4458: URL: https://github.com/apache/hadoop/pull/4458#discussion_r924708222 ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/common/CachingBlockManager.java: ## @@ -330,6 +341,10 @@ private void readBlock(BufferData data, boolean isPrefetch, BufferData.State... this.read(buffer, offset, size); buffer.flip(); data.setReady(expectedState); + +if(isPrefetch) { + this.prefetchingStatistics.prefetchOperationCompleted(); Review Comment: So this was currently only measuring successful calls, I'm now also adding a count for failure. I can't figure out how to measure duration for a failure though, could you point me to an example? Issue Time Tracking --- Worklog Id: (was: 792794) Time Spent: 1h 40m (was: 1.5h) > s3a prefetching streams to collect iostats on prefetching operations > > > Key: HADOOP-18190 > URL: https://issues.apache.org/jira/browse/HADOOP-18190 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: Ahmar Suhail >Priority: Major > Labels: pull-request-available > Time Spent: 1h 40m > Remaining Estimate: 0h > > There is a lot more happening in reads, so there's a lot more data to collect > and publish in IO stats for us to view in a summary at the end of processes > as well as get from the stream while it is active. > Some useful ones would seem to be: > counters > * is in memory. using 0 or 1 here lets aggregation reports count total #of > memory cached files. 
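The review comment above notes that `prefetchOperationCompleted()` only fires on the success path; a common way to also count failures is to increment a separate failure counter in the exception path before rethrowing. A hedged sketch of that shape (counter and method names are illustrative, not the patch's actual `PrefetchingStatistics` API):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of counting both outcomes of a prefetch, per the review thread.
// Names are illustrative; the real patch goes through PrefetchingStatistics.
public class PrefetchCounters {
    static final AtomicLong completed = new AtomicLong();
    static final AtomicLong failed = new AtomicLong();

    static void readBlock(boolean isPrefetch, Runnable read) {
        try {
            read.run();
            if (isPrefetch) {
                completed.incrementAndGet();   // success path
            }
        } catch (RuntimeException e) {
            if (isPrefetch) {
                failed.incrementAndGet();      // failure path, previously uncounted
            }
            throw e;                           // preserve the original exception
        }
    }

    public static void main(String[] args) {
        readBlock(true, () -> { });            // successful prefetch
        try {
            readBlock(true, () -> { throw new RuntimeException("read error"); });
        } catch (RuntimeException expected) { }
        System.out.println(completed.get() + " " + failed.get());
    }
}
```

Pairing this with the duration-tracker pattern (mark failed, then close) gives both the failure count and the failure duration from the same try/catch.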
[GitHub] [hadoop] GauthamBanasandra merged pull request #4573: HDFS-16665. Fix duplicate sources for HDFS test
GauthamBanasandra merged PR #4573: URL: https://github.com/apache/hadoop/pull/4573
[GitHub] [hadoop] GauthamBanasandra merged pull request #4571: HDFS-16464. Create only libhdfspp static libraries for Windows
GauthamBanasandra merged PR #4571: URL: https://github.com/apache/hadoop/pull/4571
[GitHub] [hadoop] hadoop-yetus commented on pull request #4591: update cloud 123
hadoop-yetus commented on PR #4591: URL: https://github.com/apache/hadoop/pull/4591#issuecomment-1189242595

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|:---------:|:-------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 41s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| | _ trunk Compile Tests _ | | | |
| +1 :green_heart: | shadedclient | 33m 35s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | shadedclient | 22m 54s | | patch has no errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| +1 :green_heart: | asflicense | 0m 53s | | The patch does not generate ASF License warnings. |
| | | 60m 5s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4591/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4591 |
| Optional Tests | dupname asflicense codespell detsecrets |
| uname | Linux 68714ab19c9d 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / bea529f7c8ed6852ba107f4047d15f55e2579eda |
| Max. process+thread count | 722 (vs. ulimit of 5500) |
| modules | C: . U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4591/1/console |
| versions | git=2.25.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Work logged] (HADOOP-18330) S3AFileSystem removes Path when calling createS3Client
[ https://issues.apache.org/jira/browse/HADOOP-18330?focusedWorklogId=792756=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792756 ] ASF GitHub Bot logged work on HADOOP-18330: --- Author: ASF GitHub Bot Created on: 19/Jul/22 15:22 Start Date: 19/Jul/22 15:22 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on PR #4572: URL: https://github.com/apache/hadoop/pull/4572#issuecomment-1189191525
[GitHub] [hadoop] hadoop-yetus commented on pull request #4572: HADOOP-18330-S3AFileSystem removes Path when calling createS3Client
hadoop-yetus commented on PR #4572: URL: https://github.com/apache/hadoop/pull/4572#issuecomment-1189191525 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 42s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 38m 31s | | trunk passed | | +1 :green_heart: | compile | 1m 2s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 0m 54s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 52s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 2s | | trunk passed | | +1 :green_heart: | javadoc | 0m 49s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 0m 51s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 33s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 2s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 39s | | the patch passed | | +1 :green_heart: | compile | 0m 42s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 0m 42s | | the patch passed | | +1 :green_heart: | compile | 0m 37s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 0m 37s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 27s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 41s | | the patch passed | | +1 :green_heart: | javadoc | 0m 25s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 0m 32s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 13s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 27s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 40s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 51s | | The patch does not generate ASF License warnings. 
| | | | 98m 0s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4572/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4572 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux df2b3c78addb 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 19ee4f51782c0cd33de1ec1715613ecd5744ea25 | | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4572/2/testReport/ | | Max. process+thread count | 732 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4572/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #4531: HDFS-13274. RBF: Extend RouterRpcClient to use multiple sockets
hadoop-yetus commented on PR #4531: URL: https://github.com/apache/hadoop/pull/4531#issuecomment-1189183336 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 58s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 37m 58s | | trunk passed | | +1 :green_heart: | compile | 1m 2s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 0m 57s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 0m 51s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 3s | | trunk passed | | +1 :green_heart: | javadoc | 1m 7s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 1m 15s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 47s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 25s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 42s | | the patch passed | | +1 :green_heart: | compile | 0m 45s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 0m 45s | | the patch passed | | +1 :green_heart: | compile | 0m 39s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 0m 39s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 27s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 43s | | the patch passed | | +1 :green_heart: | javadoc | 0m 41s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 0m 58s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 1m 27s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 35s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 35m 36s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 53s | | The patch does not generate ASF License warnings. 
| | | | 133m 56s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4531/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4531 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint | | uname | Linux bccc8cf47db9 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 9deba067bfd69972199bd3bbc70ed5bf6d95b220 | | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4531/4/testReport/ | | Max. process+thread count | 2211 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4531/4/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #4587: YARN-11200 Backport numa to branch-2.10
hadoop-yetus commented on PR #4587: URL: https://github.com/apache/hadoop/pull/4587#issuecomment-1189169863 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 37s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ branch-2.10 Compile Tests _ | | +0 :ok: | mvndep | 3m 53s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 13m 20s | | branch-2.10 passed | | +1 :green_heart: | compile | 7m 38s | | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | compile | 6m 40s | | branch-2.10 passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 | | +1 :green_heart: | checkstyle | 1m 40s | | branch-2.10 passed | | +1 :green_heart: | mvnsite | 3m 41s | | branch-2.10 passed | | +1 :green_heart: | javadoc | 3m 37s | | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javadoc | 3m 15s | | branch-2.10 passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 | | +1 :green_heart: | spotbugs | 6m 8s | | branch-2.10 passed | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 53s | | the patch passed | | +1 :green_heart: | compile | 6m 44s | | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javac | 6m 44s | | the patch passed | | +1 :green_heart: | compile | 6m 34s | | the patch passed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 | | +1 :green_heart: | javac | 6m 34s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 29s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4587/3/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt) | hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 186 unchanged - 0 fixed = 187 total (was 186) | | +1 :green_heart: | mvnsite | 3m 17s | | the patch passed | | +1 :green_heart: | javadoc | 3m 11s | | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javadoc | 2m 52s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 | | +1 :green_heart: | spotbugs | 5m 52s | | the patch passed | _ Other Tests _ | | +1 :green_heart: | unit | 1m 11s | | hadoop-yarn-api in the patch passed. | | +1 :green_heart: | unit | 3m 49s | | hadoop-yarn-common in the patch passed. | | +1 :green_heart: | unit | 16m 3s | | hadoop-yarn-server-nodemanager in the patch passed. | | +1 :green_heart: | asflicense | 1m 6s | | The patch does not generate ASF License warnings. 
| | | | 111m 54s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4587/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4587 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint | | uname | Linux e32ed62f7668 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-2.10 / 9da62bb85edc904dbcfbce70915a0971f4505223 | | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 | | Multi-JDK versions | /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_262-b10 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4587/3/testReport/ | | Max. process+thread count | 184 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common
[GitHub] [hadoop] raheeltariq opened a new pull request, #4591: update cloud 123
raheeltariq opened a new pull request, #4591: URL: https://github.com/apache/hadoop/pull/4591

### Description of PR

### How was this patch tested?

### For code changes:

- [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[jira] [Work logged] (HADOOP-12007) GzipCodec native CodecPool leaks memory
[ https://issues.apache.org/jira/browse/HADOOP-12007?focusedWorklogId=792735=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792735 ] ASF GitHub Bot logged work on HADOOP-12007: --- Author: ASF GitHub Bot Created on: 19/Jul/22 14:40 Start Date: 19/Jul/22 14:40 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on PR #4585: URL: https://github.com/apache/hadoop/pull/4585#issuecomment-1189140977 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 45s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 37m 58s | | trunk passed | | +1 :green_heart: | compile | 23m 25s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 20m 50s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 1m 46s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 14s | | trunk passed | | +1 :green_heart: | javadoc | 1m 51s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 1m 22s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 3m 13s | | trunk passed | | +1 :green_heart: | shadedclient | 23m 26s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 7s | | the patch passed | | +1 :green_heart: | compile | 22m 26s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 22m 26s | | the patch passed | | +1 :green_heart: | compile | 20m 49s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 20m 49s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 38s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4585/2/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 3 new + 12 unchanged - 0 fixed = 15 total (was 12) | | +1 :green_heart: | mvnsite | 2m 5s | | the patch passed | | +1 :green_heart: | javadoc | 1m 39s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 1m 22s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 3m 8s | | the patch passed | | +1 :green_heart: | shadedclient | 23m 21s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 19m 6s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 1m 31s | | The patch does not generate ASF License warnings. 
| | | | 216m 53s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4585/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4585 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 87636a7c2cd2 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 032209a606a1ba7e44072f87fd0d6a3a28660d8c | | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Test Results |
[GitHub] [hadoop] hadoop-yetus commented on pull request #4585: HADOOP-12007. GzipCodec native CodecPool leaks memory
hadoop-yetus commented on PR #4585: URL: https://github.com/apache/hadoop/pull/4585#issuecomment-1189140977 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 45s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 37m 58s | | trunk passed | | +1 :green_heart: | compile | 23m 25s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 20m 50s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 1m 46s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 14s | | trunk passed | | +1 :green_heart: | javadoc | 1m 51s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 1m 22s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 3m 13s | | trunk passed | | +1 :green_heart: | shadedclient | 23m 26s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 7s | | the patch passed | | +1 :green_heart: | compile | 22m 26s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 22m 26s | | the patch passed | | +1 :green_heart: | compile | 20m 49s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 20m 49s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 38s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4585/2/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 3 new + 12 unchanged - 0 fixed = 15 total (was 12) | | +1 :green_heart: | mvnsite | 2m 5s | | the patch passed | | +1 :green_heart: | javadoc | 1m 39s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 1m 22s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 3m 8s | | the patch passed | | +1 :green_heart: | shadedclient | 23m 21s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 19m 6s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 1m 31s | | The patch does not generate ASF License warnings. 
| | | | 216m 53s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4585/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4585 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 87636a7c2cd2 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 032209a606a1ba7e44072f87fd0d6a3a28660d8c | | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4585/2/testReport/ | | Max. process+thread count | 1300 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4585/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This
[GitHub] [hadoop] ZanderXu commented on a diff in pull request #4531: HDFS-13274. RBF: Extend RouterRpcClient to use multiple sockets
ZanderXu commented on code in PR #4531: URL: https://github.com/apache/hadoop/pull/4531#discussion_r924473977

## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/FederationConnectionId.java:

@@ -0,0 +1,61 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import org.apache.commons.lang3.builder.EqualsBuilder;
+import org.apache.commons.lang3.builder.HashCodeBuilder;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.retry.RetryPolicy;
+import org.apache.hadoop.ipc.Client;
+import org.apache.hadoop.security.UserGroupInformation;
+
+import java.net.InetSocketAddress;
+
+public class FederationConnectionId extends Client.ConnectionId {
+  private static final int PRIME = 16777619;
+  private final int index;
+
+  public FederationConnectionId(InetSocketAddress address, Class<?> protocol,
+      UserGroupInformation ticket, int rpcTimeout,
+      RetryPolicy connectionRetryPolicy, Configuration conf, int index) {
+    super(address, protocol, ticket, rpcTimeout, connectionRetryPolicy, conf);
+    this.index = index;
+  }
+
+  @Override
+  public int hashCode() {
+    return new HashCodeBuilder()
+        .append(PRIME * super.hashCode())

Review Comment: Thanks, I have deleted the PRIME; please help me review it.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #4587: YARN-11200 Backport numa to branch-2.10
hadoop-yetus commented on PR #4587: URL: https://github.com/apache/hadoop/pull/4587#issuecomment-1189023126 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 8m 34s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ branch-2.10 Compile Tests _ | | +0 :ok: | mvndep | 3m 26s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 13m 15s | | branch-2.10 passed | | +1 :green_heart: | compile | 7m 46s | | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | compile | 6m 36s | | branch-2.10 passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 | | +1 :green_heart: | checkstyle | 1m 40s | | branch-2.10 passed | | +1 :green_heart: | mvnsite | 3m 43s | | branch-2.10 passed | | +1 :green_heart: | javadoc | 3m 36s | | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javadoc | 3m 16s | | branch-2.10 passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 | | +1 :green_heart: | spotbugs | 6m 4s | | branch-2.10 passed | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 52s | | the patch passed | | -1 :x: | compile | 2m 6s | 
[/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkAzulSystems,Inc.-1.7.0_262-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4587/2/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkAzulSystems,Inc.-1.7.0_262-b10.txt) | hadoop-yarn in the patch failed with JDK Azul Systems, Inc.-1.7.0_262-b10. | | -1 :x: | javac | 2m 6s | [/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkAzulSystems,Inc.-1.7.0_262-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4587/2/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkAzulSystems,Inc.-1.7.0_262-b10.txt) | hadoop-yarn in the patch failed with JDK Azul Systems, Inc.-1.7.0_262-b10. | | +1 :green_heart: | compile | 6m 4s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 | | +1 :green_heart: | javac | 6m 4s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 29s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4587/2/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt) | hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 186 unchanged - 0 fixed = 187 total (was 186) | | +1 :green_heart: | mvnsite | 3m 17s | | the patch passed | | +1 :green_heart: | javadoc | 3m 8s | | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javadoc | 2m 52s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 | | +1 :green_heart: | spotbugs | 5m 54s | | the patch passed | _ Other Tests _ | | +1 :green_heart: | unit | 1m 10s | | hadoop-yarn-api in the patch passed. | | +1 :green_heart: | unit | 3m 49s | | hadoop-yarn-common in the patch passed. | | +1 :green_heart: | unit | 16m 4s | | hadoop-yarn-server-nodemanager in the patch passed. | | +1 :green_heart: | asflicense | 1m 7s | | The patch does not generate ASF License warnings. 
| | | | 114m 38s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4587/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4587 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint | | uname | Linux 24608df9e18e 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-2.10 / 72727247687763b64e38b851ab8140c2e12bd603 | | Default Java | Private
[GitHub] [hadoop] hadoop-yetus commented on pull request #4576: HDFS-16667. Use malloc for buffer allocation in uriparser2
hadoop-yetus commented on PR #4576: URL: https://github.com/apache/hadoop/pull/4576#issuecomment-1189018718 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 29m 12s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 27m 29s | | trunk passed | | +1 :green_heart: | compile | 4m 54s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 50s | | trunk passed | | +1 :green_heart: | shadedclient | 58m 16s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 26s | | the patch passed | | +1 :green_heart: | compile | 4m 12s | | the patch passed | | +1 :green_heart: | cc | 4m 12s | | the patch passed | | +1 :green_heart: | golang | 4m 12s | | the patch passed | | +1 :green_heart: | javac | 4m 12s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 27s | | the patch passed | | +1 :green_heart: | shadedclient | 23m 42s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 107m 12s | | hadoop-hdfs-native-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 49s | | The patch does not generate ASF License warnings. 
| | | | 227m 4s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4576/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4576 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell detsecrets golang | | uname | Linux a10d05b9b421 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / df2323ca40fb4579c6be56bee9b8a780c20243d5 | | Default Java | Red Hat, Inc.-1.8.0_312-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4576/3/testReport/ | | Max. process+thread count | 623 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4576/3/console | | versions | git=2.27.0 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] Hexiaoqiao commented on a diff in pull request #4531: HDFS-13274. RBF: Extend RouterRpcClient to use multiple sockets
Hexiaoqiao commented on code in PR #4531: URL: https://github.com/apache/hadoop/pull/4531#discussion_r924425480

## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/FederationConnectionId.java:

@@ -0,0 +1,61 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import org.apache.commons.lang3.builder.EqualsBuilder;
+import org.apache.commons.lang3.builder.HashCodeBuilder;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.retry.RetryPolicy;
+import org.apache.hadoop.ipc.Client;
+import org.apache.hadoop.security.UserGroupInformation;
+
+import java.net.InetSocketAddress;
+
+public class FederationConnectionId extends Client.ConnectionId {
+  private static final int PRIME = 16777619;
+  private final int index;
+
+  public FederationConnectionId(InetSocketAddress address, Class<?> protocol,
+      UserGroupInformation ticket, int rpcTimeout,
+      RetryPolicy connectionRetryPolicy, Configuration conf, int index) {
+    super(address, protocol, ticket, rpcTimeout, connectionRetryPolicy, conf);
+    this.index = index;
+  }
+
+  @Override
+  public int hashCode() {
+    return new HashCodeBuilder()
+        .append(PRIME * super.hashCode())

Review Comment: I am not sure it is necessary to multiply by PRIME for hashCode, because the super hashCode has already done that and the collision probability is very low here.
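The point under review (whether an extra prime multiplier is needed when combining super.hashCode() with a subclass field) can be illustrated with a self-contained sketch. The ConnectionId class below is a hypothetical stand-in, not the real org.apache.hadoop.ipc.Client.ConnectionId; it uses java.util.Objects instead of commons-lang3 so the example has no external dependencies:

```java
import java.util.Objects;

// Hypothetical stand-in for Client.ConnectionId, only to keep the
// sketch self-contained.
class ConnectionId {
    private final String address;

    ConnectionId(String address) { this.address = address; }

    @Override public int hashCode() { return Objects.hash(address); }

    @Override public boolean equals(Object o) {
        return o instanceof ConnectionId
            && Objects.equals(address, ((ConnectionId) o).address);
    }
}

// Pattern suggested in the review: fold the extra `index` field into the
// superclass hash without a second prime multiplier. Objects.hash (like
// HashCodeBuilder) already mixes its arguments with a prime internally,
// so multiplying super.hashCode() by another prime adds nothing.
class IndexedConnectionId extends ConnectionId {
    private final int index;

    IndexedConnectionId(String address, int index) {
        super(address);
        this.index = index;
    }

    @Override public int hashCode() {
        return Objects.hash(super.hashCode(), index);
    }

    @Override public boolean equals(Object o) {
        return o instanceof IndexedConnectionId
            && super.equals(o)
            && index == ((IndexedConnectionId) o).index;
    }
}
```

Two ids that differ only in `index` still hash differently, which is all the pool keyed by ConnectionId needs.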
[GitHub] [hadoop] yuanboliu commented on pull request #4558: HDFS-16657 Changing pool-level lock to volume-level lock for invalida…
yuanboliu commented on PR #4558: URL: https://github.com/apache/hadoop/pull/4558#issuecomment-1188963756 @Hexiaoqiao 1. The default max deletion rate is 2 blocks per minute with a 3s heartbeat, so in practice memory wouldn't be the problem. 2. Re "hold write lock too long": this may be a potential issue. Anyway, I will try to test the performance and give feedback.
[jira] [Assigned] (HADOOP-18344) AWS SDK update to 1.12.262 to address jackson CVE-2018-7489
[ https://issues.apache.org/jira/browse/HADOOP-18344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Carl Jones reassigned HADOOP-18344: -- Assignee: Ahmar Suhail > AWS SDK update to 1.12.262 to address jackson CVE-2018-7489 > > > Key: HADOOP-18344 > URL: https://issues.apache.org/jira/browse/HADOOP-18344 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0, 3.3.4 >Reporter: Steve Loughran >Assignee: Ahmar Suhail >Priority: Major > > yet another jackson CVE in aws sdk > https://github.com/apache/hadoop/pull/4491/commits/5496816b472473eb7a9c174b7d3e69b6eee1e271 > maybe we need to have a list of all shaded jackson's we get on the CP and > have a process of upgrading them all at the same time -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [hadoop] Neilxzn opened a new pull request, #4590: add swapBlocklistForeditLog
Neilxzn opened a new pull request, #4590: URL: https://github.com/apache/hadoop/pull/4590 ### Description of PR https://issues.apache.org/jira/browse/HDFS-15006 I am interested in this JIRA; we want this solution in our cluster, so I am trying to complete it. ### How was this patch tested? ### For code changes: Add OP_SWAP_BLOCK_LIST as an operation code in FSEditLogOpCodes.
[jira] [Work logged] (HADOOP-18348) Echo java process's parent pid to the pid file intermediate state
[ https://issues.apache.org/jira/browse/HADOOP-18348?focusedWorklogId=792550=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792550 ] ASF GitHub Bot logged work on HADOOP-18348: --- Author: ASF GitHub Bot Created on: 19/Jul/22 09:51 Start Date: 19/Jul/22 09:51 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on PR #4589: URL: https://github.com/apache/hadoop/pull/4589#issuecomment-1188845501 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 48s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 40m 51s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 37s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 23s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 5s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 1m 23s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 3s | | No new issues. | | +1 :green_heart: | shadedclient | 22m 12s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 47s | | hadoop-common in the patch passed. 
| | +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. | | | | 95m 1s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4589/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4589 | | Optional Tests | dupname asflicense mvnsite unit codespell detsecrets shellcheck shelldocs | | uname | Linux 89d937a0ab5d 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 6ae157228cb7719583747c9247ce4770d6aa0fd5 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4589/1/testReport/ | | Max. process+thread count | 633 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4589/1/console | | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. Issue Time Tracking --- Worklog Id: (was: 792550) Time Spent: 20m (was: 10m) > Echo java process's parent pid to the pid file intermediate state > - > > Key: HADOOP-18348 > URL: https://issues.apache.org/jira/browse/HADOOP-18348 > Project: Hadoop Common > Issue Type: Bug > Components: common >Reporter: jiangrui >Priority: Major > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > In hadoop-function.sh file,there is hadoop_start_daemon and > hadoop_start_daemon_wrapper functions. > hadoop_start_daemon_wrapper invoke hadoop_start_daemon and put it to > background. 
> > In the hadoop_start_daemon function, "echo $$ > pidfile" causes this scenario, > because hadoop_start_daemon runs in a subshell (it is backgrounded with an > ampersand) and $$ expands to the process ID of the original shell, not of the > subshell. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
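The subshell behavior described above can be reproduced with a short script (an illustrative sketch, not the actual hadoop-functions.sh code): inside a backgrounded subshell, $$ still expands to the parent shell's PID, while the bash-specific BASHPID variable expands to the subshell's own PID, which is why the proposed fix switches the pidfile write to BASHPID.

```shell
#!/usr/bin/env bash
# $$ is the pid of the script's main shell; BASHPID tracks the current
# process, so the two diverge inside a subshell.

pidfile=$(mktemp)

# Background subshell, analogous to how hadoop_start_daemon_wrapper
# backgrounds hadoop_start_daemon.
( echo "dollar_dollar=$$ bashpid=$BASHPID" > "$pidfile" ) &
wait

# dollar_dollar matches the parent shell; bashpid is the subshell's pid.
cat "$pidfile"
rm -f "$pidfile"
```

Running this shows dollar_dollar equal to the outer script's PID while bashpid differs, i.e. a pidfile written with $$ records the wrapper instead of the daemon process.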
[jira] [Work logged] (HADOOP-17461) Add thread-level IOStatistics Context
[ https://issues.apache.org/jira/browse/HADOOP-17461?focusedWorklogId=792549=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792549 ] ASF GitHub Bot logged work on HADOOP-17461: --- Author: ASF GitHub Bot Created on: 19/Jul/22 09:51 Start Date: 19/Jul/22 09:51 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on PR #4352: URL: https://github.com/apache/hadoop/pull/4352#issuecomment-1188844927 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 41s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 5 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 44m 47s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 27m 25s | | trunk passed | | +1 :green_heart: | compile | 25m 19s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 22m 3s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 4m 15s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 24s | | trunk passed | | +1 :green_heart: | javadoc | 2m 38s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 2m 23s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 5m 11s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 41s | | branch has no errors when building and testing our client artifacts. 
| | -0 :warning: | patch | 25m 12s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 33s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 54s | | the patch passed | | +1 :green_heart: | compile | 24m 52s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 24m 52s | | the patch passed | | +1 :green_heart: | compile | 22m 57s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 22m 57s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 4m 2s | | the patch passed | | +1 :green_heart: | mvnsite | 3m 35s | | the patch passed | | +1 :green_heart: | javadoc | 2m 44s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 2m 24s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 5m 25s | | the patch passed | | +1 :green_heart: | shadedclient | 23m 19s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 19m 30s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 3m 16s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 1m 32s | | The patch does not generate ASF License warnings. 
| | | | 283m 38s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4352/10/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4352 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 2607ccafc906 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / f54ad18a9e2230191dc4c0c504d332772bceb23e | | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
[GitHub] [hadoop] hadoop-yetus commented on pull request #4589: HADOOP-18348. Change hadoop_start_daemon function $ to BASHPID
hadoop-yetus commented on PR #4589: URL: https://github.com/apache/hadoop/pull/4589#issuecomment-1188845501 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 48s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 40m 51s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 37s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 23s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 5s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 1m 23s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 3s | | No new issues. | | +1 :green_heart: | shadedclient | 22m 12s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 47s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. 
| | | | 95m 1s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4589/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4589 | | Optional Tests | dupname asflicense mvnsite unit codespell detsecrets shellcheck shelldocs | | uname | Linux 89d937a0ab5d 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 6ae157228cb7719583747c9247ce4770d6aa0fd5 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4589/1/testReport/ | | Max. process+thread count | 633 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4589/1/console | | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #4352: HADOOP-17461. Thread-level IOStatistics in S3A
hadoop-yetus commented on PR #4352: URL: https://github.com/apache/hadoop/pull/4352#issuecomment-1188844927 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 41s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 5 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 44m 47s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 27m 25s | | trunk passed | | +1 :green_heart: | compile | 25m 19s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 22m 3s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 4m 15s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 24s | | trunk passed | | +1 :green_heart: | javadoc | 2m 38s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 2m 23s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 5m 11s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 41s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 25m 12s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 33s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 54s | | the patch passed | | +1 :green_heart: | compile | 24m 52s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javac | 24m 52s | | the patch passed | | +1 :green_heart: | compile | 22m 57s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 22m 57s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 4m 2s | | the patch passed | | +1 :green_heart: | mvnsite | 3m 35s | | the patch passed | | +1 :green_heart: | javadoc | 2m 44s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 2m 24s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 5m 25s | | the patch passed | | +1 :green_heart: | shadedclient | 23m 19s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 19m 30s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 3m 16s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 1m 32s | | The patch does not generate ASF License warnings. 
| | | | 283m 38s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4352/10/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4352 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 2607ccafc906 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / f54ad18a9e2230191dc4c0c504d332772bceb23e | | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4352/10/testReport/ | | Max. process+thread count | 1251 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4352/10/console | |
[jira] [Work logged] (HADOOP-13386) Upgrade Avro to 1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-13386?focusedWorklogId=792547&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792547 ] ASF GitHub Bot logged work on HADOOP-13386: --- Author: ASF GitHub Bot Created on: 19/Jul/22 09:41 Start Date: 19/Jul/22 09:41 Worklog Time Spent: 10m Work Description: steveloughran closed pull request #4579: HADOOP-13386. Upgrade Avro to 1.9.2 (#3990) URL: https://github.com/apache/hadoop/pull/4579 Issue Time Tracking --- Worklog Id: (was: 792547) Time Spent: 7h (was: 6h 50m) > Upgrade Avro to 1.9.2 > - > > Key: HADOOP-13386 > URL: https://issues.apache.org/jira/browse/HADOOP-13386 > Project: Hadoop Common > Issue Type: Sub-task > Components: build >Reporter: Ben McCann >Assignee: PJ Fanning >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 7h > Remaining Estimate: 0h > > Avro 1.8.x makes generated classes serializable which makes them much easier > to use with Spark. It would be great to upgrade Avro to 1.8.x > Fix CVE-2021-43045 -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-13386) Upgrade Avro to 1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-13386?focusedWorklogId=792546&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792546 ] ASF GitHub Bot logged work on HADOOP-13386: --- Author: ASF GitHub Bot Created on: 19/Jul/22 09:41 Start Date: 19/Jul/22 09:41 Worklog Time Spent: 10m Work Description: steveloughran commented on PR #4579: URL: https://github.com/apache/hadoop/pull/4579#issuecomment-1188834271 closing as too traumatic Issue Time Tracking --- Worklog Id: (was: 792546) Time Spent: 6h 50m (was: 6h 40m) > Upgrade Avro to 1.9.2 > - > > Key: HADOOP-13386 > URL: https://issues.apache.org/jira/browse/HADOOP-13386 > Project: Hadoop Common > Issue Type: Sub-task > Components: build >Reporter: Ben McCann >Assignee: PJ Fanning >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 6h 50m > Remaining Estimate: 0h > > Avro 1.8.x makes generated classes serializable which makes them much easier > to use with Spark. It would be great to upgrade Avro to 1.8.x > Fix CVE-2021-43045 -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran closed pull request #4579: HADOOP-13386. Upgrade Avro to 1.9.2 (#3990)
steveloughran closed pull request #4579: HADOOP-13386. Upgrade Avro to 1.9.2 (#3990) URL: https://github.com/apache/hadoop/pull/4579 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on pull request #4579: HADOOP-13386. Upgrade Avro to 1.9.2 (#3990)
steveloughran commented on PR #4579: URL: https://github.com/apache/hadoop/pull/4579#issuecomment-1188834271 closing as too traumatic -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18340) deleteOnExit does not work with S3AFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-18340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17568444#comment-17568444 ] Steve Loughran commented on HADOOP-18340: - bq. processDeleteOnExit before isClosed set to true in S3A what if two threads call close()? 1. you would need another atomic bool, shutdown in progress, and make further calls to close() return if either is set. 2. set the isClosed flag after deletions but before shutting down thread and http pools > deleteOnExit does not work with S3AFileSystem > - > > Key: HADOOP-18340 > URL: https://issues.apache.org/jira/browse/HADOOP-18340 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.3.3 >Reporter: Huaxiang Sun >Priority: Minor > > When deleteOnExit is set on some paths, they are not removed when the file system > object is closed. The following exception is logged when printing out the > exception in info log. > {code:java} > 2022-07-15 19:29:12,552 [main] INFO fs.FileSystem > (FileSystem.java:processDeleteOnExit(1810)) - Ignoring failure to > deleteOnExit for path /file, exception {} > java.io.IOException: s3a://mock-bucket: FileSystem is closed! 
> at > org.apache.hadoop.fs.s3a.S3AFileSystem.checkNotClosed(S3AFileSystem.java:3887) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2333) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2355) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.exists(S3AFileSystem.java:4402) > at > org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1805) > at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2669) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.close(S3AFileSystem.java:3830) > at > org.apache.hadoop.fs.s3a.TestS3AGetFileStatus.testFile(TestS3AGetFileStatus.java:87) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at > org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > at > 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at org.junit.runners.ParentRunner.run(ParentRunner.java:413) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > {code} -- This message was sent by Atlassian Jira
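The close() ordering suggested in Steve Loughran's comment (a second atomic "closing" guard, with isClosed only set after deleteOnExit processing) can be sketched as follows. This is a hypothetical illustration with invented class and member names, not the actual S3AFileSystem code:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of the suggested close() sequencing: a separate
// "closing" guard makes a second or concurrent close() a no-op, and
// isClosed is only set after the deleteOnExit paths have been processed,
// so those deletions still see an open filesystem.
class SketchFileSystem {
  private final AtomicBoolean closing = new AtomicBoolean(false);
  private final AtomicBoolean isClosed = new AtomicBoolean(false);
  int deletionsRun = 0; // counter standing in for real deletions

  void checkNotClosed() {
    if (isClosed.get()) {
      throw new IllegalStateException("FileSystem is closed!");
    }
  }

  void processDeleteOnExit() {
    checkNotClosed(); // still open at this point, so deletes are not rejected
    deletionsRun++;   // stand-in for deleting the registered paths
  }

  public void close() {
    // first caller wins; later or concurrent calls return immediately
    if (!closing.compareAndSet(false, true)) {
      return;
    }
    processDeleteOnExit();
    isClosed.set(true); // mark closed only after deletions...
    // ...then shut down thread pools and http connections
  }

  boolean closed() {
    return isClosed.get();
  }
}
```

With this ordering, calling close() twice runs the deleteOnExit pass exactly once, and the deletions never hit the "FileSystem is closed!" check that the stack trace above shows.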
[jira] [Work logged] (HADOOP-18330) S3AFileSystem removes Path when calling createS3Client
[ https://issues.apache.org/jira/browse/HADOOP-18330?focusedWorklogId=792542&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792542 ] ASF GitHub Bot logged work on HADOOP-18330: --- Author: ASF GitHub Bot Created on: 19/Jul/22 09:32 Start Date: 19/Jul/22 09:32 Worklog Time Spent: 10m Work Description: steveloughran commented on PR #4572: URL: https://github.com/apache/hadoop/pull/4572#issuecomment-1188824862 get yetus to stop failing the build, then run the integration tests against an s3 bucket, then i will review Issue Time Tracking --- Worklog Id: (was: 792542) Time Spent: 2h 50m (was: 2h 40m) > S3AFileSystem removes Path when calling createS3Client > -- > > Key: HADOOP-18330 > URL: https://issues.apache.org/jira/browse/HADOOP-18330 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.3.0, 3.3.1, 3.3.2, 3.3.3 >Reporter: Ashutosh Pant >Assignee: Ashutosh Pant >Priority: Minor > Labels: pull-request-available > Time Spent: 2h 50m > Remaining Estimate: 0h > > when using hadoop and spark to read/write data from an s3 bucket like -> > s3a://bucket/path and using a custom Credentials Provider, the path is > removed from the s3a URI and the credentials provider fails because the full > path is gone. > In Spark 3.2, > It was invoked as -> s3 = ReflectionUtils.newInstance(s3ClientFactoryClass, > conf) > .createS3Client(name, bucket, credentials); > But In spark 3.3.3 > It is invoked as s3 = ReflectionUtils.newInstance(s3ClientFactoryClass, > conf).createS3Client(getUri(), parameters); > the getUri() removes the path from the s3a URI -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on pull request #4572: HADOOP-18330-S3AFileSystem removes Path when calling createS3Client
steveloughran commented on PR #4572: URL: https://github.com/apache/hadoop/pull/4572#issuecomment-1188824862 get yetus to stop failing the build, then run the integration tests against an s3 bucket, then i will review -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
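The path loss described in HADOOP-18330 can be demonstrated with plain java.net.URI: reducing an s3a URI to scheme plus authority, which is roughly what a filesystem's canonical getUri() returns, discards the path component that a path-sensitive credentials provider would need. The helper below is illustrative only and does not use the real S3A client factory classes:

```java
import java.net.URI;
import java.net.URISyntaxException;

// Illustration of the reported path loss: a canonical filesystem URI keeps
// only scheme://authority, so any path component in the original name is
// invisible to code that receives the canonical form.
class UriPathLoss {
  // Hypothetical helper mimicking canonicalization to scheme + bucket only.
  static URI canonicalize(URI name) throws URISyntaxException {
    return new URI(name.getScheme(), name.getAuthority(), null, null, null);
  }

  public static void main(String[] args) throws URISyntaxException {
    URI name = new URI("s3a://bucket/path/to/data");
    URI canonical = canonicalize(name);
    System.out.println(name);      // s3a://bucket/path/to/data
    System.out.println(canonical); // s3a://bucket
  }
}
```

A credentials provider keyed on the full `s3a://bucket/path` would thus fail once it is handed only `s3a://bucket`, which matches the behavior change between the two createS3Client invocations quoted in the issue.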
[GitHub] [hadoop] pranavsaxena-microsoft closed pull request #4588: Advanced PR for https://github.com/apache/hadoop/pull/3440
pranavsaxena-microsoft closed pull request #4588: Advanced PR for https://github.com/apache/hadoop/pull/3440 URL: https://github.com/apache/hadoop/pull/4588 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ayushtkn commented on pull request #4588: Advanced PR for https://github.com/apache/hadoop/pull/3440
ayushtkn commented on PR #4588: URL: https://github.com/apache/hadoop/pull/4588#issuecomment-1188813299 the title of the PR is weird; it isn't clear what the change is doing. Please set it correctly according to: https://cwiki.apache.org/confluence/display/hadoop/how+to+contribute#HowToContribute-Provideapatch -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-18343) upgrade to jetty 9.4.48 due to CVE
[ https://issues.apache.org/jira/browse/HADOOP-18343?focusedWorklogId=792534&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792534 ] ASF GitHub Bot logged work on HADOOP-18343: --- Author: ASF GitHub Bot Created on: 19/Jul/22 09:12 Start Date: 19/Jul/22 09:12 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on PR #4582: URL: https://github.com/apache/hadoop/pull/4582#issuecomment-1188802285 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 47s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +0 :ok: | shelldocs | 0m 1s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 25s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 28m 21s | | trunk passed | | +1 :green_heart: | compile | 24m 59s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 21m 48s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | mvnsite | 20m 13s | | trunk passed | | +1 :green_heart: | javadoc | 8m 31s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 7m 27s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | shadedclient | 38m 34s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 56s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 25m 2s | | the patch passed | | +1 :green_heart: | compile | 24m 34s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | -1 :x: | javac | 24m 34s | [/results-compile-javac-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4582/1/artifact/out/results-compile-javac-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 3 new + 2884 unchanged - 0 fixed = 2887 total (was 2884) | | +1 :green_heart: | compile | 21m 48s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | -1 :x: | javac | 21m 48s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4582/1/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt) | 
root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 3 new + 2680 unchanged - 0 fixed = 2683 total (was 2680) | | +1 :green_heart: | blanks | 0m 1s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 19m 39s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | +1 :green_heart: | javadoc | 8m 26s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 7m 29s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | shadedclient | 40m 25s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1043m 34s | | root in the patch passed. | | +1 :green_heart: | asflicense | 2m 20s | | The patch does not generate ASF License warnings. | | | | 1330m 24s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4582/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4582 | | Optional
[GitHub] [hadoop] hadoop-yetus commented on pull request #4582: HADOOP-18343: upgrade jetty
hadoop-yetus commented on PR #4582: URL: https://github.com/apache/hadoop/pull/4582#issuecomment-1188802285 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 47s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +0 :ok: | shelldocs | 0m 1s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 25s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 28m 21s | | trunk passed | | +1 :green_heart: | compile | 24m 59s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | compile | 21m 48s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | mvnsite | 20m 13s | | trunk passed | | +1 :green_heart: | javadoc | 8m 31s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 7m 27s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | shadedclient | 38m 34s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 56s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 25m 2s | | the patch passed | | +1 :green_heart: | compile | 24m 34s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | -1 :x: | javac | 24m 34s | [/results-compile-javac-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4582/1/artifact/out/results-compile-javac-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 3 new + 2884 unchanged - 0 fixed = 2887 total (was 2884) | | +1 :green_heart: | compile | 21m 48s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | -1 :x: | javac | 21m 48s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4582/1/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt) | root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 3 new + 2680 unchanged - 0 fixed = 2683 total (was 2680) | | +1 :green_heart: | blanks | 0m 1s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 19m 39s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | +1 :green_heart: | javadoc | 8m 26s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 | | +1 :green_heart: | javadoc | 7m 29s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | shadedclient | 40m 25s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1043m 34s | | root in the patch passed. 
| | +1 :green_heart: | asflicense | 2m 20s | | The patch does not generate ASF License warnings. | | | | 1330m 24s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4582/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4582 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs | | uname | Linux 5d1e7b3d8531 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 2a82eb7e220b63216c34324ddd191ae5a0a5ac46 | | Default
[GitHub] [hadoop] hadoop-yetus commented on pull request #4576: HDFS-16667. Use malloc for buffer allocation in uriparser2
hadoop-yetus commented on PR #4576: URL: https://github.com/apache/hadoop/pull/4576#issuecomment-1188797715 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 51s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 85m 25s | | trunk passed | | +1 :green_heart: | compile | 3m 58s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 38s | | trunk passed | | +1 :green_heart: | shadedclient | 112m 10s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 20s | | the patch passed | | +1 :green_heart: | compile | 3m 52s | | the patch passed | | +1 :green_heart: | cc | 3m 52s | | the patch passed | | +1 :green_heart: | golang | 3m 52s | | the patch passed | | +1 :green_heart: | javac | 3m 52s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 22s | | the patch passed | | +1 :green_heart: | shadedclient | 22m 16s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 88m 30s | | hadoop-hdfs-native-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 51s | | The patch does not generate ASF License warnings. 
| | | | 231m 50s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4576/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4576 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell detsecrets golang | | uname | Linux c9777e842920 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / df2323ca40fb4579c6be56bee9b8a780c20243d5 | | Default Java | Red Hat, Inc.-1.8.0_332-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4576/3/testReport/ | | Max. process+thread count | 615 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4576/3/console | | versions | git=2.9.5 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-18301) Upgrade commons-io to 2.11.0
[ https://issues.apache.org/jira/browse/HADOOP-18301?focusedWorklogId=792524&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792524 ] ASF GitHub Bot logged work on HADOOP-18301: --- Author: ASF GitHub Bot Created on: 19/Jul/22 09:05 Start Date: 19/Jul/22 09:05 Worklog Time Spent: 10m Work Description: ashutoshcipher commented on PR #4455: URL: https://github.com/apache/hadoop/pull/4455#issuecomment-1188795186 @aajisaka - Sorry I missed addressing your comment earlier. In my latest commit, I have addressed your comment. Thanks. Issue Time Tracking --- Worklog Id: (was: 792524) Time Spent: 2h (was: 1h 50m) > Upgrade commons-io to 2.11.0 > > > Key: HADOOP-18301 > URL: https://issues.apache.org/jira/browse/HADOOP-18301 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.2.3, 3.3.3 >Reporter: groot >Assignee: groot >Priority: Minor > Labels: pull-request-available > Time Spent: 2h > Remaining Estimate: 0h > > Current version 2.8.0 is almost 2 years old > Upgrading to the new release to keep up with new features and bug fixes. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ashutoshcipher commented on pull request #4455: HADOOP-18301.Upgrade commons-io to 2.11.0
ashutoshcipher commented on PR #4455: URL: https://github.com/apache/hadoop/pull/4455#issuecomment-1188795186 @aajisaka - Sorry I missed addressing your comment earlier. In my latest commit, I have addressed your comment. Thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #4588: Advanced PR for https://github.com/apache/hadoop/pull/3440
hadoop-yetus commented on PR #4588:
URL: https://github.com/apache/hadoop/pull/4588#issuecomment-1188773154

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 54s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 9 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 39m 18s | | trunk passed |
| +1 :green_heart: | compile | 0m 58s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | compile | 0m 54s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 0m 49s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 58s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 55s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javadoc | 0m 46s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 1m 33s | | trunk passed |
| +1 :green_heart: | shadedclient | 21m 58s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 22m 25s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
| | | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 39s | | the patch passed |
| +1 :green_heart: | compile | 0m 41s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javac | 0m 41s | | the patch passed |
| +1 :green_heart: | compile | 0m 37s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 0m 37s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 26s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4588/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 22 new + 5 unchanged - 0 fixed = 27 total (was 5) |
| +1 :green_heart: | mvnsite | 0m 39s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 33s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javadoc | 0m 32s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 1m 12s | | the patch passed |
| +1 :green_heart: | shadedclient | 20m 18s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| +1 :green_heart: | unit | 2m 15s | | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 49s | | The patch does not generate ASF License warnings. |
| | | | 99m 22s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4588/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4588 |
| Optional Tests | dupname asflicense codespell detsecrets compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle markdownlint |
| uname | Linux 6557ddeacb95 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / af5e90b0197ce606263487f00cdf3b843d8cf96b |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4588/2/testReport/ |
| Max. process+thread count | 693 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
[jira] [Work logged] (HADOOP-18348) Echo java process's parent pid to the pid file intermediate state
[ https://issues.apache.org/jira/browse/HADOOP-18348?focusedWorklogId=792496=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-792496 ]

ASF GitHub Bot logged work on HADOOP-18348:
---
Author: ASF GitHub Bot
Created on: 19/Jul/22 08:15
Start Date: 19/Jul/22 08:15
Worklog Time Spent: 10m
Work Description: Pr-Jiang opened a new pull request, #4589:
URL: https://github.com/apache/hadoop/pull/4589

### Description of PR
https://issues.apache.org/jira/browse/HADOOP-18348

### How was this patch tested?

### For code changes:

- [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?

Issue Time Tracking
---
Worklog Id: (was: 792496)
Remaining Estimate: 0h
Time Spent: 10m

> Echo java process's parent pid to the pid file intermediate state
> -
>
> Key: HADOOP-18348
> URL: https://issues.apache.org/jira/browse/HADOOP-18348
> Project: Hadoop Common
> Issue Type: Bug
> Components: common
> Reporter: jiangrui
> Priority: Major
> Time Spent: 10m
> Remaining Estimate: 0h
>
> In the hadoop-functions.sh file there are the hadoop_start_daemon and
> hadoop_start_daemon_wrapper functions. hadoop_start_daemon_wrapper
> invokes hadoop_start_daemon and puts it in the background.
>
> In hadoop_start_daemon, `echo $$ > pidfile` causes this scenario:
> hadoop_start_daemon runs in a subshell because of the ampersand, and $$
> expands to the process ID of the current shell, not the subshell.
[jira] [Updated] (HADOOP-18348) Echo java process's parent pid to the pid file intermediate state
[ https://issues.apache.org/jira/browse/HADOOP-18348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HADOOP-18348:

Labels: pull-request-available (was: )

> Echo java process's parent pid to the pid file intermediate state
> -
>
> Key: HADOOP-18348
> URL: https://issues.apache.org/jira/browse/HADOOP-18348
> Project: Hadoop Common
> Issue Type: Bug
> Components: common
> Reporter: jiangrui
> Priority: Major
> Labels: pull-request-available
> Time Spent: 10m
> Remaining Estimate: 0h
>
> In the hadoop-functions.sh file there are the hadoop_start_daemon and
> hadoop_start_daemon_wrapper functions. hadoop_start_daemon_wrapper
> invokes hadoop_start_daemon and puts it in the background.
>
> In hadoop_start_daemon, `echo $$ > pidfile` causes this scenario:
> hadoop_start_daemon runs in a subshell because of the ampersand, and $$
> expands to the process ID of the current shell, not the subshell.
[GitHub] [hadoop] Pr-Jiang opened a new pull request, #4589: HADOOP-18348. Change hadoop_start_daemon function $$ to BASHPID
Pr-Jiang opened a new pull request, #4589:
URL: https://github.com/apache/hadoop/pull/4589

### Description of PR
https://issues.apache.org/jira/browse/HADOOP-18348

### How was this patch tested?

### For code changes:

- [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
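The fix the PR title describes can be sketched in a few lines of bash. The function bodies below are hypothetical simplifications of the hadoop-functions.sh pattern, not the actual patch: the point is only that writing `BASHPID` instead of `$$` records the backgrounded subshell's own PID in the pid file.

```shell
#!/usr/bin/env bash
# Hypothetical simplification of the hadoop-functions.sh pattern from the
# JIRA description; names mirror the real functions but bodies are reduced.

hadoop_start_daemon() {
  local pidfile=$1
  shift
  # was: echo $$ > "$pidfile"   -- $$ stays the parent shell's PID in a subshell
  echo "$BASHPID" > "$pidfile"  # BASHPID is the PID of this (sub)shell itself
  "$@"
}

hadoop_start_daemon_wrapper() {
  local pidfile=$1
  shift
  hadoop_start_daemon "$pidfile" "$@" &  # '&' forks a subshell, as in Hadoop
  wait                                   # wait only to make the demo deterministic
}

pidfile=$(mktemp)
hadoop_start_daemon_wrapper "$pidfile" true
echo "wrapper shell: $$, recorded daemon pid: $(cat "$pidfile")"
```

With the `$$` version, the two numbers printed at the end would be identical; with `BASHPID` the pid file holds the subshell's PID, which is what a stop script needs to signal.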
[jira] [Updated] (HADOOP-18348) Echo java process's parent pid to the pid file intermediate state
[ https://issues.apache.org/jira/browse/HADOOP-18348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

jiangrui updated HADOOP-18348:
--
Summary: Echo java process's parent pid to the pid file intermediate state (was: echo java process's parent pid to the pid file intermediate state)

> Echo java process's parent pid to the pid file intermediate state
> -
>
> Key: HADOOP-18348
> URL: https://issues.apache.org/jira/browse/HADOOP-18348
> Project: Hadoop Common
> Issue Type: Bug
> Components: common
> Reporter: jiangrui
> Priority: Major
>
> In the hadoop-functions.sh file there are the hadoop_start_daemon and
> hadoop_start_daemon_wrapper functions. hadoop_start_daemon_wrapper
> invokes hadoop_start_daemon and puts it in the background.
>
> In hadoop_start_daemon, `echo $$ > pidfile` causes this scenario:
> hadoop_start_daemon runs in a subshell because of the ampersand, and $$
> expands to the process ID of the current shell, not the subshell.
[jira] [Created] (HADOOP-18348) echo java process's parent pid to the pid file intermediate state
jiangrui created HADOOP-18348:
-
Summary: echo java process's parent pid to the pid file intermediate state
Key: HADOOP-18348
URL: https://issues.apache.org/jira/browse/HADOOP-18348
Project: Hadoop Common
Issue Type: Bug
Components: common
Reporter: jiangrui

In the hadoop-functions.sh file there are the hadoop_start_daemon and hadoop_start_daemon_wrapper functions. hadoop_start_daemon_wrapper invokes hadoop_start_daemon and puts it in the background.

In hadoop_start_daemon, `echo $$ > pidfile` causes this scenario: hadoop_start_daemon runs in a subshell because of the ampersand, and $$ expands to the process ID of the current shell, not the subshell.
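The behavior described above can be reproduced outside Hadoop with a standalone bash sketch (this is not the actual hadoop-functions.sh code): inside a subshell forked by `&`, `$$` still expands to the parent shell's PID, while bash's `BASHPID` gives the subshell's own PID.

```shell
#!/usr/bin/env bash
# Standalone demonstration of the HADOOP-18348 pitfall: a pid file written
# with $$ from a backgrounded subshell records the parent's PID, not the
# subshell's.
pidfile=$(mktemp)

# Backgrounded subshell, mimicking hadoop_start_daemon_wrapper's '&'.
( echo "parent=$$ self=$BASHPID" > "$pidfile" ) &
wait

cat "$pidfile"   # 'parent' matches this script's PID; 'self' is the subshell's.
```

Because `$$` is fixed at shell startup and inherited by subshells, any daemonization helper that backgrounds itself must use `BASHPID` (or let the child report its own PID) to produce a usable pid file.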
[GitHub] [hadoop] aajisaka commented on pull request #4459: YARN-11186: Upgrade frontend toolchains used by YARN application catalog webapp
aajisaka commented on PR #4459:
URL: https://github.com/apache/hadoop/pull/4459#issuecomment-1188702847

The upgrade affects YARN Web UI v2 as well. Now I don't think the Web UI v2 supports Node v16. cc: @iwasakims

Probably you need to overwrite the version in the yarn application catalog webapp module.
[GitHub] [hadoop] hadoop-yetus commented on pull request #4588: Advanced PR for https://github.com/apache/hadoop/pull/3440
hadoop-yetus commented on PR #4588:
URL: https://github.com/apache/hadoop/pull/4588#issuecomment-1188664480

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 1m 37s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 9 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 68m 14s | | trunk passed |
| +1 :green_heart: | compile | 0m 58s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | compile | 0m 54s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 0m 53s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 2s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 59s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javadoc | 0m 50s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 1m 32s | | trunk passed |
| +1 :green_heart: | shadedclient | 20m 52s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 21m 19s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
| | | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 39s | | the patch passed |
| +1 :green_heart: | compile | 0m 41s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javac | 0m 41s | | the patch passed |
| +1 :green_heart: | compile | 0m 37s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 0m 37s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 28s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4588/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 22 new + 5 unchanged - 0 fixed = 27 total (was 5) |
| +1 :green_heart: | mvnsite | 0m 41s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 33s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javadoc | 0m 32s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 1m 9s | | the patch passed |
| +1 :green_heart: | shadedclient | 20m 16s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| +1 :green_heart: | unit | 2m 16s | | hadoop-azure in the patch passed. |
| -1 :x: | asflicense | 0m 50s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4588/1/artifact/out/results-asflicense.txt) | The patch generated 2 ASF License warnings. |
| | | | 128m 9s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4588/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4588 |
| Optional Tests | dupname asflicense codespell detsecrets compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle markdownlint |
| uname | Linux 90fd47f9f54f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 926e7d97f99d01e5439a4ece3520b3f7fb5e6ed7 |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4588/1/testReport/ |
| Max. process+thread count |