[GitHub] [hadoop] ayushtkn edited a comment on pull request #2881: HDFS-15961. standby namenode failed to start when ordered snapshot deletion is enabled while having snapshottable directories
ayushtkn edited a comment on pull request #2881: URL: https://github.com/apache/hadoop/pull/2881#issuecomment-819242460 I think we should hold this off; as I said, the code has issues. Earlier I thought there was some catch, but I don't think so anymore. It is simply misbehaving. Ideally such features should go into a branch first -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] bshashikant commented on pull request #2881: HDFS-15961. standby namenode failed to start when ordered snapshot deletion is enabled while having snapshottable directories
bshashikant commented on pull request #2881: URL: https://github.com/apache/hadoop/pull/2881#issuecomment-819237562 @smengcl , can you please have a look again?
[jira] [Work logged] (HADOOP-17633) Please upgrade json-smart dependency to the latest version
[ https://issues.apache.org/jira/browse/HADOOP-17633?focusedWorklogId=582188&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-582188 ] ASF GitHub Bot logged work on HADOOP-17633: --- Author: ASF GitHub Bot Created on: 14/Apr/21 04:22 Start Date: 14/Apr/21 04:22 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2895: URL: https://github.com/apache/hadoop/pull/2895#issuecomment-819217518
[GitHub] [hadoop] hadoop-yetus commented on pull request #2895: HADOOP-17633. Bump json-smart to 2.4.2 and nimbus-jose-jwt to 9.8 due to CVEs
hadoop-yetus commented on pull request #2895: URL: https://github.com/apache/hadoop/pull/2895#issuecomment-819217518 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 38s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 37s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 28s | | trunk passed | | +1 :green_heart: | compile | 23m 46s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 18m 26s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | mvnsite | 25m 39s | | trunk passed | | +1 :green_heart: | javadoc | 7m 20s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 7m 47s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | shadedclient | 28m 21s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 39s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 20m 3s | | the patch passed | | +1 :green_heart: | compile | 20m 21s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 20m 21s | | the patch passed | | +1 :green_heart: | compile | 18m 14s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 18m 14s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 20m 55s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 8m 25s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 8m 17s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | shadedclient | 31m 49s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 789m 31s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2895/4/artifact/out/patch-unit-root.txt) | root in the patch passed. | | +1 :green_heart: | asflicense | 1m 52s | | The patch does not generate ASF License warnings. 
| | | | 1039m 7s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.tools.dynamometer.TestDynamometerInfra | | | hadoop.tools.fedbalance.TestDistCpProcedure | | | hadoop.tools.fedbalance.procedure.TestBalanceProcedureScheduler | | | hadoop.hdfs.server.balancer.TestBalancer | | | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy | | | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.TestDecommissionWithStriped | | | hadoop.hdfs.tools.TestDFSZKFailoverController | | | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes | | | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2895/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2895 | | Optional Tests | dupname asflicense codespell shellcheck shelldocs compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux 8c4dde093b0c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 424dc72fecacbd836007856eadef7d22db1689b7 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | |
[GitHub] [hadoop] ferhui commented on pull request #2902: HDFS-15940. Fixing and refactoring tests specific to Block recovery.
ferhui commented on pull request #2902: URL: https://github.com/apache/hadoop/pull/2902#issuecomment-819171302 @jojochuang Thanks. The failed tests passed locally. There are some unused imports in TestBlockRecovery.java.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2860: Test Pre-Commits.
hadoop-yetus commented on pull request #2860: URL: https://github.com/apache/hadoop/pull/2860#issuecomment-819153675 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 39s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 6 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 36s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 22m 57s | | trunk passed | | +1 :green_heart: | compile | 24m 1s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 20m 54s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 4m 35s | | trunk passed | | +1 :green_heart: | mvnsite | 4m 38s | | trunk passed | | +1 :green_heart: | javadoc | 3m 47s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 4m 39s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +0 :ok: | spotbugs | 0m 47s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 15m 14s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 31s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 52s | | the patch passed | | +1 :green_heart: | compile | 20m 1s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 20m 1s | | the patch passed | | +1 :green_heart: | compile | 18m 2s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 18m 2s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 52s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2860/10/artifact/out/results-checkstyle-root.txt) | root: The patch generated 3 new + 177 unchanged - 1 fixed = 180 total (was 178) | | +1 :green_heart: | hadolint | 0m 3s | | No new issues. | | +1 :green_heart: | mvnsite | 4m 40s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 4m 3s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 4m 50s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +0 :ok: | spotbugs | 0m 45s | | hadoop-project has no data from spotbugs | | +1 :green_heart: | shadedclient | 15m 18s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 44s | | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 2m 45s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 230m 19s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2860/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. 
| | +1 :green_heart: | unit | 18m 24s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 1m 21s | | The patch does not generate ASF License warnings. | | | | 467m 25s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys | | | hadoop.hdfs.server.namenode.TestDeadDatanode | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2860/10/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2860 | | Optional Tests | dupname asflicense codespell hadolint shellcheck shelldocs compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle xml | | uname | Linux d7b932c911f4 4.15.0-136-generic #140-Ubuntu
[jira] [Commented] (HADOOP-17524) Remove EventCounter
[ https://issues.apache.org/jira/browse/HADOOP-17524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17320616#comment-17320616 ] Akira Ajisaka commented on HADOOP-17524: [~vjasani] Yes, go ahead! > Remove EventCounter > --- > > Key: HADOOP-17524 > URL: https://issues.apache.org/jira/browse/HADOOP-17524 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Assignee: Viraj Jasani >Priority: Major > > EventCounter uses the Log4J 1.x API. We need to remove it in order to drop Log4J 1.x. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
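For context, EventCounter's job was simply to count published log events by severity. The replacement direction can be sketched without any Log4J dependency, here on top of java.util.logging; the class name LevelCountingHandler is illustrative only, not Hadoop's actual replacement:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Hypothetical stand-in for the Log4J 1.x EventCounter appender:
// counts published log events per severity bucket, Log4J-free.
public class LevelCountingHandler extends Handler {
    private final AtomicLong errors = new AtomicLong();
    private final AtomicLong warnings = new AtomicLong();
    private final AtomicLong infos = new AtomicLong();

    @Override
    public void publish(LogRecord record) {
        int level = record.getLevel().intValue();
        if (level >= Level.SEVERE.intValue()) {
            errors.incrementAndGet();
        } else if (level >= Level.WARNING.intValue()) {
            warnings.incrementAndGet();
        } else {
            infos.incrementAndGet();
        }
    }

    @Override public void flush() {}
    @Override public void close() {}

    public long getErrors()   { return errors.get(); }
    public long getWarnings() { return warnings.get(); }
    public long getInfos()    { return infos.get(); }

    public static void main(String[] args) {
        Logger log = Logger.getLogger("demo");
        log.setUseParentHandlers(false);
        LevelCountingHandler counter = new LevelCountingHandler();
        log.addHandler(counter);
        log.severe("disk failure");
        log.warning("replica under-reported");
        log.info("block report received");
        // prints errors=1 warnings=1 infos=1
        System.out.println("errors=" + counter.getErrors()
            + " warnings=" + counter.getWarnings()
            + " infos=" + counter.getInfos());
    }
}
```

Anything publishing through the handler is tallied, so callers can expose the three counters as metrics the same way EventCounter did.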
[jira] [Commented] (HADOOP-16524) Automatic keystore reloading for HttpServer2
[ https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17320580#comment-17320580 ] Michael Stack commented on HADOOP-16524: ACK [~ayushtkn] Thanks for the ping. Looking... > Automatic keystore reloading for HttpServer2 > > > Key: HADOOP-16524 > URL: https://issues.apache.org/jira/browse/HADOOP-16524 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Kihwal Lee >Assignee: Borislav Iordanov >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0 > > Attachments: HADOOP-16524.patch > > Time Spent: 5h 50m > Remaining Estimate: 0h > > Jetty 9 simplified reloading of the keystore. This allows a hadoop daemon's SSL cert to be updated in place without having to restart the service.
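The reload idea the issue describes can be sketched with plain JDK file APIs: poll the keystore's modification time and fire a reload action when it changes (on a Jetty server the action would delegate to the SSL context factory's reload hook). KeystoreWatcher and its Runnable callback are hypothetical names for illustration, not Hadoop's actual classes:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;

// Minimal sketch: remember the keystore's last-seen mtime and trigger
// a caller-supplied reload action whenever the file changes on disk.
public class KeystoreWatcher {
    private final Path keystore;
    private final Runnable reloadAction;
    private FileTime lastSeen;

    public KeystoreWatcher(Path keystore, Runnable reloadAction) throws IOException {
        this.keystore = keystore;
        this.reloadAction = reloadAction;
        this.lastSeen = Files.getLastModifiedTime(keystore);
    }

    /** Returns true if the keystore changed and a reload was triggered. */
    public boolean checkOnce() throws IOException {
        FileTime now = Files.getLastModifiedTime(keystore);
        if (!now.equals(lastSeen)) {
            lastSeen = now;
            reloadAction.run();  // e.g. re-read certs into the SSL context
            return true;
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        Path ks = Files.createTempFile("keystore", ".jks");
        KeystoreWatcher w = new KeystoreWatcher(ks, () -> System.out.println("reloaded"));
        System.out.println("changed=" + w.checkOnce());  // changed=false
        Files.setLastModifiedTime(ks, FileTime.fromMillis(System.currentTimeMillis() + 60000));
        System.out.println("changed=" + w.checkOnce());  // changed=true
    }
}
```

In a daemon, checkOnce() would run on a scheduled thread; the in-place reload is what lets the SSL cert rotate without a service restart.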
[GitHub] [hadoop] fengnanli commented on pull request #2903: HDFS-15423 RBF: WebHDFS create shouldn't choose DN from all sub-clusters
fengnanli commented on pull request #2903: URL: https://github.com/apache/hadoop/pull/2903#issuecomment-819115602 @goiri Mind merging this? The test failures are flaky and I got them working locally.
[jira] [Work logged] (HADOOP-17604) Separate string metric from tag in hadoop metrics2
[ https://issues.apache.org/jira/browse/HADOOP-17604?focusedWorklogId=582106&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-582106 ] ASF GitHub Bot logged work on HADOOP-17604: --- Author: ASF GitHub Bot Created on: 13/Apr/21 23:25 Start Date: 13/Apr/21 23:25 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2904: URL: https://github.com/apache/hadoop/pull/2904#issuecomment-819114365
[GitHub] [hadoop] hadoop-yetus commented on pull request #2904: HADOOP-17604 Fix method metric prefix for string type
hadoop-yetus commented on pull request #2904: URL: https://github.com/apache/hadoop/pull/2904#issuecomment-819114365 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 6s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 35s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 29m 5s | | trunk passed | | +1 :green_heart: | compile | 29m 39s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 24m 7s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 4m 57s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 37s | | trunk passed | | +1 :green_heart: | javadoc | 2m 10s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 56s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 5m 29s | | trunk passed | | +1 :green_heart: | shadedclient | 19m 18s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 26s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 27s | | the patch passed | | +1 :green_heart: | compile | 28m 59s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 28m 59s | | the patch passed | | +1 :green_heart: | compile | 25m 45s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 25m 45s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 4m 30s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2904/1/artifact/out/results-checkstyle-root.txt) | root: The patch generated 4 new + 51 unchanged - 0 fixed = 55 total (was 51) | | +1 :green_heart: | mvnsite | 3m 39s | | the patch passed | | +1 :green_heart: | javadoc | 2m 30s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 3m 4s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 7m 6s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 30s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 17m 23s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2904/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 2m 10s | | hadoop-aws in the patch passed. | | +1 :green_heart: | unit | 2m 1s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. 
| | | | 260m 27s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.metrics2.lib.TestMetricsAnnotations | | | hadoop.ipc.TestRPC | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2904/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2904 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 0a1d0f8f05b1 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 2648fa22bbad916dcf95464e8c57f4a11b6c825a | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2905: HDFS-15912. Allow ProtobufRpcEngine to be extensible
hadoop-yetus commented on pull request #2905: URL: https://github.com/apache/hadoop/pull/2905#issuecomment-819107509 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 36s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 34m 8s | | trunk passed | | +1 :green_heart: | compile | 20m 50s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 18m 6s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 6s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 35s | | trunk passed | | +1 :green_heart: | javadoc | 1m 4s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 36s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 2m 20s | | trunk passed | | +1 :green_heart: | shadedclient | 15m 39s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 51s | | the patch passed | | +1 :green_heart: | compile | 19m 58s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 19m 58s | | the patch passed | | +1 :green_heart: | compile | 17m 58s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 17m 58s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 4s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2905/1/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 18 new + 20 unchanged - 3 fixed = 38 total (was 23) | | +1 :green_heart: | mvnsite | 1m 29s | | the patch passed | | +1 :green_heart: | javadoc | 1m 3s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 38s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 2m 29s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 54s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 16s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 54s | | The patch does not generate ASF License warnings. 
| | | | 178m 0s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2905/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2905 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 77460dfcfc53 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / a1c99c8da2a835e469d9209cfd01911f66fb6180 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2905/1/testReport/ | | Max. process+thread count | 1850 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2905/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This
[GitHub] [hadoop] hadoop-yetus commented on pull request #2903: HDFS-15423 RBF: WebHDFS create shouldn't choose DN from all sub-clusters
hadoop-yetus commented on pull request #2903: URL: https://github.com/apache/hadoop/pull/2903#issuecomment-819090454 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 7s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 38m 1s | | trunk passed | | +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 0m 35s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 23s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 39s | | trunk passed | | +1 :green_heart: | javadoc | 0m 37s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 50s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 16s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 32s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 32s | | the patch passed | | +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 0m 35s | | the patch passed | | +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 0m 30s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 0m 17s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 32s | | the patch passed | | +1 :green_heart: | javadoc | 0m 31s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 44s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 22s | | the patch passed | | +1 :green_heart: | shadedclient | 17m 55s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 24m 44s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2903/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 29s | | The patch does not generate ASF License warnings. | | | | 111m 28s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterFederationRename | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2903/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2903 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 2f12a3488480 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 31065de427a776aed2bacfa5d5b2258c7589b506 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2903/3/testReport/ | | Max. process+thread count | 2192 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2903/3/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2903: HDFS-15423 RBF: WebHDFS create shouldn't choose DN from all sub-clusters
hadoop-yetus commented on pull request #2903: URL: https://github.com/apache/hadoop/pull/2903#issuecomment-819084728 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 56s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 38m 3s | | trunk passed | | +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 26s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 38s | | trunk passed | | +1 :green_heart: | javadoc | 0m 36s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 51s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 18s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 0s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 35s | | the patch passed | | +1 :green_heart: | compile | 0m 33s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 0m 33s | | the patch passed | | +1 :green_heart: | compile | 0m 28s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 0m 28s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 0m 15s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 32s | | the patch passed | | +1 :green_heart: | javadoc | 0m 31s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 49s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 22s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 59s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 25m 31s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2903/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. | | | | 110m 38s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterRpc | | | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2903/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2903 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 14d2d2c03628 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 2a29edf31df4f51b7de8dffc03eacada81b683e7 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test 
Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2903/2/testReport/ | | Max. process+thread count | 2203 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2903/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hchaverri opened a new pull request #2905: HDFS-15912. Allow ProtobufRpcEngine to be extensible
hchaverri opened a new pull request #2905: URL: https://github.com/apache/hadoop/pull/2905 ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-15566) Support OpenTelemetry
[ https://issues.apache.org/jira/browse/HADOOP-15566?focusedWorklogId=582032&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-582032 ] ASF GitHub Bot logged work on HADOOP-15566: --- Author: ASF GitHub Bot Created on: 13/Apr/21 20:05 Start Date: 13/Apr/21 20:05 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2816: URL: https://github.com/apache/hadoop/pull/2816#issuecomment-819017936 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 38s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 54s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 30s | | trunk passed | | +1 :green_heart: | compile | 21m 20s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 18m 9s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 3m 47s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 9s | | trunk passed | | +1 :green_heart: | javadoc | 1m 40s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 10s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +0 :ok: | spotbugs | 0m 40s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 15m 43s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 1m 0s | | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 53s | [/patch-mvninstall-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2816/2/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch failed. | | +1 :green_heart: | compile | 20m 34s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 20m 34s | | the patch passed | | +1 :green_heart: | compile | 18m 21s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 18m 21s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 3m 49s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2816/2/artifact/out/results-checkstyle-root.txt) | root: The patch generated 23 new + 4 unchanged - 1 fixed = 27 total (was 5) | | +1 :green_heart: | mvnsite | 2m 11s | | the patch passed | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 1m 42s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 13s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +0 :ok: | spotbugs | 0m 37s | | hadoop-project has no data from spotbugs | | -1 :x: | spotbugs | 2m 34s | [/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2816/2/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html) | hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | -1 :x: | shadedclient | 4m 31s | | patch has errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 35s | | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 18m 37s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 56s | | The patch does not generate ASF License warnings. | | | | 185m 6s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-common-project/hadoop-common | | | Incorrect lazy initialization and update of static field
[GitHub] [hadoop] hadoop-yetus commented on pull request #2816: HADOOP-15566 initial changes for opentelemetry - WIP
hadoop-yetus commented on pull request #2816: URL: https://github.com/apache/hadoop/pull/2816#issuecomment-819017936 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 38s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 54s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 30s | | trunk passed | | +1 :green_heart: | compile | 21m 20s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 18m 9s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 3m 47s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 9s | | trunk passed | | +1 :green_heart: | javadoc | 1m 40s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 10s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +0 :ok: | spotbugs | 0m 40s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 15m 43s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 1m 0s | | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 53s | [/patch-mvninstall-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2816/2/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch failed. | | +1 :green_heart: | compile | 20m 34s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 20m 34s | | the patch passed | | +1 :green_heart: | compile | 18m 21s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 18m 21s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 49s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2816/2/artifact/out/results-checkstyle-root.txt) | root: The patch generated 23 new + 4 unchanged - 1 fixed = 27 total (was 5) | | +1 :green_heart: | mvnsite | 2m 11s | | the patch passed | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 1m 42s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 13s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +0 :ok: | spotbugs | 0m 37s | | hadoop-project has no data from spotbugs | | -1 :x: | spotbugs | 2m 34s | [/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2816/2/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html) | hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | -1 :x: | shadedclient | 4m 31s | | patch has errors when building and testing our client artifacts. 
| _ Other Tests _ | | +1 :green_heart: | unit | 0m 35s | | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 18m 37s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 56s | | The patch does not generate ASF License warnings. | | | | 185m 6s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-common-project/hadoop-common | | | Incorrect lazy initialization and update of static field org.apache.hadoop.tracing.Tracer$Builder.globalTracer in org.apache.hadoop.tracing.Tracer$Builder.build() At Tracer.java:of static field org.apache.hadoop.tracing.Tracer$Builder.globalTracer in org.apache.hadoop.tracing.Tracer$Builder.build() At Tracer.java:[lines 145-149] | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base:
[GitHub] [hadoop] hchaverri closed pull request #2901: HDFS-15912. Allow ProtobufRpcEngine to be extensible
hchaverri closed pull request #2901: URL: https://github.com/apache/hadoop/pull/2901
[GitHub] [hadoop] hadoop-yetus commented on pull request #2903: HDFS-15423 RBF: WebHDFS create shouldn't choose DN from all sub-clusters
hadoop-yetus commented on pull request #2903: URL: https://github.com/apache/hadoop/pull/2903#issuecomment-819013126 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 34s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 51m 22s | | trunk passed | | +1 :green_heart: | compile | 1m 4s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 28s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 58s | | trunk passed | | +1 :green_heart: | javadoc | 0m 55s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 9s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 47s | | trunk passed | | -1 :x: | shadedclient | 20m 53s | | branch has errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 49s | | the patch passed | | +1 :green_heart: | compile | 0m 53s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 0m 53s | | the patch passed | | +1 :green_heart: | compile | 0m 44s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 0m 44s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 0m 20s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 39s | | the patch passed | | +1 :green_heart: | javadoc | 0m 46s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 2s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 57s | | the patch passed | | -1 :x: | shadedclient | 5m 56s | | patch has errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 0m 25s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2903/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch failed. | | +1 :green_heart: | asflicense | 0m 36s | | The patch does not generate ASF License warnings. | | | | 95m 40s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2903/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2903 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 1780317c385b 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 2a29edf31df4f51b7de8dffc03eacada81b683e7 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2903/1/testReport/ | | Max. process+thread count | 596 (vs. 
ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2903/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2901: HDFS-15912. Allow ProtobufRpcEngine to be extensible
hadoop-yetus commented on pull request #2901: URL: https://github.com/apache/hadoop/pull/2901#issuecomment-819009254 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 42s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 50s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 31s | | trunk passed | | +1 :green_heart: | compile | 21m 12s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 18m 5s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 4m 0s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 21s | | trunk passed | | +1 :green_heart: | javadoc | 1m 58s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 3m 4s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 4m 15s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 47s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 32s | [/patch-mvninstall-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2901/4/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch failed. | | -1 :x: | compile | 1m 2s | [/patch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2901/4/artifact/out/patch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | root in the patch failed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04. | | -1 :x: | javac | 1m 2s | [/patch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2901/4/artifact/out/patch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | root in the patch failed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04. | | -1 :x: | compile | 0m 56s | [/patch-compile-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2901/4/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | root in the patch failed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08. | | -1 :x: | javac | 0m 56s | [/patch-compile-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2901/4/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | root in the patch failed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08. | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 3m 53s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2901/4/artifact/out/results-checkstyle-root.txt) | root: The patch generated 36 new + 21 unchanged - 3 fixed = 57 total (was 24) | | -1 :x: | mvnsite | 0m 37s | [/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2901/4/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch failed. | | -1 :x: | javadoc | 0m 28s | [/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2901/4/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | hadoop-common in the patch failed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04. | | -1 :x: | javadoc | 1m 21s |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2860: Test Pre-Commits.
hadoop-yetus commented on pull request #2860: URL: https://github.com/apache/hadoop/pull/2860#issuecomment-818990572 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 40s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 5 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 30s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 22m 21s | | trunk passed | | +1 :green_heart: | compile | 23m 30s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 19m 31s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 3m 49s | | trunk passed | | +1 :green_heart: | mvnsite | 4m 31s | | trunk passed | | +1 :green_heart: | javadoc | 3m 46s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 4m 30s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +0 :ok: | spotbugs | 0m 43s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 15m 30s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 32s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 56s | | the patch passed | | +1 :green_heart: | compile | 20m 8s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 20m 8s | | the patch passed | | +1 :green_heart: | compile | 18m 5s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 18m 5s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 52s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2860/9/artifact/out/results-checkstyle-root.txt) | root: The patch generated 2 new + 156 unchanged - 1 fixed = 158 total (was 157) | | +1 :green_heart: | hadolint | 0m 2s | | No new issues. | | +1 :green_heart: | mvnsite | 4m 47s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 1s | | No new issues. | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 4m 5s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 4m 51s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +0 :ok: | spotbugs | 0m 45s | | hadoop-project has no data from spotbugs | | +1 :green_heart: | shadedclient | 15m 16s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 44s | | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 2m 47s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 231m 54s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2860/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. 
| | +1 :green_heart: | unit | 18m 22s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 1m 15s | | The patch does not generate ASF License warnings. | | | | 466m 7s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap | | | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2860/9/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2860 | | Optional Tests | dupname asflicense codespell hadolint shellcheck shelldocs compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle xml | | uname | Linux cf88a4e99268 4.15.0-136-generic
[jira] [Updated] (HADOOP-17604) Separate string metric from tag in hadoop metrics2
[ https://issues.apache.org/jira/browse/HADOOP-17604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fengnan Li updated HADOOP-17604: Status: Patch Available (was: Open) > Separate string metric from tag in hadoop metrics2 > -- > > Key: HADOOP-17604 > URL: https://issues.apache.org/jira/browse/HADOOP-17604 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Fengnan Li >Assignee: Fengnan Li >Priority: Minor > Labels: pull-request-available > Attachments: Screen Shot 2021-03-26 at 2.50.08 PM.png > > Time Spent: 10m > Remaining Estimate: 0h > > Right now in hadoop metrics2, String metrics from method are categorized as > tag (v.s. metrics as other number types), this caused later when reporting > beans, it will add a prefix "tag." before the metric name. > It will be cleaner if we have another child inherit MutableMetric for string > (maybe MutableText?) thus the String metrics from method can get rid of the > tag. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17604) Separate string metric from tag in hadoop metrics2
[ https://issues.apache.org/jira/browse/HADOOP-17604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-17604: Labels: pull-request-available (was: ) > Separate string metric from tag in hadoop metrics2 > -- > > Key: HADOOP-17604 > URL: https://issues.apache.org/jira/browse/HADOOP-17604 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Fengnan Li >Assignee: Fengnan Li >Priority: Minor > Labels: pull-request-available > Attachments: Screen Shot 2021-03-26 at 2.50.08 PM.png > > Time Spent: 10m > Remaining Estimate: 0h > > Right now in hadoop metrics2, String metrics from method are categorized as > tag (v.s. metrics as other number types), this caused later when reporting > beans, it will add a prefix "tag." before the metric name. > It will be cleaner if we have another child inherit MutableMetric for string > (maybe MutableText?) thus the String metrics from method can get rid of the > tag. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] fengnanli opened a new pull request #2904: HADOOP-17604 Fix method metric prefix for string type
fengnanli opened a new pull request #2904: URL: https://github.com/apache/hadoop/pull/2904 ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17604) Separate string metric from tag in hadoop metrics2
[ https://issues.apache.org/jira/browse/HADOOP-17604?focusedWorklogId=581986=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581986 ] ASF GitHub Bot logged work on HADOOP-17604: --- Author: ASF GitHub Bot Created on: 13/Apr/21 19:03 Start Date: 13/Apr/21 19:03 Worklog Time Spent: 10m Work Description: fengnanli opened a new pull request #2904: URL: https://github.com/apache/hadoop/pull/2904 ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 581986) Remaining Estimate: 0h Time Spent: 10m > Separate string metric from tag in hadoop metrics2 > -- > > Key: HADOOP-17604 > URL: https://issues.apache.org/jira/browse/HADOOP-17604 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Fengnan Li >Assignee: Fengnan Li >Priority: Minor > Attachments: Screen Shot 2021-03-26 at 2.50.08 PM.png > > Time Spent: 10m > Remaining Estimate: 0h > > Right now in hadoop metrics2, String metrics from method are categorized as > tag (v.s. metrics as other number types), this caused later when reporting > beans, it will add a prefix "tag." before the metric name. > It will be cleaner if we have another child inherit MutableMetric for string > (maybe MutableText?) thus the String metrics from method can get rid of the > tag. 
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
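The proposal in HADOOP-17604 above — a string-valued child of MutableMetric (tentatively named MutableText) so string metrics are reported under their own names rather than as "tag."-prefixed tags — could be sketched roughly as follows. All class shapes and method names here are illustrative assumptions, not the actual hadoop metrics2 API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Stand-in for hadoop metrics2's MutableMetric base class.
abstract class MutableMetric {
    abstract void snapshot(Map<String, String> sink);
}

// Hypothetical string-valued metric: snapshotted under its own name,
// with no "tag." prefix, unlike string values modeled as tags today.
class MutableText extends MutableMetric {
    private final String name;
    private volatile String value = "";

    MutableText(String name) { this.name = name; }

    void set(String value) { this.value = value; }

    @Override
    void snapshot(Map<String, String> sink) {
        sink.put(name, value); // plain metric name, not "tag." + name
    }
}

public class MetricsSketch {
    // Collects one metric's snapshot into a map, as a bean reporter might.
    public static Map<String, String> report(MutableMetric m) {
        Map<String, String> sink = new LinkedHashMap<>();
        m.snapshot(sink);
        return sink;
    }
}
```

With such a class, a bean reporter would emit e.g. `HAState=active` instead of `tag.HAState=active`, which is the cleanup the Jira asks for.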
[jira] [Updated] (HADOOP-15457) Add Security-Related HTTP Response Header in WEBUIs.
[ https://issues.apache.org/jira/browse/HADOOP-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HADOOP-15457: Description: As of today, YARN web-ui lacks certain security related http response headers. We are planning to add few default ones and also add support for headers to be able to get added via xml config. Planning to make the below two as default. * X-XSS-Protection: 1; mode=block * X-Content-Type-Options: nosniff Support for headers via config properties in core-site.xml will be along the below lines
{code:java}
<property>
  <name>hadoop.http.header.Strict-Transport-Security</name>
  <value>valHSTSFromXML</value>
</property>
{code}
In the above example, valHSTSFromXML is an example value, this should be [configured|https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security] according to the security requirements. With this Jira, users can set required headers by prefixing HTTP header with hadoop.http.header. and configure with the required value in their core-site.xml. Example:
{code:java}
<property>
  <name>hadoop.http.header.http-header</name>
  <value>http-header-value</value>
</property>
{code}
A regex matcher will lift these properties and add into the response header when Jetty prepares the response. was: As of today, YARN web-ui lacks certain security related http response headers. We are planning to add few default ones and also add support for headers to be able to get added via xml config. Planning to make the below two as default. * X-XSS-Protection: 1; mode=block * X-Content-Type-Options: nosniff Support for headers via config properties in core-site.xml will be along the below lines
{code:java}
<property>
  <name>hadoop.http.header.Strict_Transport_Security</name>
  <value>valHSTSFromXML</value>
</property>
{code}
In the above example, valHSTSFromXML is an example value, this should be configured according to the security requirements. With this Jira, users can set required headers by prefixing HTTP header with hadoop.http.header. and configure with the required value in their core-site.xml. Example:
{code:java}
<property>
  <name>hadoop.http.header.http-header</name>
  <value>http-header-value</value>
</property>
{code}
A regex matcher will lift these properties and add into the response header when Jetty prepares the response. > Add Security-Related HTTP Response Header in WEBUIs. > > > Key: HADOOP-15457 > URL: https://issues.apache.org/jira/browse/HADOOP-15457 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Kanwaljeet Sachdev >Assignee: Kanwaljeet Sachdev >Priority: Major > Labels: security > Fix For: 3.2.0 > > Attachments: HADOOP-15457.001.patch, HADOOP-15457.002.patch, > HADOOP-15457.003.patch, HADOOP-15457.004.patch, HADOOP-15457.005.patch, > YARN-8198.001.patch, YARN-8198.002.patch, YARN-8198.003.patch, > YARN-8198.004.patch, YARN-8198.005.patch > > > As of today, YARN web-ui lacks certain security related http response > headers. We are planning to add few default ones and also add support for > headers to be able to get added via xml config. Planning to make the below > two as default. > * X-XSS-Protection: 1; mode=block > * X-Content-Type-Options: nosniff > > Support for headers via config properties in core-site.xml will be along the > below lines
> {code:java}
> <property>
>   <name>hadoop.http.header.Strict-Transport-Security</name>
>   <value>valHSTSFromXML</value>
> </property>
> {code}
> In the above example, valHSTSFromXML is an example value, this should be > [configured|https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security] > according to the security requirements. > With this Jira, users can set required headers by prefixing HTTP header with > hadoop.http.header. and configure with the required value in their > core-site.xml. > Example:
> {code:java}
> <property>
>   <name>hadoop.http.header.http-header</name>
>   <value>http-header-value</value>
> </property>
> {code}
> > A regex matcher will lift these properties and add into the response header > when Jetty prepares the response. 
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
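The "regex matcher" mechanism described in HADOOP-15457 above — configuration properties prefixed with hadoop.http.header. lifted into HTTP response headers — can be sketched as follows. HeaderLifter and liftHeaders are hypothetical names for illustration, not Hadoop's actual classes; the sketch only shows the lifting step, not the Jetty integration.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HeaderLifter {
    // Matches keys like "hadoop.http.header.Strict-Transport-Security"
    // and captures the header name after the prefix.
    private static final Pattern PREFIX =
            Pattern.compile("^hadoop\\.http\\.header\\.(.+)$");

    // Lifts every prefixed config property into a header-name -> value map,
    // as a response filter might do before Jetty writes the response.
    public static Map<String, String> liftHeaders(Map<String, String> conf) {
        Map<String, String> headers = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : conf.entrySet()) {
            Matcher m = PREFIX.matcher(e.getKey());
            if (m.matches()) {
                headers.put(m.group(1), e.getValue());
            }
        }
        return headers;
    }
}
```

Non-prefixed properties (e.g. fs.defaultFS) are left alone; only keys under the hadoop.http.header. namespace become response headers.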
[GitHub] [hadoop] xiaoyuyao edited a comment on pull request #2784: HDFS-15850. Superuser actions should be reported to external enforcers
xiaoyuyao edited a comment on pull request #2784: URL: https://github.com/apache/hadoop/pull/2784#issuecomment-818962252 Thanks @vivekratnavel for the update. The latest change LGTM, +1. The unit test failure does not relate to the change here. I will merge the PR EOD tomorrow if there is no more additional comment. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] xiaoyuyao commented on pull request #2784: HDFS-15850. Superuser actions should be reported to external enforcers
xiaoyuyao commented on pull request #2784: URL: https://github.com/apache/hadoop/pull/2784#issuecomment-818962252 Thanks @vivekratnavel for the update. The latest change LGTM, +1. The unit test failure does not relate to the change here. I will merge the PR shortly. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17611) Distcp parallel file copy breaks the modification time
[ https://issues.apache.org/jira/browse/HADOOP-17611?focusedWorklogId=581946&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581946 ] ASF GitHub Bot logged work on HADOOP-17611: --- Author: ASF GitHub Bot Created on: 13/Apr/21 18:26 Start Date: 13/Apr/21 18:26 Worklog Time Spent: 10m Work Description: virajjasani commented on pull request #2892: URL: https://github.com/apache/hadoop/pull/2892#issuecomment-818959488 Closing this PR as this is being tracked actively on #2897 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 581946) Time Spent: 2.5h (was: 2h 20m) > Distcp parallel file copy breaks the modification time > -- > > Key: HADOOP-17611 > URL: https://issues.apache.org/jira/browse/HADOOP-17611 > Project: Hadoop Common > Issue Type: Bug >Reporter: Adam Maroti >Priority: Major > Labels: pull-request-available > Time Spent: 2.5h > Remaining Estimate: 0h > > The commit HADOOP-11794. Enable distcp to copy blocks in parallel. > (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of > large files. > > In CopyCommitter.java inside concatFileChunks FileSystem.concat is called, > which changes the modification time; therefore the modification times of files > copied by distcp will not match the source files. However this only occurs > for large enough files, which are copied by splitting them up by distcp. > In concatFileChunks, before calling concat, extract the modification time and > apply that to the concatenated result-file after the concat. (probably best > -after- before the rename()). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17611) Distcp parallel file copy breaks the modification time
[ https://issues.apache.org/jira/browse/HADOOP-17611?focusedWorklogId=581945&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581945 ] ASF GitHub Bot logged work on HADOOP-17611: --- Author: ASF GitHub Bot Created on: 13/Apr/21 18:26 Start Date: 13/Apr/21 18:26 Worklog Time Spent: 10m Work Description: virajjasani closed pull request #2892: URL: https://github.com/apache/hadoop/pull/2892 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 581945) Time Spent: 2h 20m (was: 2h 10m) > Distcp parallel file copy breaks the modification time > -- > > Key: HADOOP-17611 > URL: https://issues.apache.org/jira/browse/HADOOP-17611 > Project: Hadoop Common > Issue Type: Bug >Reporter: Adam Maroti >Priority: Major > Labels: pull-request-available > Time Spent: 2h 20m > Remaining Estimate: 0h > > The commit HADOOP-11794. Enable distcp to copy blocks in parallel. > (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of > large files. > > In CopyCommitter.java inside concatFileChunks FileSystem.concat is called, > which changes the modification time; therefore the modification times of files > copied by distcp will not match the source files. However this only occurs > for large enough files, which are copied by splitting them up by distcp. > In concatFileChunks, before calling concat, extract the modification time and > apply that to the concatenated result-file after the concat. (probably best > -after- before the rename()). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani commented on pull request #2892: HADOOP-17611. Distcp parallel file copy should retain first chunk modifiedTime after concat
virajjasani commented on pull request #2892: URL: https://github.com/apache/hadoop/pull/2892#issuecomment-818959488 Closing this PR as this is being tracked actively on #2897 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani closed pull request #2892: HADOOP-17611. Distcp parallel file copy should retain first chunk modifiedTime after concat
virajjasani closed pull request #2892: URL: https://github.com/apache/hadoop/pull/2892 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17611) Distcp parallel file copy breaks the modification time
[ https://issues.apache.org/jira/browse/HADOOP-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17320445#comment-17320445 ] Ayush Saxena commented on HADOOP-17611: --- Seems there are two PRs, more or less doing the same thing I guess; I just had a glance at the second one. So, whoever plans to chase this, a couple of points to keep in mind: * We need a test in {{AbstractContractDistCpTest}} which all the {{FileSystems}} can also use. * It should cover *two scenarios:* when preserve time is specified it should preserve the time, and when not, it shouldn't, in the case of parallel copy. The latter case is working OK as of now; this is to make sure we don't change the behaviour. * The parent modification time is to be preserved only when the parent is in the scope of the copy, not always. Say you are copying /dir/fil1 to /dir1/file2 using parallel copy; then we don't touch /dir1 AFAIK. The above are the basic requirements. The below stuff we should do if possible: * For parent directories, preserve only once; say if you have 10K files under that parent, don't call setTimes 10K times. * And if the parallel copy is enabled, there is no point in preserving before the concat operation; we can save that call. This isn't a one-liner and throws up some challenges, so please decide who wants to chase this and work together on one PR only. > Distcp parallel file copy breaks the modification time > -- > > Key: HADOOP-17611 > URL: https://issues.apache.org/jira/browse/HADOOP-17611 > Project: Hadoop Common > Issue Type: Bug >Reporter: Adam Maroti >Priority: Major > Labels: pull-request-available > Time Spent: 2h 10m > Remaining Estimate: 0h > > The commit HADOOP-11794. Enable distcp to copy blocks in parallel. > (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of > large files. > > In CopyCommitter.java inside concatFileChunks FileSystem.concat is called, > which changes the modification time; therefore the modification times of files > copied by distcp will not match the source files. However this only occurs > for large enough files, which are copied by splitting them up by distcp. > In concatFileChunks, before calling concat, extract the modification time and > apply that to the concatenated result-file after the concat. (probably best > -after- before the rename()). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
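The fix described in HADOOP-17611 above — capture the file's modification time before the concat and reapply it afterwards — can be illustrated with plain java.nio file operations standing in for HDFS's FileSystem.concat. This is an illustrative sketch under that substitution, not the actual CopyCommitter code.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.nio.file.attribute.FileTime;

public class PreserveMtime {
    // Appends each chunk to target (a local stand-in for FileSystem.concat,
    // which also consumes its source files), then restores target's
    // original modification time, which the append/concat step clobbered.
    public static void concatPreservingMtime(Path target, Path... chunks)
            throws IOException {
        FileTime mtime = Files.getLastModifiedTime(target); // capture before concat
        for (Path chunk : chunks) {
            Files.write(target, Files.readAllBytes(chunk),
                    StandardOpenOption.APPEND);
            Files.delete(chunk); // concat removes its sources
        }
        Files.setLastModifiedTime(target, mtime); // restore after concat
    }
}
```

The key point is simply the ordering: read the timestamp before the merging operation mutates the file, write it back after, so the final file matches the source's modification time.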
[GitHub] [hadoop] fengnanli opened a new pull request #2903: HDFS-15423 RBF: WebHDFS create shouldn't choose DN from all sub-clusters
fengnanli opened a new pull request #2903: URL: https://github.com/apache/hadoop/pull/2903 ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2
[ https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17320439#comment-17320439 ] Ahmed Hussein commented on HADOOP-16206: [~zhangduo] Sure, Go ahead and give it a try! > Migrate from Log4j1 to Log4j2 > - > > Key: HADOOP-16206 > URL: https://issues.apache.org/jira/browse/HADOOP-16206 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.3.0 >Reporter: Akira Ajisaka >Priority: Major > Attachments: HADOOP-16206-wip.001.patch > > > This sub-task is to remove log4j1 dependency and add log4j2 dependency. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17633) Please upgrade json-smart dependency to the latest version
[ https://issues.apache.org/jira/browse/HADOOP-17633?focusedWorklogId=581924=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581924 ] ASF GitHub Bot logged work on HADOOP-17633: --- Author: ASF GitHub Bot Created on: 13/Apr/21 17:35 Start Date: 13/Apr/21 17:35 Worklog Time Spent: 10m Work Description: virajjasani edited a comment on pull request #2895: URL: https://github.com/apache/hadoop/pull/2895#issuecomment-818917758 FYI @ayushtkn @jojochuang , if you would also like to take a look. Thanks -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 581924) Time Spent: 1.5h (was: 1h 20m) > Please upgrade json-smart dependency to the latest version > -- > > Key: HADOOP-17633 > URL: https://issues.apache.org/jira/browse/HADOOP-17633 > Project: Hadoop Common > Issue Type: Improvement > Components: auth, build >Affects Versions: 3.3.0, 3.2.1, 3.2.2, 3.4.0 >Reporter: helen huang >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 1.5h > Remaining Estimate: 0h > > Please upgrade the json-smart dependency to the latest version available. > Currently hadoop-auth is using version 2.3. Fortify scan picked up a security > issue with this version. Please upgrade to the latest version. > Thanks! > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2860: Test Pre-Commits.
hadoop-yetus commented on pull request #2860: URL: https://github.com/apache/hadoop/pull/2860#issuecomment-818918160 (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2860/10/console in case of problems. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani edited a comment on pull request #2895: HADOOP-17633. Bump json-smart to 2.4.2 due to CVEs
virajjasani edited a comment on pull request #2895: URL: https://github.com/apache/hadoop/pull/2895#issuecomment-818917758 FYI @ayushtkn @jojochuang , if you would also like to take a look. Thanks -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17633) Please upgrade json-smart dependency to the latest version
[ https://issues.apache.org/jira/browse/HADOOP-17633?focusedWorklogId=581922=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581922 ] ASF GitHub Bot logged work on HADOOP-17633: --- Author: ASF GitHub Bot Created on: 13/Apr/21 17:34 Start Date: 13/Apr/21 17:34 Worklog Time Spent: 10m Work Description: virajjasani commented on pull request #2895: URL: https://github.com/apache/hadoop/pull/2895#issuecomment-818917758 FYI @ayushtkn @jojochuang -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 581922) Time Spent: 1h 20m (was: 1h 10m) > Please upgrade json-smart dependency to the latest version > -- > > Key: HADOOP-17633 > URL: https://issues.apache.org/jira/browse/HADOOP-17633 > Project: Hadoop Common > Issue Type: Improvement > Components: auth, build >Affects Versions: 3.3.0, 3.2.1, 3.2.2, 3.4.0 >Reporter: helen huang >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 1h 20m > Remaining Estimate: 0h > > Please upgrade the json-smart dependency to the latest version available. > Currently hadoop-auth is using version 2.3. Fortify scan picked up a security > issue with this version. Please upgrade to the latest version. > Thanks! > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani commented on pull request #2895: HADOOP-17633. Bump json-smart to 2.4.2 due to CVEs
virajjasani commented on pull request #2895: URL: https://github.com/apache/hadoop/pull/2895#issuecomment-818917758 FYI @ayushtkn @jojochuang -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-15566) Support OpenTelemetry
[ https://issues.apache.org/jira/browse/HADOOP-15566?focusedWorklogId=581921&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581921 ] ASF GitHub Bot logged work on HADOOP-15566: --- Author: ASF GitHub Bot Created on: 13/Apr/21 17:31 Start Date: 13/Apr/21 17:31 Worklog Time Spent: 10m Work Description: ArkenKiran commented on a change in pull request #2816: URL: https://github.com/apache/hadoop/pull/2816#discussion_r612646490 ## File path: hadoop-common-project/hadoop-common/pom.xml ##
@@ -371,6 +371,31 @@
       <artifactId>lz4-java</artifactId>
       <scope>provided</scope>
+    <dependency>
+      <groupId>io.opentelemetry</groupId>
+      <artifactId>opentelemetry-api</artifactId>
+      <version>1.0.0</version>
Review comment: @steveloughran Thanks for the review. I will make these changes in next 2-3 days. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 581921) Time Spent: 1h 10m (was: 1h) > Support OpenTelemetry > - > > Key: HADOOP-15566 > URL: https://issues.apache.org/jira/browse/HADOOP-15566 > Project: Hadoop Common > Issue Type: New Feature > Components: metrics, tracing >Affects Versions: 3.1.0 >Reporter: Todd Lipcon >Assignee: Siyao Meng >Priority: Major > Labels: pull-request-available, security > Attachments: HADOOP-15566.000.WIP.patch, OpenTelemetry Support Scope > Doc v2.pdf, OpenTracing Support Scope Doc.pdf, Screen Shot 2018-06-29 at > 11.59.16 AM.png, ss-trace-s3a.png > > Time Spent: 1h 10m > Remaining Estimate: 0h > > The HTrace incubator project has voted to retire itself and won't be making > further releases. The Hadoop project currently has various hooks with HTrace. > It seems in some cases (eg HDFS-13702) these hooks have had measurable > performance overhead. Given these two factors, I think we should consider > removing the HTrace integration. 
If there is someone willing to do the work, > replacing it with OpenTracing might be a better choice since there is an > active community. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ArkenKiran commented on a change in pull request #2816: HADOOP-15566 initial changes for opentelemetry - WIP
ArkenKiran commented on a change in pull request #2816: URL: https://github.com/apache/hadoop/pull/2816#discussion_r612646490 ## File path: hadoop-common-project/hadoop-common/pom.xml ##
@@ -371,6 +371,31 @@
       <artifactId>lz4-java</artifactId>
       <scope>provided</scope>
+    <dependency>
+      <groupId>io.opentelemetry</groupId>
+      <artifactId>opentelemetry-api</artifactId>
+      <version>1.0.0</version>
Review comment: @steveloughran Thanks for the review. I will make these changes in next 2-3 days. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17633) Please upgrade json-smart dependency to the latest version
[ https://issues.apache.org/jira/browse/HADOOP-17633?focusedWorklogId=581910=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581910 ] ASF GitHub Bot logged work on HADOOP-17633: --- Author: ASF GitHub Bot Created on: 13/Apr/21 17:14 Start Date: 13/Apr/21 17:14 Worklog Time Spent: 10m Work Description: virajjasani commented on pull request #2895: URL: https://github.com/apache/hadoop/pull/2895#issuecomment-818903880 `json-smart` is used directly by hadoop-auth and through hadoop-auth, it is used by multiple modules as transitive dependency in hadoop-common, hadoop-nfs, hadoop-hdfs, hadoop-yarn-common etc. I have run unit tests for majority modules locally and tests look good. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 581910) Time Spent: 1h 10m (was: 1h) > Please upgrade json-smart dependency to the latest version > -- > > Key: HADOOP-17633 > URL: https://issues.apache.org/jira/browse/HADOOP-17633 > Project: Hadoop Common > Issue Type: Improvement > Components: auth, build >Affects Versions: 3.3.0, 3.2.1, 3.2.2, 3.4.0 >Reporter: helen huang >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > Please upgrade the json-smart dependency to the latest version available. > Currently hadoop-auth is using version 2.3. Fortify scan picked up a security > issue with this version. Please upgrade to the latest version. > Thanks! > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani commented on pull request #2895: HADOOP-17633. Bump json-smart to 2.4.2 due to CVEs
virajjasani commented on pull request #2895: URL: https://github.com/apache/hadoop/pull/2895#issuecomment-818903880 `json-smart` is used directly by hadoop-auth and, through hadoop-auth, it is used by multiple modules as a transitive dependency in hadoop-common, hadoop-nfs, hadoop-hdfs, hadoop-yarn-common, etc. I have run unit tests for the majority of modules locally and the tests look good.
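For context on how such a bump propagates to all of those modules at once: Hadoop manages dependency versions centrally in the parent POM, so an upgrade like this is typically a single `dependencyManagement` entry. A minimal, hypothetical sketch (illustrative only, not the actual patch in PR #2895):

```xml
<!-- Hypothetical sketch: pin json-smart once so hadoop-auth and every module
     that pulls it in transitively resolves the same, patched version. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>net.minidev</groupId>
      <artifactId>json-smart</artifactId>
      <version>2.4.2</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Modules that declare `json-smart` without a `<version>` then inherit 2.4.2, which is why running the unit tests of the downstream modules (hadoop-common, hadoop-hdfs, etc.) is the right validation step.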
[GitHub] [hadoop] hadoop-yetus commented on pull request #2889: HDFS-15963. Unreleased volume references cause an infinite loop.
hadoop-yetus commented on pull request #2889: URL: https://github.com/apache/hadoop/pull/2889#issuecomment-818875816 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 57s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 36m 43s | | trunk passed | | +1 :green_heart: | compile | 1m 26s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 1m 16s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 5s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 23s | | trunk passed | | +1 :green_heart: | javadoc | 0m 53s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 30s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 16s | | trunk passed | | +1 :green_heart: | shadedclient | 18m 52s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 14s | | the patch passed | | +1 :green_heart: | compile | 1m 13s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 1m 13s | | the patch passed | | +1 :green_heart: | compile | 1m 8s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 1m 8s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 0m 53s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 14s | | the patch passed | | +1 :green_heart: | javadoc | 0m 47s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 17s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | -1 :x: | spotbugs | 3m 18s | [/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2889/7/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html) | hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | shadedclient | 18m 45s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 348m 2s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2889/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. 
| | | | 443m 16s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs | | | Exception is caught when Exception is not thrown in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService$ReplicaFileDeleteTask.run() At FsDatasetAsyncDiskService.java:is not thrown in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService$ReplicaFileDeleteTask.run() At FsDatasetAsyncDiskService.java:[line 347] | | Failed junit tests | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks | | | hadoop.hdfs.server.namenode.TestFileTruncate | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList | | | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys | | | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots | | | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.TestPersistBlocks | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2889/7/artifact/out/Dockerfile |
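The SpotBugs item above is its "Exception is caught when Exception is not thrown" pattern: a `catch (Exception e)` guarding code that declares no checked exceptions, so the handler can only ever swallow RuntimeExceptions and may hide programming errors. A minimal, self-contained sketch of the flagged pattern and one conventional fix (hypothetical code, not the actual FsDatasetAsyncDiskService):

```java
public class CatchDemo {
    // Flagged pattern: nothing in the try declares a checked exception, so
    // "catch (Exception e)" can only ever intercept RuntimeExceptions.
    static String flagged(int[] data, int index) {
        try {
            return Integer.toString(data[index]); // may throw ArrayIndexOutOfBoundsException
        } catch (Exception e) {                   // SpotBugs: REC_CATCH_EXCEPTION
            return "error";
        }
    }

    // One conventional fix: catch the specific unchecked exception you expect,
    // making the intent explicit and letting genuine bugs propagate.
    static String fixed(int[] data, int index) {
        try {
            return Integer.toString(data[index]);
        } catch (ArrayIndexOutOfBoundsException e) {
            return "error";
        }
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3};
        System.out.println(flagged(data, 5)); // prints "error"
        System.out.println(fixed(data, 1));   // prints "2"
    }
}
```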
[GitHub] [hadoop] goiri merged pull request #2898: HDFS-15971. Make mkstemp cross platform
goiri merged pull request #2898: URL: https://github.com/apache/hadoop/pull/2898
[jira] [Assigned] (HADOOP-17524) Remove EventCounter
[ https://issues.apache.org/jira/browse/HADOOP-17524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viraj Jasani reassigned HADOOP-17524: Assignee: Viraj Jasani

> Remove EventCounter
> Key: HADOOP-17524
> URL: https://issues.apache.org/jira/browse/HADOOP-17524
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Akira Ajisaka
> Assignee: Viraj Jasani
> Priority: Major
>
> EventCounter is using the Log4J 1.x API. We need to remove it to drop Log4J 1.x.
[GitHub] [hadoop] jojochuang commented on a change in pull request #2849: HDFS-15621. Datanode DirectoryScanner uses excessive memory
jojochuang commented on a change in pull request #2849: URL: https://github.com/apache/hadoop/pull/2849#discussion_r612493208

File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java

@@ -297,23 +281,18 @@ private static String getSuffix(File f, String prefix) {
   * @param metaFile the path to the block meta-data file

Review comment: need a @param for basePath. Also add that metaFile stores only the suffix.

File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java

@@ -227,27 +227,27 @@
    */
   public static class ScanInfo implements Comparable {
     private final long blockId;
-    /**
-     * The block file path, relative to the volume's base directory.
-     * If there was no block file found, this may be null. If 'vol'
-     * is null, then this is the full path of the block file.
+     * The full path to the folder containing the block / meta files.
      */
-    private final String blockSuffix;
-
+    private final File basePath;
     /**
-     * The suffix of the meta file path relative to the block file.
-     * If blockSuffix is null, then this will be the entire path relative
-     * to the volume base directory, or an absolute path if vol is also
-     * null.
+     * The block file name, with no path
      */
-    private final String metaSuffix;
+    private final String blockFile;
+    /**
+     * Holds the meta file name, with no path, only if blockFile is null.
+     * If blockFile is not null, the meta file will be named identically to
+     * the blockFile, but with a suffix like "_1234.meta". If the blockFile
+     * is present, we store only the meta file suffix.
+     */

Review comment: it would also make sense to copy this comment to the constructor parameters.
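The review above concerns ScanInfo's memory optimization: keeping one shared base directory plus short name/suffix strings per block, instead of two full absolute path strings. A simplified, hypothetical sketch of the idea (field and method names are illustrative, not the real HDFS ScanInfo class):

```java
import java.io.File;

// Sketch: per-block state shrinks to one File reference (often shared/interned
// across blocks in the same directory) plus two short strings, rather than two
// long absolute path strings per block.
public class ScanInfoSketch {
    private final File basePath;     // folder containing both block and meta files
    private final String blockFile;  // block file name only, no directory part
    private final String metaSuffix; // e.g. "_1001.meta", appended to blockFile

    ScanInfoSketch(File basePath, String blockFile, String metaSuffix) {
        this.basePath = basePath;
        this.blockFile = blockFile;
        this.metaSuffix = metaSuffix;
    }

    // Full paths are reconstructed on demand instead of being stored.
    File getBlockFile() { return new File(basePath, blockFile); }

    File getMetaFile()  { return new File(basePath, blockFile + metaSuffix); }

    public static void main(String[] args) {
        ScanInfoSketch s = new ScanInfoSketch(
            new File("/data/dn/current"), "blk_1073741825", "_1001.meta");
        System.out.println(s.getBlockFile()); // /data/dn/current/blk_1073741825
        System.out.println(s.getMetaFile());  // /data/dn/current/blk_1073741825_1001.meta
    }
}
```

With millions of replicas per DataNode, trading stored paths for on-demand concatenation is where the heap savings discussed in HDFS-15621 come from.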
[jira] [Commented] (HADOOP-16524) Automatic keystore reloading for HttpServer2
[ https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17320222#comment-17320222 ] Ayush Saxena commented on HADOOP-16524:

[~stack]/[~borislav.iordanov] There is a test failing: https://ci-hadoop.apache.org/view/Hadoop/job/hadoop-qbt-trunk-java8-linux-x86_64/475/testReport/junit/org.apache.hadoop.hdfs.qjournal.server/TestJournalNodeRespectsBindHostKeys/testHttpsBindHostKey/

It is throwing an exception from the newly added code:

{code:java}
java.lang.NullPointerException
  at sun.nio.fs.UnixPath.normalizeAndCheck(UnixPath.java:77)
  at sun.nio.fs.UnixPath.<init>(UnixPath.java:71)
  at sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:281)
  at java.nio.file.Paths.get(Paths.java:84)
  at org.apache.hadoop.http.HttpServer2$Builder.makeConfigurationChangeMonitor(HttpServer2.java:609)
  at org.apache.hadoop.http.HttpServer2$Builder.createHttpsChannelConnector(HttpServer2.java:592)
  at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:518)
{code}

I suspect it could be because of this change. Can you folks check once?

> Automatic keystore reloading for HttpServer2
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Kihwal Lee
> Assignee: Borislav Iordanov
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
> Attachments: HADOOP-16524.patch
> Time Spent: 5h 50m
> Remaining Estimate: 0h
>
> Jetty 9 simplified reloading of keystore. This allows hadoop daemon's SSL cert to be updated in place without having to restart the service.
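For illustration: the stack trace above is consistent with a null keystore location reaching `java.nio.file.Paths.get`, which rejects a null path string inside `UnixPath.normalizeAndCheck`. A hedged sketch of the kind of guard that avoids this in a change monitor, using a hypothetical helper name rather than the actual HttpServer2 code:

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Optional;

public class KeystorePathGuard {
    // Hypothetical helper: resolve the directory to watch for keystore changes,
    // but skip the monitor entirely when no keystore location is configured
    // (Paths.get(null) would otherwise throw NullPointerException).
    static Optional<Path> keystoreDir(String keystoreLocation) {
        if (keystoreLocation == null || keystoreLocation.isEmpty()) {
            return Optional.empty();
        }
        return Optional.of(Paths.get(keystoreLocation).toAbsolutePath().getParent());
    }

    public static void main(String[] args) {
        System.out.println(keystoreDir(null));                    // Optional.empty
        System.out.println(keystoreDir("/etc/ssl/keystore.jks")); // Optional[/etc/ssl]
    }
}
```

This matches the failure mode in TestJournalNodeRespectsBindHostKeys, where an HTTPS connector can be built without every keystore property being set.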
[jira] [Commented] (HADOOP-17524) Remove EventCounter
[ https://issues.apache.org/jira/browse/HADOOP-17524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17320200#comment-17320200 ] Duo Zhang commented on HADOOP-17524:

I'm OK with removing it.

> Remove EventCounter
> Key: HADOOP-17524
> URL: https://issues.apache.org/jira/browse/HADOOP-17524
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Akira Ajisaka
> Priority: Major
>
> EventCounter is using the Log4J 1.x API. We need to remove it to drop Log4J 1.x.
[jira] [Comment Edited] (HADOOP-17524) Remove EventCounter
[ https://issues.apache.org/jira/browse/HADOOP-17524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17320178#comment-17320178 ] Viraj Jasani edited comment on HADOOP-17524 at 4/13/21, 1:30 PM:

FYI [~zhangduo], in case you have any specific suggestions other than removing EventCounter (incompatible change). Or do you feel this feature in JMXMetrics is quite useful and should not be removed?

was (Author: vjasani): FYI [~zhangduo], in case you have any specific suggestions other than removing EventCounter (incompatible change).

> Remove EventCounter
> Key: HADOOP-17524
> URL: https://issues.apache.org/jira/browse/HADOOP-17524
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Akira Ajisaka
> Priority: Major
>
> EventCounter is using the Log4J 1.x API. We need to remove it to drop Log4J 1.x.
[jira] [Commented] (HADOOP-17524) Remove EventCounter
[ https://issues.apache.org/jira/browse/HADOOP-17524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17320178#comment-17320178 ] Viraj Jasani commented on HADOOP-17524:

FYI [~zhangduo], in case you have any specific suggestions other than removing EventCounter (incompatible change).

> Remove EventCounter
> Key: HADOOP-17524
> URL: https://issues.apache.org/jira/browse/HADOOP-17524
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Akira Ajisaka
> Priority: Major
>
> EventCounter is using the Log4J 1.x API. We need to remove it to drop Log4J 1.x.
[jira] [Work logged] (HADOOP-17511) Add an Audit plugin point for S3A auditing/context
[ https://issues.apache.org/jira/browse/HADOOP-17511?focusedWorklogId=581771=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581771 ] ASF GitHub Bot logged work on HADOOP-17511: --- Author: ASF GitHub Bot Created on: 13/Apr/21 13:25 Start Date: 13/Apr/21 13:25 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2807: URL: https://github.com/apache/hadoop/pull/2807#issuecomment-818733840 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 39s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 3s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 43 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 37s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 1s | | trunk passed | | -1 :x: | compile | 18m 29s | [/branch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/branch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | root in trunk failed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04. 
| | +1 :green_heart: | compile | 19m 22s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 3m 50s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 28s | | trunk passed | | +1 :green_heart: | javadoc | 1m 41s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 15s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 41s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 36s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 28s | | the patch passed | | +1 :green_heart: | compile | 21m 30s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | -1 :x: | javac | 21m 30s | [/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 188 new + 1750 unchanged - 0 fixed = 1938 total (was 1750) | | +1 :green_heart: | compile | 20m 7s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | -1 :x: | javac | 20m 7s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 1 new + 1832 unchanged - 1 fixed = 1833 total (was 1833) | | -1 :x: | blanks | 0m 0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/blanks-eol.txt) | The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | -0 :warning: | checkstyle | 3m 49s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/results-checkstyle-root.txt) | root: The patch generated 7 new + 185 unchanged - 4 fixed = 192 total (was 189) | | +1 :green_heart: | mvnsite | 2m 28s | | the patch passed | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 1m 32s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | -1 :x: | javadoc | 0m 39s |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector
hadoop-yetus commented on pull request #2807: URL: https://github.com/apache/hadoop/pull/2807#issuecomment-818733840 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 39s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 3s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 43 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 37s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 1s | | trunk passed | | -1 :x: | compile | 18m 29s | [/branch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/branch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | root in trunk failed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04. | | +1 :green_heart: | compile | 19m 22s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 3m 50s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 28s | | trunk passed | | +1 :green_heart: | javadoc | 1m 41s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 15s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 41s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 36s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 28s | | the patch passed | | +1 :green_heart: | compile | 21m 30s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | -1 :x: | javac | 21m 30s | [/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 188 new + 1750 unchanged - 0 fixed = 1938 total (was 1750) | | +1 :green_heart: | compile | 20m 7s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | -1 :x: | javac | 20m 7s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 1 new + 1832 unchanged - 1 fixed = 1833 total (was 1833) | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/blanks-eol.txt) | The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | -0 :warning: | checkstyle | 3m 49s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/results-checkstyle-root.txt) | root: The patch generated 7 new + 185 unchanged - 4 fixed = 192 total (was 189) | | +1 :green_heart: | mvnsite | 2m 28s | | the patch passed | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. 
| | +1 :green_heart: | javadoc | 1m 32s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | -1 :x: | javadoc | 0m 39s | [/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/12/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 6 new + 80 unchanged - 8 fixed = 86 total (was 88) | | -1 :x: | spotbugs | 1m 30s |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2902: HDFS-15940. Fixing and refactoring tests specific to Block recovery.
hadoop-yetus commented on pull request #2902: URL: https://github.com/apache/hadoop/pull/2902#issuecomment-818725828 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 15m 37s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ branch-3.2 Compile Tests _ | | +1 :green_heart: | mvninstall | 34m 48s | | branch-3.2 passed | | +1 :green_heart: | compile | 1m 12s | | branch-3.2 passed | | +1 :green_heart: | checkstyle | 0m 50s | | branch-3.2 passed | | +1 :green_heart: | mvnsite | 1m 26s | | branch-3.2 passed | | +1 :green_heart: | javadoc | 1m 6s | | branch-3.2 passed | | +1 :green_heart: | spotbugs | 3m 57s | | branch-3.2 passed | | +1 :green_heart: | shadedclient | 19m 49s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 25s | | the patch passed | | +1 :green_heart: | compile | 1m 6s | | the patch passed | | +1 :green_heart: | javac | 1m 6s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 47s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2902/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 56 unchanged - 7 fixed = 59 total (was 63) | | +1 :green_heart: | mvnsite | 1m 16s | | the patch passed | | +1 :green_heart: | javadoc | 0m 59s | | the patch passed | | +1 :green_heart: | spotbugs | 3m 41s | | the patch passed | | +1 :green_heart: | shadedclient | 19m 41s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 191m 46s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2902/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. | | +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. | | | | 296m 5s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.TestFsck | | | hadoop.hdfs.server.namenode.TestEditLogRace | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2902/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2902 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux ad55042679bc 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-3.2 / 7edf24d39e57ecf4e0b8f3fe4f210a6639644f49 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~18.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2902/1/testReport/ | | Max. 
process+thread count | 2135 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2902/1/console | | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2
[ https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17320152#comment-17320152 ] Duo Zhang commented on HADOOP-16206:

Sounds good to me. So [~ahussein], you would like to provide a patch? If not, I could also give it a try. Thanks.

> Migrate from Log4j1 to Log4j2
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
> Issue Type: Sub-task
> Affects Versions: 3.3.0
> Reporter: Akira Ajisaka
> Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.
[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2
[ https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17320140#comment-17320140 ] Ahmed Hussein commented on HADOOP-16206:

Thanks [~weichiu] and [~zhangduo] for the suggestions. The approach of breaking this up seems a good idea. I propose a slightly different scheme of breaking up the migration. Instead of "per-module", the migration would be split into two phases:
# Phase-1: Get it to work. Straightforward replacement of log4j.
** IMHO, it will be better to aim for log4j2, skipping the bridge way.
** The reviewer only needs to check that the tests pass and the new configurations are not causing inconsistencies.
** This implies that suggestions for tuning/enhancements should be noted but not immediately applied.
# Phase-2: Post migration. Tuning and optimizations.
** After the migration is done, separate Jiras are filed to tune the logging.
** Separate tickets can be issued to address performance evaluations and exploration of other features such as Async, garbage-free, etc.
The two-phase migration approach will reduce the burden and logically separate the actual migration vs. addressing suggestions and tuning requests.

> Migrate from Log4j1 to Log4j2
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
> Issue Type: Sub-task
> Affects Versions: 3.3.0
> Reporter: Akira Ajisaka
> Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.
[jira] [Comment Edited] (HADOOP-16206) Migrate from Log4j1 to Log4j2
[ https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17320140#comment-17320140 ] Ahmed Hussein edited comment on HADOOP-16206 at 4/13/21, 12:39 PM:

Thanks [~weichiu] and [~zhangduo] for the suggestions. The approach of breaking this up seems a good idea. I propose a slightly different scheme of breaking up the migration. Instead of "per-module", the migration would be split into two phases:
# Phase-1: Get it to work. Straightforward replacement of log4j.
** IMHO, it will be better to aim for log4j2, skipping the bridge API.
** The reviewer only needs to check that the tests pass and the new configurations are not causing inconsistencies.
** This implies that suggestions for tuning/enhancements should be noted but not immediately applied.
# Phase-2: Post migration. Tuning and optimizations.
** After the migration is done, separate Jiras are filed to tune the logging.
** Separate tickets can be issued to address performance evaluations and exploration of other features such as Async, garbage-free, etc.
The two-phase migration approach will reduce the burden and logically separate the actual migration vs. addressing suggestions and tuning requests.

was (Author: ahussein): Thanks [~weichiu] and [~zhangduo] for the suggestions. The approach of breaking this up seems a good idea. I propose a slightly different scheme of breaking-up the migration. Instead of "per-module", the migration would be split into two phases: # Phase-1: Get it to work. Straightforward replacement of log4j ** IMHO, it will be better to aim for log4j2 skipping the bridge way. ** Reviewer only needs to check that the tests pass and the new configurations are not causing inconsistence. ** this implies that suggestions for tuning/enhancements should be noted but not immediately applied. # Phase-2: Post migration. Tuning and optimizations ** After the migration is done, separate Jiras are filed to tune the logging. ** Separate tickets can be issued to address performance evaluations and exploration of other features such as Async, garbage-free, etc.. The approach of two-phases migration will reduce the burden and logically separate between actual migration Vs. addressing suggestions and tuning requests.

> Migrate from Log4j1 to Log4j2
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
> Issue Type: Sub-task
> Affects Versions: 3.3.0
> Reporter: Akira Ajisaka
> Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.
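To make the Phase-1 "straightforward replacement" idea concrete, here is a hedged sketch of what a minimal console-logging configuration could look like before and after migration, in the Log4j 2 properties format (illustrative names only, not Hadoop's actual shipped configuration):

```properties
# Hypothetical sketch only -- not Hadoop's real log4j.properties.
# log4j 1.x original:
#   log4j.rootLogger=INFO,console
#   log4j.appender.console=org.apache.log4j.ConsoleAppender
#   log4j.appender.console.layout=org.apache.log4j.PatternLayout
#   log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
# Roughly equivalent log4j2 properties format:
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{ISO8601} %p %c: %m%n
rootLogger.level = info
rootLogger.appenderRef.console.ref = console
```

A mechanical translation like this is what a Phase-1 reviewer would check for behavioral equivalence; async appenders, garbage-free mode, and other Log4j 2 features would be deferred to Phase-2 tickets.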
[jira] [Work logged] (HADOOP-17633) Please upgrade json-smart dependency to the latest version
[ https://issues.apache.org/jira/browse/HADOOP-17633?focusedWorklogId=581713&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581713 ] ASF GitHub Bot logged work on HADOOP-17633: --- Author: ASF GitHub Bot Created on: 13/Apr/21 11:59 Start Date: 13/Apr/21 11:59 Worklog Time Spent: 10m Work Description: virajjasani commented on pull request #2895: URL: https://github.com/apache/hadoop/pull/2895#issuecomment-818678822 @steveloughran I have run tests locally for some affected modules and they seem fine. However, this PR's build seems to be stuck (build #2 is still not finished after 17+ hours). The thread dump from the Jenkins build also doesn't seem that useful. By any chance, are you aware of any build issues specific to the `hadoop-project` module? My last PR touching `hadoop-project` had a similar issue: the build did not post any result on the PR (PR #2757). Hadoop Common and HDFS changes do post results on PRs. Build link [here](https://ci-hadoop.apache.org/job/hadoop-multibranch/view/change-requests/job/PR-2895/) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 581713) Time Spent: 1h (was: 50m) > Please upgrade json-smart dependency to the latest version > -- > > Key: HADOOP-17633 > URL: https://issues.apache.org/jira/browse/HADOOP-17633 > Project: Hadoop Common > Issue Type: Improvement > Components: auth, build >Affects Versions: 3.3.0, 3.2.1, 3.2.2, 3.4.0 >Reporter: helen huang >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > Please upgrade the json-smart dependency to the latest version available. > Currently hadoop-auth is using version 2.3. A Fortify scan picked up a security > issue with this version.
Please upgrade to the latest version. > Thanks! >
[GitHub] [hadoop] virajjasani commented on pull request #2895: HADOOP-17633. Bump json-smart to 2.4.2 due to CVEs
virajjasani commented on pull request #2895: URL: https://github.com/apache/hadoop/pull/2895#issuecomment-818678822 @steveloughran I have run tests locally for some affected modules and they seem fine. However, this PR's build seems to be stuck (build #2 is still not finished after 17+ hours). The thread dump from the Jenkins build also doesn't seem that useful. By any chance, are you aware of any build issues specific to the `hadoop-project` module? My last PR touching `hadoop-project` had a similar issue: the build did not post any result on the PR (PR #2757). Hadoop Common and HDFS changes do post results on PRs. Build link [here](https://ci-hadoop.apache.org/job/hadoop-multibranch/view/change-requests/job/PR-2895/)
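The change under discussion is essentially a one-line version bump. A sketch of how such a pin might look in `hadoop-project`'s `dependencyManagement` (the surrounding pom structure and whether Hadoop uses a version property here are assumptions; the `net.minidev:json-smart` coordinates and 2.4.2 version are from the PR title):

```xml
<!-- hadoop-project/pom.xml (sketch): pin json-smart for all modules,
     so hadoop-auth stops pulling in the CVE-affected 2.3 transitively -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>net.minidev</groupId>
      <artifactId>json-smart</artifactId>
      <version>2.4.2</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```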
[GitHub] [hadoop] hadoop-yetus commented on pull request #2860: Test Pre-Commits.
hadoop-yetus commented on pull request #2860: URL: https://github.com/apache/hadoop/pull/2860#issuecomment-818664661 (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2860/9/console in case of problems.
[jira] [Commented] (HADOOP-17524) Remove EventCounter
[ https://issues.apache.org/jira/browse/HADOOP-17524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17320089#comment-17320089 ] Viraj Jasani commented on HADOOP-17524: --- [~aajisaka] If you don't mind, can I take this up? > Remove EventCounter > --- > > Key: HADOOP-17524 > URL: https://issues.apache.org/jira/browse/HADOOP-17524 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Priority: Major > > EventCounter uses the Log4J 1.x API. We need to remove it to drop Log4J 1.x.
[GitHub] [hadoop] tasanuma commented on pull request #2868: HDFS-15759. EC: Verify EC reconstruction correctness on DataNode (#2585)
tasanuma commented on pull request #2868: URL: https://github.com/apache/hadoop/pull/2868#issuecomment-818649530 Thanks for your work, @jojochuang and @ferhui.
[jira] [Work logged] (HADOOP-11245) Update NFS gateway to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-11245?focusedWorklogId=581671=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581671 ] ASF GitHub Bot logged work on HADOOP-11245: --- Author: ASF GitHub Bot Created on: 13/Apr/21 10:21 Start Date: 13/Apr/21 10:21 Worklog Time Spent: 10m Work Description: szetszwo commented on a change in pull request #2832: URL: https://github.com/apache/hadoop/pull/2832#discussion_r612218129 ## File path: hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcUtil.java ## @@ -62,75 +64,84 @@ public static FrameDecoder constructRpcFrameDecoder() { * RpcFrameDecoder is a stateful pipeline stage. It has to be constructed for * each RPC client. */ - static class RpcFrameDecoder extends FrameDecoder { + static class RpcFrameDecoder extends ByteToMessageDecoder { public static final Logger LOG = LoggerFactory.getLogger(RpcFrameDecoder.class); -private ChannelBuffer currentFrame; +private boolean isLast; Review comment: isLast should be volatile. ## File path: hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleTcpClient.java ## @@ -48,40 +47,42 @@ public SimpleTcpClient(String host, int port, XDR request, Boolean oneShot) { this.request = request; this.oneShot = oneShot; } - - protected ChannelPipelineFactory setPipelineFactory() { -this.pipelineFactory = new ChannelPipelineFactory() { + + protected ChannelInitializer setChannelHandler() { +return new ChannelInitializer() { @Override - public ChannelPipeline getPipeline() { -return Channels.pipeline( + protected void initChannel(SocketChannel ch) throws Exception { +ChannelPipeline p = ch.pipeline(); +p.addLast( RpcUtil.constructRpcFrameDecoder(), -new SimpleTcpClientHandler(request)); +new SimpleTcpClientHandler(request) +); } }; -return this.pipelineFactory; } public void run() { // Configure the client. 
-ChannelFactory factory = new NioClientSocketChannelFactory( -Executors.newCachedThreadPool(), Executors.newCachedThreadPool(), 1, 1); -ClientBootstrap bootstrap = new ClientBootstrap(factory); - -// Set up the pipeline factory. -bootstrap.setPipelineFactory(setPipelineFactory()); +NioEventLoopGroup workerGroup = new NioEventLoopGroup(); Review comment: Is run() only used in unit tests? If not, workerGroup needs to be shutdown when oneShot == false. ## File path: hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleTcpClient.java ## @@ -48,40 +47,42 @@ public SimpleTcpClient(String host, int port, XDR request, Boolean oneShot) { this.request = request; this.oneShot = oneShot; } - - protected ChannelPipelineFactory setPipelineFactory() { -this.pipelineFactory = new ChannelPipelineFactory() { + + protected ChannelInitializer setChannelHandler() { +return new ChannelInitializer() { @Override - public ChannelPipeline getPipeline() { -return Channels.pipeline( + protected void initChannel(SocketChannel ch) throws Exception { +ChannelPipeline p = ch.pipeline(); +p.addLast( RpcUtil.constructRpcFrameDecoder(), -new SimpleTcpClientHandler(request)); +new SimpleTcpClientHandler(request) +); } }; -return this.pipelineFactory; } public void run() { // Configure the client. -ChannelFactory factory = new NioClientSocketChannelFactory( -Executors.newCachedThreadPool(), Executors.newCachedThreadPool(), 1, 1); -ClientBootstrap bootstrap = new ClientBootstrap(factory); - -// Set up the pipeline factory. 
-bootstrap.setPipelineFactory(setPipelineFactory()); +NioEventLoopGroup workerGroup = new NioEventLoopGroup(); +Bootstrap bootstrap = new Bootstrap() +.group(workerGroup) +.channel(NioSocketChannel.class); -bootstrap.setOption("tcpNoDelay", true); -bootstrap.setOption("keepAlive", true); +try { + ChannelFuture future = bootstrap.handler(setChannelHandler()) + .option(ChannelOption.TCP_NODELAY, true) + .option(ChannelOption.SO_KEEPALIVE, true) + .connect(new InetSocketAddress(host, port)).sync(); -// Start the connection attempt. -ChannelFuture future = bootstrap.connect(new InetSocketAddress(host, port)); + if (oneShot) { +// Wait until the connection is closed or the connection attempt fails. +future.channel().closeFuture().sync(); -if (oneShot) { - // Wait until the connection is closed or the connection attempt fails. - future.getChannel().getCloseFuture().awaitUninterruptibly(); - - // Shut
[GitHub] [hadoop] szetszwo commented on a change in pull request #2832: HADOOP-11245. Update NFS gateway to use Netty4
szetszwo commented on a change in pull request #2832: URL: https://github.com/apache/hadoop/pull/2832#discussion_r612218129 ## File path: hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcUtil.java ## @@ -62,75 +64,84 @@ public static FrameDecoder constructRpcFrameDecoder() { * RpcFrameDecoder is a stateful pipeline stage. It has to be constructed for * each RPC client. */ - static class RpcFrameDecoder extends FrameDecoder { + static class RpcFrameDecoder extends ByteToMessageDecoder { public static final Logger LOG = LoggerFactory.getLogger(RpcFrameDecoder.class); -private ChannelBuffer currentFrame; +private boolean isLast; Review comment: isLast should be volatile. ## File path: hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleTcpClient.java ## @@ -48,40 +47,42 @@ public SimpleTcpClient(String host, int port, XDR request, Boolean oneShot) { this.request = request; this.oneShot = oneShot; } - - protected ChannelPipelineFactory setPipelineFactory() { -this.pipelineFactory = new ChannelPipelineFactory() { + + protected ChannelInitializer setChannelHandler() { +return new ChannelInitializer() { @Override - public ChannelPipeline getPipeline() { -return Channels.pipeline( + protected void initChannel(SocketChannel ch) throws Exception { +ChannelPipeline p = ch.pipeline(); +p.addLast( RpcUtil.constructRpcFrameDecoder(), -new SimpleTcpClientHandler(request)); +new SimpleTcpClientHandler(request) +); } }; -return this.pipelineFactory; } public void run() { // Configure the client. -ChannelFactory factory = new NioClientSocketChannelFactory( -Executors.newCachedThreadPool(), Executors.newCachedThreadPool(), 1, 1); -ClientBootstrap bootstrap = new ClientBootstrap(factory); - -// Set up the pipeline factory. -bootstrap.setPipelineFactory(setPipelineFactory()); +NioEventLoopGroup workerGroup = new NioEventLoopGroup(); Review comment: Is run() only used in unit tests? 
If not, workerGroup needs to be shutdown when oneShot == false. ## File path: hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleTcpClient.java ## @@ -48,40 +47,42 @@ public SimpleTcpClient(String host, int port, XDR request, Boolean oneShot) { this.request = request; this.oneShot = oneShot; } - - protected ChannelPipelineFactory setPipelineFactory() { -this.pipelineFactory = new ChannelPipelineFactory() { + + protected ChannelInitializer setChannelHandler() { +return new ChannelInitializer() { @Override - public ChannelPipeline getPipeline() { -return Channels.pipeline( + protected void initChannel(SocketChannel ch) throws Exception { +ChannelPipeline p = ch.pipeline(); +p.addLast( RpcUtil.constructRpcFrameDecoder(), -new SimpleTcpClientHandler(request)); +new SimpleTcpClientHandler(request) +); } }; -return this.pipelineFactory; } public void run() { // Configure the client. -ChannelFactory factory = new NioClientSocketChannelFactory( -Executors.newCachedThreadPool(), Executors.newCachedThreadPool(), 1, 1); -ClientBootstrap bootstrap = new ClientBootstrap(factory); - -// Set up the pipeline factory. -bootstrap.setPipelineFactory(setPipelineFactory()); +NioEventLoopGroup workerGroup = new NioEventLoopGroup(); +Bootstrap bootstrap = new Bootstrap() +.group(workerGroup) +.channel(NioSocketChannel.class); -bootstrap.setOption("tcpNoDelay", true); -bootstrap.setOption("keepAlive", true); +try { + ChannelFuture future = bootstrap.handler(setChannelHandler()) + .option(ChannelOption.TCP_NODELAY, true) + .option(ChannelOption.SO_KEEPALIVE, true) + .connect(new InetSocketAddress(host, port)).sync(); -// Start the connection attempt. -ChannelFuture future = bootstrap.connect(new InetSocketAddress(host, port)); + if (oneShot) { +// Wait until the connection is closed or the connection attempt fails. +future.channel().closeFuture().sync(); -if (oneShot) { - // Wait until the connection is closed or the connection attempt fails. 
- future.getChannel().getCloseFuture().awaitUninterruptibly(); - - // Shut down thread pools to exit. - bootstrap.releaseExternalResources(); +// Shut down thread pools to exit. +workerGroup.shutdownGracefully(); Review comment: This should be moved to a finally block. Otherwise, it won't be shutdown in case of exceptions. ## File path: hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcResponse.java ## @@ -19,15 +19,33 @@
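The review point above, that `workerGroup.shutdownGracefully()` should move into a finally block so the pool is released even when `sync()` throws, is the standard resource-release pattern. A minimal, Netty-free sketch of the same pattern using a plain `ExecutorService` (class and method names are illustrative, not from the PR):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownInFinally {
    // Runs a task on a fresh single-thread pool and guarantees the pool is
    // released even if an exception escapes the try block, mirroring the
    // suggestion to call shutdownGracefully() in a finally block.
    static boolean runAndRelease(Runnable task) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            pool.submit(task).isDone(); // connect/submit work happens here
        } finally {
            pool.shutdown();                          // always reached
            pool.awaitTermination(5, TimeUnit.SECONDS);
        }
        return pool.isShutdown();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runAndRelease(() -> { }));
    }
}
```

Without the finally block, an exception between pool creation and shutdown would leak the pool's threads, which is exactly the leak the reviewer flagged for the non-oneShot path.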
[GitHub] [hadoop] tomscut commented on pull request #2896: HDFS-15970. Print network topology on the web
tomscut commented on pull request #2896: URL: https://github.com/apache/hadoop/pull/2896#issuecomment-818622111 Those failed unit tests are unrelated to the change, and they pass locally.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2896: HDFS-15970. Print network topology on the web
hadoop-yetus commented on pull request #2896: URL: https://github.com/apache/hadoop/pull/2896#issuecomment-818611027 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 35s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 34m 19s | | trunk passed | | +1 :green_heart: | compile | 1m 27s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 1m 13s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 58s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 28s | | trunk passed | | +1 :green_heart: | javadoc | 0m 55s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 30s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 23s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 51s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 14s | | the patch passed | | +1 :green_heart: | compile | 1m 18s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 1m 18s | | the patch passed | | +1 :green_heart: | compile | 1m 9s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 1m 9s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 0m 55s | | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 5 unchanged - 1 fixed = 5 total (was 6) | | +1 :green_heart: | mvnsite | 1m 11s | | the patch passed | | +1 :green_heart: | javadoc | 0m 50s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 23s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 18s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 27s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 245m 39s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2896/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 45s | | The patch does not generate ASF License warnings. | | | | 334m 11s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys | | | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots | | | hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2896/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2896 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux c72d0f5777f6 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 6a97f6a99f2809aed0f352446b184c6a33445731 | | Default Java | Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2896/4/testReport/ | | Max. process+thread count | 3035 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output |
[jira] [Work logged] (HADOOP-11245) Update NFS gateway to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-11245?focusedWorklogId=581658&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581658 ] ASF GitHub Bot logged work on HADOOP-11245: --- Author: ASF GitHub Bot Created on: 13/Apr/21 09:44 Start Date: 13/Apr/21 09:44 Worklog Time Spent: 10m Work Description: szetszwo commented on pull request #2832: URL: https://github.com/apache/hadoop/pull/2832#issuecomment-818602960 > ... verified to contain no memory leak via -Dio.netty.leakDetectionLevel=paranoid. Hi @jojochuang, according to https://netty.io/wiki/reference-counted-objects.html , the option has been renamed to `io.netty.leakDetection.Level`; see the NOTE in the screenshot: https://user-images.githubusercontent.com/907380/114532453-6ee1bf00-9c7f-11eb-82fe-f43bffc3cae8.png Not sure if the old option name still works. Issue Time Tracking --- Worklog Id: (was: 581658) Time Spent: 1h 40m (was: 1.5h) > Update NFS gateway to use Netty4 > > > Key: HADOOP-11245 > URL: https://issues.apache.org/jira/browse/HADOOP-11245 > Project: Hadoop Common > Issue Type: Sub-task > Components: nfs >Reporter: Brandon Li >Assignee: Wei-Chiu Chuang >Priority: Major > Labels: pull-request-available > Time Spent: 1h 40m > Remaining Estimate: 0h >
[GitHub] [hadoop] szetszwo commented on pull request #2832: HADOOP-11245. Update NFS gateway to use Netty4
szetszwo commented on pull request #2832: URL: https://github.com/apache/hadoop/pull/2832#issuecomment-818602960 > ... verified to contain no memory leak via -Dio.netty.leakDetectionLevel=paranoid. Hi @jojochuang, according to https://netty.io/wiki/reference-counted-objects.html , the option has been renamed to `io.netty.leakDetection.Level`; see the NOTE in the screenshot: https://user-images.githubusercontent.com/907380/114532453-6ee1bf00-9c7f-11eb-82fe-f43bffc3cae8.png Not sure if the old option name still works.
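On the "not sure if the old option name still works" question: a library can keep the old property name working by falling back to it when the new one is unset. Whether Netty itself does this for the leak-detection property should be checked against the Netty version in use; the sketch below only demonstrates such a dual lookup in plain Java (class, method, and the exact casing of the new property name are assumptions):

```java
public class LeakDetectionProp {
    // Looks up the renamed property first, then falls back to the legacy
    // -Dio.netty.leakDetectionLevel name, then to a caller-supplied default.
    static String leakLevel(String defaultLevel) {
        String v = System.getProperty("io.netty.leakDetection.level"); // new name
        if (v == null) {
            v = System.getProperty("io.netty.leakDetectionLevel");     // legacy name
        }
        return v != null ? v : defaultLevel;
    }

    public static void main(String[] args) {
        // Simulate a user who still passes only the legacy flag.
        System.setProperty("io.netty.leakDetectionLevel", "paranoid");
        System.out.println(leakLevel("simple"));
    }
}
```

If Netty does not implement a fallback like this in the version being tested, runs using the legacy flag would silently get the default level, which is why the reviewer's question matters for the "no memory leak" verification.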
[GitHub] [hadoop] hadoop-yetus commented on pull request #2819: HDFS-15923. Authentication failed when rename accross sub clusters.
hadoop-yetus commented on pull request #2819: URL: https://github.com/apache/hadoop/pull/2819#issuecomment-818588736 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 5m 29s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 35m 55s | | trunk passed | | +1 :green_heart: | compile | 0m 38s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 0m 33s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 22s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 39s | | trunk passed | | +1 :green_heart: | javadoc | 0m 39s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 52s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 20s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 20s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 33s | | the patch passed | | +1 :green_heart: | compile | 0m 32s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 0m 32s | | the patch passed | | +1 :green_heart: | compile | 0m 27s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 0m 27s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 18s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2819/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) | | +1 :green_heart: | mvnsite | 0m 31s | | the patch passed | | +1 :green_heart: | javadoc | 0m 31s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 47s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 23s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 54s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 26m 15s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2819/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. 
| | | | 114m 6s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterFederationRename | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2819/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2819 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 46ed824fb395 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 4bfc7ae17a3386618cf73a9984118f7beaaddc65 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2819/2/testReport/ | | Max. process+thread count | 1906 (vs. ulimit of 5500) | | modules | C:
[GitHub] [hadoop] hadoop-yetus commented on pull request #2901: HDFS-15912. Allow ProtobufRpcEngine to be extensible
hadoop-yetus commented on pull request #2901: URL: https://github.com/apache/hadoop/pull/2901#issuecomment-818587189 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 53s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 4s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 23m 29s | | trunk passed | | +1 :green_heart: | compile | 22m 38s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 18m 58s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 4m 0s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 13s | | trunk passed | | +1 :green_heart: | javadoc | 1m 43s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 31s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 40s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 55s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 20s | | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 28s | [/patch-mvninstall-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2901/2/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch failed. | | -1 :x: | compile | 0m 54s | [/patch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2901/2/artifact/out/patch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | root in the patch failed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04. | | -1 :x: | javac | 0m 54s | [/patch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2901/2/artifact/out/patch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | root in the patch failed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04. | | -1 :x: | compile | 0m 48s | [/patch-compile-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2901/2/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | root in the patch failed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08. | | -1 :x: | javac | 0m 48s | [/patch-compile-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2901/2/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | root in the patch failed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08. | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 3m 41s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2901/2/artifact/out/results-checkstyle-root.txt) | root: The patch generated 36 new + 21 unchanged - 3 fixed = 57 total (was 24) | | -1 :x: | mvnsite | 0m 33s | [/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2901/2/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch failed. | | -1 :x: | javadoc | 0m 23s | [/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2901/2/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | hadoop-common in the patch failed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04. | | -1 :x: | javadoc | 1m 18s |
[jira] [Work logged] (HADOOP-17633) Please upgrade json-smart dependency to the latest version
[ https://issues.apache.org/jira/browse/HADOOP-17633?focusedWorklogId=581646=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581646 ] ASF GitHub Bot logged work on HADOOP-17633: --- Author: ASF GitHub Bot Created on: 13/Apr/21 09:07 Start Date: 13/Apr/21 09:07 Worklog Time Spent: 10m Work Description: virajjasani edited a comment on pull request #2895: URL: https://github.com/apache/hadoop/pull/2895#issuecomment-818578359 ``` [INFO] org.apache.hadoop:hadoop-minikdc:jar:3.4.0-SNAPSHOT [INFO] +- commons-io:commons-io:jar:2.5:compile [INFO] +- org.apache.kerby:kerb-simplekdc:jar:1.0.1:compile [INFO] | +- org.apache.kerby:kerb-client:jar:1.0.1:compile [INFO] | | +- org.apache.kerby:kerby-config:jar:1.0.1:compile [INFO] | | +- org.apache.kerby:kerb-core:jar:1.0.1:compile [INFO] | | | \- org.apache.kerby:kerby-pkix:jar:1.0.1:compile [INFO] | | | +- org.apache.kerby:kerby-asn1:jar:1.0.1:compile [INFO] | | | \- org.apache.kerby:kerby-util:jar:1.0.1:compile [INFO] | | +- org.apache.kerby:kerb-common:jar:1.0.1:compile [INFO] | | | \- org.apache.kerby:kerb-crypto:jar:1.0.1:compile [INFO] | | +- org.apache.kerby:kerb-util:jar:1.0.1:compile [INFO] | | \- org.apache.kerby:token-provider:jar:1.0.1:compile [INFO] | | \- com.nimbusds:nimbus-jose-jwt:jar:9.8:compile [INFO] | |\- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile [INFO] | \- org.apache.kerby:kerb-admin:jar:1.0.1:compile ``` & ``` [INFO] +- com.nimbusds:nimbus-jose-jwt:jar:9.8:compile [INFO] | \- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 581646) Time Spent: 50m (was: 40m) > Please upgrade json-smart dependency to the latest version > -- > > Key: HADOOP-17633 > URL: https://issues.apache.org/jira/browse/HADOOP-17633 > Project: Hadoop Common > Issue Type: Improvement > Components: auth, build >Affects Versions: 3.3.0, 3.2.1, 3.2.2, 3.4.0 >Reporter: helen huang >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > > Please upgrade the json-smart dependency to the latest version available. > Currently hadoop-auth is using version 2.3. Fortify scan picked up a security > issue with this version. Please upgrade to the latest version. > Thanks! > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17633) Please upgrade json-smart dependency to the latest version
[ https://issues.apache.org/jira/browse/HADOOP-17633?focusedWorklogId=581645=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581645 ] ASF GitHub Bot logged work on HADOOP-17633: --- Author: ASF GitHub Bot Created on: 13/Apr/21 09:07 Start Date: 13/Apr/21 09:07 Worklog Time Spent: 10m Work Description: virajjasani commented on pull request #2895: URL: https://github.com/apache/hadoop/pull/2895#issuecomment-818578359 ``` [INFO] org.apache.hadoop:hadoop-minikdc:jar:3.4.0-SNAPSHOT [INFO] +- commons-io:commons-io:jar:2.5:compile [INFO] +- org.apache.kerby:kerb-simplekdc:jar:1.0.1:compile [INFO] | +- org.apache.kerby:kerb-client:jar:1.0.1:compile [INFO] | | +- org.apache.kerby:kerby-config:jar:1.0.1:compile [INFO] | | +- org.apache.kerby:kerb-core:jar:1.0.1:compile [INFO] | | | \- org.apache.kerby:kerby-pkix:jar:1.0.1:compile [INFO] | | | +- org.apache.kerby:kerby-asn1:jar:1.0.1:compile [INFO] | | | \- org.apache.kerby:kerby-util:jar:1.0.1:compile [INFO] | | +- org.apache.kerby:kerb-common:jar:1.0.1:compile [INFO] | | | \- org.apache.kerby:kerb-crypto:jar:1.0.1:compile [INFO] | | +- org.apache.kerby:kerb-util:jar:1.0.1:compile **[INFO] | | \- org.apache.kerby:token-provider:jar:1.0.1:compile [INFO] | | \- com.nimbusds:nimbus-jose-jwt:jar:9.8:compile** [INFO] | |\- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile [INFO] | \- org.apache.kerby:kerb-admin:jar:1.0.1:compile ``` & ``` **[INFO] +- com.nimbusds:nimbus-jose-jwt:jar:9.8:compile** [INFO] | \- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 581645) Time Spent: 40m (was: 0.5h) > Please upgrade json-smart dependency to the latest version > -- > > Key: HADOOP-17633 > URL: https://issues.apache.org/jira/browse/HADOOP-17633 > Project: Hadoop Common > Issue Type: Improvement > Components: auth, build >Affects Versions: 3.3.0, 3.2.1, 3.2.2, 3.4.0 >Reporter: helen huang >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > > Please upgrade the json-smart dependency to the latest version available. > Currently hadoop-auth is using version 2.3. Fortify scan picked up a security > issue with this version. Please upgrade to the latest version. > Thanks! > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani edited a comment on pull request #2895: HADOOP-17633. Bump json-smart to 2.4.2 due to CVEs
virajjasani edited a comment on pull request #2895: URL: https://github.com/apache/hadoop/pull/2895#issuecomment-818578359 ``` [INFO] org.apache.hadoop:hadoop-minikdc:jar:3.4.0-SNAPSHOT [INFO] +- commons-io:commons-io:jar:2.5:compile [INFO] +- org.apache.kerby:kerb-simplekdc:jar:1.0.1:compile [INFO] | +- org.apache.kerby:kerb-client:jar:1.0.1:compile [INFO] | | +- org.apache.kerby:kerby-config:jar:1.0.1:compile [INFO] | | +- org.apache.kerby:kerb-core:jar:1.0.1:compile [INFO] | | | \- org.apache.kerby:kerby-pkix:jar:1.0.1:compile [INFO] | | | +- org.apache.kerby:kerby-asn1:jar:1.0.1:compile [INFO] | | | \- org.apache.kerby:kerby-util:jar:1.0.1:compile [INFO] | | +- org.apache.kerby:kerb-common:jar:1.0.1:compile [INFO] | | | \- org.apache.kerby:kerb-crypto:jar:1.0.1:compile [INFO] | | +- org.apache.kerby:kerb-util:jar:1.0.1:compile [INFO] | | \- org.apache.kerby:token-provider:jar:1.0.1:compile [INFO] | | \- com.nimbusds:nimbus-jose-jwt:jar:9.8:compile [INFO] | |\- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile [INFO] | \- org.apache.kerby:kerb-admin:jar:1.0.1:compile ``` & ``` [INFO] +- com.nimbusds:nimbus-jose-jwt:jar:9.8:compile [INFO] | \- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani commented on pull request #2895: HADOOP-17633. Bump json-smart to 2.4.2 due to CVEs
virajjasani commented on pull request #2895: URL: https://github.com/apache/hadoop/pull/2895#issuecomment-818578359 ``` [INFO] org.apache.hadoop:hadoop-minikdc:jar:3.4.0-SNAPSHOT [INFO] +- commons-io:commons-io:jar:2.5:compile [INFO] +- org.apache.kerby:kerb-simplekdc:jar:1.0.1:compile [INFO] | +- org.apache.kerby:kerb-client:jar:1.0.1:compile [INFO] | | +- org.apache.kerby:kerby-config:jar:1.0.1:compile [INFO] | | +- org.apache.kerby:kerb-core:jar:1.0.1:compile [INFO] | | | \- org.apache.kerby:kerby-pkix:jar:1.0.1:compile [INFO] | | | +- org.apache.kerby:kerby-asn1:jar:1.0.1:compile [INFO] | | | \- org.apache.kerby:kerby-util:jar:1.0.1:compile [INFO] | | +- org.apache.kerby:kerb-common:jar:1.0.1:compile [INFO] | | | \- org.apache.kerby:kerb-crypto:jar:1.0.1:compile [INFO] | | +- org.apache.kerby:kerb-util:jar:1.0.1:compile **[INFO] | | \- org.apache.kerby:token-provider:jar:1.0.1:compile [INFO] | | \- com.nimbusds:nimbus-jose-jwt:jar:9.8:compile** [INFO] | |\- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile [INFO] | \- org.apache.kerby:kerb-admin:jar:1.0.1:compile ``` & ``` **[INFO] +- com.nimbusds:nimbus-jose-jwt:jar:9.8:compile** [INFO] | \- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17633) Please upgrade json-smart dependency to the latest version
[ https://issues.apache.org/jira/browse/HADOOP-17633?focusedWorklogId=581638&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581638 ] ASF GitHub Bot logged work on HADOOP-17633: --- Author: ASF GitHub Bot Created on: 13/Apr/21 08:58 Start Date: 13/Apr/21 08:58 Worklog Time Spent: 10m Work Description: virajjasani commented on pull request #2895: URL: https://github.com/apache/hadoop/pull/2895#issuecomment-818572599 I just tested locally and realized that `nimbus-jose-jwt` 9.8 does not even pull in `json-smart`, so the places where we have excluded `json-smart` from `nimbus-jose-jwt` no longer need the exclusion. Let me also check with the latest version of `kerby`. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 581638) Time Spent: 0.5h (was: 20m) > Please upgrade json-smart dependency to the latest version > -- > > Key: HADOOP-17633 > URL: https://issues.apache.org/jira/browse/HADOOP-17633 > Project: Hadoop Common > Issue Type: Improvement > Components: auth, build >Affects Versions: 3.3.0, 3.2.1, 3.2.2, 3.4.0 >Reporter: helen huang >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > Please upgrade the json-smart dependency to the latest version available. > Currently hadoop-auth is using version 2.3. Fortify scan picked up a security > issue with this version. Please upgrade to the latest version. > Thanks! > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani commented on pull request #2895: HADOOP-17633. Bump json-smart to 2.4.2 due to CVEs
virajjasani commented on pull request #2895: URL: https://github.com/apache/hadoop/pull/2895#issuecomment-818572599 I just tested locally and realized that `nimbus-jose-jwt` 9.8 does not even pull in `json-smart`, so the places where we have excluded `json-smart` from `nimbus-jose-jwt` no longer need the exclusion. Let me also check with the latest version of `kerby`. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
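[Editorial note] The exclusions discussed in this comment are Maven `<exclusion>` entries. The fragment below is illustrative only (not copied from the Hadoop poms, though `net.minidev:json-smart` are the real coordinates of the transitive dependency in question); it shows the shape of the configuration that becomes unnecessary if `nimbus-jose-jwt` 9.8 no longer declares `json-smart`:

```xml
<!-- Illustrative exclusion: per the comment above, this can be dropped once
     nimbus-jose-jwt is on 9.8, which no longer depends on json-smart. -->
<dependency>
  <groupId>com.nimbusds</groupId>
  <artifactId>nimbus-jose-jwt</artifactId>
  <version>9.8</version>
  <exclusions>
    <exclusion>
      <groupId>net.minidev</groupId>
      <artifactId>json-smart</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Whether the exclusion is still needed can be confirmed with `mvn dependency:tree`, as done in the comments above.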
[jira] [Comment Edited] (HADOOP-17611) Distcp parallel file copy breaks the modification time
[ https://issues.apache.org/jira/browse/HADOOP-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17318843#comment-17318843 ] Adam Maroti edited comment on HADOOP-17611 at 4/13/21, 8:29 AM: It already has an API for that: FileSystem.setTimes(Path, long, long) was (Author: amaroti): It already has an API for that: FileSystem.setTimes(Path, long, long) Viraj Jasani (Jira) wrote (at: 2021. ápr. 11., V > Distcp parallel file copy breaks the modification time > -- > > Key: HADOOP-17611 > URL: https://issues.apache.org/jira/browse/HADOOP-17611 > Project: Hadoop Common > Issue Type: Bug >Reporter: Adam Maroti >Priority: Major > Labels: pull-request-available > Time Spent: 2h 10m > Remaining Estimate: 0h > > The commit HADOOP-11794. Enable distcp to copy blocks in parallel. > (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of > large files. > > In CopyCommitter.java inside concatFileChunks FileSystem.concat is called, > which changes the modification time; therefore the modification times of files > copied by distcp will not match the source files. However, this only occurs > for large enough files, which are copied by splitting them up by distcp. > In concatFileChunks, before calling concat, extract the modification time and > apply that to the concatenated result-file after the concat. (probably best > -after- before the rename()). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-17611) Distcp parallel file copy breaks the modification time
[ https://issues.apache.org/jira/browse/HADOOP-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17318851#comment-17318851 ] Adam Maroti edited comment on HADOOP-17611 at 4/13/21, 8:28 AM: Yes, also it is possible to use it to change just one or the other. (Or obviously both the access time and the modification time simultaneously) was (Author: amaroti): Yes, also it is possible to use it to change just one or the other. (Or obviously both the access time and the modification time simultaneously) Viraj Jasani (Jira) wrote (at: 2021. ápr. 11., V > Distcp parallel file copy breaks the modification time > -- > > Key: HADOOP-17611 > URL: https://issues.apache.org/jira/browse/HADOOP-17611 > Project: Hadoop Common > Issue Type: Bug >Reporter: Adam Maroti >Priority: Major > Labels: pull-request-available > Time Spent: 2h 10m > Remaining Estimate: 0h > > The commit HADOOP-11794. Enable distcp to copy blocks in parallel. > (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of > large files. > > In CopyCommitter.java inside concatFileChunks FileSystem.concat is called, > which changes the modification time; therefore the modification times of files > copied by distcp will not match the source files. However, this only occurs > for large enough files, which are copied by splitting them up by distcp. > In concatFileChunks, before calling concat, extract the modification time and > apply that to the concatenated result-file after the concat. (probably best > -after- before the rename()). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-17611) Distcp parallel file copy breaks the modification time
[ https://issues.apache.org/jira/browse/HADOOP-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17319615#comment-17319615 ] Adam Maroti edited comment on HADOOP-17611 at 4/13/21, 8:28 AM: When times is set, the preserve() function is called from the copy mapper after the file/file chunk creation. The CopyCommitter, which runs after that and does the concat, doesn't call preserve because it no longer has the source file statuses. So the concat happens inside of CopyCommitter, which is run after the copy mapper, causing the concat to be run after the preserve. was (Author: amaroti): When times is set the preserve() function is called from the copy mapper after the file/file junk creation. The copycomitter which runs after that and does the concat doesn't call preserve because it no longer has the source file statuses. So the concat happens inside of copycomitter which is run after the copy mapper causing the concat to be run after the preserve. Viraj Jasani (Jira) wrote (at: 2021. ápr. 12., H > Distcp parallel file copy breaks the modification time > -- > > Key: HADOOP-17611 > URL: https://issues.apache.org/jira/browse/HADOOP-17611 > Project: Hadoop Common > Issue Type: Bug >Reporter: Adam Maroti >Priority: Major > Labels: pull-request-available > Time Spent: 2h 10m > Remaining Estimate: 0h > > The commit HADOOP-11794. Enable distcp to copy blocks in parallel. > (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of > large files. > > In CopyCommitter.java inside concatFileChunks FileSystem.concat is called, > which changes the modification time; therefore the modification times of files > copied by distcp will not match the source files. However, this only occurs > for large enough files, which are copied by splitting them up by distcp. > In concatFileChunks, before calling concat, extract the modification time and > apply that to the concatenated result-file after the concat. (probably best > -after- before the rename()). 
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
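[Editorial note] The fix discussed in this thread is: capture the modification time before the content-rewriting call (FileSystem.concat in CopyCommitter#concatFileChunks) and restore it afterwards with FileSystem.setTimes(Path, long, long). The pattern can be sketched self-contained with `java.nio.file` standing in for the HDFS API; `withMtimePreserved` and `IORunnable` are hypothetical helpers for illustration, not Hadoop code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;

public class PreserveMtimeSketch {

    interface IORunnable { void run() throws IOException; }

    // Remember the target's mtime, run an operation that rewrites it
    // (the analogue of FileSystem.concat), then restore the mtime --
    // the analogue of calling FileSystem.setTimes afterwards.
    static void withMtimePreserved(Path target, IORunnable op) throws IOException {
        FileTime mtime = Files.getLastModifiedTime(target); // capture before
        op.run();                                           // e.g. concat the chunks
        Files.setLastModifiedTime(target, mtime);           // restore after
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("chunk", ".dat");
        Files.writeString(f, "part-1");
        FileTime before = Files.getLastModifiedTime(f);
        withMtimePreserved(f, () -> Files.writeString(f, "part-1part-2")); // simulated concat
        if (!Files.getLastModifiedTime(f).equals(before)) {
            throw new AssertionError("mtime was not preserved");
        }
        Files.delete(f);
        System.out.println("mtime preserved");
    }
}
```

As the comments above note, whether the restore happens before or after the final rename() is a separate ordering choice.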
[GitHub] [hadoop] jojochuang opened a new pull request #2902: HDFS-15940. Fixing and refactoring tests specific to Block recovery.
jojochuang opened a new pull request #2902: URL: https://github.com/apache/hadoop/pull/2902 HDFS-15940 for branch-3.2 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17630) [JDK 15] TestPrintableString fails due to Unicode 13.0 support
[ https://issues.apache.org/jira/browse/HADOOP-17630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17630: --- Fix Version/s: 3.2.3 3.1.5 3.4.0 3.3.1 Resolution: Fixed Status: Resolved (was: Patch Available) Committed to trunk, branch-3.3, branch-3.2, and branch-3.1. > [JDK 15] TestPrintableString fails due to Unicode 13.0 support > -- > > Key: HADOOP-17630 > URL: https://issues.apache.org/jira/browse/HADOOP-17630 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: newbie, pull-request-available > Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3 > > Time Spent: 40m > Remaining Estimate: 0h > > After [JDK-8239383|https://bugs.openjdk.java.net/browse/JDK-8239383], Unicode > 13.0 is supported and TestPrintableString fails. > U+3 is actually used in Unicode 13.0: > https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_Extension_G > {quote} > [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.055 > s <<< FAILURE! - in org.apache.hadoop.fs.shell.TestPrintableString > [ERROR] > testNonPrintableCharacters(org.apache.hadoop.fs.shell.TestPrintableString) > Time elapsed: 0.014 s <<< FAILURE! 
> java.lang.AssertionError: > Should replace unassigned U+3 and U+D > Expected: is "-?-?-" > but: was "- -?-" > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) > at org.junit.Assert.assertThat(Assert.java:964) > at > org.apache.hadoop.fs.shell.TestPrintableString.expect(TestPrintableString.java:32) > at > org.apache.hadoop.fs.shell.TestPrintableString.testNonPrintableCharacters(TestPrintableString.java:79) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:78) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:567) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > {quote} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
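[Editorial note] The failure mode in this issue is that code points the test assumed were permanently unassigned became assigned in Unicode 13.0, which JDK 15 bundles: `Character.isDefined` and `Character.getType` answer against the Unicode tables of the running JDK, so "unassigned" is a moving target across JDK versions. A minimal sketch (class name is illustrative) of the properties that do and do not stay stable:

```java
public class UnicodeTableSketch {
    public static void main(String[] args) {
        // Character.isDefined/getType consult the Unicode version bundled
        // with the running JDK, so a code point that is unassigned on an
        // older JDK may be assigned on JDK 15 (Unicode 13.0). Tests should
        // only rely on facts stable across Unicode versions:
        System.out.println(Character.isDefined('A'));                         // assigned since Unicode 1.0
        System.out.println(Character.getType(0xD800) == Character.SURROGATE); // surrogates never become printable
    }
}
```

This is why picking a currently-unassigned code point as a test fixture, as TestPrintableString did, eventually breaks on a newer JDK.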
[jira] [Work logged] (HADOOP-17630) [JDK 15] TestPrintableString fails due to Unicode 13.0 support
[ https://issues.apache.org/jira/browse/HADOOP-17630?focusedWorklogId=581613=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581613 ] ASF GitHub Bot logged work on HADOOP-17630: --- Author: ASF GitHub Bot Created on: 13/Apr/21 08:09 Start Date: 13/Apr/21 08:09 Worklog Time Spent: 10m Work Description: aajisaka merged pull request #2890: URL: https://github.com/apache/hadoop/pull/2890 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 581613) Time Spent: 0.5h (was: 20m) > [JDK 15] TestPrintableString fails due to Unicode 13.0 support > -- > > Key: HADOOP-17630 > URL: https://issues.apache.org/jira/browse/HADOOP-17630 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: newbie, pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > After [JDK-8239383|https://bugs.openjdk.java.net/browse/JDK-8239383], Unicode > 13.0 is supported and TestPrintableString fails. > U+3 is actually used in Unicode 13.0: > https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_Extension_G > {quote} > [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.055 > s <<< FAILURE! - in org.apache.hadoop.fs.shell.TestPrintableString > [ERROR] > testNonPrintableCharacters(org.apache.hadoop.fs.shell.TestPrintableString) > Time elapsed: 0.014 s <<< FAILURE! 
> java.lang.AssertionError: > Should replace unassigned U+3 and U+D > Expected: is "-?-?-" > but: was "- -?-" > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) > at org.junit.Assert.assertThat(Assert.java:964) > at > org.apache.hadoop.fs.shell.TestPrintableString.expect(TestPrintableString.java:32) > at > org.apache.hadoop.fs.shell.TestPrintableString.testNonPrintableCharacters(TestPrintableString.java:79) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:78) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:567) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > {quote} -- This message was sent by Atlassian
[GitHub] [hadoop] aajisaka commented on pull request #2890: HADOOP-17630. [JDK 15] TestPrintableString fails due to Unicode 13.0 support.
aajisaka commented on pull request #2890: URL: https://github.com/apache/hadoop/pull/2890#issuecomment-818537893 Merged. Thanks @jojochuang -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2898: HDFS-15971. Make mkstemp cross platform
hadoop-yetus commented on pull request #2898: URL: https://github.com/apache/hadoop/pull/2898#issuecomment-818538528 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 38s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 6 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 32s | | trunk passed | | +1 :green_heart: | compile | 2m 38s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 2m 41s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | mvnsite | 0m 27s | | trunk passed | | +1 :green_heart: | shadedclient | 52m 1s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 17s | | the patch passed | | +1 :green_heart: | compile | 2m 47s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | cc | 2m 47s | | the patch passed | | +1 :green_heart: | golang | 2m 47s | | the patch passed | | +1 :green_heart: | javac | 2m 47s | | the patch passed | | +1 :green_heart: | compile | 2m 36s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | cc | 2m 36s | | the patch passed | | +1 :green_heart: | golang | 2m 36s | | the patch passed | | +1 :green_heart: | javac | 2m 36s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | mvnsite | 0m 19s | | the patch passed | | +1 :green_heart: | shadedclient | 13m 30s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 31m 39s | | hadoop-hdfs-native-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. | | | | 106m 41s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2898/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2898 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell golang | | uname | Linux ac03461e8a6c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 09c548bf9c671f1839d2764fc80f8b342da80278 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2898/2/testReport/ | | Max. process+thread count | 616 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2898/2/console | | versions | git=2.25.1 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17630) [JDK 15] TestPrintableString fails due to Unicode 13.0 support
[ https://issues.apache.org/jira/browse/HADOOP-17630?focusedWorklogId=581614&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-581614 ]

ASF GitHub Bot logged work on HADOOP-17630:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 13/Apr/21 08:09
            Start Date: 13/Apr/21 08:09
    Worklog Time Spent: 10m
      Work Description: aajisaka commented on pull request #2890:
URL: https://github.com/apache/hadoop/pull/2890#issuecomment-818537893

   Merged. Thanks @jojochuang

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Issue Time Tracking
-------------------

    Worklog Id:     (was: 581614)
    Time Spent: 40m  (was: 0.5h)

> [JDK 15] TestPrintableString fails due to Unicode 13.0 support
> --------------------------------------------------------------
>
>                 Key: HADOOP-17630
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17630
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Akira Ajisaka
>            Assignee: Akira Ajisaka
>            Priority: Major
>              Labels: newbie, pull-request-available
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> After [JDK-8239383|https://bugs.openjdk.java.net/browse/JDK-8239383], Unicode
> 13.0 is supported and TestPrintableString fails.
> U+30000 is actually used in Unicode 13.0:
> https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_Extension_G
> {quote}
> [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.055 s <<< FAILURE! - in org.apache.hadoop.fs.shell.TestPrintableString
> [ERROR] testNonPrintableCharacters(org.apache.hadoop.fs.shell.TestPrintableString)  Time elapsed: 0.014 s  <<< FAILURE!
> java.lang.AssertionError:
> Should replace unassigned U+30000 and U+DFFFF
> Expected: is "-?-?-"
>      but: was "- -?-"
> 	at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> 	at org.junit.Assert.assertThat(Assert.java:964)
> 	at org.apache.hadoop.fs.shell.TestPrintableString.expect(TestPrintableString.java:32)
> 	at org.apache.hadoop.fs.shell.TestPrintableString.testNonPrintableCharacters(TestPrintableString.java:79)
> 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:78)
> 	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.base/java.lang.reflect.Method.invoke(Method.java:567)
> 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> 	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> 	at
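The failure above can be reproduced outside Hadoop: `Character.isDefined` answers from the Unicode tables bundled with the JDK, so a code point in CJK Extension G flips from "unassigned" to "assigned" once the JDK picks up Unicode 13.0. Below is a minimal sketch of such a printability check; `isPrintable` is a hypothetical helper for illustration, not Hadoop's actual `PrintableString` implementation.

```java
// Hypothetical printability check (a sketch, NOT Hadoop's PrintableString):
// a code point counts as printable only if the JDK's Unicode tables assign
// it and it is neither an ISO control character nor a surrogate. Because
// isDefined() depends on the Unicode version bundled with the JDK, answers
// for formerly unassigned code points change across JDK releases, which is
// exactly how JDK 15 (Unicode 13.0) broke a hard-coded test expectation.
public class PrintableCheck {
  public static boolean isPrintable(int codePoint) {
    return Character.isDefined(codePoint)
        && !Character.isISOControl(codePoint)
        && Character.getType(codePoint) != Character.SURROGATE;
  }

  public static void main(String[] args) {
    System.out.println(isPrintable('A'));     // assigned letter
    System.out.println(isPrintable(0x0007));  // BEL, an ISO control character
    System.out.println(isPrintable(0xFFFF));  // noncharacter, never assigned
  }
}
```

A test pinned to "this code point is unassigned" is only stable for permanently reserved noncharacters such as U+FFFF; anything merely unassigned today may become printable in a future JDK.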
[GitHub] [hadoop] aajisaka merged pull request #2890: HADOOP-17630. [JDK 15] TestPrintableString fails due to Unicode 13.0 support.
aajisaka merged pull request #2890:
URL: https://github.com/apache/hadoop/pull/2890
[GitHub] [hadoop] jojochuang commented on a change in pull request #2889: HDFS-15963. Unreleased volume references cause an infinite loop.
jojochuang commented on a change in pull request #2889:
URL: https://github.com/apache/hadoop/pull/2889#discussion_r612221274


##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
##########
@@ -562,4 +565,57 @@ void writeBlock(ExtendedBlock block, BlockConstructionStage stage,
         checksum, CachingStrategy.newDefaultStrategy(), false, false,
         null, null, new String[0]);
   }
+
+  @Test
+  public void testReleaseVolumeRefIfExceptionThrown() throws IOException {
+    Path file = new Path("dataprotocol.dat");
+    int numDataNodes = 1;
+
+    Configuration conf = new HdfsConfiguration();
+    conf.setInt(DFSConfigKeys.DFS_REPLICATION_KEY, numDataNodes);
+    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(
+        numDataNodes).build();
+    try {
+      cluster.waitActive();
+      datanode = cluster.getFileSystem().getDataNodeStats(
+          DatanodeReportType.LIVE)[0];
+      dnAddr = NetUtils.createSocketAddr(datanode.getXferAddr());
+      FileSystem fileSys = cluster.getFileSystem();
+
+      int fileLen = Math.min(
+          conf.getInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 4096), 4096);
+
+      DFSTestUtil.createFile(fileSys, file, fileLen, fileLen,
+          fileSys.getDefaultBlockSize(file),
+          fileSys.getDefaultReplication(file), 0L);
+
+      // get the first blockid for the file
+      final ExtendedBlock firstBlock = DFSTestUtil.getFirstBlock(fileSys, file);
+
+      String bpid = cluster.getNamesystem().getBlockPoolId();
+      ExtendedBlock blk = new ExtendedBlock(bpid, firstBlock.getLocalBlock());
+      sendBuf.reset();
+      recvBuf.reset();
+
+      // delete the meta file to create a exception in BlockSender constructor
+      DataNode dn = cluster.getDataNodes().get(0);
+      cluster.getMaterializedReplica(0, blk).deleteMeta();
+
+      FsVolumeImpl volume = (FsVolumeImpl) DataNodeTestUtils.getFSDataset(
+          dn).getVolume(blk);
+      int beforeCnt = volume.getReferenceCount();
+
+      sender.copyBlock(blk, BlockTokenSecretManager.DUMMY_TOKEN);
+      sendRecvData("Copy a block.", false);
+      Thread.sleep(1000);
+
+      int afterCnt = volume.getReferenceCount();
+      assertEquals(beforeCnt, afterCnt);
+
+    } catch (InterruptedException e) {

Review comment:
       Sorry I wasn't being clear myself. You don't have to catch the exception. Instead, add InterruptedException to the test method signature.

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
##########
@@ -280,4 +283,40 @@ public void run() {
       }
     }
   }
+  @Test
+  public void testReleaseVolumeRefIfExceptionThrown() throws IOException {
+    getClusterBuilder().setRamDiskReplicaCapacity(2).build();
+    final String METHOD_NAME = GenericTestUtils.getMethodName();
+    final int SEED = 0xFADED;
+    Path path = new Path("/" + METHOD_NAME + ".Writer.File.dat");
+
+    DataNode dn = cluster.getDataNodes().get(0);
+    FsDatasetSpi.FsVolumeReferences volumes =
+        DataNodeTestUtils.getFSDataset(dn).getFsVolumeReferences();
+    int[] beforeCnts = new int[volumes.size()];
+    try {
+      FsDatasetImpl ds = (FsDatasetImpl) DataNodeTestUtils.getFSDataset(dn);
+
+      // Create a runtime exception
+      ds.asyncLazyPersistService.shutdown();
+      for (int i = 0; i < volumes.size(); ++i) {
+        beforeCnts[i] = ((FsVolumeImpl) volumes.get(i)).getReferenceCount();
+      }
+
+      makeRandomTestFile(path, BLOCK_SIZE, true, SEED);
+      Thread.sleep(3 * LAZY_WRITER_INTERVAL_SEC * 1000);
+
+      for (int i = 0; i < volumes.size(); ++i) {
+        int afterCnt = ((FsVolumeImpl) volumes.get(i)).getReferenceCount();
+        // LazyWriter keeps trying to save copies even if
+        // asyncLazyPersistService is already shutdown.
+        // If we do not release references, the number of
+        // references will increase infinitely.
+        Assert.assertTrue(
+            beforeCnts[i] == afterCnt || beforeCnts[i] == (afterCnt - 1));
+      }
+    } catch (InterruptedException e) {

Review comment:
       Sorry I wasn't being clear myself. You don't have to catch the exception. Instead, add InterruptedException to the test method signature.
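The reviewer's suggestion — declare `InterruptedException` on the test method instead of catching it — can be sketched in isolation. The class and method names below are hypothetical; JUnit specifics are omitted so the sketch is self-contained.

```java
// Hypothetical sketch of the review suggestion, not the actual Hadoop test.
// A checked InterruptedException can simply be declared on the method
// signature; under JUnit, a propagated exception fails the test on its own,
// so a catch block only adds noise and risks swallowing the interrupt.
public class ThrowsSketch {

  // Before: the body must catch InterruptedException itself.
  static boolean sleepWithCatch() {
    try {
      Thread.sleep(1L);
      return true;
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt(); // restore the interrupt status
      return false;
    }
  }

  // After: declare it and let it propagate, as in
  //   public void testReleaseVolumeRefIfExceptionThrown()
  //       throws IOException, InterruptedException { ... }
  static boolean sleepWithThrows() throws InterruptedException {
    Thread.sleep(1L);
    return true;
  }

  public static void main(String[] args) throws InterruptedException {
    System.out.println(sleepWithCatch());
    System.out.println(sleepWithThrows());
  }
}
```

The `throws` form also keeps the assertion flow intact: an interrupt surfaces as a test error rather than being converted into a pass or a silent return.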
[GitHub] [hadoop] hadoop-yetus commented on pull request #2896: HDFS-15970. Print network topology on the web
hadoop-yetus commented on pull request #2896:
URL: https://github.com/apache/hadoop/pull/2896#issuecomment-818532881

   :broken_heart: **-1 overall**

   | Vote | Subsystem | Runtime | Logfile | Comment |
   |:----:|----------:|--------:|:--------|:-------:|
   | +0 :ok: | reexec | 0m 37s | | Docker mode activated. |
   |||| _ Prechecks _ |
   | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
   | +0 :ok: | codespell | 0m 0s | | codespell was not available. |
   | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
   | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
   |||| _ trunk Compile Tests _ |
   | -1 :x: | mvninstall | 33m 41s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2896/3/artifact/out/branch-mvninstall-root.txt) | root in trunk failed. |
   | +1 :green_heart: | compile | 1m 18s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
   | +1 :green_heart: | compile | 1m 13s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | +1 :green_heart: | checkstyle | 0m 59s | | trunk passed |
   | +1 :green_heart: | mvnsite | 1m 25s | | trunk passed |
   | +1 :green_heart: | javadoc | 0m 53s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
   | +1 :green_heart: | javadoc | 1m 24s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | +1 :green_heart: | spotbugs | 3m 5s | | trunk passed |
   | +1 :green_heart: | shadedclient | 16m 1s | | branch has no errors when building and testing our client artifacts. |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: | mvninstall | 1m 12s | | the patch passed |
   | +1 :green_heart: | compile | 1m 13s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
   | +1 :green_heart: | javac | 1m 13s | | the patch passed |
   | +1 :green_heart: | compile | 1m 9s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | +1 :green_heart: | javac | 1m 9s | | the patch passed |
   | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
   | +1 :green_heart: | checkstyle | 0m 51s | | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 5 unchanged - 1 fixed = 5 total (was 6) |
   | +1 :green_heart: | mvnsite | 1m 16s | | the patch passed |
   | +1 :green_heart: | javadoc | 0m 45s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
   | +1 :green_heart: | javadoc | 1m 21s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | +1 :green_heart: | spotbugs | 3m 5s | | the patch passed |
   | +1 :green_heart: | shadedclient | 15m 43s | | patch has no errors when building and testing our client artifacts. |
   |||| _ Other Tests _ |
   | -1 :x: | unit | 232m 3s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2896/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
   | +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. |
   | | | 317m 52s | | |

   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   | | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   | | hadoop.hdfs.server.datanode.TestDirectoryScanner |

   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2896/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2896 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 33d5cf59d69f 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4934adbcd5d1a616e670348a07c4b4ac9f1d15ee |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2896/3/testReport/ |
   | Max. process+thread count | 3366 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
[GitHub] [hadoop] jojochuang merged pull request #2882: HDFS-15815. if required storageType are unavailable, log the failed reason during choosing Datanode. Contributed by Yang Yun.
jojochuang merged pull request #2882:
URL: https://github.com/apache/hadoop/pull/2882
[GitHub] [hadoop] hadoop-yetus commented on pull request #2896: HDFS-15970. Print network topology on the web
hadoop-yetus commented on pull request #2896:
URL: https://github.com/apache/hadoop/pull/2896#issuecomment-818480937

   :broken_heart: **-1 overall**

   | Vote | Subsystem | Runtime | Logfile | Comment |
   |:----:|----------:|--------:|:--------|:-------:|
   | +0 :ok: | reexec | 0m 36s | | Docker mode activated. |
   |||| _ Prechecks _ |
   | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
   | +0 :ok: | codespell | 0m 1s | | codespell was not available. |
   | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
   | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
   |||| _ trunk Compile Tests _ |
   | -1 :x: | mvninstall | 33m 53s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2896/2/artifact/out/branch-mvninstall-root.txt) | root in trunk failed. |
   | +1 :green_heart: | compile | 1m 21s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
   | +1 :green_heart: | compile | 1m 13s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | +1 :green_heart: | checkstyle | 1m 1s | | trunk passed |
   | +1 :green_heart: | mvnsite | 1m 22s | | trunk passed |
   | +1 :green_heart: | javadoc | 0m 54s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
   | +1 :green_heart: | javadoc | 1m 27s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | +1 :green_heart: | spotbugs | 3m 7s | | trunk passed |
   | +1 :green_heart: | shadedclient | 16m 12s | | branch has no errors when building and testing our client artifacts. |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: | mvninstall | 1m 14s | | the patch passed |
   | +1 :green_heart: | compile | 1m 12s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
   | +1 :green_heart: | javac | 1m 12s | | the patch passed |
   | +1 :green_heart: | compile | 1m 6s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | +1 :green_heart: | javac | 1m 6s | | the patch passed |
   | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
   | +1 :green_heart: | checkstyle | 0m 52s | | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 5 unchanged - 1 fixed = 5 total (was 6) |
   | +1 :green_heart: | mvnsite | 1m 9s | | the patch passed |
   | +1 :green_heart: | javadoc | 0m 44s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
   | +1 :green_heart: | javadoc | 1m 18s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | +1 :green_heart: | spotbugs | 3m 7s | | the patch passed |
   | +1 :green_heart: | shadedclient | 16m 0s | | patch has no errors when building and testing our client artifacts. |
   |||| _ Other Tests _ |
   | -1 :x: | unit | 230m 19s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2896/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
   | +1 :green_heart: | asflicense | 0m 50s | | The patch does not generate ASF License warnings. |
   | | | 316m 54s | | |

   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
   | | hadoop.hdfs.server.balancer.TestBalancer |
   | | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   | | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   | | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |

   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2896/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2896 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 78a824358c12 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 30daa9a064af200ebd3e79df6893273b517cb308 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Test Results |
[GitHub] [hadoop] GauthamBanasandra edited a comment on pull request #2898: HDFS-15971. Make mkstemp cross platform
GauthamBanasandra edited a comment on pull request #2898:
URL: https://github.com/apache/hadoop/pull/2898#issuecomment-818472914

   I've refactored the TempFile class to implement the `Rule of 5` C++ idiom for efficient and correct management of the temporary file resource - https://cpppatterns.com/patterns/rule-of-five.html.
[GitHub] [hadoop] GauthamBanasandra commented on pull request #2898: HDFS-15971. Make mkstemp cross platform
GauthamBanasandra commented on pull request #2898:
URL: https://github.com/apache/hadoop/pull/2898#issuecomment-818472914

   I've refactored the TempFile class to implement the `Rule of 5` C++ idiom - https://cpppatterns.com/patterns/rule-of-five.html.
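The "Rule of 5" referenced above says that a class which owns a resource should define its destructor, copy constructor, copy assignment, move constructor, and move assignment together. The sketch below illustrates the idiom with a heap buffer standing in for the temporary file; `ScratchBuffer` is a hypothetical class for illustration, not the actual libhdfspp `TempFile`.

```cpp
#include <cstddef>
#include <cstring>
#include <string>
#include <utility>

// Hypothetical Rule-of-Five resource wrapper (a sketch, not the actual
// libhdfspp TempFile class). A heap buffer stands in for the temp file.
class ScratchBuffer {
 public:
  explicit ScratchBuffer(const std::string& s)
      : size_(s.size()), data_(new char[s.size() + 1]) {
    std::memcpy(data_, s.c_str(), size_ + 1);
  }
  // 1. Destructor releases the owned resource.
  ~ScratchBuffer() { delete[] data_; }
  // 2. Copy constructor performs a deep copy.
  ScratchBuffer(const ScratchBuffer& other)
      : size_(other.size_), data_(new char[other.size_ + 1]) {
    std::memcpy(data_, other.data_, size_ + 1);
  }
  // 3. Move constructor steals the resource, leaving the source empty.
  ScratchBuffer(ScratchBuffer&& other) noexcept
      : size_(other.size_), data_(other.data_) {
    other.size_ = 0;
    other.data_ = nullptr;
  }
  // 4 & 5. Copy-and-swap: taking the argument by value lets this single
  // operator serve as both copy assignment (argument copy-constructed)
  // and move assignment (argument move-constructed).
  ScratchBuffer& operator=(ScratchBuffer other) noexcept {
    swap(other);
    return *this;
  }
  void swap(ScratchBuffer& other) noexcept {
    std::swap(size_, other.size_);
    std::swap(data_, other.data_);
  }
  bool empty() const { return data_ == nullptr; }
  std::string str() const { return data_ ? std::string(data_, size_) : ""; }

 private:
  std::size_t size_;
  char* data_;
};
```

Defining all five together is what makes moves cheap (pointer steal, no copy of the underlying file handle or buffer) while keeping copies and destruction correct, which is the "efficient and correct management" the comment refers to.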