[GitHub] [hadoop] liuml07 commented on pull request #2830: HDFS-15931. Fix non-static inner classes for better memory management
liuml07 commented on pull request #2830: URL: https://github.com/apache/hadoop/pull/2830#issuecomment-812331417

Right, the PR title is as this one shows now, without "Contributed by ...":

```
HADOOP-12345. Fix foo.
```

When committing, the "Contributed by ..." part is added to the subject/title. I have also seen people keep it in the commit message. My comment was more about `:` vs `.`.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani commented on pull request #2830: HDFS-15931. Fix non-static inner classes for better memory management
virajjasani commented on pull request #2830: URL: https://github.com/apache/hadoop/pull/2830#issuecomment-812329113

> @virajjasani Just saw you have multiple PRs recently. In Hadoop, the PR title and git commit subject have the format:
>
> ```
> HADOOP-12345. Fix foo. Contributed by Viraj
> ```
>
> So the `.` instead of ` :`. I personally am fine with either char, but it looks like the convention is to have `.` instead of a space and colon ` :` to join the JIRA number and subject.

Thanks @liuml07, I will take care of this going forward. One question: should `Contributed by user` be in the PR title, or is that taken care of while merging the PR?
[GitHub] [hadoop] hadoop-yetus commented on pull request #2852: MAPREDUCE-7287. Distcp will delete exists file , If we use "--delete …
hadoop-yetus commented on pull request #2852: URL: https://github.com/apache/hadoop/pull/2852#issuecomment-812325467

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 38s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
| | _ trunk Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 34m 41s | | trunk passed |
| +1 :green_heart: | compile | 0m 35s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 0m 32s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 25s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 32s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 27s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 24s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 0m 52s | | trunk passed |
| +1 :green_heart: | shadedclient | 14m 59s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 0m 28s | | the patch passed |
| +1 :green_heart: | compile | 0m 26s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 0m 26s | | the patch passed |
| +1 :green_heart: | compile | 0m 21s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 0m 21s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 18s | [/results-checkstyle-hadoop-tools_hadoop-distcp.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2852/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-distcp.txt) | hadoop-tools/hadoop-distcp: The patch generated 1 new + 174 unchanged - 0 fixed = 175 total (was 174) |
| +1 :green_heart: | mvnsite | 0m 24s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 18s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 17s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 0m 50s | | the patch passed |
| +1 :green_heart: | shadedclient | 14m 27s | | patch has no errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| +1 :green_heart: | unit | 24m 15s | | hadoop-distcp in the patch passed. |
| +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. |
| | | 98m 14s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2852/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2852 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 654f123a5808 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 9ec5f8453c1602f8a80c7dde00e11fdd0be59377 |
| Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2852/1/testReport/ |
| Max. process+thread count | 612 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2852/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[jira] [Updated] (HADOOP-17587) Kinit with keytab should not display the keytab file's full path in any logs
[ https://issues.apache.org/jira/browse/HADOOP-17587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brahma Reddy Battula updated HADOOP-17587:
------------------------------------------
    Fix Version/s: 3.4.0
                   3.3.1
     Hadoop Flags: Reviewed
       Resolution: Fixed
           Status: Resolved  (was: Patch Available)

Committed to trunk and branch-3.3. [~Sushma_28] thanks for the contribution.

> Kinit with keytab should not display the keytab file's full path in any logs
> ----------------------------------------------------------------------------
>
>                 Key: HADOOP-17587
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17587
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Ravuri Sushma sree
>            Assignee: Ravuri Sushma sree
>            Priority: Major
>             Fix For: 3.3.1, 3.4.0
>
>      Attachments: HADOOP-17587.001.patch, HADOOP-17587.002.patch
>
>
> The keytab is sensitive information, and the full path should not be printed
> in the log

-- This message was sent by Atlassian Jira (v8.3.4#803005)

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17610) DelegationTokenAuthenticator prints token information
[ https://issues.apache.org/jira/browse/HADOOP-17610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brahma Reddy Battula updated HADOOP-17610:
------------------------------------------
    Fix Version/s: 3.4.0
                   3.3.1
     Hadoop Flags: Reviewed
       Resolution: Fixed
           Status: Resolved  (was: Patch Available)

Committed to trunk and branch-3.3. [~Sushma_28] thanks for the contribution.

> DelegationTokenAuthenticator prints token information
> -----------------------------------------------------
>
>                 Key: HADOOP-17610
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17610
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Ravuri Sushma sree
>            Assignee: Ravuri Sushma sree
>            Priority: Major
>             Fix For: 3.3.1, 3.4.0
>
>      Attachments: HADOOP-17610.patch
>
>
> Resource Manager logs print token information. As this is sensitive
> information, it must be exempted from being printed.
[jira] [Commented] (HADOOP-16145) Add Quota Preservation to DistCp
[ https://issues.apache.org/jira/browse/HADOOP-16145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17313547#comment-17313547 ]

Hadoop QA commented on HADOOP-16145:
------------------------------------

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 12s{color} | {color:red}{color} | {color:red} HADOOP-16145 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |

|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16145 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12961394/HADOOP-16145.000.patch |
| Console output | https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/180/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

> Add Quota Preservation to DistCp
> --------------------------------
>
>                 Key: HADOOP-16145
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16145
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: tools/distcp
>            Reporter: Ranith Sardar
>            Assignee: Ranith Sardar
>            Priority: Major
>      Attachments: HADOOP-16145.000.patch
>
>
> This JIRA tracks distcp support for handling quota with the preserving
> options. Add a new command line argument to support that.
[jira] [Commented] (HADOOP-16145) Add Quota Preservation to DistCp
[ https://issues.apache.org/jira/browse/HADOOP-16145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17313543#comment-17313543 ]

ANANDA G B commented on HADOOP-16145:
-------------------------------------

[~RANith] Thanks for working on it. The CopyListingFileStatus.readFields() method can throw EOFException on version upgrade, so it can be handled as part of this fix.

> Add Quota Preservation to DistCp
> --------------------------------
>
>                 Key: HADOOP-16145
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16145
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: tools/distcp
>            Reporter: Ranith Sardar
>            Assignee: Ranith Sardar
>            Priority: Major
>      Attachments: HADOOP-16145.000.patch
>
>
> This JIRA tracks distcp support for handling quota with the preserving
> options. Add a new command line argument to support that.
[GitHub] [hadoop] zhengchenyu opened a new pull request #2852: MAPREDUCE-7287. Distcp will delete exists file , If we use "--delete …
zhengchenyu opened a new pull request #2852: URL: https://github.com/apache/hadoop/pull/2852

MAPREDUCE-7287. Distcp will delete exists file , If we use "--delete and --update" options and distcp file.

hdfs://ns1/tmp/a is an existing file, and hdfs://ns2/tmp/a is also an existing file. When I run this command,

```
hadoop distcp -delete -update hdfs://ns1/tmp/a hdfs://ns2/tmp/a
```

I found hdfs://ns2/tmp/a is deleted unexpectedly.

Issue link: https://issues.apache.org/jira/browse/MAPREDUCE-7287
[GitHub] [hadoop] tomscut commented on pull request #2837: HDFS-15938. Fix java doc in FSEditLog
tomscut commented on pull request #2837: URL: https://github.com/apache/hadoop/pull/2837#issuecomment-812284799

Thanks @liuml07 for your review and merge.
[GitHub] [hadoop] liuml07 merged pull request #2837: HDFS-15938. Fix java doc in FSEditLog
liuml07 merged pull request #2837: URL: https://github.com/apache/hadoop/pull/2837
[GitHub] [hadoop] jianghuazhu opened a new pull request #2851: HDFS-15941.Solve the problem that the inspection period of HeartbeatManager#Monitor can be configured.
jianghuazhu opened a new pull request #2851: URL: https://github.com/apache/hadoop/pull/2851

…anager#Monitor can be configured.

## NOTICE

Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
[GitHub] [hadoop] liuml07 commented on a change in pull request #2841: HDFS-15939.Solve the problem that DataXceiverServer#run() does not record SocketTimeout exception.
liuml07 commented on a change in pull request #2841: URL: https://github.com/apache/hadoop/pull/2841#discussion_r606038591

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java

```
@@ -241,6 +241,13 @@ public void run() {
           .start();
     } catch (SocketTimeoutException ignored) {
       // wake up to see if should continue to run
+      if (peer != null) {
+        LOG.warn("A timeout occurred between DataXceiverServer: {} " +
```

Review comment:

1. Let's rename `ignored` to `ste`, since we are no longer ignoring it.
2. Let's not use the warn level here. This timeout exception is generally not a concern. I think `info` at this level and `debug` for the `else` clause should be enough?
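The two suggestions in this review can be sketched as follows. This is a hypothetical, self-contained illustration, not the actual Hadoop code: `java.util.logging` stands in for the project's SLF4J logger, and the class name, method name, and message strings are invented for the sketch.

```java
import java.net.SocketTimeoutException;
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch of the reviewed catch block in DataXceiverServer.run():
// the caught exception is named "ste" (it is no longer ignored), and the
// routine accept-timeout is logged at INFO/FINE instead of WARN.
public class TimeoutLogSketch {
  private static final Logger LOG =
      Logger.getLogger(TimeoutLogSketch.class.getName());

  // Returns the level chosen, so the branching is easy to verify.
  static String handleTimeout(SocketTimeoutException ste, boolean peerPresent) {
    if (peerPresent) {
      // A peer was in flight when the timeout fired: worth an INFO line.
      LOG.log(Level.INFO, "Socket timeout while accepting a peer", ste);
      return "info";
    } else {
      // Routine wake-up with nothing in flight: debug-level detail only.
      LOG.log(Level.FINE, "Socket timeout with no active peer", ste);
      return "debug";
    }
  }

  public static void main(String[] args) {
    System.out.println(
        handleTimeout(new SocketTimeoutException("Accept timed out"), true));
  }
}
```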
[GitHub] [hadoop] tasanuma commented on pull request #2770: HDFS-15892. Add metric for editPendingQ in FSEditLogAsync
tasanuma commented on pull request #2770: URL: https://github.com/apache/hadoop/pull/2770#issuecomment-812268557

Merged it. Thanks for your contribution, @tomscut!
[GitHub] [hadoop] tasanuma merged pull request #2770: HDFS-15892. Add metric for editPendingQ in FSEditLogAsync
tasanuma merged pull request #2770: URL: https://github.com/apache/hadoop/pull/2770
[GitHub] [hadoop] tomscut commented on pull request #2770: HDFS-15892. Add metric for editPendingQ in FSEditLogAsync
tomscut commented on pull request #2770: URL: https://github.com/apache/hadoop/pull/2770#issuecomment-812265914

> Thanks for updating the PR, @tomscut.
> The checkstyle issue is the same as other metrics. +1.

Thanks @tasanuma for your review.
[GitHub] [hadoop] tomscut commented on pull request #2770: HDFS-15892. Add metric for editPendingQ in FSEditLogAsync
tomscut commented on pull request #2770: URL: https://github.com/apache/hadoop/pull/2770#issuecomment-812261649

Those failed unit tests are unrelated to the change; they work fine locally.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2849: HDFS-15621. Datanode DirectoryScanner uses excessive memory
hadoop-yetus commented on pull request #2849: URL: https://github.com/apache/hadoop/pull/2849#issuecomment-812241886

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 57s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
| | _ trunk Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 32m 37s | | trunk passed |
| +1 :green_heart: | compile | 1m 20s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 1m 12s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 2s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 22s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 53s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 26s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 2s | | trunk passed |
| +1 :green_heart: | shadedclient | 16m 0s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 1m 16s | | the patch passed |
| +1 :green_heart: | compile | 1m 13s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 1m 13s | | the patch passed |
| +1 :green_heart: | compile | 1m 10s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 1m 10s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 51s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2849/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 92 unchanged - 0 fixed = 95 total (was 92) |
| +1 :green_heart: | mvnsite | 1m 13s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 45s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 17s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| -1 :x: | spotbugs | 3m 17s | [/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2849/1/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html) | hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 :green_heart: | shadedclient | 16m 44s | | patch has no errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| -1 :x: | unit | 421m 50s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2849/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 44s | | The patch does not generate ASF License warnings. |
| | | 508m 3s | | |

| Reason | Tests |
|-------:|:------|
| SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs |
| | Redundant nullcheck of file, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.compileReport(File, File, Collection, DirectoryScanner$ReportCompiler) Redundant null check at FsVolumeImpl.java:[line 1477] |
| Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
| | hadoop.hdfs.TestViewDistributedFileSystemContract |
| | hadoop.hdfs.TestSnapshotCommands |
| | hadoop.hdfs.TestPersistBlocks |
| | hadoop.hdfs.TestDFSShell |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
| | hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeWithHdfsScheme |
| | hadoop.hdfs.TestStateAlignmentContextWithHA |
| | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
[GitHub] [hadoop] liuml07 edited a comment on pull request #2830: HDFS-15931. Fix non-static inner classes for better memory management
liuml07 edited a comment on pull request #2830: URL: https://github.com/apache/hadoop/pull/2830#issuecomment-812233579
[GitHub] [hadoop] liuml07 commented on pull request #2830: HDFS-15931. Fix non-static inner classes for better memory management
liuml07 commented on pull request #2830: URL: https://github.com/apache/hadoop/pull/2830#issuecomment-812233579

@virajjasani Just saw you have multiple PRs recently. In Hadoop, the PR title and git commit subject have the format:

```
HADOOP-12345. Fix foo. Contributed by Viraj
```

So the `.` instead of `:`. I personally am fine with either char, but it looks like the convention is to have `.` instead of a space and colon ` :` to join the JIRA number and subject.
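The convention described above can be expressed as a simple pattern check. This is an illustrative sketch only; the regex and class name here are my own, not part of any official Hadoop tooling, and the second sample subject is the merged PR title from this thread.

```java
import java.util.regex.Pattern;

// Hypothetical check of the commit-subject convention discussed above:
// "<PROJECT>-<NUMBER>. <Subject>. Contributed by <Name>"
public class SubjectConventionCheck {
  private static final Pattern CONVENTION =
      Pattern.compile("^(HADOOP|HDFS|MAPREDUCE|YARN)-\\d+\\. .+");

  static boolean followsConvention(String subject) {
    return CONVENTION.matcher(subject).matches();
  }

  public static void main(String[] args) {
    // Joined with "." as the convention prefers: true
    System.out.println(followsConvention(
        "HADOOP-12345. Fix foo. Contributed by Viraj"));
    // Joined with " :" as in the merged PR title: false
    System.out.println(followsConvention(
        "HDFS-15931 : Fix non-static inner classes"));
  }
}
```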
[GitHub] [hadoop] liuml07 merged pull request #2830: HDFS-15931 : Fix non-static inner classes for better memory management
liuml07 merged pull request #2830: URL: https://github.com/apache/hadoop/pull/2830
[GitHub] [hadoop] hadoop-yetus commented on pull request #2770: HDFS-15892. Add metric for editPendingQ in FSEditLogAsync
hadoop-yetus commented on pull request #2770: URL: https://github.com/apache/hadoop/pull/2770#issuecomment-812214935

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 43s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | _ trunk Compile Tests _ | | | |
| +0 :ok: | mvndep | 14m 14s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 21m 12s | | trunk passed |
| +1 :green_heart: | compile | 21m 29s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 18m 55s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 3m 52s | | trunk passed |
| +1 :green_heart: | mvnsite | 3m 5s | | trunk passed |
| +1 :green_heart: | javadoc | 2m 6s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 3m 19s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 5m 47s | | trunk passed |
| +1 :green_heart: | shadedclient | 17m 12s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 11s | | the patch passed |
| +1 :green_heart: | compile | 21m 3s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 21m 3s | | the patch passed |
| +1 :green_heart: | compile | 19m 33s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 19m 33s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 4m 14s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2770/2/artifact/out/results-checkstyle-root.txt) | root: The patch generated 1 new + 53 unchanged - 0 fixed = 54 total (was 53) |
| +1 :green_heart: | mvnsite | 3m 15s | | the patch passed |
| +1 :green_heart: | javadoc | 2m 11s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 3m 21s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 6m 19s | | the patch passed |
| +1 :green_heart: | shadedclient | 18m 2s | | patch has no errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| +1 :green_heart: | unit | 17m 43s | | hadoop-common in the patch passed. |
| -1 :x: | unit | 230m 45s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2770/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 1m 8s | | The patch does not generate ASF License warnings. |
| | | 441m 3s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
| | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
| | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| | hadoop.hdfs.server.datanode.TestBlockRecovery |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2770/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2770 |
| Optional Tests | dupname asflicense mvnsite codespell markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs checkstyle |
| uname | Linux ff06d0455a93 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2792: HDFS-15909. Make fnmatch cross platform
hadoop-yetus commented on pull request #2792: URL: https://github.com/apache/hadoop/pull/2792#issuecomment-812174669

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 1m 4s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | _ trunk Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 34m 49s | | trunk passed |
| +1 :green_heart: | compile | 2m 49s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 2m 53s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | mvnsite | 0m 29s | | trunk passed |
| +1 :green_heart: | shadedclient | 55m 33s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 0m 17s | | the patch passed |
| +1 :green_heart: | compile | 2m 37s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| -1 :x: | cc | 2m 37s | [/results-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2792/8/artifact/out/results-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 4 new + 34 unchanged - 4 fixed = 38 total (was 38) |
| +1 :green_heart: | golang | 2m 37s | | the patch passed |
| +1 :green_heart: | javac | 2m 37s | | the patch passed |
| +1 :green_heart: | compile | 2m 45s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | cc | 2m 45s | | the patch passed |
| +1 :green_heart: | golang | 2m 45s | | the patch passed |
| +1 :green_heart: | javac | 2m 45s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 0m 18s | | the patch passed |
| +1 :green_heart: | shadedclient | 14m 52s | | patch has no errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| +1 :green_heart: | unit | 189m 23s | | hadoop-hdfs-native-client in the patch passed. |
| +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. |
| | | 269m 47s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2792/8/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2792 |
| Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell golang |
| uname | Linux bce0cc751647 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / ecc17d047be5676fe47a1fbc6e6b90d8666c8188 |
| Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2792/8/testReport/ |
| Max. process+thread count | 689 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2792/8/console |
| versions | git=2.25.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-17610) DelegationTokenAuthenticator prints token information
[ https://issues.apache.org/jira/browse/HADOOP-17610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17313443#comment-17313443 ] Ravuri Sushma sree commented on HADOOP-17610: - Thanks for the review [~brahmareddy]. Yes, the Jenkins compile error is not relevant to this Jira. Compilation is successful locally. > DelegationTokenAuthenticator prints token information > - > > Key: HADOOP-17610 > URL: https://issues.apache.org/jira/browse/HADOOP-17610 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ravuri Sushma sree >Assignee: Ravuri Sushma sree >Priority: Major > Attachments: HADOOP-17610.patch > > > Resource Manager logs print token information; as this is sensitive > information, it must be exempted from being printed -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
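The idea behind HADOOP-17610 can be sketched in a few lines. This is not the actual DelegationTokenAuthenticator code; the class and method names below are illustrative only — the point is that the log message should carry non-sensitive context (e.g. the token kind) and never the token value itself.

```java
// Illustrative sketch of the fix's intent, not the real Hadoop code:
// log a redacted message instead of the delegation token's value.
public class TokenLogRedaction {

    // Build a log message that identifies the token kind but never its value.
    public static String redactedMessage(String tokenKind) {
        return "Got delegation token (kind=" + tokenKind + ", value=<redacted>)";
    }

    public static void main(String[] args) {
        // Before the fix, the raw token string would end up in the RM logs;
        // after it, only non-sensitive context is emitted.
        System.out.println(redactedMessage("HDFS_DELEGATION_TOKEN"));
    }
}
```

The same pattern applies anywhere a credential flows through a log statement: redact at message-construction time rather than relying on log-level filtering.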
[GitHub] [hadoop] hadoop-yetus commented on pull request #2848: YARN-10493: RunC container repository v2
hadoop-yetus commented on pull request #2848: URL: https://github.com/apache/hadoop/pull/2848#issuecomment-812140461 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 40s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 53s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 9s | | trunk passed | | +1 :green_heart: | compile | 9m 5s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 7m 54s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 47s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 51s | | trunk passed | | +1 :green_heart: | javadoc | 2m 28s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 39s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 5m 32s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 44s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 24s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 50s | | the patch passed | | +1 :green_heart: | compile | 8m 29s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 8m 29s | | the patch passed | | +1 :green_heart: | compile | 7m 46s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 7m 46s | | the patch passed | | +1 :green_heart: | blanks | 0m 1s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 40s | | the patch passed | | +1 :green_heart: | mvnsite | 2m 35s | | the patch passed | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 2m 12s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 25s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 5m 52s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 36s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 5s | | hadoop-yarn-api in the patch passed. | | +1 :green_heart: | unit | 4m 53s | | hadoop-yarn-common in the patch passed. | | +1 :green_heart: | unit | 23m 9s | | hadoop-yarn-server-nodemanager in the patch passed. | | +1 :green_heart: | asflicense | 0m 50s | | The patch does not generate ASF License warnings. 
| | | | 162m 58s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2848/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2848 | | JIRA Issue | YARN-10493 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml | | uname | Linux ffea008842d7 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / aec0676df785b131d78d4f7eee96e7ce2f4cef2b | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2848/2/testReport/ | | Max. process+thread count | 733 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn | | Console output |
[jira] [Commented] (HADOOP-15678) TestNativeIO#testStat fails when SELinux permissions is set on file
[ https://issues.apache.org/jira/browse/HADOOP-15678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17313366#comment-17313366 ] Hadoop QA commented on HADOOP-15678: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 18s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} {color} | {color:green} 0m 0s{color} | {color:green}test4tests{color} | {color:green} The patch appears to include 1 new or modified test files. 
{color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 48s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 17s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 46s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 59s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 1s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 22m 48s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are enabled, using SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} spotbugs {color} | {color:green} 2m 21s{color} | {color:green}{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 54s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 47s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m 47s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 56s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 56s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 13s{color} | {color:green}{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s{color} | {color:green}{color} | {color:green} the patch
[jira] [Work logged] (HADOOP-17614) Bump netty to the latest 4.1.61
[ https://issues.apache.org/jira/browse/HADOOP-17614?focusedWorklogId=575639=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-575639 ] ASF GitHub Bot logged work on HADOOP-17614: --- Author: ASF GitHub Bot Created on: 01/Apr/21 17:55 Start Date: 01/Apr/21 17:55 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2850: URL: https://github.com/apache/hadoop/pull/2850#issuecomment-812073156 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 39s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 55s | | trunk passed | | +1 :green_heart: | compile | 0m 23s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 0m 21s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | mvnsite | 0m 28s | | trunk passed | | +1 :green_heart: | javadoc | 0m 24s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 24s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | shadedclient | 48m 20s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 17s | | the patch passed | | +1 :green_heart: | compile | 0m 13s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 0m 13s | | the patch passed | | +1 :green_heart: | compile | 0m 12s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 0m 12s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 15s | | the patch passed | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 0m 14s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 14s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | shadedclient | 14m 21s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 19s | | hadoop-project in the patch passed. | | +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. 
| | | | 67m 0s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2850/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2850 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell xml | | uname | Linux 4d01276abd95 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 555c0a9bf2a6a0550d40aa1043fadb073e69e84f | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2850/1/testReport/ | | Max. process+thread count | 688 (vs. ulimit of 5500) | | modules | C: hadoop-project U: hadoop-project | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2850/1/console | | versions | git=2.25.1 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the
[GitHub] [hadoop] hadoop-yetus commented on pull request #2850: HADOOP-17614. Bump netty to the latest 4.1.61.
hadoop-yetus commented on pull request #2850: URL: https://github.com/apache/hadoop/pull/2850#issuecomment-812073156 :broken_heart: **-1 overall** (full Yetus report identical to the one logged on HADOOP-17614 above) This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17587) Kinit with keytab should not display the keytab file's full path in any logs
[ https://issues.apache.org/jira/browse/HADOOP-17587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17313349#comment-17313349 ] Ravuri Sushma sree commented on HADOOP-17587: - Thank you for reviewing [~brahmareddy] > Kinit with keytab should not display the keytab file's full path in any logs > > > Key: HADOOP-17587 > URL: https://issues.apache.org/jira/browse/HADOOP-17587 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ravuri Sushma sree >Assignee: Ravuri Sushma sree >Priority: Major > Attachments: HADOOP-17587.001.patch, HADOOP-17587.002.patch > > > The keytab is sensitive information, and the full path should not be printed > in the log -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] vivekratnavel commented on pull request #2784: HDFS-15850. Superuser actions should be reported to external enforcers
vivekratnavel commented on pull request #2784: URL: https://github.com/apache/hadoop/pull/2784#issuecomment-812066259 The unit test failures reported are not related to this patch. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2848: YARN-10493: RunC container repository v2
hadoop-yetus commented on pull request #2848: URL: https://github.com/apache/hadoop/pull/2848#issuecomment-812046430 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 37s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 39s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 10s | | trunk passed | | +1 :green_heart: | compile | 9m 7s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 7m 53s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 43s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 53s | | trunk passed | | +1 :green_heart: | javadoc | 2m 29s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 38s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 5m 36s | | trunk passed | | +1 :green_heart: | shadedclient | 15m 1s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 51s | | the patch passed | | +1 :green_heart: | compile | 8m 33s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 8m 33s | | the patch passed | | +1 :green_heart: | compile | 7m 48s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 7m 48s | | the patch passed | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2848/1/artifact/out/blanks-eol.txt) | The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 :green_heart: | checkstyle | 1m 40s | | the patch passed | | +1 :green_heart: | mvnsite | 2m 36s | | the patch passed | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 2m 16s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 24s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 5m 52s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 54s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 4s | | hadoop-yarn-api in the patch passed. | | +1 :green_heart: | unit | 4m 53s | | hadoop-yarn-common in the patch passed. | | -1 :x: | unit | 23m 22s | [/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2848/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt) | hadoop-yarn-server-nodemanager in the patch passed. 
| | +1 :green_heart: | asflicense | 0m 51s | | The patch does not generate ASF License warnings. | | | | 163m 36s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.yarn.server.nodemanager.TestNodeStatusUpdater | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2848/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2848 | | JIRA Issue | YARN-10493 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml | | uname | Linux 484f29881aec 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 4ee24f7950cc1d65451d32a6f48bb95cbb6c388d | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions |
[jira] [Updated] (HADOOP-17614) Bump netty to the latest 4.1.61
[ https://issues.apache.org/jira/browse/HADOOP-17614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-17614: Labels: pull-request-available (was: ) > Bump netty to the latest 4.1.61 > --- > > Key: HADOOP-17614 > URL: https://issues.apache.org/jira/browse/HADOOP-17614 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.3.1, 3.4.0, 3.1.5, 3.2.3 >Reporter: Wei-Chiu Chuang >Priority: Blocker > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > For more details: https://netty.io/news/2021/03/09/4-1-60-Final.html > Actually, just yesterday there's a new version 4.1.61. > https://netty.io/news/2021/03/30/4-1-61-Final.html -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17614) Bump netty to the latest 4.1.61
[ https://issues.apache.org/jira/browse/HADOOP-17614?focusedWorklogId=575609=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-575609 ] ASF GitHub Bot logged work on HADOOP-17614: --- Author: ASF GitHub Bot Created on: 01/Apr/21 16:47 Start Date: 01/Apr/21 16:47 Worklog Time Spent: 10m Work Description: jojochuang opened a new pull request #2850: URL: https://github.com/apache/hadoop/pull/2850 HADOOP-17614 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 575609) Remaining Estimate: 0h Time Spent: 10m > Bump netty to the latest 4.1.61 > --- > > Key: HADOOP-17614 > URL: https://issues.apache.org/jira/browse/HADOOP-17614 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.3.1, 3.4.0, 3.1.5, 3.2.3 >Reporter: Wei-Chiu Chuang >Priority: Blocker > Time Spent: 10m > Remaining Estimate: 0h > > For more details: https://netty.io/news/2021/03/09/4-1-60-Final.html > Actually, just yesterday there's a new version 4.1.61. > https://netty.io/news/2021/03/30/4-1-61-Final.html -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] jojochuang opened a new pull request #2850: HADOOP-17614. Bump netty to the latest 4.1.61.
jojochuang opened a new pull request #2850: URL: https://github.com/apache/hadoop/pull/2850 HADOOP-17614 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
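A dependency bump like HADOOP-17614 is typically a one-line change to the managed version in `hadoop-project/pom.xml`. A sketch of what the PR likely contains — the property name is an assumption here and should be checked against the actual pom:

```xml
<!-- hadoop-project/pom.xml: bump the managed Netty 4 version.
     The property name below is illustrative; verify it in the real pom. -->
<properties>
  <netty4.version>4.1.61.Final</netty4.version>
</properties>
```

Because all modules inherit the version from `hadoop-project`'s dependency management, a single property change updates every Netty artifact consistently.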
[jira] [Commented] (HADOOP-17587) Kinit with keytab should not display the keytab file's full path in any logs
[ https://issues.apache.org/jira/browse/HADOOP-17587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17313306#comment-17313306 ] Brahma Reddy Battula commented on HADOOP-17587: --- [~Sushma_28] thanks for reporting and uploading the patch. Patch LGTM; as this is sensitive info, logging it can be avoided. > Kinit with keytab should not display the keytab file's full path in any logs > > > Key: HADOOP-17587 > URL: https://issues.apache.org/jira/browse/HADOOP-17587 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ravuri Sushma sree >Assignee: Ravuri Sushma sree >Priority: Major > Attachments: HADOOP-17587.001.patch, HADOOP-17587.002.patch > > > The keytab is sensitive information, and the full path should not be printed > in the log -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
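One way to implement the intent of HADOOP-17587 is to log only the keytab's base name rather than its full path. This is a minimal sketch of that idea, not the actual patch; the class name is illustrative.

```java
import java.nio.file.Paths;

// Illustrative sketch: when logging a keytab-based login, emit only the
// file name, not the full filesystem path.
public class KeytabLogRedaction {

    // Reduce a keytab path to just its final component for log output.
    public static String safeKeytabName(String keytabPath) {
        return Paths.get(keytabPath).getFileName().toString();
    }

    public static void main(String[] args) {
        System.out.println("Login successful using keytab "
                + safeKeytabName("/etc/security/keytabs/nn.service.keytab"));
    }
}
```

This keeps the log useful for debugging (you can still tell which keytab was used) without revealing where sensitive credential files live on disk.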
[jira] [Commented] (HADOOP-17610) DelegationTokenAuthenticator prints token information
[ https://issues.apache.org/jira/browse/HADOOP-17610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17313304#comment-17313304 ] Brahma Reddy Battula commented on HADOOP-17610: --- [~Sushma_28] thanks for reporting this Jira and uploading the patch. As this is sensitive information, logging it can be avoided. Patch LGTM. The Jenkins error looks unrelated. > DelegationTokenAuthenticator prints token information > - > > Key: HADOOP-17610 > URL: https://issues.apache.org/jira/browse/HADOOP-17610 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ravuri Sushma sree >Assignee: Ravuri Sushma sree >Priority: Major > Attachments: HADOOP-17610.patch > > > Resource Manager logs print token information; as this is sensitive > information, it must be exempted from being printed -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17617) Incorrect representation of RESPONSE for Get Key Version in KMS index.md.vm file
[ https://issues.apache.org/jira/browse/HADOOP-17617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17313302#comment-17313302 ] Brahma Reddy Battula commented on HADOOP-17617: --- [~Sushma_28] can you post the command output in this Jira? > Incorrect representation of RESPONSE for Get Key Version in KMS index.md.vm > file > > > Key: HADOOP-17617 > URL: https://issues.apache.org/jira/browse/HADOOP-17617 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ravuri Sushma sree >Assignee: Ravuri Sushma sree >Priority: Major > Attachments: HADOOP-17617.001.patch, HADOOP-17617.002.patch > > > Format of RESPONSE of Get Key Versions in KMS index.md.vm is incorrect > https://hadoop.apache.org/docs/r3.1.1/hadoop-kms/index.html#Get_Key_Versions -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
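For context on the documentation bug: the KMS REST call Get Key Versions (`GET /kms/v1/key/<key-name>/_versions`) returns a JSON array of key-version objects, not a single object. A response of roughly the following shape is what the corrected index.md.vm should show (placeholders, not literal values; treat this as a sketch of the documented format, not the patch itself):

```json
[
  {
    "name" : "<key-version-name>",
    "material" : "<base64-material>"
  },
  {
    "name" : "<key-version-name>",
    "material" : "<base64-material>"
  }
]
```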
[jira] [Resolved] (HADOOP-17550) property 'ssl.server.keystore.location' has not been set in the ssl configuration file
[ https://issues.apache.org/jira/browse/HADOOP-17550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hamado dene resolved HADOOP-17550.
--
Resolution: Fixed

> property 'ssl.server.keystore.location' has not been set in the ssl
> configuration file
> --
>
> Key: HADOOP-17550
> URL: https://issues.apache.org/jira/browse/HADOOP-17550
> Project: Hadoop Common
> Issue Type: Bug
> Components: conf
> Affects Versions: 2.8.5
> Reporter: hamado dene
> Priority: Major
>
> I am trying to install a Hadoop HA cluster, but the DataNode does not start properly; I get this error:
> 2021-02-23 17:13:26,934 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
> java.io.IOException: java.security.GeneralSecurityException: The property 'ssl.server.keystore.location' has not been set in the ssl configuration file.
> at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:199)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:905)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1303)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:481)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2609)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2497)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2544)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2729)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2753)
> Caused by: java.security.GeneralSecurityException: The property 'ssl.server.keystore.location' has not been set in the ssl configuration file.
> at org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:152)
> at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:148)
> at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:197)
> ... 8 more
> But in my ssl-server.xml I correctly set this property:
>
> <property>
>   <name>ssl.server.keystore.location</name>
>   <value>/data/hadoop/server.jks</value>
>   <description>Keystore to be used by clients like distcp. Must be specified.</description>
> </property>
>
> <property>
>   <name>ssl.server.keystore.password</name>
>   <value></value>
>   <description>Optional. Default value is "".</description>
> </property>
>
> <property>
>   <name>ssl.server.keystore.keypassword</name>
>   <value>x</value>
>   <description>Optional. Default value is "".</description>
> </property>
>
> <property>
>   <name>ssl.server.keystore.type</name>
>   <value>jks</value>
>   <description>Optional. The keystore file format, default value is "jks".</description>
> </property>
>
> Do you have any suggestions to solve this problem?
> My Hadoop version is: 2.8.5
> Java version: 8
> OS: CentOS 7

-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
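When the property is present in ssl-server.xml yet `FileBasedKeyStoresFactory` still reports it as unset, a common cause is that the SSL resource is never actually loaded: Hadoop reads the file named by `hadoop.ssl.server.conf` (default `ssl-server.xml`) as a classpath resource, normally from `HADOOP_CONF_DIR`. A configuration fragment worth double-checking (the value shown is the default; treat this as a troubleshooting hint, not the confirmed root cause of this report):

```xml
<!-- In core-site.xml: the server-side SSL resource name must match the
     actual file, and that file must sit in a directory on the daemon's
     classpath (normally HADOOP_CONF_DIR). -->
<property>
  <name>hadoop.ssl.server.conf</name>
  <value>ssl-server.xml</value>
</property>
```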
[GitHub] [hadoop] sodonnel opened a new pull request #2849: HDFS-15621. Datanode DirectoryScanner uses excessive memory
sodonnel opened a new pull request #2849: URL: https://github.com/apache/hadoop/pull/2849 This is a relatively simple change to reduce the memory used by the Directory Scanner and also simplify the logic in the ScanInfo object. This change ensures the same File object is re-used for all blocks in a directory. Previously a large part of the path was repeated for each block file. Aside from that, the logic of the directory scanner remains the same. Comparing heap dumps, the memory used by 100K blocks goes from ~35MB to 19MB. Or 350MB per 1M blocks down to 190MB per 1M blocks. This is a reduction of about 46%. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
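The saving described above — holding the long parent path once per directory instead of once per block — can be sketched with plain `java.io.File`. The `ScanRecord` shape here is illustrative, not the actual HDFS `ScanInfo` class.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class SharedDirScanSketch {
    // Illustrative stand-in for ScanInfo: each record keeps only the block's
    // short file name plus a reference to a File shared by every block in the
    // same directory, instead of its own full-path String.
    static class ScanRecord {
        final File dir;         // one instance shared per directory
        final String blockName; // short per-block suffix
        ScanRecord(File dir, String blockName) {
            this.dir = dir;
            this.blockName = blockName;
        }
        File blockFile() {
            return new File(dir, blockName);
        }
    }

    public static void main(String[] args) {
        File subdir = new File("/data/dn/current/BP-1/current/finalized/subdir0/subdir1");
        List<ScanRecord> records = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            records.add(new ScanRecord(subdir, "blk_100" + i));
        }
        // The long parent path is held in memory once, not once per block,
        // while the full path can still be materialized on demand.
        for (ScanRecord r : records) {
            assert r.dir == subdir;
        }
        System.out.println(records.get(2).blockFile().getPath());
    }
}
```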
[GitHub] [hadoop] tomscut commented on a change in pull request #2770: HDFS-15892. Add metric for editPendingQ in FSEditLogAsync
tomscut commented on a change in pull request #2770: URL: https://github.com/apache/hadoop/pull/2770#discussion_r605746396 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/NameNodeMetrics.java ## @@ -87,6 +87,8 @@ MutableGaugeInt blockOpsQueued; @Metric("Number of blockReports and blockReceivedAndDeleted batch processed") MutableCounterLong blockOpsBatched; + @Metric("Number of edit pending") + MutableGaugeInt editPendingCount; Review comment: > Could you add the description to `./hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md`? Thanks for your review. I think your suggestion is reasonable. I fixed it. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15678) TestNativeIO#testStat fails when SELinux permissions is set on file
[ https://issues.apache.org/jira/browse/HADOOP-15678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17313238#comment-17313238 ] Ravuri Sushma sree commented on HADOOP-15678: - Hi [~surendralilhore] , Thank you for reporting this issue. I have submitted a patch, can you please review > TestNativeIO#testStat fails when SELinux permissions is set on file > --- > > Key: HADOOP-15678 > URL: https://issues.apache.org/jira/browse/HADOOP-15678 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.1.1 >Reporter: Surendra Singh Lilhore >Priority: Major > Attachments: HADOOP-15678.001.patch > > > {code} > java.lang.IllegalArgumentException: length != > 10(unixSymbolicPermission=-rw-r--r--.) > at > org.apache.hadoop.fs.permission.FsPermission.valueOf(FsPermission.java:417) > at > org.apache.hadoop.test.StatUtils.getPermissionFromProcess(StatUtils.java:81) > at > org.apache.hadoop.io.nativeio.TestNativeIO.doStatTest(TestNativeIO.java:209) > at > org.apache.hadoop.io.nativeio.TestNativeIO.testStat(TestNativeIO.java:186) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: 
common-issues-h...@hadoop.apache.org
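The failure above stems from `ls`-style permission strings on SELinux-labelled files carrying an 11th character (`.` for an SELinux security context, `+` for ACLs) after the ten mode characters, which `FsPermission.valueOf` rejects. A hedged sketch of the kind of normalization a fix would need (the helper name is hypothetical; this is not the attached patch):

```java
public class PermissionMarkerStrip {
    // Hypothetical normalization: drop the trailing SELinux context marker
    // ('.') or ACL marker ('+') that ls appends as an 11th character, so the
    // remaining 10-character mode string parses cleanly.
    public static String strip(String symbolicPermission) {
        if (symbolicPermission.length() == 11) {
            char last = symbolicPermission.charAt(10);
            if (last == '.' || last == '+') {
                return symbolicPermission.substring(0, 10);
            }
        }
        return symbolicPermission;
    }

    public static void main(String[] args) {
        // The string from the reported stack trace.
        System.out.println(strip("-rw-r--r--."));
    }
}
```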
[jira] [Updated] (HADOOP-15678) TestNativeIO#testStat fails when SELinux permissions is set on file
[ https://issues.apache.org/jira/browse/HADOOP-15678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravuri Sushma sree updated HADOOP-15678: Attachment: HADOOP-15678.001.patch Status: Patch Available (was: Open) > TestNativeIO#testStat fails when SELinux permissions is set on file > --- > > Key: HADOOP-15678 > URL: https://issues.apache.org/jira/browse/HADOOP-15678 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.1.1 >Reporter: Surendra Singh Lilhore >Priority: Major > Attachments: HADOOP-15678.001.patch > > > {code} > java.lang.IllegalArgumentException: length != > 10(unixSymbolicPermission=-rw-r--r--.) > at > org.apache.hadoop.fs.permission.FsPermission.valueOf(FsPermission.java:417) > at > org.apache.hadoop.test.StatUtils.getPermissionFromProcess(StatUtils.java:81) > at > org.apache.hadoop.io.nativeio.TestNativeIO.doStatTest(TestNativeIO.java:209) > at > org.apache.hadoop.io.nativeio.TestNativeIO.testStat(TestNativeIO.java:186) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2844: HDFS-15940 : Fixing and refactoring tests specific to Block recovery
hadoop-yetus commented on pull request #2844: URL: https://github.com/apache/hadoop/pull/2844#issuecomment-811943228 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 57s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 35s | | trunk passed | | +1 :green_heart: | compile | 1m 21s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 1m 11s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 0s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 21s | | trunk passed | | +1 :green_heart: | javadoc | 0m 53s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 24s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 3s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 12s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 10s | | the patch passed | | +1 :green_heart: | compile | 1m 12s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 1m 12s | | the patch passed | | +1 :green_heart: | compile | 1m 5s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 1m 5s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 53s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2844/3/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 10 new + 55 unchanged - 7 fixed = 65 total (was 62) | | +1 :green_heart: | mvnsite | 1m 13s | | the patch passed | | +1 :green_heart: | javadoc | 0m 43s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 19s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 6s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 47s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 420m 23s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2844/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 44s | | The patch does not generate ASF License warnings. 
| | | | 506m 37s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.TestViewDistributedFileSystemContract | | | hadoop.hdfs.TestPersistBlocks | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.TestWriteConfigurationToDFS | | | hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList | | | hadoop.hdfs.TestStateAlignmentContextWithHA | | | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes | | | hadoop.hdfs.server.namenode.snapshot.TestOrderedSnapshotDeletionGc | | | hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS | | | hadoop.hdfs.server.namenode.TestListCorruptFileBlocks | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.TestLeaseRecovery2 | | | hadoop.hdfs.TestReconstructStripedFileWithValidator | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.server.datanode.TestBlockScanner | | | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots | | | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints | | | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | |
[GitHub] [hadoop] mbsharp opened a new pull request #2848: YARN-10493: RunC container repository v2
mbsharp opened a new pull request #2848: URL: https://github.com/apache/hadoop/pull/2848 A new version of the image container repository. This aligns with the format that is used in the new Java CLI tool in YARN-10494. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17597) Add option to downgrade S3A rejection of Syncable to warning
[ https://issues.apache.org/jira/browse/HADOOP-17597?focusedWorklogId=575518=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-575518 ] ASF GitHub Bot logged work on HADOOP-17597: --- Author: ASF GitHub Bot Created on: 01/Apr/21 14:01 Start Date: 01/Apr/21 14:01 Worklog Time Spent: 10m Work Description: mukund-thakur commented on a change in pull request #2801: URL: https://github.com/apache/hadoop/pull/2801#discussion_r605678416 ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestDowngradeSyncable.java ## @@ -0,0 +1,110 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.fs.s3a; + +import org.junit.Test; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.FSDataOutputStream; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest; +import org.apache.hadoop.fs.statistics.IOStatistics; + +import static org.apache.hadoop.fs.s3a.Constants.DOWNGRADE_SYNCABLE_EXCEPTIONS; +import static org.apache.hadoop.fs.s3a.S3ATestUtils.getTestBucketName; +import static org.apache.hadoop.fs.s3a.S3ATestUtils.removeBucketOverrides; +import static org.apache.hadoop.fs.statistics.IOStatisticAssertions.assertThatStatisticCounter; +import static org.apache.hadoop.fs.statistics.IOStatisticsLogging.ioStatisticsToString; +import static org.apache.hadoop.fs.statistics.StoreStatisticNames.OP_HFLUSH; +import static org.apache.hadoop.fs.statistics.StoreStatisticNames.OP_HSYNC; + + +public class ITestDowngradeSyncable extends AbstractS3ACostTest { + + protected static final Logger LOG = + LoggerFactory.getLogger(ITestDowngradeSyncable.class); + + + public ITestDowngradeSyncable() { +super(false, true, false); + } + + @Override + public Configuration createConfiguration() { +final Configuration conf = super.createConfiguration(); +String bucketName = getTestBucketName(conf); +removeBucketOverrides(bucketName, conf, +DOWNGRADE_SYNCABLE_EXCEPTIONS); +conf.setBoolean(DOWNGRADE_SYNCABLE_EXCEPTIONS, true); +return conf; + } + + @Test + public void testHFlushDowngrade() throws Throwable { Review comment: nit: add describe ## File path: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md ## @@ -1036,6 +1036,39 @@ This includes resilient logging, HBase-style journaling and the like. The standard strategy here is to save to HDFS and then copy to S3. +### `UnsupportedOperationException` "S3A streams are not Syncable. See HADOOP-17597." 
+ +The application has tried to call either the `Syncable.hsync()` or `Syncable.hflush()` +methods on an S3A output stream. This has been rejected because the +connector isn't saving any data at all. The `Syncable` API, especially the +`hsync()` call, are critical for applications such as HBase to safely +persist data. + +The S3A connector throws an `UnsupportedOperationException` when these API calls +are made, because the guarantees absolutely cannot be met: nothing is being flushed +or saved. + +* Applications which intend to invoke the Syncable APIs call `hasCapability("hsync")` on + the stream to see if they are supported. +* Or catch and downgrade `UnsupportedOperationException`. + +These recommendations _apply to all filesystems_. + +To downgrade the S3A connector to simplying warning of the use of Review comment: nit : typo simply warn the? ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3ABlockOutputStream.java ## @@ -108,4 +126,31 @@ public void testCallingCloseAfterCallingAbort() throws Exception { // This will ensure abort() can be called with try-with-resource. stream.close(); } + + + /** + * Unless configured to downgrade, the stream will raise exceptions on + * Syncable API calls. + */ + @Test + public void testSyncableUnsupported() throws Exception { +intercept(UnsupportedOperationException.class, () ->
[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2801: HADOOP-17597. Optionally downgrade on S3A Syncable calls
mukund-thakur commented on a change in pull request #2801: URL: https://github.com/apache/hadoop/pull/2801#discussion_r605678416 ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestDowngradeSyncable.java ## @@ -0,0 +1,110 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.fs.s3a; + +import org.junit.Test; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.FSDataOutputStream; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest; +import org.apache.hadoop.fs.statistics.IOStatistics; + +import static org.apache.hadoop.fs.s3a.Constants.DOWNGRADE_SYNCABLE_EXCEPTIONS; +import static org.apache.hadoop.fs.s3a.S3ATestUtils.getTestBucketName; +import static org.apache.hadoop.fs.s3a.S3ATestUtils.removeBucketOverrides; +import static org.apache.hadoop.fs.statistics.IOStatisticAssertions.assertThatStatisticCounter; +import static org.apache.hadoop.fs.statistics.IOStatisticsLogging.ioStatisticsToString; +import static org.apache.hadoop.fs.statistics.StoreStatisticNames.OP_HFLUSH; +import static org.apache.hadoop.fs.statistics.StoreStatisticNames.OP_HSYNC; + + +public class ITestDowngradeSyncable extends AbstractS3ACostTest { + + protected static final Logger LOG = + LoggerFactory.getLogger(ITestDowngradeSyncable.class); + + + public ITestDowngradeSyncable() { +super(false, true, false); + } + + @Override + public Configuration createConfiguration() { +final Configuration conf = super.createConfiguration(); +String bucketName = getTestBucketName(conf); +removeBucketOverrides(bucketName, conf, +DOWNGRADE_SYNCABLE_EXCEPTIONS); +conf.setBoolean(DOWNGRADE_SYNCABLE_EXCEPTIONS, true); +return conf; + } + + @Test + public void testHFlushDowngrade() throws Throwable { Review comment: nit: add describe ## File path: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md ## @@ -1036,6 +1036,39 @@ This includes resilient logging, HBase-style journaling and the like. The standard strategy here is to save to HDFS and then copy to S3. +### `UnsupportedOperationException` "S3A streams are not Syncable. See HADOOP-17597." 
+ +The application has tried to call either the `Syncable.hsync()` or `Syncable.hflush()` +methods on an S3A output stream. This has been rejected because the +connector isn't saving any data at all. The `Syncable` API, especially the +`hsync()` call, are critical for applications such as HBase to safely +persist data. + +The S3A connector throws an `UnsupportedOperationException` when these API calls +are made, because the guarantees absolutely cannot be met: nothing is being flushed +or saved. + +* Applications which intend to invoke the Syncable APIs call `hasCapability("hsync")` on + the stream to see if they are supported. +* Or catch and downgrade `UnsupportedOperationException`. + +These recommendations _apply to all filesystems_. + +To downgrade the S3A connector to simplying warning of the use of Review comment: nit : typo simply warn the? ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3ABlockOutputStream.java ## @@ -108,4 +126,31 @@ public void testCallingCloseAfterCallingAbort() throws Exception { // This will ensure abort() can be called with try-with-resource. stream.close(); } + + + /** + * Unless configured to downgrade, the stream will raise exceptions on + * Syncable API calls. + */ + @Test + public void testSyncableUnsupported() throws Exception { +intercept(UnsupportedOperationException.class, () -> stream.hflush()); +intercept(UnsupportedOperationException.class, () -> stream.hsync()); + } + + /** + * When configured to downgrade, the stream downgrades on. Review comment: nit: no fullstop in the end. ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestDowngradeSyncable.java ## @@ -0,0 +1,110 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or
[GitHub] [hadoop] lujiefsi closed pull request #2798: MAPREDUCE-7330. Add access check for getJobAttempts
lujiefsi closed pull request #2798: URL: https://github.com/apache/hadoop/pull/2798 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] sodonnel commented on pull request #2838: HDFS-15937. Reduce memory used during datanode layout upgrade
sodonnel commented on pull request #2838: URL: https://github.com/apache/hadoop/pull/2838#issuecomment-811876392 Thanks @Hexiaoqiao ! @jojochuang have you got any more comments or do you want to take another look before I commit this? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17619) Fix DelegationTokenRenewer#updateRenewalTime java doc error.
[ https://issues.apache.org/jira/browse/HADOOP-17619?focusedWorklogId=575441=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-575441 ] ASF GitHub Bot logged work on HADOOP-17619: --- Author: ASF GitHub Bot Created on: 01/Apr/21 11:04 Start Date: 01/Apr/21 11:04 Worklog Time Spent: 10m Work Description: qizhu-lucas edited a comment on pull request #2846: URL: https://github.com/apache/hadoop/pull/2846#issuecomment-811594958 @umamaheswararao @aajisaka @ayushtkn Could you help review this? It's an incorrect javadoc on updateRenewalTime. Thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 575441) Time Spent: 40m (was: 0.5h) > Fix DelegationTokenRenewer#updateRenewalTime java doc error. > > > Key: HADOOP-17619 > URL: https://issues.apache.org/jira/browse/HADOOP-17619 > Project: Hadoop Common > Issue Type: Bug >Reporter: Qi Zhu >Priority: Minor > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > > The param of updateRenewalTime should be the renew cycle, not the new time. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] qizhu-lucas edited a comment on pull request #2846: HADOOP-17619: Fix DelegationTokenRenewer#updateRenewalTime java doc e…
qizhu-lucas edited a comment on pull request #2846: URL: https://github.com/apache/hadoop/pull/2846#issuecomment-811594958 @umamaheswararao @aajisaka @ayushtkn Could you help review this? It's an incorrect javadoc on updateRenewalTime. Thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] qizhu-lucas edited a comment on pull request #2829: HDFS-15930: Fix some @param errors in DirectoryScanner.
qizhu-lucas edited a comment on pull request #2829: URL: https://github.com/apache/hadoop/pull/2829#issuecomment-810816274 @umamaheswararao @aajisaka Could you help check this? Thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] tasanuma commented on a change in pull request #2770: HDFS-15892. Add metric for editPendingQ in FSEditLogAsync
tasanuma commented on a change in pull request #2770: URL: https://github.com/apache/hadoop/pull/2770#discussion_r605551460 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/NameNodeMetrics.java ## @@ -87,6 +87,8 @@ MutableGaugeInt blockOpsQueued; @Metric("Number of blockReports and blockReceivedAndDeleted batch processed") MutableCounterLong blockOpsBatched; + @Metric("Number of edit pending") + MutableGaugeInt editPendingCount; Review comment: I'm not a native English speaker, but I feel the following is more natural. What do you think? ```suggestion @Metric("Number of pending edits") MutableGaugeInt pendingEditsCount; ``` ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/NameNodeMetrics.java ## @@ -87,6 +87,8 @@ MutableGaugeInt blockOpsQueued; @Metric("Number of blockReports and blockReceivedAndDeleted batch processed") MutableCounterLong blockOpsBatched; + @Metric("Number of edit pending") + MutableGaugeInt editPendingCount; Review comment: Could you add the description to `./hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md`? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
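The metric under review is a gauge over FSEditLogAsync's pending-edit queue. A stdlib sketch of what such a gauge samples is below; the field and method names follow the review suggestion (`pendingEditsCount`) but are otherwise illustrative, and the queue is a simplified stand-in for the real edit-log internals.

```java
import java.util.concurrent.ArrayBlockingQueue;

public class PendingEditsGaugeSketch {
    // Stand-in for the async edit-log queue: FSEditLogAsync buffers pending
    // edits in a bounded queue drained by a sync thread.
    private final ArrayBlockingQueue<Runnable> editPendingQ =
        new ArrayBlockingQueue<>(4096);

    // What a MutableGaugeInt-style metric would report at collection time:
    // the current depth of the pending queue.
    public int pendingEditsCount() {
        return editPendingQ.size();
    }

    public boolean enqueueEdit(Runnable edit) {
        return editPendingQ.offer(edit);
    }

    public static void main(String[] args) {
        PendingEditsGaugeSketch log = new PendingEditsGaugeSketch();
        log.enqueueEdit(() -> { });
        log.enqueueEdit(() -> { });
        System.out.println(log.pendingEditsCount());
    }
}
```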
[jira] [Work logged] (HADOOP-17609) Make SM4 support optional for OpenSSL native code
[ https://issues.apache.org/jira/browse/HADOOP-17609?focusedWorklogId=575399=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-575399 ] ASF GitHub Bot logged work on HADOOP-17609: --- Author: ASF GitHub Bot Created on: 01/Apr/21 08:55 Start Date: 01/Apr/21 08:55 Worklog Time Spent: 10m Work Description: iwasakims edited a comment on pull request #2847: URL: https://github.com/apache/hadoop/pull/2847#issuecomment-811652915 OpensslAesCtrCryptoCodec is used for 'AES/CTR/NoPadding': ``` $ bin/hadoop key create key-aes -cipher 'AES/CTR/NoPadding' $ bin/hdfs dfs -mkdir /zone-aes $ bin/hdfs crypto -createZone -path /zone-aes -keyName key-aes $ bin/hdfs dfs -put README.txt /zone-aes/ 2021-04-01 05:23:37,755 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library... 2021-04-01 05:23:37,756 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library 2021-04-01 05:23:38,457 DEBUG util.PerformanceAdvisory: Both short-circuit local reads and UNIX domain socket are disabled. 2021-04-01 05:23:39,072 DEBUG crypto.OpensslAesCtrCryptoCodec: Using org.apache.hadoop.crypto.random.OpensslSecureRandom as random number generator. 2021-04-01 05:23:39,073 DEBUG util.PerformanceAdvisory: Using crypto codec org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec. ... $ bin/hdfs dfs -cat /zone-aes/README.txt 2021-04-01 05:23:52,844 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library... 2021-04-01 05:23:52,845 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library 2021-04-01 05:23:53,549 DEBUG util.PerformanceAdvisory: Both short-circuit local reads and UNIX domain socket are disabled. 2021-04-01 05:23:54,084 DEBUG kms.KMSClientProvider: KMSClientProvider created for KMS url: http://localhost:9600/kms/v1/ delegation token service: kms://http@localhost:9600/kms canonical service: 127.0.0.1:9600. 
2021-04-01 05:23:54,087 DEBUG kms.LoadBalancingKMSClientProvider: Created LoadBalancingKMSClientProvider for KMS url: kms://http@localhost:9600/kms with 1 providers. delegation token service: kms://http@localhost:9600/kms, canonical service: 127.0.0.1:9600 2021-04-01 05:23:54,111 DEBUG crypto.OpensslAesCtrCryptoCodec: Using org.apache.hadoop.crypto.random.OpensslSecureRandom as random number generator. 2021-04-01 05:23:54,111 DEBUG util.PerformanceAdvisory: Using crypto codec org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec. ... For the latest information about Hadoop, please visit our website at: http://hadoop.apache.org/ and our wiki, at: $ bin/hadoop checknative 2>/dev/null Native library checking: hadoop: true /home/centos/dist/hadoop-3.4.0-SNAPSHOT-HADOOP-17609/lib/native/libhadoop.so.1.0.0 zlib:true /lib64/libz.so.1 zstd : true /lib64/libzstd.so.1 bzip2: true /lib64/libbz2.so.1 openssl: true /lib64/libcrypto.so ISA-L: true /lib64/libisal.so.2 PMDK:false The native code was built without PMDK support. ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 575399) Time Spent: 1h 10m (was: 1h) > Make SM4 support optional for OpenSSL native code > - > > Key: HADOOP-17609 > URL: https://issues.apache.org/jira/browse/HADOOP-17609 > Project: Hadoop Common > Issue Type: Improvement > Components: native >Affects Versions: 3.4.0 >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Major > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > openssl-devel-1.1.1g provided by CentOS 8 does not work after HDFS-15098 > because the SM4 is not enabled on the openssl package. We should not force > users to install OpenSSL from source code even if they do not use SM4 feature. 
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
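As a quick way to reproduce the situation described above (a distro OpenSSL built without SM4), one can probe the locally installed OpenSSL before building the native code. A minimal sketch, assuming `openssl` is on the PATH; the exact CentOS 8 package behavior is as reported in the issue:

```shell
# Probe whether the system OpenSSL exposes any SM4 cipher algorithms.
# On CentOS 8's openssl-1.1.1g packages SM4 is compiled out, which is
# what broke the native codec loading after HDFS-15098.
if openssl list -cipher-algorithms 2>/dev/null | grep -qi 'sm4'; then
  echo "SM4 available"
else
  echo "SM4 not available"
fi
```

If the second branch is taken, the intent of HADOOP-17609 is that the native build simply compiles with SM4 support disabled rather than forcing users to build OpenSSL from source.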
[GitHub] [hadoop] hadoop-yetus commented on pull request #2784: HDFS-15850. Superuser actions should be reported to external enforcers
hadoop-yetus commented on pull request #2784: URL: https://github.com/apache/hadoop/pull/2784#issuecomment-811739537 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 38s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 39s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 26m 22s | | trunk passed | | +1 :green_heart: | compile | 5m 41s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 5m 31s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 25s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 9s | | trunk passed | | +1 :green_heart: | javadoc | 1m 46s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 29s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 5m 10s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 28s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 44s | | the patch passed | | +1 :green_heart: | compile | 5m 3s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 5m 3s | | the patch passed | | +1 :green_heart: | compile | 4m 39s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 4m 39s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 10s | | hadoop-hdfs-project: The patch generated 0 new + 498 unchanged - 6 fixed = 498 total (was 504) | | +1 :green_heart: | mvnsite | 1m 45s | | the patch passed | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 1m 17s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 6s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 4m 36s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 38s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 353m 44s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2784/13/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | -1 :x: | unit | 23m 53s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2784/13/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 39s | | The patch does not generate ASF License warnings. 
| | | | 503m 42s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys | | | hadoop.hdfs.server.datanode.TestBlockScanner | | | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | | hadoop.hdfs.TestPersistBlocks | | | hadoop.hdfs.TestLeaseRecovery | | | hadoop.hdfs.server.datanode.TestBlockRecovery | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.server.federation.router.TestRouterFederationRename | | Subsystem | Report/Notes |
[jira] [Commented] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17312973#comment-17312973 ] Masatake Iwasaki commented on HADOOP-17144: --- [~prasad-acit] Native lz4 code was replaced with lz4-java by HADOOP-17292. branch-3.3 already has it. > Update Hadoop's lz4 to v1.9.2 > - > > Key: HADOOP-17144 > URL: https://issues.apache.org/jira/browse/HADOOP-17144 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Major > Fix For: 3.4.0 > > Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch, > HADOOP-17144.003.patch, HADOOP-17144.004.patch, HADOOP-17144.005.patch > > > Update hadoop's native lz4 to v1.9.2
[jira] [Commented] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17312967#comment-17312967 ] Renukaprasad C commented on HADOOP-17144: - Thanks [~hemanthboyina]. Can the same be backported to branch-3.3 as well? > Update Hadoop's lz4 to v1.9.2 > - > > Key: HADOOP-17144 > URL: https://issues.apache.org/jira/browse/HADOOP-17144 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Major > Fix For: 3.4.0 > > Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch, > HADOOP-17144.003.patch, HADOOP-17144.004.patch, HADOOP-17144.005.patch > > > Update hadoop's native lz4 to v1.9.2
[jira] [Work logged] (HADOOP-17618) ABFS: Partially obfuscate SAS object IDs in Logs
[ https://issues.apache.org/jira/browse/HADOOP-17618?focusedWorklogId=575356=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-575356 ] ASF GitHub Bot logged work on HADOOP-17618: --- Author: ASF GitHub Bot Created on: 01/Apr/21 07:15 Start Date: 01/Apr/21 07:15 Worklog Time Spent: 10m Work Description: vinaysbadami commented on a change in pull request #2845: URL: https://github.com/apache/hadoop/pull/2845#discussion_r605424939 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java ## @@ -558,6 +560,24 @@ public String getSignatureMaskedEncodedUrl() { return this.maskedEncodedUrl; } + public void maskSASObjectIDs() { +int oidStartIdx, ampIdx, oidEndIndex, qpStrIdx; +for (String qpKey : SAS_OID_PARAM_KEYS) { + qpStrIdx = maskedUrl.indexOf('&' + qpKey); Review comment: this.maskedUrl to be consistent with rest of file ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java ## @@ -558,6 +560,24 @@ public String getSignatureMaskedEncodedUrl() { return this.maskedEncodedUrl; } + public void maskSASObjectIDs() { +int oidStartIdx, ampIdx, oidEndIndex, qpStrIdx; +for (String qpKey : SAS_OID_PARAM_KEYS) { + qpStrIdx = maskedUrl.indexOf('&' + qpKey); + if (qpStrIdx == -1) { +qpStrIdx = maskedUrl.indexOf('?' + qpKey); +if (qpStrIdx == -1) { + continue; +} + } + oidStartIdx = qpStrIdx + qpKey.length() + 1; + ampIdx = maskedUrl.indexOf("&", oidStartIdx); + oidEndIndex = (ampIdx != -1) ? ampIdx : maskedUrl.length(); + maskedUrl = maskedUrl.substring(0, oidStartIdx + 5) + "" + maskedUrl + .substring(oidEndIndex); +} Review comment: should we move all the masking logic to a single static method that takes a string and returns a masked string that will make testing easier. 
Also, should we move this method to a utils class to keep this class cleaner? ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java ## @@ -558,6 +560,24 @@ public String getSignatureMaskedEncodedUrl() { return this.maskedEncodedUrl; } + public void maskSASObjectIDs() { +int oidStartIdx, ampIdx, oidEndIndex, qpStrIdx; Review comment: move to point of first use Issue Time Tracking --- Worklog Id: (was: 575356) Time Spent: 1h 10m (was: 1h) > ABFS: Partially obfuscate SAS object IDs in Logs > > > Key: HADOOP-17618 > URL: https://issues.apache.org/jira/browse/HADOOP-17618 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.1 >Reporter: Sumangala Patki >Assignee: Sumangala Patki >Priority: Major > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > Delegation SAS tokens are created using various parameters for specifying > details such as permissions and validity. The requests are logged, along with > values of all the query parameters. This change will partially mask values > logged for the following object IDs representing the security principal: > skoid, saoid, suoid
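To make the reviewer's suggestion concrete: one possible shape for the extracted static helper (the class name, method name, and the key values in SAS_OID_PARAM_KEYS below are hypothetical; the real constant lives in AbfsHttpOperation). It keeps the first five characters of each object ID, mirroring the patch, and adds a Math.min guard for object IDs shorter than five characters:

```java
import java.util.Arrays;
import java.util.List;

public class SasMasker {
    // Hypothetical stand-in for AbfsHttpOperation.SAS_OID_PARAM_KEYS.
    private static final List<String> SAS_OID_PARAM_KEYS =
        Arrays.asList("skoid=", "saoid=", "suoid=");

    // Pure-function variant of maskSASObjectIDs(): takes a string and
    // returns a masked string, so it can be unit tested in isolation as
    // suggested in the review. Keeps the first 5 characters of each
    // object ID value and drops the rest.
    public static String maskSasObjectIds(String url) {
        String masked = url;
        for (String qpKey : SAS_OID_PARAM_KEYS) {
            int qpStrIdx = masked.indexOf('&' + qpKey);
            if (qpStrIdx == -1) {
                qpStrIdx = masked.indexOf('?' + qpKey);
                if (qpStrIdx == -1) {
                    continue;
                }
            }
            int oidStartIdx = qpStrIdx + qpKey.length() + 1;
            int ampIdx = masked.indexOf('&', oidStartIdx);
            int oidEndIdx = (ampIdx != -1) ? ampIdx : masked.length();
            // Math.min guards against object IDs shorter than 5 chars,
            // which a bare substring(0, oidStartIdx + 5) would over-read.
            masked = masked.substring(0, Math.min(oidStartIdx + 5, oidEndIdx))
                + masked.substring(oidEndIdx);
        }
        return masked;
    }
}
```

With this shape, the instance method on AbfsHttpOperation would reduce to `this.maskedUrl = SasMasker.maskSasObjectIds(this.maskedUrl);`, and tests can exercise the masking against plain strings without constructing an HTTP operation.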
[jira] [Commented] (HADOOP-17613) Log not flushed fully when NN shutdown
[ https://issues.apache.org/jira/browse/HADOOP-17613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17312928#comment-17312928 ] Renukaprasad C commented on HADOOP-17613: - [~hexiaoqiao] Thanks for the quick update. We were hesitant to handle this in the other shutdown hooks, since they close the appender and we could miss the remaining logs. We added it here because we expected the last log to be flushed. Whatever logs are in the buffer so far will be flushed, and the SHUTDOWN message gets logged. LogManager#shutdown() works on both versions. > Log not flushed fully when NN shutdown > -- > > Key: HADOOP-17613 > URL: https://issues.apache.org/jira/browse/HADOOP-17613 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.7.2, 3.1.1 >Reporter: Renukaprasad C >Assignee: Renukaprasad C >Priority: Major > Attachments: HADOOP-17613.001.patch > > > When the server generates a large amount of logs and is stopped, it does not > print all of them. LogManager.shutdown() needs to be called to flush all > pending log records before shutdown.
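The underlying failure mode — buffered records that never reach the output unless something flushes on shutdown — can be shown with a small java.util.logging sketch (the actual patch targets log4j's LogManager.shutdown(); the class and method names below are illustrative only):

```java
import java.io.ByteArrayOutputStream;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;
import java.util.logging.StreamHandler;

public class FlushDemo {
    // Logs a message through a buffered StreamHandler and returns what
    // actually reached the underlying stream. Without the explicit
    // flush() the buffered record may never be written before JVM exit,
    // which is the same reason HADOOP-17613 adds a LogManager.shutdown()
    // call to the NameNode shutdown path.
    public static String logAndFlush(String msg) {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        StreamHandler handler = new StreamHandler(sink, new SimpleFormatter());
        Logger log = Logger.getAnonymousLogger();
        log.setUseParentHandlers(false);
        log.addHandler(handler);
        log.info(msg);
        handler.flush();  // analogous to LogManager.shutdown() flushing appenders
        return sink.toString();
    }
}
```

Dropping the `handler.flush()` line typically leaves `sink` empty, which is exactly the "last log lines missing after NN shutdown" symptom described in the issue.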
[GitHub] [hadoop] liuml07 commented on pull request #2830: HDFS-15931 : Fix non-static inner classes for better memory management
liuml07 commented on pull request #2830: URL: https://github.com/apache/hadoop/pull/2830#issuecomment-811692029 @virajjasani No, I think it's safe to backport directly. No need for separate PRs. Thanks.
[GitHub] [hadoop] virajjasani commented on pull request #2830: HDFS-15931 : Fix non-static inner classes for better memory management
virajjasani commented on pull request #2830: URL: https://github.com/apache/hadoop/pull/2830#issuecomment-811668379 @liuml07 Thanks for the review. Shall I create backport PRs down to branch-3.1 / branch-2.10?
[GitHub] [hadoop] virajjasani commented on pull request #2844: HDFS-15940 : Fixing and refactoring tests specific to Block recovery
virajjasani commented on pull request #2844: URL: https://github.com/apache/hadoop/pull/2844#issuecomment-811668051 This is the usual number of failed tests that we have seen recently. Except for a couple of them, the majority seem to fail quite often; I have seen them on other PRs too.