[jira] [Commented] (HADOOP-16628) Update the year to 2019 in the web site
[ https://issues.apache.org/jira/browse/HADOOP-16628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944246#comment-16944246 ]

Dinesh Chitlangia commented on HADOOP-16628:
--------------------------------------------

[~aajisaka] Thanks for filing this. I think we can write a simple JavaScript function to populate the year based on the current date. That way, no one has to push an update once a year. If this sounds good, we can update the JIRA title.

> Update the year to 2019 in the web site
> ---------------------------------------
>
> Key: HADOOP-16628
> URL: https://issues.apache.org/jira/browse/HADOOP-16628
> Project: Hadoop Common
> Issue Type: Bug
> Components: documentation, website
> Reporter: Akira Ajisaka
> Assignee: Akira Ajisaka
> Priority: Major
> Labels: newbie
>
> https://hadoop.apache.org/
> bq. Copyright © 2018 The Apache Software Foundation.
> Let's update the year to 2019.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
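The suggestion above could be sketched as a tiny JavaScript function. This is a hedged illustration only; the function name and the element id are assumptions, not the actual Hadoop site code:

```javascript
// Hypothetical sketch: build the copyright line from the current date so the
// year never has to be bumped by hand. All names here are illustrative.
function copyrightNotice(now) {
  return "Copyright \u00A9 " + now.getFullYear() +
      " The Apache Software Foundation.";
}

// On the site this could fill a placeholder element at page load, e.g.:
// document.getElementById("copyright").textContent = copyrightNotice(new Date());
```

Passing the date in (rather than calling `new Date()` inside) keeps the function testable with a fixed year.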
[GitHub] [hadoop] hadoop-yetus commented on issue #1589: HDDS-2244. Use new ReadWrite lock in OzoneManager.
hadoop-yetus commented on issue #1589: HDDS-2244. Use new ReadWrite lock in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#issuecomment-538247315

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 34 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 1 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 29 | Maven dependency ordering for branch |
| -1 | mvninstall | 31 | hadoop-hdds in trunk failed. |
| -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
| -1 | compile | 18 | hadoop-hdds in trunk failed. |
| -1 | compile | 13 | hadoop-ozone in trunk failed. |
| +1 | checkstyle | 53 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 921 | branch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 18 | hadoop-hdds in trunk failed. |
| -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
| 0 | spotbugs | 1009 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 32 | hadoop-hdds in trunk failed. |
| -1 | findbugs | 17 | hadoop-ozone in trunk failed. |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 15 | Maven dependency ordering for patch |
| -1 | mvninstall | 31 | hadoop-hdds in the patch failed. |
| -1 | mvninstall | 34 | hadoop-ozone in the patch failed. |
| -1 | compile | 21 | hadoop-hdds in the patch failed. |
| -1 | compile | 15 | hadoop-ozone in the patch failed. |
| -1 | javac | 21 | hadoop-hdds in the patch failed. |
| -1 | javac | 15 | hadoop-ozone in the patch failed. |
| -0 | checkstyle | 26 | hadoop-ozone: The patch generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0) |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 774 | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 18 | hadoop-hdds in the patch failed. |
| -1 | javadoc | 16 | hadoop-ozone in the patch failed. |
| -1 | findbugs | 28 | hadoop-hdds in the patch failed. |
| -1 | findbugs | 17 | hadoop-ozone in the patch failed. |
||| _ Other Tests _ |
| -1 | unit | 23 | hadoop-hdds in the patch failed. |
| -1 | unit | 23 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
| | | 2431 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1589 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 74a8dbbb1c89 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 844b766 |
| Default Java | 1.8.0_222 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/2/artifact/out/branch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/2/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/2/artifact/out/branch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/2/artifact/out/branch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/2/artifact/out/branch-javadoc-hadoop-hdds.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/2/artifact/out/branch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/2/artifact/out/branch-findbugs-hadoop-hdds.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/2/artifact/out/branch-findbugs-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/2/artifact/out/patch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/2/artifact/out/patch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/2/artifact/out/patch-compile-hadoop-hdds.txt |
| compile |
[jira] [Assigned] (HADOOP-16628) Update the year to 2019 in the web site
[ https://issues.apache.org/jira/browse/HADOOP-16628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka reassigned HADOOP-16628:
--------------------------------------

    Assignee: Akira Ajisaka

> Update the year to 2019 in the web site
> ---------------------------------------
>
> Key: HADOOP-16628
> URL: https://issues.apache.org/jira/browse/HADOOP-16628
> Project: Hadoop Common
> Issue Type: Bug
> Components: documentation, website
> Reporter: Akira Ajisaka
> Assignee: Akira Ajisaka
> Priority: Major
> Labels: newbie
>
> https://hadoop.apache.org/
> bq. Copyright © 2018 The Apache Software Foundation.
> Let's update the year to 2019.
[GitHub] [hadoop] hadoop-yetus commented on issue #1589: HDDS-2244. Use new ReadWrite lock in OzoneManager.
hadoop-yetus commented on issue #1589: HDDS-2244. Use new ReadWrite lock in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#issuecomment-538246119

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 71 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 27 | Maven dependency ordering for branch |
| -1 | mvninstall | 29 | hadoop-hdds in trunk failed. |
| -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
| -1 | compile | 18 | hadoop-hdds in trunk failed. |
| -1 | compile | 13 | hadoop-ozone in trunk failed. |
| +1 | checkstyle | 54 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 942 | branch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 20 | hadoop-hdds in trunk failed. |
| -1 | javadoc | 17 | hadoop-ozone in trunk failed. |
| 0 | spotbugs | 1030 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 30 | hadoop-hdds in trunk failed. |
| -1 | findbugs | 16 | hadoop-ozone in trunk failed. |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 15 | Maven dependency ordering for patch |
| -1 | mvninstall | 31 | hadoop-hdds in the patch failed. |
| -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
| -1 | compile | 21 | hadoop-hdds in the patch failed. |
| -1 | compile | 16 | hadoop-ozone in the patch failed. |
| -1 | javac | 21 | hadoop-hdds in the patch failed. |
| -1 | javac | 16 | hadoop-ozone in the patch failed. |
| -0 | checkstyle | 28 | hadoop-ozone: The patch generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0) |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 801 | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 20 | hadoop-hdds in the patch failed. |
| -1 | javadoc | 16 | hadoop-ozone in the patch failed. |
| -1 | findbugs | 28 | hadoop-hdds in the patch failed. |
| -1 | findbugs | 17 | hadoop-ozone in the patch failed. |
||| _ Other Tests _ |
| -1 | unit | 25 | hadoop-hdds in the patch failed. |
| -1 | unit | 23 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
| | | 2522 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=18.09.7 Server=18.09.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1589 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux fb4933b9414e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 844b766 |
| Default Java | 1.8.0_222 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/1/artifact/out/branch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/1/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/1/artifact/out/branch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/1/artifact/out/branch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/1/artifact/out/branch-javadoc-hadoop-hdds.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/1/artifact/out/branch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/1/artifact/out/branch-findbugs-hadoop-hdds.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/1/artifact/out/branch-findbugs-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/1/artifact/out/patch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/1/artifact/out/patch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/1/artifact/out/patch-compile-hadoop-hdds.txt |
| compile |
[jira] [Created] (HADOOP-16628) Update the year to 2019 in the web site
Akira Ajisaka created HADOOP-16628:
--------------------------------------

Summary: Update the year to 2019 in the web site
Key: HADOOP-16628
URL: https://issues.apache.org/jira/browse/HADOOP-16628
Project: Hadoop Common
Issue Type: Bug
Components: documentation, website
Reporter: Akira Ajisaka

https://hadoop.apache.org/
bq. Copyright © 2018 The Apache Software Foundation.
Let's update the year to 2019.
[jira] [Updated] (HADOOP-16627) Remove links to the releases of EOL branches from web site
[ https://issues.apache.org/jira/browse/HADOOP-16627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HADOOP-16627:
-----------------------------------

    Summary: Remove links to the releases of EOL branches from web site  (was: Remove links to the releases in EOL branches in web site)

> Remove links to the releases of EOL branches from web site
> ----------------------------------------------------------
>
> Key: HADOOP-16627
> URL: https://issues.apache.org/jira/browse/HADOOP-16627
> Project: Hadoop Common
> Issue Type: Improvement
> Components: documentation, website
> Reporter: Akira Ajisaka
> Priority: Major
> Labels: newbie
>
> Hadoop 2.7.7 is still linked from the download page.
> https://hadoop.apache.org/releases.html
> branch-2.7 and branch-3.0 are EoL, so 2.7.7 should be removed from the download page.
> https://cwiki.apache.org/confluence/display/HADOOP/EOL+%28End-of-life%29+Release+Branches
[jira] [Commented] (HADOOP-16627) Remove links to the releases in EOL branches in web site
[ https://issues.apache.org/jira/browse/HADOOP-16627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944240#comment-16944240 ]

Akira Ajisaka commented on HADOOP-16627:
----------------------------------------

Removing {{linked:true}} from src/releases/2.7.7.md and src/releases/3.0.3.md should be fine.

> Remove links to the releases in EOL branches in web site
> --------------------------------------------------------
>
> Key: HADOOP-16627
> URL: https://issues.apache.org/jira/browse/HADOOP-16627
> Project: Hadoop Common
> Issue Type: Improvement
> Components: documentation, website
> Reporter: Akira Ajisaka
> Priority: Major
> Labels: newbie
>
> Hadoop 2.7.7 is still linked from the download page.
> https://hadoop.apache.org/releases.html
> branch-2.7 and branch-3.0 are EoL, so 2.7.7 should be removed from the download page.
> https://cwiki.apache.org/confluence/display/HADOOP/EOL+%28End-of-life%29+Release+Branches
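The comment above refers to the release pages of the Hadoop site, which are Markdown files with YAML front matter. The sketch below illustrates the kind of change suggested; the surrounding field names are assumptions for illustration, not the actual file contents:

```yaml
# Hypothetical front matter of src/releases/2.7.7.md (field names other than
# `linked` are illustrative). Deleting the `linked: true` line removes the
# release from the generated download page without deleting the page itself.
---
title: Release 2.7.7 available
date: 2018-05-31
linked: true    # <- remove this line for releases on EoL branches
---
```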
[jira] [Updated] (HADOOP-16627) Remove links to the releases in EOL branches in web site
[ https://issues.apache.org/jira/browse/HADOOP-16627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HADOOP-16627:
-----------------------------------

    Description:
Hadoop 2.7.7 is still linked from the download page.
https://hadoop.apache.org/releases.html
branch-2.7 and branch-3.0 are EoL, so 2.7.7 should be removed from the download page.
https://cwiki.apache.org/confluence/display/HADOOP/EOL+%28End-of-life%29+Release+Branches

  was:
Hadoop 2.7.7 is still linked from the download page.
https://hadoop.apache.org/releases.html

> Remove links to the releases in EOL branches in web site
> --------------------------------------------------------
>
> Key: HADOOP-16627
> URL: https://issues.apache.org/jira/browse/HADOOP-16627
> Project: Hadoop Common
> Issue Type: Improvement
> Components: documentation, website
> Reporter: Akira Ajisaka
> Priority: Major
> Labels: newbie
>
> Hadoop 2.7.7 is still linked from the download page.
> https://hadoop.apache.org/releases.html
> branch-2.7 and branch-3.0 are EoL, so 2.7.7 should be removed from the download page.
> https://cwiki.apache.org/confluence/display/HADOOP/EOL+%28End-of-life%29+Release+Branches
[jira] [Created] (HADOOP-16627) Remove links to the releases in EOL branches in web site
Akira Ajisaka created HADOOP-16627:
--------------------------------------

Summary: Remove links to the releases in EOL branches in web site
Key: HADOOP-16627
URL: https://issues.apache.org/jira/browse/HADOOP-16627
Project: Hadoop Common
Issue Type: Improvement
Components: documentation, website
Reporter: Akira Ajisaka

Hadoop 2.7.7 is still linked from the download page.
https://hadoop.apache.org/releases.html
[GitHub] [hadoop] dineshchitlangia commented on a change in pull request #1542: HDDS-2140. Add robot test for GDPR feature
dineshchitlangia commented on a change in pull request #1542: HDDS-2140. Add robot test for GDPR feature
URL: https://github.com/apache/hadoop/pull/1542#discussion_r331345076

## File path: hadoop-ozone/dist/src/main/smoketest/gdpr/gdpr.robot

@@ -0,0 +1,68 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+*** Settings ***
+Documentation       Smoketest Ozone GDPR Feature
+Library             OperatingSystem
+Library             BuiltIn
+Resource            ../commonlib.robot
+
+*** Variables ***
+${volume}           testvol
+
+*** Test Cases ***
+Test GDPR(disabled) without explicit options
+    Execute    ozone sh volume create /${volume} --quota 100TB

Review comment:
   Introduced random volume name in recent commit.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] szetszwo commented on issue #1578: HDDS-2222 Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C
szetszwo commented on issue #1578: HDDS-2222. Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C
URL: https://github.com/apache/hadoop/pull/1578#issuecomment-538239632

   The checkstyle warnings in ChecksumByteBuffer are absurd, so we will ignore them.
[GitHub] [hadoop] jnp commented on issue #1578: HDDS-2222 Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C
jnp commented on issue #1578: HDDS-2222. Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C
URL: https://github.com/apache/hadoop/pull/1578#issuecomment-538236213

   +1 for the patch.
[GitHub] [hadoop] bharatviswa504 opened a new pull request #1589: HDDS-2244. Use new ReadWrite lock in OzoneManager.
bharatviswa504 opened a new pull request #1589: HDDS-2244. Use new ReadWrite lock in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589

   Use the new ReadWriteLock added in HDDS-2223. Existing tests should cover this; I also ran a few integration tests.
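For context, a read/write lock lets many readers proceed concurrently while writers get exclusive access. The sketch below is a minimal, hypothetical illustration of that pattern using the JDK's `ReentrantReadWriteLock`; it is not the actual OzoneManager lock from HDDS-2223, and the class and method names are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch: guard a metadata table with a ReentrantReadWriteLock.
// Reads take the shared lock; mutations take the exclusive lock.
class VolumeTable {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private final Map<String, String> volumes = new HashMap<>();

  void createVolume(String name, String owner) {
    lock.writeLock().lock();   // exclusive: blocks both readers and writers
    try {
      volumes.put(name, owner);
    } finally {
      lock.writeLock().unlock();
    }
  }

  String getOwner(String name) {
    lock.readLock().lock();    // shared: concurrent readers are allowed
    try {
      return volumes.get(name);
    } finally {
      lock.readLock().unlock();
    }
  }
}
```

Unlocking in `finally` guarantees the lock is released even if the guarded code throws.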
[GitHub] [hadoop] virajith closed pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage.
virajith closed pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage.
URL: https://github.com/apache/hadoop/pull/1573
[GitHub] [hadoop] christeoh commented on issue #1582: HDDS-2217. Removed redundant LOG4J lines from docker configurations
christeoh commented on issue #1582: HDDS-2217. Removed redundant LOG4J lines from docker configurations
URL: https://github.com/apache/hadoop/pull/1582#issuecomment-538229895

   ci/acceptance appears to have no failing tests. ci/integration appears to be failing on seemingly unrelated Ratis issues?
[GitHub] [hadoop] ashvina opened a new pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage.
ashvina opened a new pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage.
URL: https://github.com/apache/hadoop/pull/1573

   Addresses https://issues.apache.org/jira/browse/HDFS-14889. Adds a method in `BlockInfo` that returns true if the block has any replica on a PROVIDED storage.
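The essence of the check described above can be sketched in a few lines: scan a block's storage locations and report whether any of them is PROVIDED. This is a hedged, self-contained illustration; the names below are invented and are not the actual HDFS `BlockInfo`/`DatanodeStorageInfo` classes:

```java
import java.util.List;

// Hypothetical sketch of the HDFS-14889 check: a block "is provided" if at
// least one of its replicas sits on PROVIDED storage. Names are illustrative.
class ProvidedCheck {
  enum StorageType { DISK, SSD, PROVIDED }

  static boolean isProvided(List<StorageType> replicaStorages) {
    // One PROVIDED replica is enough for the block to count as provided.
    for (StorageType type : replicaStorages) {
      if (type == StorageType.PROVIDED) {
        return true;
      }
    }
    return false;
  }
}
```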
[GitHub] [hadoop] virajith commented on issue #1573: HDFS-14889. Ability to check if a block has a replica on provided storage.
virajith commented on issue #1573: HDFS-14889. Ability to check if a block has a replica on provided storage.
URL: https://github.com/apache/hadoop/pull/1573#issuecomment-538229082

   Thanks for posting this @ashvina, and thanks for the review @goiri. Merged this into trunk.
[GitHub] [hadoop] dineshchitlangia commented on issue #1542: HDDS-2140. Add robot test for GDPR feature
dineshchitlangia commented on issue #1542: HDDS-2140. Add robot test for GDPR feature
URL: https://github.com/apache/hadoop/pull/1542#issuecomment-538226560

   > Unrelated to this patch (as this patch tests the CLI arguments) but I am wondering how the core GDPR feature can be tested. I mean how can we be sure that the data is _really_ unreadable (grep to the chunk files for a specific strings??). To be honest, I have no idea, but putting this interesting question to here ;-)

   Recap: GDPR talk in Vegas ;)
   - When putting a key in a GDPR-enforced bucket, Ozone creates a symmetric key and the client uses it to encrypt the data it writes to the key.
   - This encryption key is stored in KeyInfo metadata.
   - When reading the key, the encryption key is fetched from KeyInfo metadata and used to decrypt the data.

   After our Vegas conference, we modified the delete path (HDDS-2174):
   - When a user asks Ozone to delete a key, we first delete the encryption key details from KeyInfo metadata, then we move the KeyInfo to the DeletedTable in OM.
   - Since the encryption key is lost, there is no way to read that data (except by restoring a backup/snapshot of the entire system from before deletion, which will also be addressed in version 2).
   - HDDS-2174 included a test to confirm that the key metadata in the DeletedTable does not contain the GDPR encryption key details.

   Thereby, even if you get your hands on the chunks, you will still read encrypted junk :)
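The mechanism described above is often called crypto-shredding: each object gets its own symmetric key, and deleting that key makes the stored ciphertext permanently unreadable. The sketch below is a hedged, standalone illustration of the idea using the JDK's AES support; it is not the actual Ozone code, and the class and method names are invented:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Hypothetical crypto-shredding sketch: encrypt per-object data with a
// per-object AES key. In the real system the key lives in key metadata;
// deleting it "shreds" the data even though the ciphertext chunks remain.
class CryptoShred {
  static SecretKey newKey() throws Exception {
    KeyGenerator gen = KeyGenerator.getInstance("AES");
    gen.init(128);
    return gen.generateKey();   // would be stored in KeyInfo metadata
  }

  static byte[] encrypt(SecretKey key, byte[] plain) throws Exception {
    Cipher c = Cipher.getInstance("AES");
    c.init(Cipher.ENCRYPT_MODE, key);
    return c.doFinal(plain);    // this is what lands in the chunk files
  }

  static byte[] decrypt(SecretKey key, byte[] cipherText) throws Exception {
    Cipher c = Cipher.getInstance("AES");
    c.init(Cipher.DECRYPT_MODE, key);
    return c.doFinal(cipherText);
  }
}
```

Once `newKey()`'s result is discarded, `decrypt` can never be called again for that object, so grepping the chunk files only ever finds ciphertext.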
[jira] [Updated] (HADOOP-16466) Clean up the Assert usage in tests
[ https://issues.apache.org/jira/browse/HADOOP-16466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lisheng Sun updated HADOOP-16466:
---------------------------------

    Attachment: HADOOP-16466.001.patch
        Status: Patch Available  (was: Open)

> Clean up the Assert usage in tests
> ----------------------------------
>
> Key: HADOOP-16466
> URL: https://issues.apache.org/jira/browse/HADOOP-16466
> Project: Hadoop Common
> Issue Type: Improvement
> Components: test
> Reporter: Fengnan Li
> Assignee: Fengnan Li
> Priority: Major
> Attachments: HADOOP-16466.001.patch
>
> This ticket started with https://issues.apache.org/jira/browse/HDFS-14449, and we would like to clean up all of the Assert usage in tests to make the repo cleaner. This is mainly to use static imports for the Assert functions and call them without the explicit *Assert.* prefix.
[GitHub] [hadoop] adoroszlai commented on a change in pull request #1585: HDDS-2230. Invalid entries in ozonesecure-mr config
adoroszlai commented on a change in pull request #1585: HDDS-2230. Invalid entries in ozonesecure-mr config
URL: https://github.com/apache/hadoop/pull/1585#discussion_r331336022

## File path: hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml

@@ -14,7 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-version: "3"
+version: "3.5"

Review comment:
   Docker Compose file [version 3.5](https://docs.docker.com/compose/compose-file/compose-versioning/#version-35) is the first to allow `name` for networks.
[GitHub] [hadoop] adoroszlai commented on a change in pull request #1585: HDDS-2230. Invalid entries in ozonesecure-mr config
adoroszlai commented on a change in pull request #1585: HDDS-2230. Invalid entries in ozonesecure-mr config
URL: https://github.com/apache/hadoop/pull/1585#discussion_r331335642

## File path: hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml

@@ -23,17 +23,23 @@ services:
     args:
       buildno: 1
     hostname: kdc
+    networks:
+      - ozone

Review comment:
   The default network does not work, since its name is `ozonesecure-mr_default`, which triggers `URISyntaxException` due to the `_`. Adding an explicit name avoids the `_default` suffix.
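Putting the two review comments together, the change could look roughly like the sketch below. This is an illustrative fragment, not the actual ozonesecure-mr compose file; the service and image names are assumptions:

```yaml
# Illustrative docker-compose sketch: naming the network explicitly avoids the
# auto-generated "<project>_default" name, whose underscore is rejected when
# the network name becomes part of a host name in a URI.
version: "3.5"          # 3.5 is the first format version allowing `name`
services:
  kdc:
    image: example/kdc  # illustrative image name
    networks:
      - ozone
networks:
  ozone:
    name: ozone         # explicit name, no "_default" suffix
```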
[GitHub] [hadoop] adoroszlai commented on issue #1568: HDDS-2225. SCM fails to start in most unsecure environments due to leftover secure config
adoroszlai commented on issue #1568: HDDS-2225. SCM fails to start in most unsecure environments due to leftover secure config
URL: https://github.com/apache/hadoop/pull/1568#issuecomment-538222650

   @anuengineer Thanks for taking a look at this.

   > So now we have removed the mount and gen config?

   The new solution only removes the offending container (spark), which is not required by the test at all. Volume mount and config generation for the other containers are not changed.

   > I am presuming that +1s were given for the earlier solution, but with force push I am not able to see the older changes.

   Both the earlier solution (ff3671022a267d765d7d631cb5b6e57d46ced12d) and the new one (5caa23a390197d4b2d4dbb738ac850d02378edc0) are visible in the list of commits. The second commit just reverts the first attempt. The force push was needed only to rebase on current trunk. I agree, it makes understanding the conversation harder; I'll try a plain merge next time. The earlier +1s were for the first solution, while @elek's [latest comment](https://github.com/apache/hadoop/pull/1568#issuecomment-537440726) is for the second one.
[GitHub] [hadoop] adoroszlai commented on issue #1580: HDDS-2234. rat.sh fails due to ozone-recon-web/build files
adoroszlai commented on issue #1580: HDDS-2234. rat.sh fails due to ozone-recon-web/build files
URL: https://github.com/apache/hadoop/pull/1580#issuecomment-538219116

   Thanks @anuengineer for reporting this issue and reviewing/committing the fix.
[GitHub] [hadoop] adoroszlai commented on issue #1575: HDDS-2231. test-single.sh cannot copy results
adoroszlai commented on issue #1575: HDDS-2231. test-single.sh cannot copy results URL: https://github.com/apache/hadoop/pull/1575#issuecomment-538218935 Thanks @anuengineer for the review/commit.
[GitHub] [hadoop] goiri commented on a change in pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage.
goiri commented on a change in pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage. URL: https://github.com/apache/hadoop/pull/1573#discussion_r331330915 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfo.java ## @@ -64,6 +67,28 @@ public void testAddStorage() throws Exception { Assert.assertEquals(storage, blockInfo.getStorageInfo(0)); } + @Test + public void testAddProvidedStorage() throws Exception { +BlockInfo blockInfo = new BlockInfoContiguous((short) 3); + +DatanodeStorageInfo storage = mock(DatanodeStorageInfo.class); +when(storage.getStorageType()).thenReturn(StorageType.PROVIDED); +boolean added = blockInfo.addStorage(storage, blockInfo); + +Assert.assertTrue(added); +Assert.assertEquals(storage, blockInfo.getStorageInfo(0)); +Assert.assertTrue(blockInfo.isProvided()); + Review comment: LGTM
[GitHub] [hadoop] hadoop-yetus commented on issue #1564: HDDS-2223. Support ReadWrite lock in LockManager.
hadoop-yetus commented on issue #1564: HDDS-2223. Support ReadWrite lock in LockManager. URL: https://github.com/apache/hadoop/pull/1564#issuecomment-538211164 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 92 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | -1 | mvninstall | 32 | hadoop-hdds in trunk failed. | | -1 | mvninstall | 35 | hadoop-ozone in trunk failed. | | -1 | compile | 20 | hadoop-hdds in trunk failed. | | -1 | compile | 13 | hadoop-ozone in trunk failed. | | +1 | checkstyle | 50 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 920 | branch has no errors when building and testing our client artifacts. | | -1 | javadoc | 21 | hadoop-hdds in trunk failed. | | -1 | javadoc | 17 | hadoop-ozone in trunk failed. | | 0 | spotbugs | 1010 | Used deprecated FindBugs config; considering switching to SpotBugs. | | -1 | findbugs | 31 | hadoop-hdds in trunk failed. | | -1 | findbugs | 17 | hadoop-ozone in trunk failed. | ||| _ Patch Compile Tests _ | | -1 | mvninstall | 34 | hadoop-hdds in the patch failed. | | -1 | mvninstall | 38 | hadoop-ozone in the patch failed. | | -1 | compile | 23 | hadoop-hdds in the patch failed. | | -1 | compile | 16 | hadoop-ozone in the patch failed. | | -1 | javac | 23 | hadoop-hdds in the patch failed. | | -1 | javac | 16 | hadoop-ozone in the patch failed. | | +1 | checkstyle | 55 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 774 | patch has no errors when building and testing our client artifacts. | | -1 | javadoc | 21 | hadoop-hdds in the patch failed. | | -1 | javadoc | 18 | hadoop-ozone in the patch failed. 
| | -1 | findbugs | 32 | hadoop-hdds in the patch failed. | | -1 | findbugs | 17 | hadoop-ozone in the patch failed. | ||| _ Other Tests _ | | -1 | unit | 25 | hadoop-hdds in the patch failed. | | -1 | unit | 22 | hadoop-ozone in the patch failed. | | +1 | asflicense | 30 | The patch does not generate ASF License warnings. | | | | 2480 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1564 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 8a0c59964b6c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 1dde3ef | | Default Java | 1.8.0_222 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/branch-mvninstall-hadoop-hdds.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/branch-mvninstall-hadoop-ozone.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/branch-compile-hadoop-hdds.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/branch-compile-hadoop-ozone.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/branch-javadoc-hadoop-hdds.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/branch-javadoc-hadoop-ozone.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/branch-findbugs-hadoop-hdds.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/branch-findbugs-hadoop-ozone.txt | | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/patch-mvninstall-hadoop-hdds.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/patch-mvninstall-hadoop-ozone.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/patch-compile-hadoop-hdds.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/patch-compile-hadoop-ozone.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/patch-compile-hadoop-hdds.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/3/artifact/out/patch-compile-hadoop-ozone.txt | | javadoc |
[GitHub] [hadoop] hadoop-yetus commented on issue #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…
hadoop-yetus commented on issue #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests… URL: https://github.com/apache/hadoop/pull/1528#issuecomment-538210876 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 94 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 12 | Maven dependency ordering for branch | | -1 | mvninstall | 28 | hadoop-hdds in trunk failed. | | -1 | mvninstall | 34 | hadoop-ozone in trunk failed. | | -1 | compile | 21 | hadoop-hdds in trunk failed. | | -1 | compile | 15 | hadoop-ozone in trunk failed. | | +1 | checkstyle | 54 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 943 | branch has no errors when building and testing our client artifacts. | | -1 | javadoc | 18 | hadoop-hdds in trunk failed. | | -1 | javadoc | 16 | hadoop-ozone in trunk failed. | | 0 | spotbugs | 1028 | Used deprecated FindBugs config; considering switching to SpotBugs. | | -1 | findbugs | 29 | hadoop-hdds in trunk failed. | | -1 | findbugs | 17 | hadoop-ozone in trunk failed. | ||| _ Patch Compile Tests _ | | 0 | mvndep | 16 | Maven dependency ordering for patch | | -1 | mvninstall | 33 | hadoop-hdds in the patch failed. | | -1 | mvninstall | 38 | hadoop-ozone in the patch failed. | | -1 | compile | 22 | hadoop-hdds in the patch failed. | | -1 | compile | 15 | hadoop-ozone in the patch failed. | | -1 | javac | 22 | hadoop-hdds in the patch failed. | | -1 | javac | 15 | hadoop-ozone in the patch failed. 
| | +1 | checkstyle | 57 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 791 | patch has no errors when building and testing our client artifacts. | | -1 | javadoc | 18 | hadoop-hdds in the patch failed. | | -1 | javadoc | 16 | hadoop-ozone in the patch failed. | | -1 | findbugs | 30 | hadoop-hdds in the patch failed. | | -1 | findbugs | 17 | hadoop-ozone in the patch failed. | ||| _ Other Tests _ | | -1 | unit | 26 | hadoop-hdds in the patch failed. | | -1 | unit | 25 | hadoop-ozone in the patch failed. | | +1 | asflicense | 32 | The patch does not generate ASF License warnings. | | | | 2537 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/7/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1528 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux d5b4cdc3b883 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 1dde3ef | | Default Java | 1.8.0_222 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/7/artifact/out/branch-mvninstall-hadoop-hdds.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/7/artifact/out/branch-mvninstall-hadoop-ozone.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/7/artifact/out/branch-compile-hadoop-hdds.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/7/artifact/out/branch-compile-hadoop-ozone.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/7/artifact/out/branch-javadoc-hadoop-hdds.txt | | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/7/artifact/out/branch-javadoc-hadoop-ozone.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/7/artifact/out/branch-findbugs-hadoop-hdds.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/7/artifact/out/branch-findbugs-hadoop-ozone.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/7/artifact/out/patch-mvninstall-hadoop-hdds.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/7/artifact/out/patch-mvninstall-hadoop-ozone.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/7/artifact/out/patch-compile-hadoop-hdds.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/7/artifact/out/patch-compile-hadoop-ozone.txt
[GitHub] [hadoop] nandakumar131 commented on issue #1536: HDDS-2164 : om.db.checkpoints is getting filling up fast.
nandakumar131 commented on issue #1536: HDDS-2164 : om.db.checkpoints is getting filling up fast. URL: https://github.com/apache/hadoop/pull/1536#issuecomment-538210556 @avijayanhwx The findbugs violations are related to this change. Can you take a look?
[GitHub] [hadoop] nandakumar131 commented on issue #1536: HDDS-2164 : om.db.checkpoints is getting filling up fast.
nandakumar131 commented on issue #1536: HDDS-2164 : om.db.checkpoints is getting filling up fast. URL: https://github.com/apache/hadoop/pull/1536#issuecomment-538210066 @anuengineer Rat failures are real, but they are not related to this PR. They were introduced in HDDS-2193 and fixed in HDDS-1146.
[GitHub] [hadoop] dineshchitlangia commented on a change in pull request #1586: HDDS-2240. Command line tool for OM HA.
dineshchitlangia commented on a change in pull request #1586: HDDS-2240. Command line tool for OM HA. URL: https://github.com/apache/hadoop/pull/1586#discussion_r331326857 ## File path: hadoop-ozone/common/src/main/bin/ozone ## @@ -55,6 +55,7 @@ function hadoop_usage hadoop_add_subcommand "version" client "print the version" hadoop_add_subcommand "dtutil" client "operations related to delegation tokens" hadoop_add_subcommand "upgrade" client "HDFS to Ozone in-place upgrade tool" + hadoop_add_subcommand "omha" client "OM HA tool" Review comment: NIT: Have we named this command `omha` because we have plans in the future to also add a similar command for SCM? If not, how about naming the command `haadmin`, `omadmin`, or `admin`? Not a deal breaker, but I was just thinking we could keep it similar to hdfs.
[GitHub] [hadoop] nandakumar131 merged pull request #1564: HDDS-2223. Support ReadWrite lock in LockManager.
nandakumar131 merged pull request #1564: HDDS-2223. Support ReadWrite lock in LockManager. URL: https://github.com/apache/hadoop/pull/1564
[jira] [Updated] (HADOOP-16598) Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate protobuf classes" to all active branches
[ https://issues.apache.org/jira/browse/HADOOP-16598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack updated HADOOP-16598: --- Attachment: HADOOP-16598-branch-2.9-v1.patch > Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate > protobuf classes" to all active branches > --- > > Key: HADOOP-16598 > URL: https://issues.apache.org/jira/browse/HADOOP-16598 > Project: Hadoop Common > Issue Type: Sub-task > Components: common >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Attachments: HADOOP-16598-branch-2-v1.patch, > HADOOP-16598-branch-2-v1.patch, HADOOP-16598-branch-2.9-v1.patch, > HADOOP-16598-branch-2.9-v1.patch, HADOOP-16598-branch-2.9.patch, > HADOOP-16598-branch-2.patch, HADOOP-16598-branch-3.1.patch, > HADOOP-16598-branch-3.2.patch > >
[GitHub] [hadoop] dineshchitlangia commented on a change in pull request #1586: HDDS-2240. Command line tool for OM HA.
dineshchitlangia commented on a change in pull request #1586: HDDS-2240. Command line tool for OM HA. URL: https://github.com/apache/hadoop/pull/1586#discussion_r331325033 ## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManagerHA.java ## @@ -0,0 +1,76 @@ +package org.apache.hadoop.ozone.om; + +import org.apache.hadoop.hdds.cli.GenericCli; +import org.apache.hadoop.hdds.cli.HddsVersionProvider; +import org.apache.hadoop.hdds.conf.OzoneConfiguration; +import org.apache.hadoop.hdds.tracing.TracingUtil; +import org.apache.hadoop.ozone.om.protocol.OzoneManagerProtocol; +import org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB; +import org.apache.hadoop.security.UserGroupInformation; +import org.apache.hadoop.security.authentication.client.AuthenticationException; +import org.apache.ratis.protocol.ClientId; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import picocli.CommandLine; +import picocli.CommandLine.Command; + +import java.io.IOException; +import java.util.Map; + + +/** + * A command line tool for making calls in OM HA protocols. + */ +@Command(name = "ozone omha", +hidden = true, description = "Command line tool for OM HA.", +versionProvider = HddsVersionProvider.class, +mixinStandardHelpOptions = true) +public class OzoneManagerHA extends GenericCli { + private OzoneConfiguration conf; + private static final Logger LOG = + LoggerFactory.getLogger(OzoneManagerHA.class); + + public static void main(String[] args) throws Exception { +TracingUtil.initTracing("OzoneManager"); +new OzoneManagerHA().run(args); + } + + private OzoneManagerHA() { +super(); + } + + /** + * This function implements a sub-command to allow the OM to be + * initialized from the command line. 
+ */ + @CommandLine.Command(name = "--getservicestate", + customSynopsis = "ozone om [global options] --getservicestate " + + "--serviceId=", + hidden = false, + description = "Get the Ratis server state of all OMs belonging to given" + + " OM Service ID", + mixinStandardHelpOptions = true, + versionProvider = HddsVersionProvider.class) + public void getRoleInfoOm(@CommandLine.Option(names = { "--serviceId" }, + description = "The OM Service ID of the OMs to get the server states for", + paramLabel = "id") String serviceId) + throws Exception { +conf = createOzoneConfiguration(); +Map serviceStates = getServiceStates(conf, serviceId); +for (String nodeId : serviceStates.keySet()) { + System.out.println(nodeId + " : " + serviceStates.get(nodeId)); +} Review comment: It would be better to use entrySet() instead of keySet() for performance, as it would avoid the extra lookup on L61. Although this method will not be doing an extensive amount of work, I believe it is still worth making this change.
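The entrySet() suggestion above can be sketched as a small standalone example. The class name, helper method, and map contents below are hypothetical, introduced only to illustrate the pattern, not taken from the PR:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EntrySetExample {
    // Iterating entrySet() reads each key and value in a single pass,
    // avoiding the extra hash lookup that map.get(key) performs per key
    // when iterating keySet() instead.
    static String formatStates(Map<String, String> serviceStates) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : serviceStates.entrySet()) {
            sb.append(e.getKey()).append(" : ").append(e.getValue()).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // LinkedHashMap keeps insertion order, so the output is deterministic.
        Map<String, String> states = new LinkedHashMap<>();
        states.put("om1", "LEADER");
        states.put("om2", "FOLLOWER");
        System.out.print(formatStates(states));
    }
}
```

The output is identical either way; the change is a minor efficiency and readability improvement, which is why it is flagged as a nit rather than a blocker.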
[GitHub] [hadoop] dineshchitlangia commented on a change in pull request #1586: HDDS-2240. Command line tool for OM HA.
dineshchitlangia commented on a change in pull request #1586: HDDS-2240. Command line tool for OM HA. URL: https://github.com/apache/hadoop/pull/1586#discussion_r331325219 ## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManagerHA.java ## @@ -0,0 +1,76 @@ +package org.apache.hadoop.ozone.om; + +import org.apache.hadoop.hdds.cli.GenericCli; +import org.apache.hadoop.hdds.cli.HddsVersionProvider; +import org.apache.hadoop.hdds.conf.OzoneConfiguration; +import org.apache.hadoop.hdds.tracing.TracingUtil; +import org.apache.hadoop.ozone.om.protocol.OzoneManagerProtocol; +import org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB; +import org.apache.hadoop.security.UserGroupInformation; +import org.apache.hadoop.security.authentication.client.AuthenticationException; +import org.apache.ratis.protocol.ClientId; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import picocli.CommandLine; +import picocli.CommandLine.Command; + +import java.io.IOException; +import java.util.Map; + + +/** + * A command line tool for making calls in OM HA protocols. + */ +@Command(name = "ozone omha", +hidden = true, description = "Command line tool for OM HA.", +versionProvider = HddsVersionProvider.class, +mixinStandardHelpOptions = true) +public class OzoneManagerHA extends GenericCli { Review comment: Declare this class as final.
[GitHub] [hadoop] nandakumar131 merged pull request #1540: HDDS-2198. SCM should not consider containers in CLOSING state to come out of safemode.
nandakumar131 merged pull request #1540: HDDS-2198. SCM should not consider containers in CLOSING state to come out of safemode. URL: https://github.com/apache/hadoop/pull/1540
[GitHub] [hadoop] nandakumar131 commented on issue #1540: HDDS-2198. SCM should not consider containers in CLOSING state to come out of safemode.
nandakumar131 commented on issue #1540: HDDS-2198. SCM should not consider containers in CLOSING state to come out of safemode. URL: https://github.com/apache/hadoop/pull/1540#issuecomment-538206445 Failures are not related to this change. I will merge this shortly. Thanks @bharatviswa504 for the review.
[jira] [Commented] (HADOOP-16598) Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate protobuf classes" to all active branches
[ https://issues.apache.org/jira/browse/HADOOP-16598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944178#comment-16944178 ] Hadoop QA commented on HADOOP-16598: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 10s{color} | {color:red} HADOOP-16598 does not apply to branch-2. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-16598 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12982182/HADOOP-16598-branch-2-v1.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16570/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate > protobuf classes" to all active branches > --- > > Key: HADOOP-16598 > URL: https://issues.apache.org/jira/browse/HADOOP-16598 > Project: Hadoop Common > Issue Type: Sub-task > Components: common >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Attachments: HADOOP-16598-branch-2-v1.patch, > HADOOP-16598-branch-2-v1.patch, HADOOP-16598-branch-2.9-v1.patch, > HADOOP-16598-branch-2.9.patch, HADOOP-16598-branch-2.patch, > HADOOP-16598-branch-3.1.patch, HADOOP-16598-branch-3.2.patch > >
[GitHub] [hadoop] nandakumar131 commented on a change in pull request #1564: HDDS-2223. Support ReadWrite lock in LockManager.
nandakumar131 commented on a change in pull request #1564: HDDS-2223. Support ReadWrite lock in LockManager.
URL: https://github.com/apache/hadoop/pull/1564#discussion_r331322814

## File path: hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/LockManager.java

```diff
@@ -25,42 +25,146 @@
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
+import java.util.function.Consumer;

 /**
  * Manages the locks on a given resource. A new lock is created for each
  * and every unique resource. Uniqueness of resource depends on the
  * {@code equals} implementation of it.
  */
-public class LockManager<T> {
+public class LockManager<R> {

   private static final Logger LOG = LoggerFactory.getLogger(LockManager.class);

-  private final Map<T, ActiveLock> activeLocks = new ConcurrentHashMap<>();
+  private final Map<R, ActiveLock> activeLocks = new ConcurrentHashMap<>();
   private final GenericObjectPool<ActiveLock> lockPool =
       new GenericObjectPool<>(new PooledLockFactory());

   /**
-   * Creates new LockManager instance.
+   * Creates new LockManager instance with the given Configuration.
    *
    * @param conf Configuration object
    */
-  public LockManager(Configuration conf) {
-    int maxPoolSize = conf.getInt(HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY,
+  public LockManager(final Configuration conf) {
+    final int maxPoolSize = conf.getInt(HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY,
         HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY_DEFAULT);
     lockPool.setMaxTotal(maxPoolSize);
   }

-
   /**
    * Acquires the lock on given resource.
    *
    * If the lock is not available then the current thread becomes
    * disabled for thread scheduling purposes and lies dormant until the
    * lock has been acquired.
+   *
+   * @param resource on which the lock has to be acquired
+   * @deprecated Use {@link LockManager#writeLock} instead
+   */
+  public void lock(final R resource) {
+    writeLock(resource);
+  }
+
+  /**
+   * Releases the lock on given resource.
+   *
+   * @param resource for which the lock has to be released
+   * @deprecated Use {@link LockManager#writeUnlock} instead
+   */
+  public void unlock(final R resource) {
+    writeUnlock(resource);
+  }
+
+  /**
+   * Acquires the read lock on given resource.
+   *
+   * Acquires the read lock on resource if the write lock is not held by
+   * another thread and returns immediately.
+   *
+   * If the write lock on resource is held by another thread then
+   * the current thread becomes disabled for thread scheduling
+   * purposes and lies dormant until the read lock has been acquired.
+   *
+   * @param resource on which the read lock has to be acquired
+   */
+  public void readLock(final R resource) {
+    acquire(resource, ActiveLock::readLock);
+  }
+
+  /**
+   * Releases the read lock on given resource.
+   *
+   * @param resource for which the read lock has to be released
+   * @throws IllegalMonitorStateException if the current thread does not
+   *                                      hold this lock
+   */
+  public void readUnlock(final R resource) throws IllegalMonitorStateException {
+    release(resource, ActiveLock::readUnlock);
+  }
+
+  /**
+   * Acquires the write lock on given resource.
+   *
+   * Acquires the write lock on resource if neither the read nor write lock
+   * are held by another thread and returns immediately.
+   *
+   * If the current thread already holds the write lock then the
+   * hold count is incremented by one and the method returns
+   * immediately.
+   *
+   * If the lock is held by another thread then the current
+   * thread becomes disabled for thread scheduling purposes and
+   * lies dormant until the write lock has been acquired.
+   *
    * @param resource on which the lock has to be acquired
    */
-  public void lock(T resource) {
-    activeLocks.compute(resource, (k, v) -> {
-      ActiveLock lock;
+  public void writeLock(final R resource) {
+    acquire(resource, ActiveLock::writeLock);
+  }
+
+  /**
+   * Releases the write lock on given resource.
+   *
+   * @param resource for which the lock has to be released
+   * @throws IllegalMonitorStateException if the current thread does not
+   *                                      hold this lock
+   */
+  public void writeUnlock(final R resource) throws IllegalMonitorStateException {
+    release(resource, ActiveLock::writeUnlock);
+  }
+
+  /**
+   * Acquires the lock on given resource using the provided lock function.
+   *
+   * @param resource on which the lock has to be acquired
+   * @param lockFn function to acquire the lock
+   */
+  private void acquire(final R resource, final Consumer<ActiveLock> lockFn) {
+    lockFn.accept(getLockForLocking(resource));
```

Review comment: Added Additional comment for clarity.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
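The API under review, one shared lock object per unique resource key with separate read and write acquire/release paths, can be sketched with plain JDK types. The sketch below is illustrative only and is not the Ozone LockManager: the class name SimpleLockManager is invented, and unlike the real implementation (which pools and reference-counts ActiveLock instances) it never evicts map entries, so a long-lived manager would keep one lock per distinct key.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Toy per-resource read/write lock manager. Threads that pass equal
 * resource keys share the same underlying ReentrantReadWriteLock.
 */
public class SimpleLockManager<R> {

  private final Map<R, ReentrantReadWriteLock> activeLocks =
      new ConcurrentHashMap<>();

  private ReentrantReadWriteLock lockFor(R resource) {
    // computeIfAbsent is atomic, so two threads asking for the same
    // resource always observe the same lock instance.
    return activeLocks.computeIfAbsent(
        resource, r -> new ReentrantReadWriteLock());
  }

  public void readLock(R resource)    { lockFor(resource).readLock().lock(); }
  public void readUnlock(R resource)  { lockFor(resource).readLock().unlock(); }
  public void writeLock(R resource)   { lockFor(resource).writeLock().lock(); }
  public void writeUnlock(R resource) { lockFor(resource).writeLock().unlock(); }

  public boolean isWriteLockedByCurrentThread(R resource) {
    return lockFor(resource).isWriteLockedByCurrentThread();
  }

  public static void main(String[] args) {
    SimpleLockManager<String> mgr = new SimpleLockManager<>();
    mgr.readLock("volume1");
    mgr.readLock("volume1");   // multiple readers may hold the lock together
    mgr.readUnlock("volume1");
    mgr.readUnlock("volume1");
    mgr.writeLock("volume1");  // the writer is exclusive
    mgr.writeUnlock("volume1");
  }
}
```

ConcurrentHashMap.computeIfAbsent gives the same atomicity that the patch gets from compute plus a pooled lock factory; the pooled design additionally caps memory at the pool size, while this sketch trades that for simplicity.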
[GitHub] [hadoop] nandakumar131 commented on issue #1564: HDDS-2223. Support ReadWrite lock in LockManager.
nandakumar131 commented on issue #1564: HDDS-2223. Support ReadWrite lock in LockManager. URL: https://github.com/apache/hadoop/pull/1564#issuecomment-538203299 The test failures are not related. I will merge the PR shortly. Thanks @arp7 @bharatviswa504 for the reviews.
[jira] [Commented] (HADOOP-16598) Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate protobuf classes" to all active branches
[ https://issues.apache.org/jira/browse/HADOOP-16598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944166#comment-16944166 ] Michael Stack commented on HADOOP-16598: Retry. Was going to commit this tomorrow unless objection. > Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate > protobuf classes" to all active branches > --- > > Key: HADOOP-16598 > URL: https://issues.apache.org/jira/browse/HADOOP-16598 > Project: Hadoop Common > Issue Type: Sub-task > Components: common >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Attachments: HADOOP-16598-branch-2-v1.patch, > HADOOP-16598-branch-2-v1.patch, HADOOP-16598-branch-2.9-v1.patch, > HADOOP-16598-branch-2.9.patch, HADOOP-16598-branch-2.patch, > HADOOP-16598-branch-3.1.patch, HADOOP-16598-branch-3.2.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16598) Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate protobuf classes" to all active branches
[ https://issues.apache.org/jira/browse/HADOOP-16598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack updated HADOOP-16598: --- Attachment: HADOOP-16598-branch-2-v1.patch > Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate > protobuf classes" to all active branches > --- > > Key: HADOOP-16598 > URL: https://issues.apache.org/jira/browse/HADOOP-16598 > Project: Hadoop Common > Issue Type: Sub-task > Components: common >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Attachments: HADOOP-16598-branch-2-v1.patch, > HADOOP-16598-branch-2-v1.patch, HADOOP-16598-branch-2.9-v1.patch, > HADOOP-16598-branch-2.9.patch, HADOOP-16598-branch-2.patch, > HADOOP-16598-branch-3.1.patch, HADOOP-16598-branch-3.2.patch
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1585: HDDS-2230. Invalid entries in ozonesecure-mr config
xiaoyuyao commented on a change in pull request #1585: HDDS-2230. Invalid entries in ozonesecure-mr config URL: https://github.com/apache/hadoop/pull/1585#discussion_r331321334 ## File path: hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml ## @@ -23,17 +23,23 @@ services: args: buildno: 1 hostname: kdc +networks: + - ozone Review comment: Does default network work for this case? why do we explicitly change the network name?
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1585: HDDS-2230. Invalid entries in ozonesecure-mr config
xiaoyuyao commented on a change in pull request #1585: HDDS-2230. Invalid entries in ozonesecure-mr config URL: https://github.com/apache/hadoop/pull/1585#discussion_r331321174 ## File path: hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml ## @@ -14,7 +14,7 @@ # See the License for the specific language governing permissions and # limitations under the License. -version: "3" +version: "3.5" Review comment: Do we track the docker-compose versions for this change?
[GitHub] [hadoop] hadoop-yetus commented on issue #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…
hadoop-yetus commented on issue #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests… URL: https://github.com/apache/hadoop/pull/1528#issuecomment-538198616 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 85 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 28 | Maven dependency ordering for branch | | -1 | mvninstall | 29 | hadoop-hdds in trunk failed. | | -1 | mvninstall | 33 | hadoop-ozone in trunk failed. | | -1 | compile | 19 | hadoop-hdds in trunk failed. | | -1 | compile | 13 | hadoop-ozone in trunk failed. | | +1 | checkstyle | 54 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 937 | branch has no errors when building and testing our client artifacts. | | -1 | javadoc | 19 | hadoop-hdds in trunk failed. | | -1 | javadoc | 16 | hadoop-ozone in trunk failed. | | 0 | spotbugs | 1024 | Used deprecated FindBugs config; considering switching to SpotBugs. | | -1 | findbugs | 30 | hadoop-hdds in trunk failed. | | -1 | findbugs | 17 | hadoop-ozone in trunk failed. | ||| _ Patch Compile Tests _ | | 0 | mvndep | 15 | Maven dependency ordering for patch | | -1 | mvninstall | 32 | hadoop-hdds in the patch failed. | | -1 | mvninstall | 35 | hadoop-ozone in the patch failed. | | -1 | compile | 21 | hadoop-hdds in the patch failed. | | -1 | compile | 16 | hadoop-ozone in the patch failed. | | -1 | javac | 21 | hadoop-hdds in the patch failed. | | -1 | javac | 16 | hadoop-ozone in the patch failed. 
| | -0 | checkstyle | 28 | hadoop-ozone: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 788 | patch has no errors when building and testing our client artifacts. | | -1 | javadoc | 19 | hadoop-hdds in the patch failed. | | -1 | javadoc | 17 | hadoop-ozone in the patch failed. | | -1 | findbugs | 29 | hadoop-hdds in the patch failed. | | -1 | findbugs | 17 | hadoop-ozone in the patch failed. | ||| _ Other Tests _ | | -1 | unit | 24 | hadoop-hdds in the patch failed. | | -1 | unit | 23 | hadoop-ozone in the patch failed. | | +1 | asflicense | 29 | The patch does not generate ASF License warnings. | | | | 2512 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1528 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 32b6298e6c9c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 1dde3ef | | Default Java | 1.8.0_222 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/6/artifact/out/branch-mvninstall-hadoop-hdds.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/6/artifact/out/branch-mvninstall-hadoop-ozone.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/6/artifact/out/branch-compile-hadoop-hdds.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/6/artifact/out/branch-compile-hadoop-ozone.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/6/artifact/out/branch-javadoc-hadoop-hdds.txt | | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/6/artifact/out/branch-javadoc-hadoop-ozone.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/6/artifact/out/branch-findbugs-hadoop-hdds.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/6/artifact/out/branch-findbugs-hadoop-ozone.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/6/artifact/out/patch-mvninstall-hadoop-hdds.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/6/artifact/out/patch-mvninstall-hadoop-ozone.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/6/artifact/out/patch-compile-hadoop-hdds.txt | | compile |
[jira] [Updated] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop
[ https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-16127: - Fix Version/s: 3.2.2 3.1.4 > In ipc.Client, put a new connection could happen after stop > --- > > Key: HADOOP-16127 > URL: https://issues.apache.org/jira/browse/HADOOP-16127 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Reporter: Tsz-wo Sze >Assignee: Tsz-wo Sze >Priority: Major > Fix For: 3.3.0, 3.1.4, 3.2.2 > > Attachments: c16127_20190219.patch, c16127_20190220.patch, > c16127_20190225.patch > > > In getConnection(..), running can be initially true but becomes false before > putIfAbsent.
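The race described in HADOOP-16127 is a classic check-then-act bug: getConnection(..) observes running == true, the client is then stopped and its connection table cleared, and only afterwards does putIfAbsent insert a connection into the already-stopped client. The toy model below illustrates the racy shape and one way to re-validate inside the atomic map operation; the class and method names are hypothetical and this is not the actual ipc.Client code or the actual fix in the attached patches.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

/** Toy model of the check-then-act race; not the real ipc.Client. */
public class StoppableConnectionCache {

  private final AtomicBoolean running = new AtomicBoolean(true);
  private final Map<String, Object> connections = new ConcurrentHashMap<>();

  /** Racy: the running check and the insert are two separate steps. */
  public Object getConnectionRacy(String id) {
    if (!running.get()) {
      throw new IllegalStateException("client stopped");
    }
    // stop() may run right here: it flips running and clears the map,
    // yet the line below still inserts into the "stopped" cache.
    return connections.computeIfAbsent(id, k -> new Object());
  }

  /** Safer: re-check the flag inside the atomic map operation. */
  public Object getConnection(String id) {
    Object conn = connections.compute(id, (k, v) -> {
      if (!running.get()) {
        return null;               // refuse to create after stop
      }
      return v != null ? v : new Object();
    });
    if (conn == null) {
      throw new IllegalStateException("client stopped");
    }
    return conn;
  }

  public void stop() {
    running.set(false);
    connections.clear();
  }

  public int size() {
    return connections.size();
  }

  public static void main(String[] args) {
    StoppableConnectionCache cache = new StoppableConnectionCache();
    cache.getConnection("nn1");
    cache.stop();
    try {
      cache.getConnection("nn2");
      throw new AssertionError("should have refused after stop");
    } catch (IllegalStateException expected) {
      // no stray connection was added to the stopped cache
    }
  }
}
```

The key point is that ConcurrentHashMap.compute runs its remapping function atomically for that key, so the liveness check and the insert can no longer be separated by a concurrent stop() for that entry.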
[jira] [Commented] (HADOOP-16624) Upgrade hugo to the latest version in Dockerfile
[ https://issues.apache.org/jira/browse/HADOOP-16624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944157#comment-16944157 ] Hudson commented on HADOOP-16624: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17460 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17460/]) HADOOP-16624. Upgrade hugo to the latest version in Dockerfile (aajisaka: rev 1dde3efb91e8b4cb7f990522e840ab935835b586) * (edit) dev-support/docker/Dockerfile > Upgrade hugo to the latest version in Dockerfile > > > Key: HADOOP-16624 > URL: https://issues.apache.org/jira/browse/HADOOP-16624 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Akira Ajisaka >Assignee: kevin su >Priority: Minor > Fix For: 3.3.0 > > Attachments: HADOOP-16624.001.patch > > > In Dockerfile, the hugo version is 0.30.2. Now the latest hugo version is > 0.58.3.
[jira] [Updated] (HADOOP-16624) Upgrade hugo to the latest version in Dockerfile
[ https://issues.apache.org/jira/browse/HADOOP-16624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-16624: --- Fix Version/s: 3.3.0 Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) Committed this to trunk. Thank you [~pingsutw] and [~ayushtkn]! > Upgrade hugo to the latest version in Dockerfile > > > Key: HADOOP-16624 > URL: https://issues.apache.org/jira/browse/HADOOP-16624 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Akira Ajisaka >Assignee: kevin su >Priority: Minor > Fix For: 3.3.0 > > Attachments: HADOOP-16624.001.patch > > > In Dockerfile, the hugo version is 0.30.2. Now the latest hugo version is > 0.58.3.
[jira] [Comment Edited] (HADOOP-16624) Upgrade hugo to the latest version in Dockerfile
[ https://issues.apache.org/jira/browse/HADOOP-16624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944152#comment-16944152 ] Akira Ajisaka edited comment on HADOOP-16624 at 10/4/19 1:07 AM: - +1, committed this to trunk. Thank you [~pingsutw] and [~ayushtkn]! was (Author: ajisakaa): Committed this to trunk. Thank you [~pingsutw] and [~ayushtkn]! > Upgrade hugo to the latest version in Dockerfile > > > Key: HADOOP-16624 > URL: https://issues.apache.org/jira/browse/HADOOP-16624 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Akira Ajisaka >Assignee: kevin su >Priority: Minor > Fix For: 3.3.0 > > Attachments: HADOOP-16624.001.patch > > > In Dockerfile, the hugo version is 0.30.2. Now the latest hugo version is > 0.58.3.
[jira] [Updated] (HADOOP-16624) Upgrade hugo to the latest version in Dockerfile
[ https://issues.apache.org/jira/browse/HADOOP-16624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-16624: --- Issue Type: Improvement (was: Bug) > Upgrade hugo to the latest version in Dockerfile > > > Key: HADOOP-16624 > URL: https://issues.apache.org/jira/browse/HADOOP-16624 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Akira Ajisaka >Assignee: kevin su >Priority: Minor > Attachments: HADOOP-16624.001.patch > > > In Dockerfile, the hugo version is 0.30.2. Now the latest hugo version is > 0.58.3.
[GitHub] [hadoop] sidseth commented on issue #1587: HADOOP-16626. S3A ITestRestrictedReadAccess fails
sidseth commented on issue #1587: HADOOP-16626. S3A ITestRestrictedReadAccess fails URL: https://github.com/apache/hadoop/pull/1587#issuecomment-538182844 Looks good to me mostly, without fully understanding the problem which caused this (the resource loading bit unsetting configs, but what was being unset). 1. Cannot comment on the changes in S3AContract from a design POV - don't really know exactly what this intends to. If you think this fits with the design requirements - great. 2. Tests still fail with -Ds3guard (They pass with -Ds3guard -Dauth -Ddynamo) ``` [ERROR] testNoReadAccess[auth](org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess) Time elapsed: 1.363 s <<< ERROR! java.nio.file.AccessDeniedException: test/testNoReadAccess-auth/noReadDir/emptyDir/: getFileStatus on test/testNoReadAccess-auth/noReadDir/emptyDir/: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Req uest ID: A2C756FA3DFE842A; S3 Extended Request ID: 0uDlBPTbAhnsw672prqrbd2qpyjIK7zKd6nZ0OGA1A8GX0xSs2DGemc1P4j737YGITJChOUi7HI=), S3 Extended Request ID: 0uDlBPTbAhnsw672prqrbd2qpyjIK7zKd6nZ0OGA1A8GX0xSs2DGemc1P4j737YGITJChOUi7HI=:403 Forbidden at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:244) at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2777) at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2705) at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2589) at org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2377) at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2356) at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110) at org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:2356) at org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.lambda$checkBasicFileOperations$3(ITestRestrictedReadAccess.java:403) at 
org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.accessDeniedIf(ITestRestrictedReadAccess.java:689) at org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.checkBasicFileOperations(ITestRestrictedReadAccess.java:402) at org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.testNoReadAccess(ITestRestrictedReadAccess.java:302) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: A2C756FA3DFE842A; S3 Extended Request ID: 0uDlBPTbAhnsw672prqrbd2qpyjIK7zKd6nZ0OGA1A8GX0xSs2DGemc1P4j737YGITJChOUi7HI=), S3 Extended Request ID: 0uDlBPTbAhnsw672prqrbd2qpyjIK7zKd6nZ0OGA1A8GX0xSs2DGemc1P4j737YGITJChOUi7HI= at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712) at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686) at
[jira] [Updated] (HADOOP-12282) Connection thread's name should be updated after address changing is detected
[ https://issues.apache.org/jira/browse/HADOOP-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-12282: --- Fix Version/s: 3.2.2 3.1.4

> Connection thread's name should be updated after address changing is detected
> -----
>
> Key: HADOOP-12282
> URL: https://issues.apache.org/jira/browse/HADOOP-12282
> Project: Hadoop Common
> Issue Type: Bug
> Components: ipc
> Reporter: zhouyingchao
> Assignee: Lisheng Sun
> Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HADOOP-12282-001.patch, HADOOP-12282.002.patch
>
> In a hadoop hdfs cluster, I changed the standby Namenode's ip address (the
> hostname is not changed and the routing tables are updated). After the
> change, the cluster is running as normal.
> However, I found that the debug message of datanode's IPC still prints the
> original ip address. By looking into the implementation, it turns out that
> the original address is used in the thread's name. I think the thread's name
> should be changed if an address change is detected, because the server
> address is one of the constituent elements of the thread's name.
> {code:java}
> Connection(ConnectionId remoteId, int serviceClass,
>     Consumer<Connection> removeMethod) {
>   ..
>   UserGroupInformation ticket = remoteId.getTicket();
>   // try SASL if security is enabled or if the ugi contains tokens.
>   // this causes a SIMPLE client with tokens to attempt SASL
>   boolean trySasl = UserGroupInformation.isSecurityEnabled() ||
>       (ticket != null && !ticket.getTokens().isEmpty());
>   this.authProtocol = trySasl ? AuthProtocol.SASL : AuthProtocol.NONE;
>   this.setName("IPC Client (" + socketFactory.hashCode() + ") connection to " +
>       server.toString() +
>       " from " + ((ticket == null) ? "an unknown user" : ticket.getUserName()));
>   this.setDaemon(true);
> }
> {code}
[jira] [Commented] (HADOOP-16626) S3A ITestRestrictedReadAccess fails
[ https://issues.apache.org/jira/browse/HADOOP-16626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944139#comment-16944139 ] Siddharth Seth commented on HADOOP-16626: - bq. When you call Configuration.addResource() it reloads all configs, so all settings you've previously cleared get set again. Interesting. Any properties which have explicitly been set using config.set(...) are retained after an addResource() call. However, properties which have been unset explicitly via conf.unset() are lost of after an addResource(). This is probably a bug in 'Configuration'. For my understanding, this specific call in createConfiguration() {code} removeBucketOverrides(bucketName, conf, S3_METADATA_STORE_IMPL, METADATASTORE_AUTHORITATIVE); {code} All the unsets it does are lost, and somehow in your config files you have bucket level overrides set up, which are lost as a result? > S3A ITestRestrictedReadAccess fails > --- > > Key: HADOOP-16626 > URL: https://issues.apache.org/jira/browse/HADOOP-16626 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Siddharth Seth >Assignee: Steve Loughran >Priority: Major > > Just tried running the S3A test suite. Consistently seeing the following. > Command used > {code} > mvn -T 1C verify -Dparallel-tests -DtestsThreadCount=12 -Ds3guard -Dauth > -Ddynamo -Dtest=moo -Dit.test=ITestRestrictedReadAccess > {code} > cc [~ste...@apache.org] > {code} > --- > Test set: org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess > --- > Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 5.335 s <<< > FAILURE! - in org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess > testNoReadAccess[raw](org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess) > Time elapsed: 2.841 s <<< ERROR! 
> java.nio.file.AccessDeniedException: > test/testNoReadAccess-raw/noReadDir/emptyDir/: getFileStatus on > test/testNoReadAccess-raw/noReadDir/emptyDir/: > com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon > S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: > FE8B4D6F25648BCD; S3 Extended Request ID: > hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=), > S3 Extended Request ID: > hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=:403 > Forbidden > at > org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:244) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2777) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2705) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2589) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2377) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2356) > at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:2356) > at > org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.checkBasicFileOperations(ITestRestrictedReadAccess.java:360) > at > org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.testNoReadAccess(ITestRestrictedReadAccess.java:282) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at
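The Configuration behaviour Siddharth describes, where set() survives addResource() but unset() does not, falls out naturally if each reload rebuilds the property map from the registered resources plus an overlay of explicit set() calls, with nothing recording deletions. The following self-contained toy model (not the real org.apache.hadoop.conf.Configuration) reproduces the symptom; the property names are illustrative.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Toy model of the described Configuration behaviour: addResource()
 * triggers a full reload, explicit set() calls are replayed from an
 * overlay, but unset() is not remembered, so unset keys reappear.
 */
public class ToyConfiguration {

  private final List<Map<String, String>> resources = new ArrayList<>();
  private final Map<String, String> overlay = new HashMap<>(); // survives reloads
  private Map<String, String> props = new HashMap<>();

  public void addResource(Map<String, String> resource) {
    resources.add(resource);
    reload();
  }

  public void set(String key, String value) {
    overlay.put(key, value);
    props.put(key, value);
  }

  public void unset(String key) {
    // Nothing records that this key was removed; a fuller model would
    // also drop it from the overlay, but a resource-provided value
    // would still come back on the next reload either way.
    props.remove(key);
  }

  public String get(String key) {
    return props.get(key);
  }

  private void reload() {
    props = new HashMap<>();
    for (Map<String, String> r : resources) {
      props.putAll(r);       // unset keys from earlier resources return here
    }
    props.putAll(overlay);   // explicit set() calls are retained
  }

  public static void main(String[] args) {
    ToyConfiguration conf = new ToyConfiguration();
    conf.addResource(Map.of("some.bucket.override", "dynamo"));
    conf.unset("some.bucket.override");          // override removed...
    conf.addResource(Map.of("other.key", "v"));  // ...and resurrected here
    System.out.println(conf.get("some.bucket.override"));
  }
}
```

Under this model, the removeBucketOverrides() unsets in createConfiguration() would indeed be silently undone by any later addResource(), which matches the failure mode being debugged.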
[GitHub] [hadoop] sidseth commented on issue #1115: HADOOP-16207 testMR failures
sidseth commented on issue #1115: HADOOP-16207 testMR failures URL: https://github.com/apache/hadoop/pull/1115#issuecomment-538176889 The change that you mentioned about not running tests if a previous test fails - if that's already in, I'm +1 for the patch.
[GitHub] [hadoop] sidseth commented on a change in pull request #1115: HADOOP-16207 testMR failures
sidseth commented on a change in pull request #1115: HADOOP-16207 testMR failures URL: https://github.com/apache/hadoop/pull/1115#discussion_r331301857 ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/integration/ITestS3ACommitterMRJob.java ## @@ -0,0 +1,643 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.fs.s3a.commit.integration; + +import java.io.File; +import java.io.FileNotFoundException; +import java.io.FileOutputStream; +import java.io.IOException; +import java.net.URL; +import java.nio.charset.StandardCharsets; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.Collections; +import java.util.List; +import java.util.Locale; +import java.util.Set; +import java.util.UUID; +import java.util.stream.Collectors; + +import com.google.common.collect.Sets; +import org.assertj.core.api.Assertions; +import org.junit.FixMethodOrder; +import org.junit.Rule; +import org.junit.Test; +import org.junit.rules.TemporaryFolder; +import org.junit.runner.RunWith; +import org.junit.runners.MethodSorters; +import org.junit.runners.Parameterized; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.fs.s3a.S3AFileSystem; +import org.apache.hadoop.fs.s3a.S3AUtils; +import org.apache.hadoop.fs.s3a.commit.AbstractYarnClusterITest; +import org.apache.hadoop.fs.s3a.commit.CommitConstants; +import org.apache.hadoop.fs.s3a.commit.LoggingTextOutputFormat; +import org.apache.hadoop.fs.s3a.commit.files.SuccessData; +import org.apache.hadoop.fs.s3a.commit.magic.MagicS3GuardCommitter; +import org.apache.hadoop.fs.s3a.commit.staging.DirectoryStagingCommitter; +import org.apache.hadoop.fs.s3a.commit.staging.PartitionedStagingCommitter; +import org.apache.hadoop.io.LongWritable; +import org.apache.hadoop.io.Text; +import org.apache.hadoop.mapred.JobConf; +import org.apache.hadoop.mapreduce.Job; +import org.apache.hadoop.mapreduce.JobStatus; +import org.apache.hadoop.mapreduce.Mapper; +import org.apache.hadoop.mapreduce.lib.input.FileInputFormat; +import org.apache.hadoop.mapreduce.lib.input.TextInputFormat; +import 
org.apache.hadoop.mapreduce.lib.output.FileOutputFormat; +import org.apache.hadoop.security.UserGroupInformation; +import org.apache.hadoop.util.DurationInfo; + +import static org.apache.hadoop.fs.s3a.S3ATestUtils.disableFilesystemCaching; +import static org.apache.hadoop.fs.s3a.S3ATestUtils.lsR; +import static org.apache.hadoop.fs.s3a.S3AUtils.applyLocatedFiles; +import static org.apache.hadoop.fs.s3a.commit.CommitConstants.FS_S3A_COMMITTER_STAGING_TMP_PATH; +import static org.apache.hadoop.fs.s3a.commit.CommitConstants.MAGIC; +import static org.apache.hadoop.fs.s3a.commit.CommitConstants._SUCCESS; +import static org.apache.hadoop.fs.s3a.commit.InternalCommitterConstants.FS_S3A_COMMITTER_STAGING_UUID; +import static org.apache.hadoop.fs.s3a.commit.staging.Paths.getMultipartUploadCommitsDirectory; +import static org.apache.hadoop.fs.s3a.commit.staging.StagingCommitterConstants.STAGING_UPLOADS; +import static org.apache.hadoop.test.LambdaTestUtils.intercept; + +/** + * Test an MR Job with all the different committers. + * + * This is a fairly complex parameterization: it is designed to + * avoid the overhead of starting a Yarn cluster for + * individual committer types, so speed up operations. + * + * It also implicitly guarantees that there is never more than one of these + * MR jobs active at a time, so avoids overloading the test machine with too + * many processes. + * How the binding works: + * + * + * Each parameterized suite is configured through its own + * {@link CommitterTestBinding} subclass. + * + * + * JUnit runs these test suites one parameterized binding at a time. + * + * + * The test suites are declared to be executed in ascending order, so Review comment: Sounds good to me. The change that you mentioned about not running tests if a previous test fails - if that's already
[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…
bharatviswa504 commented on a change in pull request #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#discussion_r331295902

## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequest.java

@@ -112,11 +115,20 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
     IOException exception = null;
     OmKeyInfo omKeyInfo = null;
     OMClientResponse omClientResponse = null;
+    boolean bucketLockAcquired = false;
     OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
     try {
       // check Acl
-      checkKeyAcls(ozoneManager, volumeName, bucketName, keyName,
+      // Native authorizer requires client id as part of keyname to check
+      // write ACL on key. Add client id to key name if ozone native
+      // authorizer is configured.
+      Configuration config = new OzoneConfiguration();
+      String keyNameForAclCheck = keyName;
+      if (OmUtils.isNativeAuthorizerEnabled(config)) {

Review comment: Can you add a little explanation of why this special case is needed in code comments?

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #1588: HDDS-1986. Fix listkeys API.
hadoop-yetus commented on issue #1588: HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-538247315

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 42 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. |
||| _ trunk Compile Tests _ |
| -1 | mvninstall | 33 | hadoop-hdds in trunk failed. |
| -1 | mvninstall | 31 | hadoop-ozone in trunk failed. |
| -1 | compile | 22 | hadoop-hdds in trunk failed. |
| -1 | compile | 16 | hadoop-ozone in trunk failed. |
| +1 | checkstyle | 54 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 823 | branch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 20 | hadoop-hdds in trunk failed. |
| -1 | javadoc | 22 | hadoop-ozone in trunk failed. |
| 0 | spotbugs | 920 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 31 | hadoop-hdds in trunk failed. |
| -1 | findbugs | 19 | hadoop-ozone in trunk failed. |
||| _ Patch Compile Tests _ |
| -1 | mvninstall | 35 | hadoop-hdds in the patch failed. |
| -1 | mvninstall | 39 | hadoop-ozone in the patch failed. |
| -1 | compile | 24 | hadoop-hdds in the patch failed. |
| -1 | compile | 20 | hadoop-ozone in the patch failed. |
| -1 | javac | 24 | hadoop-hdds in the patch failed. |
| -1 | javac | 20 | hadoop-ozone in the patch failed. |
| -0 | checkstyle | 32 | hadoop-ozone: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 695 | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 20 | hadoop-hdds in the patch failed. |
| -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
| -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
| -1 | findbugs | 22 | hadoop-ozone in the patch failed. |
||| _ Other Tests _ |
| -1 | unit | 28 | hadoop-hdds in the patch failed. |
| -1 | unit | 26 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
| | | 2305 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1588 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 09b9504ba0c3 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 76605f1 |
| Default Java | 1.8.0_222 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-javadoc-hadoop-hdds.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-findbugs-hadoop-hdds.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/branch-findbugs-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/patch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/patch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/patch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/patch-compile-hadoop-ozone.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/2/artifact/out/patch-compile-hadoop-hdds.txt |
| javac |
[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…
bharatviswa504 commented on a change in pull request #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#discussion_r331294890

## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequest.java

@@ -112,11 +115,20 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
     IOException exception = null;
     OmKeyInfo omKeyInfo = null;
     OMClientResponse omClientResponse = null;
+    boolean bucketLockAcquired = false;
     OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
     try {
       // check Acl
-      checkKeyAcls(ozoneManager, volumeName, bucketName, keyName,
+      // Native authorizer requires client id as part of keyname to check
+      // write ACL on key. Add client id to key name if ozone native
+      // authorizer is configured.
+      Configuration config = new OzoneConfiguration();
+      String keyNameForAclCheck = keyName;
+      if (OmUtils.isNativeAuthorizerEnabled(config)) {

Review comment: Here we should not construct a config; shouldn't we pick the config from OzoneManager?
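The reviewer's point is that the handler should reuse the server's existing configuration rather than construct a fresh `OzoneConfiguration` per request. The pattern can be sketched without the Ozone classes; every name here (`ConfigSource`, `ServerConfig`, `keyNameForAclCheck`, and the `ozone.acl.authorizer.native` key) is an illustrative assumption, not the real Ozone API:

```java
import java.util.Map;

public class ConfigReuseSketch {

  /** Minimal stand-in for a Hadoop-style configuration source. */
  interface ConfigSource {
    boolean getBoolean(String key, boolean defaultValue);
  }

  /** The server's one long-lived configuration, injected at startup. */
  static final class ServerConfig implements ConfigSource {
    private final Map<String, String> props;

    ServerConfig(Map<String, String> props) {
      this.props = props;
    }

    @Override
    public boolean getBoolean(String key, boolean defaultValue) {
      String v = props.get(key);
      return v == null ? defaultValue : Boolean.parseBoolean(v);
    }
  }

  /**
   * The ACL check consults the configuration it was handed, instead of
   * constructing a fresh configuration object on every request.
   */
  static String keyNameForAclCheck(ConfigSource conf, String keyName, long clientId) {
    // Only the (hypothetical) native authorizer needs the client id appended.
    if (conf.getBoolean("ozone.acl.authorizer.native", false)) {
      return keyName + "/" + clientId;
    }
    return keyName;
  }

  public static void main(String[] args) {
    ConfigSource conf = new ServerConfig(Map.of("ozone.acl.authorizer.native", "true"));
    System.out.println(keyNameForAclCheck(conf, "vol/bucket/key", 42L));  // vol/bucket/key/42
  }
}
```

Besides avoiding per-request allocation, injecting the server's configuration guarantees the handler sees the same settings the rest of the OzoneManager is running with.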
[GitHub] [hadoop] hadoop-yetus commented on issue #1588: HDDS-1986. Fix listkeys API.
hadoop-yetus commented on issue #1588: HDDS-1986. Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#issuecomment-538168696

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 39 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 1 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. |
||| _ trunk Compile Tests _ |
| -1 | mvninstall | 38 | hadoop-hdds in trunk failed. |
| -1 | mvninstall | 39 | hadoop-ozone in trunk failed. |
| -1 | compile | 21 | hadoop-hdds in trunk failed. |
| -1 | compile | 12 | hadoop-ozone in trunk failed. |
| +1 | checkstyle | 63 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 845 | branch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 21 | hadoop-hdds in trunk failed. |
| -1 | javadoc | 19 | hadoop-ozone in trunk failed. |
| 0 | spotbugs | 944 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 34 | hadoop-hdds in trunk failed. |
| -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
||| _ Patch Compile Tests _ |
| -1 | mvninstall | 36 | hadoop-hdds in the patch failed. |
| -1 | mvninstall | 40 | hadoop-ozone in the patch failed. |
| -1 | compile | 22 | hadoop-hdds in the patch failed. |
| -1 | compile | 17 | hadoop-ozone in the patch failed. |
| -1 | javac | 22 | hadoop-hdds in the patch failed. |
| -1 | javac | 17 | hadoop-ozone in the patch failed. |
| -0 | checkstyle | 31 | hadoop-ozone: The patch generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 709 | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 21 | hadoop-hdds in the patch failed. |
| -1 | javadoc | 19 | hadoop-ozone in the patch failed. |
| -1 | findbugs | 31 | hadoop-hdds in the patch failed. |
| -1 | findbugs | 18 | hadoop-ozone in the patch failed. |
||| _ Other Tests _ |
| -1 | unit | 29 | hadoop-hdds in the patch failed. |
| -1 | unit | 27 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
| | | 2342 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1588 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux dbf2530a1ece 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 76605f1 |
| Default Java | 1.8.0_222 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-javadoc-hadoop-hdds.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-findbugs-hadoop-hdds.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/branch-findbugs-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/patch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/patch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/patch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/patch-compile-hadoop-ozone.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1588/1/artifact/out/patch-compile-hadoop-hdds.txt |
| javac |
[GitHub] [hadoop] hadoop-yetus commented on issue #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…
hadoop-yetus commented on issue #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#issuecomment-538168043

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 87 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 12 | Maven dependency ordering for branch |
| -1 | mvninstall | 30 | hadoop-hdds in trunk failed. |
| -1 | mvninstall | 34 | hadoop-ozone in trunk failed. |
| -1 | compile | 19 | hadoop-hdds in trunk failed. |
| -1 | compile | 13 | hadoop-ozone in trunk failed. |
| +1 | checkstyle | 48 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 952 | branch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 20 | hadoop-hdds in trunk failed. |
| -1 | javadoc | 17 | hadoop-ozone in trunk failed. |
| 0 | spotbugs | 1041 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 31 | hadoop-hdds in trunk failed. |
| -1 | findbugs | 18 | hadoop-ozone in trunk failed. |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 15 | Maven dependency ordering for patch |
| -1 | mvninstall | 32 | hadoop-hdds in the patch failed. |
| -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
| -1 | compile | 21 | hadoop-hdds in the patch failed. |
| -1 | compile | 15 | hadoop-ozone in the patch failed. |
| -1 | javac | 21 | hadoop-hdds in the patch failed. |
| -1 | javac | 15 | hadoop-ozone in the patch failed. |
| +1 | checkstyle | 53 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 796 | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 19 | hadoop-hdds in the patch failed. |
| -1 | javadoc | 16 | hadoop-ozone in the patch failed. |
| -1 | findbugs | 30 | hadoop-hdds in the patch failed. |
| -1 | findbugs | 17 | hadoop-ozone in the patch failed. |
||| _ Other Tests _ |
| -1 | unit | 27 | hadoop-hdds in the patch failed. |
| -1 | unit | 24 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
| | | 2535 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1528 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 683c3789963a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 76605f1 |
| Default Java | 1.8.0_222 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/5/artifact/out/branch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/5/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/5/artifact/out/branch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/5/artifact/out/branch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/5/artifact/out/branch-javadoc-hadoop-hdds.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/5/artifact/out/branch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/5/artifact/out/branch-findbugs-hadoop-hdds.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/5/artifact/out/branch-findbugs-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/5/artifact/out/patch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/5/artifact/out/patch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/5/artifact/out/patch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/5/artifact/out/patch-compile-hadoop-ozone.txt
[GitHub] [hadoop] anuengineer commented on issue #1536: HDDS-2164 : om.db.checkpoints is getting filling up fast.
anuengineer commented on issue #1536: HDDS-2164 : om.db.checkpoints is getting filling up fast. URL: https://github.com/apache/hadoop/pull/1536#issuecomment-538167213 Can you please check if the rat failures are real? thx
[GitHub] [hadoop] anuengineer commented on issue #1577: HDDS-2200 : Recon does not handle the NULL snapshot from OM DB cleanly.
anuengineer commented on issue #1577: HDDS-2200 : Recon does not handle the NULL snapshot from OM DB cleanly. URL: https://github.com/apache/hadoop/pull/1577#issuecomment-538166613 Thank you for the contribution. @vivekratnavel Thanks for the review.
[GitHub] [hadoop] anuengineer closed pull request #1577: HDDS-2200 : Recon does not handle the NULL snapshot from OM DB cleanly.
anuengineer closed pull request #1577: HDDS-2200 : Recon does not handle the NULL snapshot from OM DB cleanly. URL: https://github.com/apache/hadoop/pull/1577
[jira] [Commented] (HADOOP-14445) Use DelegationTokenIssuer to create KMS delegation tokens that can authenticate to all KMS instances
[ https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944110#comment-16944110 ]

Siyao Meng commented on HADOOP-14445:
-

Looks like we still need to backport this to branch-2, as the previous buggy commit got reverted. The branch 2 patches need a bit of revision (based on the latest branch 3 patch).

> Use DelegationTokenIssuer to create KMS delegation tokens that can
> authenticate to all KMS instances
>
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
> Issue Type: Bug
> Components: kms
> Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
> Reporter: Wei-Chiu Chuang
> Assignee: Xiao Chen
> Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HADOOP-14445-branch-2.8.002.patch,
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch,
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch,
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch,
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch,
> HADOOP-14445.12.patch, HADOOP-14445.13.patch, HADOOP-14445.14.patch,
> HADOOP-14445.15.patch, HADOOP-14445.16.patch, HADOOP-14445.17.patch,
> HADOOP-14445.18.patch, HADOOP-14445.19.patch, HADOOP-14445.20.patch,
> HADOOP-14445.addemdum.patch, HADOOP-14445.branch-2.000.precommit.patch,
> HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch,
> HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch,
> HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch,
> HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch,
> HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch,
> HADOOP-14445.branch-2.8.006.patch, HADOOP-14445.branch-2.8.revert.patch,
> HADOOP-14445.branch-3.0.001.patch, HADOOP-14445.compat.patch,
> HADOOP-14445.revert.patch
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does
> not share delegation tokens. (a client uses KMS address/port as the key for
> delegation token)
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
>   InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
>       url.getPort());
>   Text service = SecurityUtil.buildTokenService(serviceAddr);
>   dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another
> KMS instance, by checking the shared secret used to sign the delegation
> token. To do this, all KMS instances must be able to retrieve the shared
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share
> delegation tokens.

-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
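The token-sharing bug quoted above comes down to how the credential lookup key is derived. A minimal stand-in for Hadoop's `SecurityUtil.buildTokenService` (simplified; the real method returns a `Text` built from the address) shows why two HA KMS instances never see each other's tokens:

```java
import java.net.InetSocketAddress;

public class TokenServiceKeySketch {

  /** Simplified stand-in: the service key is just host:port. */
  static String buildTokenService(InetSocketAddress addr) {
    return addr.getHostString() + ":" + addr.getPort();
  }

  public static void main(String[] args) {
    // kms1/kms2 hostnames are illustrative, not from the issue.
    String kms1 = buildTokenService(InetSocketAddress.createUnresolved("kms1.example.com", 9600));
    String kms2 = buildTokenService(InetSocketAddress.createUnresolved("kms2.example.com", 9600));
    // Different service keys: a token stored under the kms1 key is not
    // found when the client fails over and looks up the kms2 key.
    System.out.println(kms1 + " vs " + kms2 + " equal=" + kms1.equals(kms2));
  }
}
```

This is why the fix in HADOOP-14445 introduces `DelegationTokenIssuer`, so one token can authenticate to all KMS instances regardless of which host:port key the client computes.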
[GitHub] [hadoop] bharatviswa504 edited a comment on issue #1564: HDDS-2223. Support ReadWrite lock in LockManager.
bharatviswa504 edited a comment on issue #1564: HDDS-2223. Support ReadWrite lock in LockManager. URL: https://github.com/apache/hadoop/pull/1564#issuecomment-538161681 I am fine with the changes. One minor comment: add some comments in the code explaining the locking order and the reason for the acquire/release ordering.
[GitHub] [hadoop] bharatviswa504 commented on issue #1564: HDDS-2223. Support ReadWrite lock in LockManager.
bharatviswa504 commented on issue #1564: HDDS-2223. Support ReadWrite lock in LockManager. URL: https://github.com/apache/hadoop/pull/1564#issuecomment-538161681 I am fine with the changes. One minor comment: add some comments in the code explaining the locking order.
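A minimal sketch of the resource-keyed read/write LockManager under discussion, using only `java.util.concurrent`. The real HDDS LockManager also reference-counts and pools its locks; this sketch shows just the acquire/release pairing that the review asks to document in comments:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteLockManagerSketch {

  /** One read/write lock per named resource (e.g. a volume or bucket). */
  private final ConcurrentMap<String, ReentrantReadWriteLock> locks =
      new ConcurrentHashMap<>();

  private ReentrantReadWriteLock lockFor(String resource) {
    return locks.computeIfAbsent(resource, r -> new ReentrantReadWriteLock());
  }

  // Many readers may hold the read lock on a resource concurrently.
  public void acquireReadLock(String resource) {
    lockFor(resource).readLock().lock();
  }

  public void releaseReadLock(String resource) {
    lockFor(resource).readLock().unlock();
  }

  // The write lock is exclusive; release in the reverse order of acquire
  // when holding locks on multiple resources, to avoid deadlock.
  public void acquireWriteLock(String resource) {
    lockFor(resource).writeLock().lock();
  }

  public void releaseWriteLock(String resource) {
    lockFor(resource).writeLock().unlock();
  }

  /** Non-blocking attempt; false while any reader or writer holds the lock. */
  public boolean tryWriteLock(String resource) {
    return lockFor(resource).writeLock().tryLock();
  }
}
```

Note that `ReentrantReadWriteLock` does not support upgrading a held read lock to a write lock, which is one reason the acquire/release ordering deserves the comments the reviewer requested.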
[GitHub] [hadoop] smengcl commented on issue #1588: HDDS-1986. Fix listkeys API.
smengcl commented on issue #1588: HDDS-1986. Fix listkeys API. URL: https://github.com/apache/hadoop/pull/1588#issuecomment-538160179 /label ozone
[GitHub] [hadoop] bharatviswa504 opened a new pull request #1588: HDDS-1986. Fix listkeys API.
bharatviswa504 opened a new pull request #1588: HDDS-1986. Fix listkeys API. URL: https://github.com/apache/hadoop/pull/1588 Implement listKeys API.
[jira] [Updated] (HADOOP-16625) Backport HADOOP-14624 to branch-3.1
[ https://issues.apache.org/jira/browse/HADOOP-16625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HADOOP-16625:
-
Fix Version/s: 3.1.4
Resolution: Fixed
Status: Resolved (was: Patch Available)

Thanks!

> Backport HADOOP-14624 to branch-3.1
> ---
>
> Key: HADOOP-16625
> URL: https://issues.apache.org/jira/browse/HADOOP-16625
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Wei-Chiu Chuang
> Assignee: Wei-Chiu Chuang
> Priority: Major
> Fix For: 3.1.4
>
> Attachments: HADOOP-16625.branch-3.1.001.patch
>
> I am trying to bring commits from trunk/branch-3.2 to branch-3.1, but some of
> them do not compile because of the commons-logging to slf4j migration.
> One of the issues is that GenericTestUtils.DelayAnswer does not accept the
> slf4j logger API.
> Backport HADOOP-14624 to branch-3.1 to make backporting easier. It updates the
> DelayAnswer signature, but it's in the test scope, so we're not really
> breaking backward compat.
[GitHub] [hadoop] anuengineer commented on issue #1538: HDDS-1720 : Add ability to configure RocksDB logs for Ozone Manager.
anuengineer commented on issue #1538: HDDS-1720 : Add ability to configure RocksDB logs for Ozone Manager. URL: https://github.com/apache/hadoop/pull/1538#issuecomment-538152599 Thank you for the contribution. I have committed this to the trunk.
[GitHub] [hadoop] anuengineer closed pull request #1538: HDDS-1720 : Add ability to configure RocksDB logs for Ozone Manager.
anuengineer closed pull request #1538: HDDS-1720 : Add ability to configure RocksDB logs for Ozone Manager. URL: https://github.com/apache/hadoop/pull/1538
[GitHub] [hadoop] anuengineer commented on issue #1568: HDDS-2225. SCM fails to start in most unsecure environments due to leftover secure config
anuengineer commented on issue #1568: HDDS-2225. SCM fails to start in most unsecure environments due to leftover secure config URL: https://github.com/apache/hadoop/pull/1568#issuecomment-538144302 @adoroszlai So now we have removed the mount and gen config? I am presuming that +1s were given for the earlier solution, but with force push I am not able to see the older changes.
[GitHub] [hadoop] hadoop-yetus commented on issue #1573: HDFS-14889. Ability to check if a block has a replica on provided storage.
hadoop-yetus commented on issue #1573: HDFS-14889. Ability to check if a block has a replica on provided storage. URL: https://github.com/apache/hadoop/pull/1573#issuecomment-538144230 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 365 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1456 | trunk passed | | +1 | compile | 77 | trunk passed | | +1 | checkstyle | 57 | trunk passed | | +1 | mvnsite | 85 | trunk passed | | +1 | shadedclient | 1007 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 95 | trunk passed | | 0 | spotbugs | 213 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 210 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 78 | the patch passed | | +1 | compile | 68 | the patch passed | | +1 | javac | 67 | the patch passed | | +1 | checkstyle | 47 | the patch passed | | +1 | mvnsite | 65 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 782 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 82 | the patch passed | | +1 | findbugs | 195 | the patch passed | ||| _ Other Tests _ | | -1 | unit | 5219 | hadoop-hdfs in the patch failed. | | +1 | asflicense | 42 | The patch does not generate ASF License warnings. 
| | | | 10008 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.TestRedudantBlocks | | | hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages | | | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1573/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1573 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b3f859c79d77 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / a3fe404 | | Default Java | 1.8.0_222 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1573/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1573/5/testReport/ | | Max. process+thread count | 4824 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1573/5/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [hadoop] anuengineer commented on issue #1575: HDDS-2231. test-single.sh cannot copy results
anuengineer commented on issue #1575: HDDS-2231. test-single.sh cannot copy results URL: https://github.com/apache/hadoop/pull/1575#issuecomment-538143045 Thank you for the contribution.
[GitHub] [hadoop] hadoop-yetus commented on issue #1573: HDFS-14889. Ability to check if a block has a replica on provided storage.
hadoop-yetus commented on issue #1573: HDFS-14889. Ability to check if a block has a replica on provided storage. URL: https://github.com/apache/hadoop/pull/1573#issuecomment-538142579 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 426 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1215 | trunk passed | | +1 | compile | 63 | trunk passed | | +1 | checkstyle | 45 | trunk passed | | +1 | mvnsite | 70 | trunk passed | | +1 | shadedclient | 912 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 79 | trunk passed | | 0 | spotbugs | 175 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 173 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 66 | the patch passed | | +1 | compile | 57 | the patch passed | | +1 | javac | 57 | the patch passed | | +1 | checkstyle | 40 | the patch passed | | +1 | mvnsite | 65 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 840 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 76 | the patch passed | | +1 | findbugs | 176 | the patch passed | ||| _ Other Tests _ | | -1 | unit | 5951 | hadoop-hdfs in the patch failed. | | +1 | asflicense | 35 | The patch does not generate ASF License warnings. 
| | | | 10364 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMXBean | | | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots | | | hadoop.hdfs.TestRollingUpgrade | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1573/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1573 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux dedb0fa0bade 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / a3fe404 | | Default Java | 1.8.0_222 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1573/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1573/4/testReport/ | | Max. process+thread count | 3070 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1573/4/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] anuengineer merged pull request #1580: HDDS-2234. rat.sh fails due to ozone-recon-web/build files
anuengineer merged pull request #1580: HDDS-2234. rat.sh fails due to ozone-recon-web/build files URL: https://github.com/apache/hadoop/pull/1580
[GitHub] [hadoop] anuengineer commented on issue #1580: HDDS-2234. rat.sh fails due to ozone-recon-web/build files
anuengineer commented on issue #1580: HDDS-2234. rat.sh fails due to ozone-recon-web/build files URL: https://github.com/apache/hadoop/pull/1580#issuecomment-538142065 Thank you for taking care of this issue.
[GitHub] [hadoop] hadoop-yetus commented on issue #1587: HADOOP-16626. S3A ITestRestrictedReadAccess fails
hadoop-yetus commented on issue #1587: HADOOP-16626. S3A ITestRestrictedReadAccess fails URL: https://github.com/apache/hadoop/pull/1587#issuecomment-538134317 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 77 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1287 | trunk passed | | +1 | compile | 33 | trunk passed | | +1 | checkstyle | 24 | trunk passed | | +1 | mvnsite | 38 | trunk passed | | +1 | shadedclient | 851 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 25 | trunk passed | | 0 | spotbugs | 59 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 58 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 34 | the patch passed | | +1 | compile | 27 | the patch passed | | +1 | javac | 27 | the patch passed | | -0 | checkstyle | 19 | hadoop-tools/hadoop-aws: The patch generated 1 new + 7 unchanged - 0 fixed = 8 total (was 7) | | +1 | mvnsite | 32 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 867 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 22 | the patch passed | | +1 | findbugs | 61 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 67 | hadoop-aws in the patch passed. | | +1 | asflicense | 29 | The patch does not generate ASF License warnings. 
| | | | 3645 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1587/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1587 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 1e837cfaecd3 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 51eaeca | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1587/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1587/2/testReport/ | | Max. process+thread count | 356 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1587/2/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #1587: HADOOP-16626. S3A ITestRestrictedReadAccess fails
hadoop-yetus commented on issue #1587: HADOOP-16626. S3A ITestRestrictedReadAccess fails URL: https://github.com/apache/hadoop/pull/1587#issuecomment-538133653 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 37 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1085 | trunk passed | | +1 | compile | 36 | trunk passed | | +1 | checkstyle | 26 | trunk passed | | +1 | mvnsite | 42 | trunk passed | | +1 | shadedclient | 869 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 29 | trunk passed | | 0 | spotbugs | 61 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 59 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 33 | the patch passed | | +1 | compile | 28 | the patch passed | | +1 | javac | 28 | the patch passed | | -0 | checkstyle | 20 | hadoop-tools/hadoop-aws: The patch generated 1 new + 7 unchanged - 0 fixed = 8 total (was 7) | | +1 | mvnsite | 33 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 778 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 26 | the patch passed | | +1 | findbugs | 62 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 84 | hadoop-aws in the patch passed. | | +1 | asflicense | 34 | The patch does not generate ASF License warnings. 
| | | | 3384 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1587/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1587 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b6e136f4d19e 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 51eaeca | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1587/3/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1587/3/testReport/ | | Max. process+thread count | 401 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1587/3/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #1587: HADOOP-16626. S3A ITestRestrictedReadAccess fails
hadoop-yetus commented on issue #1587: HADOOP-16626. S3A ITestRestrictedReadAccess fails URL: https://github.com/apache/hadoop/pull/1587#issuecomment-538132878 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 37 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1239 | trunk passed | | +1 | compile | 31 | trunk passed | | +1 | checkstyle | 24 | trunk passed | | +1 | mvnsite | 37 | trunk passed | | +1 | shadedclient | 887 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 25 | trunk passed | | 0 | spotbugs | 61 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 60 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 32 | the patch passed | | +1 | compile | 28 | the patch passed | | +1 | javac | 28 | the patch passed | | -0 | checkstyle | 21 | hadoop-tools/hadoop-aws: The patch generated 4 new + 7 unchanged - 0 fixed = 11 total (was 7) | | +1 | mvnsite | 33 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 870 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 23 | the patch passed | | +1 | findbugs | 62 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 84 | hadoop-aws in the patch passed. | | +1 | asflicense | 29 | The patch does not generate ASF License warnings. 
| | | | 3623 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1587/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1587 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 2224c2edde8d 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 51eaeca | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1587/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1587/1/testReport/ | | Max. process+thread count | 306 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1587/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #1480: HDFS-14857 FS operations fail in HA mode: DataNode fails to connect to NameNode
hadoop-yetus commented on issue #1480: HDFS-14857 FS operations fail in HA mode: DataNode fails to connect to NameNode URL: https://github.com/apache/hadoop/pull/1480#issuecomment-538128324 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 426 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 24 | Maven dependency ordering for branch | | +1 | mvninstall | 1226 | trunk passed | | -1 | compile | 113 | hadoop-hdfs-project in trunk failed. | | +1 | checkstyle | 58 | trunk passed | | +1 | mvnsite | 119 | trunk passed | | +1 | shadedclient | 966 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 109 | trunk passed | | 0 | spotbugs | 177 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 311 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 14 | Maven dependency ordering for patch | | +1 | mvninstall | 108 | the patch passed | | -1 | compile | 108 | hadoop-hdfs-project in the patch failed. | | -1 | javac | 108 | hadoop-hdfs-project in the patch failed. | | -0 | checkstyle | 52 | hadoop-hdfs-project: The patch generated 14 new + 22 unchanged - 0 fixed = 36 total (was 22) | | +1 | mvnsite | 112 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 844 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 102 | the patch passed | | -1 | findbugs | 144 | hadoop-hdfs-project/hadoop-hdfs-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | ||| _ Other Tests _ | | +1 | unit | 117 | hadoop-hdfs-client in the patch passed. | | -1 | unit | 6479 | hadoop-hdfs in the patch failed. 
| | +1 | asflicense | 39 | The patch does not generate ASF License warnings. | | | | 11674 | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client | | | Inconsistent synchronization of org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.currentProxyIndex; locked 60% of time Unsynchronized access at ConfiguredFailoverProxyProvider.java:60% of time Unsynchronized access at ConfiguredFailoverProxyProvider.java:[line 69] | | Failed junit tests | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots | | | hadoop.hdfs.server.balancer.TestBalancer | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap | | | hadoop.fs.viewfs.TestViewFileSystemLinkFallback | | | hadoop.hdfs.tools.TestDFSZKFailoverController | | | hadoop.hdfs.TestDFSClientFailover | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1480/7/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1480 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 842e0fe3bdc5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / a3fe404 | | Default Java | 1.8.0_222 | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1480/7/artifact/out/branch-compile-hadoop-hdfs-project.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1480/7/artifact/out/patch-compile-hadoop-hdfs-project.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1480/7/artifact/out/patch-compile-hadoop-hdfs-project.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1480/7/artifact/out/diff-checkstyle-hadoop-hdfs-project.txt | 
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1480/7/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.html | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1480/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1480/7/testReport/ | | Max. process+thread count | 2742 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1480/7/console | | versions | git=2.7.4 maven=3.3.9
[GitHub] [hadoop] hadoop-yetus commented on issue #1568: HDDS-2225. SCM fails to start in most unsecure environments due to leftover secure config
hadoop-yetus commented on issue #1568: HDDS-2225. SCM fails to start in most unsecure environments due to leftover secure config URL: https://github.com/apache/hadoop/pull/1568#issuecomment-538115930 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 305 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | 0 | shelldocs | 0 | Shelldocs was not available. | | 0 | yamllint | 0 | yamllint was not available. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | -1 | mvninstall | 47 | hadoop-hdds in trunk failed. | | -1 | mvninstall | 43 | hadoop-ozone in trunk failed. | | -1 | compile | 24 | hadoop-hdds in trunk failed. | | -1 | compile | 19 | hadoop-ozone in trunk failed. | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 794 | branch has no errors when building and testing our client artifacts. | | -1 | javadoc | 26 | hadoop-hdds in trunk failed. | | -1 | javadoc | 25 | hadoop-ozone in trunk failed. | ||| _ Patch Compile Tests _ | | -1 | mvninstall | 39 | hadoop-hdds in the patch failed. | | -1 | mvninstall | 41 | hadoop-ozone in the patch failed. | | -1 | compile | 29 | hadoop-hdds in the patch failed. | | -1 | compile | 24 | hadoop-ozone in the patch failed. | | -1 | javac | 29 | hadoop-hdds in the patch failed. | | -1 | javac | 25 | hadoop-ozone in the patch failed. | | +1 | mvnsite | 0 | the patch passed | | +1 | shellcheck | 0 | There were no new shellcheck issues. | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 711 | patch has no errors when building and testing our client artifacts. | | -1 | javadoc | 27 | hadoop-hdds in the patch failed. 
| | -1 | javadoc | 24 | hadoop-ozone in the patch failed. | ||| _ Other Tests _ | | -1 | unit | 32 | hadoop-hdds in the patch failed. | | -1 | unit | 30 | hadoop-ozone in the patch failed. | | +1 | asflicense | 39 | The patch does not generate ASF License warnings. | | | | 2492 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1568/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1568 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient shellcheck shelldocs yamllint | | uname | Linux ba433dd9aeed 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 51eaeca | | Default Java | 1.8.0_222 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1568/3/artifact/out/branch-mvninstall-hadoop-hdds.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1568/3/artifact/out/branch-mvninstall-hadoop-ozone.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1568/3/artifact/out/branch-compile-hadoop-hdds.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1568/3/artifact/out/branch-compile-hadoop-ozone.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1568/3/artifact/out/branch-javadoc-hadoop-hdds.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1568/3/artifact/out/branch-javadoc-hadoop-ozone.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1568/3/artifact/out/patch-mvninstall-hadoop-hdds.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1568/3/artifact/out/patch-mvninstall-hadoop-ozone.txt | | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1568/3/artifact/out/patch-compile-hadoop-hdds.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1568/3/artifact/out/patch-compile-hadoop-ozone.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1568/3/artifact/out/patch-compile-hadoop-hdds.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1568/3/artifact/out/patch-compile-hadoop-ozone.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1568/3/artifact/out/patch-javadoc-hadoop-hdds.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1568/3/artifact/out/patch-javadoc-hadoop-ozone.txt | | unit |
[GitHub] [hadoop] arp7 commented on issue #1564: HDDS-2223. Support ReadWrite lock in LockManager.
arp7 commented on issue #1564: HDDS-2223. Support ReadWrite lock in LockManager. URL: https://github.com/apache/hadoop/pull/1564#issuecomment-538112785 Thanks for this nice improvement @nandakumar131. I am +1 to commit this assuming @bharatviswa504 is also ok. Can you file a follow up jira to replace all usages of the lock/unlock API?
[GitHub] [hadoop] arp7 commented on a change in pull request #1564: HDDS-2223. Support ReadWrite lock in LockManager.
arp7 commented on a change in pull request #1564: HDDS-2223. Support ReadWrite lock in LockManager. URL: https://github.com/apache/hadoop/pull/1564#discussion_r331234086 ## File path: hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/LockManager.java ## @@ -25,42 +25,146 @@ import java.util.Map; import java.util.concurrent.ConcurrentHashMap; +import java.util.function.Consumer; /** * Manages the locks on a given resource. A new lock is created for each * and every unique resource. Uniqueness of resource depends on the * {@code equals} implementation of it. */ -public class LockManager<T> { +public class LockManager<R> { private static final Logger LOG = LoggerFactory.getLogger(LockManager.class); - private final Map<T, ActiveLock> activeLocks = new ConcurrentHashMap<>(); + private final Map<R, ActiveLock> activeLocks = new ConcurrentHashMap<>(); private final GenericObjectPool<ActiveLock> lockPool = new GenericObjectPool<>(new PooledLockFactory()); /** - * Creates new LockManager instance. + * Creates new LockManager instance with the given Configuration. * * @param conf Configuration object */ - public LockManager(Configuration conf) { -int maxPoolSize = conf.getInt(HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY, + public LockManager(final Configuration conf) { +final int maxPoolSize = conf.getInt(HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY, HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY_DEFAULT); lockPool.setMaxTotal(maxPoolSize); } - /** * Acquires the lock on given resource. * * If the lock is not available then the current thread becomes * disabled for thread scheduling purposes and lies dormant until the * lock has been acquired. + * + * @param resource on which the lock has to be acquired + * @deprecated Use {@link LockManager#writeLock} instead + */ + public void lock(final R resource) { Review comment: I think you need the `@deprecated` annotation on the function also - outside the javadoc comment. This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
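The pattern arp7 asks for can be sketched as follows (the class name `LockManagerSketch` and the exact `writeLock` signature are illustrative, not the actual Ozone code): the javadoc `@deprecated` tag documents the replacement for readers, while the `@Deprecated` annotation on the method itself makes the compiler emit warnings for callers.

```java
// Illustrative sketch: pairing the javadoc @deprecated tag with the
// @Deprecated annotation, as requested in the review.
public class LockManagerSketch<R> {

    /**
     * Acquires the lock on the given resource.
     *
     * @param resource on which the lock has to be acquired
     * @deprecated Use {@link #writeLock(Object)} instead
     */
    @Deprecated // the annotation goes outside the javadoc comment
    public void lock(final R resource) {
        writeLock(resource);
    }

    /** Acquires the write lock on the given resource. */
    public void writeLock(final R resource) {
        // actual locking elided in this sketch
    }

    public static void main(String[] args) {
        // compiles, but with a deprecation warning at the call site
        new LockManagerSketch<String>().lock("volume1");
    }
}
```

Without the annotation, only javadoc readers see the deprecation; with it, every caller gets a compile-time warning.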
[GitHub] [hadoop] steveloughran commented on issue #1587: HADOOP-16626. S3A ITestRestrictedReadAccess fails
steveloughran commented on issue #1587: HADOOP-16626. S3A ITestRestrictedReadAccess fails URL: https://github.com/apache/hadoop/pull/1587#issuecomment-538112086

Testing: s3a ireland
- a full run of everything (kicking off another) with s3guard and ddb
- this test suite with s3guard off, on and local

Verifying that without s3guard, the guarded versions of the tests are not executed.
[jira] [Commented] (HADOOP-16626) S3A ITestRestrictedReadAccess fails
[ https://issues.apache.org/jira/browse/HADOOP-16626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943880#comment-16943880 ] Steve Loughran commented on HADOOP-16626: -

OK. I have now learned something. When you call Configuration.addResource() it reloads all configs, so all settings you've previously cleared get set again. And we force in the contract/s3a.xml settings, don't we?

I'm going to change how we load that file (which declares the expected FS behaviour in the common tests). I'm going to make that load optional and only load it in those s3a contract tests, not the other S3A tests.

(pause)

Actually, that's not enough! The first call to FileSystem.get() will force service discovery of all filesystems, which will force their class instantiation, and then any class which forces in a config (HDFS) triggers this.

{code}
Breakpoint reached
  at org.apache.hadoop.conf.Configuration.addDefaultResource(Configuration.java:893)
  at org.apache.hadoop.mapreduce.util.ConfigUtil.loadResources(ConfigUtil.java:43)
  at org.apache.hadoop.mapred.JobConf.<clinit>(JobConf.java:123)
  at java.lang.Class.forName0(Class.java:-1)
  at java.lang.Class.forName(Class.java:348)
  at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2603)
  at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:96)
  at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:79)
  at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:137)
  at org.apache.hadoop.security.Groups.<init>(Groups.java:106)
  at org.apache.hadoop.security.Groups.<init>(Groups.java:102)
  at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:451)
  at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:355)
  at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:317)
  at org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1989)
  at org.apache.hadoop.security.UserGroupInformation.createLoginUser(UserGroupInformation.java:746)
  at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:696)
  at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:607)
  at org.apache.hadoop.fs.viewfs.ViewFileSystem.<init>(ViewFileSystem.java:230)
  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(NativeConstructorAccessorImpl.java:-1)
  at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
  at java.lang.Class.newInstance(Class.java:442)
  at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
  at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
  at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
  at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:3310)
  at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3355)
  at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3394)
  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:500)
  at org.apache.hadoop.fs.contract.AbstractBondedFSContract.init(AbstractBondedFSContract.java:72)
  at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:178)
  at org.apache.hadoop.fs.s3a.AbstractS3ATestBase.setup(AbstractS3ATestBase.java:55)
  at org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.setup(ITestRestrictedReadAccess.java:233)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-1)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
  at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
  at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
  at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
  at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
  at
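The reload trap Steve describes can be modeled with a tiny, hypothetical Configuration-like class (`MiniConf` below is illustrative only, not Hadoop's actual `Configuration` implementation): adding any resource invalidates the loaded properties, so a key that a test previously cleared reappears on the next lookup once anything else registers a resource.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of the reload semantics described above: adding a new
// resource invalidates the loaded properties, so every resource (including
// defaults registered by unrelated classes) is re-read on the next lookup.
class MiniConf {
    // resources are modeled as key/value maps instead of XML files
    private final List<Map<String, String>> resources = new ArrayList<>();
    private Map<String, String> props; // null = needs (re)load

    void addResource(Map<String, String> resource) {
        resources.add(resource);
        props = null; // invalidate: forces a full reload of all resources
    }

    String get(String key) {
        if (props == null) {
            props = new HashMap<>();
            for (Map<String, String> r : resources) {
                props.putAll(r); // later resources override earlier ones
            }
        }
        return props.get(key);
    }

    void unset(String key) {
        get(key);          // force load
        props.remove(key); // cleared only until the next reload
    }
}

public class ReloadDemo {
    public static void main(String[] args) {
        MiniConf conf = new MiniConf();
        conf.addResource(Map.of("fs.contract.supports-x", "true"));
        conf.unset("fs.contract.supports-x");
        System.out.println(conf.get("fs.contract.supports-x")); // null: cleared

        // Adding any other resource reloads everything; the cleared key is back.
        conf.addResource(Map.of("other.key", "1"));
        System.out.println(conf.get("fs.contract.supports-x")); // "true" again
    }
}
```

This is why clearing settings in test setup is not enough when later class loading (here, service discovery via `FileSystem.get()`) can add another default resource.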
[GitHub] [hadoop] steveloughran opened a new pull request #1587: HADOOP-16626. S3A ITestRestrictedReadAccess fails
steveloughran opened a new pull request #1587: HADOOP-16626. S3A ITestRestrictedReadAccess fails URL: https://github.com/apache/hadoop/pull/1587

Fix up test setup for the restricted access:
- Force load the filesystems early on
- Only add the contract resource if needed.
- Only run the guarded tests if S3Guard is on according to the build.

I had a predecessor which always used the Local store, but it was hard to set up (you need to share across FS instances), and you could never guarantee that it worked the same way with DDB. That patching is still there; it's just not needed/used for the DDB test runs.

Change-Id: I79644ac264f74005775ff194d48f08fe951df0f1
[GitHub] [hadoop] hadoop-yetus commented on issue #1586: HDDS-2240. Command line tool for OM HA.
hadoop-yetus commented on issue #1586: HDDS-2240. Command line tool for OM HA. URL: https://github.com/apache/hadoop/pull/1586#issuecomment-538100410

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 427 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| 0 | shelldocs | 0 | Shelldocs was not available. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 26 | Maven dependency ordering for branch |
| -1 | mvninstall | 39 | hadoop-hdds in trunk failed. |
| -1 | mvninstall | 36 | hadoop-ozone in trunk failed. |
| -1 | compile | 21 | hadoop-hdds in trunk failed. |
| -1 | compile | 14 | hadoop-ozone in trunk failed. |
| +1 | checkstyle | 59 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 897 | branch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
| -1 | javadoc | 19 | hadoop-ozone in trunk failed. |
| 0 | spotbugs | 996 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 34 | hadoop-hdds in trunk failed. |
| -1 | findbugs | 19 | hadoop-ozone in trunk failed. |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 17 | Maven dependency ordering for patch |
| -1 | mvninstall | 35 | hadoop-hdds in the patch failed. |
| -1 | mvninstall | 37 | hadoop-ozone in the patch failed. |
| -1 | compile | 25 | hadoop-hdds in the patch failed. |
| -1 | compile | 18 | hadoop-ozone in the patch failed. |
| -1 | cc | 25 | hadoop-hdds in the patch failed. |
| -1 | cc | 18 | hadoop-ozone in the patch failed. |
| -1 | javac | 25 | hadoop-hdds in the patch failed. |
| -1 | javac | 18 | hadoop-ozone in the patch failed. |
| -0 | checkstyle | 29 | hadoop-ozone: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 | mvnsite | 0 | the patch passed |
| +1 | shellcheck | 32 | There were no new shellcheck issues. |
| +1 | whitespace | 1 | The patch has no whitespace issues. |
| +1 | shadedclient | 824 | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 21 | hadoop-hdds in the patch failed. |
| -1 | javadoc | 18 | hadoop-ozone in the patch failed. |
| -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
| -1 | findbugs | 19 | hadoop-ozone in the patch failed. |
||| _ Other Tests _ |
| -1 | unit | 27 | hadoop-hdds in the patch failed. |
| -1 | unit | 25 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
| | | 3016 | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.0 Server=19.03.0 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1586/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1586 |
| Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs compile javac javadoc mvninstall shadedclient findbugs checkstyle cc |
| uname | Linux ca7bea350dc9 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / a3fe404 |
| Default Java | 1.8.0_222 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1586/1/artifact/out/branch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1586/1/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1586/1/artifact/out/branch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1586/1/artifact/out/branch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1586/1/artifact/out/branch-javadoc-hadoop-hdds.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1586/1/artifact/out/branch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1586/1/artifact/out/branch-findbugs-hadoop-hdds.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1586/1/artifact/out/branch-findbugs-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1586/1/artifact/out/patch-mvninstall-hadoop-hdds.txt |
| mvninstall |
[GitHub] [hadoop] adoroszlai commented on issue #1553: HDDS-2211. Collect docker logs if env fails to start
adoroszlai commented on issue #1553: HDDS-2211. Collect docker logs if env fails to start URL: https://github.com/apache/hadoop/pull/1553#issuecomment-538099575 Thanks @dineshchitlangia and @arp7 for the reviews.
[GitHub] [hadoop] arp7 commented on issue #1584: HDDS-2237. KeyDeletingService throws NPE if it's started too early
arp7 commented on issue #1584: HDDS-2237. KeyDeletingService throws NPE if it's started too early URL: https://github.com/apache/hadoop/pull/1584#issuecomment-538099357 The UT failure looks related.
[GitHub] [hadoop] arp7 merged pull request #1553: HDDS-2211. Collect docker logs if env fails to start
arp7 merged pull request #1553: HDDS-2211. Collect docker logs if env fails to start URL: https://github.com/apache/hadoop/pull/1553
[GitHub] [hadoop] adoroszlai commented on a change in pull request #1583: HDDS-2071. Support filters in ozone insight point
adoroszlai commented on a change in pull request #1583: HDDS-2071. Support filters in ozone insight point URL: https://github.com/apache/hadoop/pull/1583#discussion_r331205328

## File path: hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/LogSubcommand.java
## @@ -59,6 +61,10 @@
       + "show more information / detailed message")
   private boolean verbose;

+  @CommandLine.Option(names = "-f", description = "Enable verbose mode to "
+      + "show more information / detailed message")

Review comment: Description should be updated (is copied from `-v`).
[GitHub] [hadoop] adoroszlai commented on a change in pull request #1583: HDDS-2071. Support filters in ozone insight point
adoroszlai commented on a change in pull request #1583: HDDS-2071. Support filters in ozone insight point URL: https://github.com/apache/hadoop/pull/1583#discussion_r331208222

## File path: hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/BaseInsightPoint.java
## @@ -185,4 +186,18 @@ public void addRpcMetrics(List metrics,
     metrics.add(performance);
   }

+  @Override
+  public boolean filterLog(Map filters, String logLine) {
+    if (filters == null) {
+      return true;
+    }
+    boolean result = true;
+    for (Entry entry : filters.entrySet()) {
+      if (!logLine.matches(
+          String.format(".*\\[%s=%s\\].*", entry.getKey(), entry.getValue()))) {
+        result = result & false;

Review comment: Can be simplified to `return false;`.
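The suggested simplification can be shown in a standalone sketch (the `LogFilter` class name and the `Map<String, String>` signature are assumptions for illustration; the real method lives in `BaseInsightPoint`): instead of accumulating `result = result & false`, the loop can bail out on the first non-matching filter.

```java
import java.util.Map;

// Standalone sketch of the simplified filter logic from the review.
public class LogFilter {

    // Returns true iff the log line carries a "[key=value]" tag for every
    // entry in `filters`; a null filter map matches everything.
    static boolean filterLog(Map<String, String> filters, String logLine) {
        if (filters == null) {
            return true;
        }
        for (Map.Entry<String, String> entry : filters.entrySet()) {
            if (!logLine.matches(String.format(
                    ".*\\[%s=%s\\].*", entry.getKey(), entry.getValue()))) {
                return false; // early exit replaces `result = result & false`
            }
        }
        return true;
    }

    public static void main(String[] args) {
        String line = "[level=DEBUG] [component=om] Key committed";
        System.out.println(filterLog(Map.of("level", "DEBUG"), line)); // true
        System.out.println(filterLog(Map.of("level", "INFO"), line));  // false
        System.out.println(filterLog(null, line));                     // true
    }
}
```

The early return is both shorter and cheaper: the remaining filters are never evaluated once one fails.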
[GitHub] [hadoop] adoroszlai commented on a change in pull request #1583: HDDS-2071. Support filters in ozone insight point
adoroszlai commented on a change in pull request #1583: HDDS-2071. Support filters in ozone insight point URL: https://github.com/apache/hadoop/pull/1583#discussion_r331207138

## File path: hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/LogSubcommand.java
## @@ -86,12 +93,20 @@ public Void call() throws Exception {
     return null;
   }

+  /**
+   * Stream log from multiple endpoints.
+   *
+   * @param conf Configuration (to find the log endpoints)
+   * @param sources Components to connect to (like scm, om...)
+   * @param relatedLoggers loggers to display
+   * @param filter any additional filter
+   */
   private void streamLog(OzoneConfiguration conf, Set sources,
-      List relatedLoggers) {
+      List relatedLoggers, Function filter) {

Review comment: I'd prefer a `Predicate` for simplicity (and null-safety).
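The `Predicate` suggestion can be illustrated with a self-contained sketch (the class and method names below are illustrative, not the actual `LogSubcommand` code): `Predicate<String>` yields a primitive `boolean`, so there is no unboxing NPE risk, and additional filters compose via `and()`/`or()`/`negate()`.

```java
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class PredicateVsFunction {

    // Function-based filter: the Boolean result must be unboxed, so a
    // null return blows up with a NullPointerException at the call site.
    static List<String> streamLogWithFunction(List<String> lines,
            Function<String, Boolean> filter) {
        return lines.stream()
            .filter(l -> filter.apply(l)) // NPE here if filter returns null
            .collect(Collectors.toList());
    }

    // Predicate-based filter: primitive boolean, plugs straight into
    // Stream.filter, and composes with and()/or()/negate().
    static List<String> streamLogWithPredicate(List<String> lines,
            Predicate<String> filter) {
        return lines.stream().filter(filter).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> lines = List.of("[level=DEBUG] a", "[level=INFO] b");
        Predicate<String> debugOnly = l -> l.contains("[level=DEBUG]");
        System.out.println(streamLogWithPredicate(lines, debugOnly));
        // composing an extra filter is a single method call:
        System.out.println(streamLogWithPredicate(lines, debugOnly.negate()));
    }
}
```

A `Predicate` parameter also makes a safe default trivial: a null argument can be replaced with `l -> true` before use.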
[GitHub] [hadoop] hadoop-yetus commented on issue #1551: HDDS-2199 In SCMNodeManager dnsToUuidMap cannot track multiple DNs on the same host
hadoop-yetus commented on issue #1551: HDDS-2199 In SCMNodeManager dnsToUuidMap cannot track multiple DNs on the same host URL: https://github.com/apache/hadoop/pull/1551#issuecomment-538097211

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 40 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. |
||| _ trunk Compile Tests _ |
| -1 | mvninstall | 31 | hadoop-hdds in trunk failed. |
| -1 | mvninstall | 34 | hadoop-ozone in trunk failed. |
| -1 | compile | 22 | hadoop-hdds in trunk failed. |
| -1 | compile | 16 | hadoop-ozone in trunk failed. |
| +1 | checkstyle | 49 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 854 | branch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
| -1 | javadoc | 21 | hadoop-ozone in trunk failed. |
| 0 | spotbugs | 957 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 32 | hadoop-hdds in trunk failed. |
| -1 | findbugs | 22 | hadoop-ozone in trunk failed. |
||| _ Patch Compile Tests _ |
| -1 | mvninstall | 34 | hadoop-hdds in the patch failed. |
| -1 | mvninstall | 37 | hadoop-ozone in the patch failed. |
| -1 | compile | 26 | hadoop-hdds in the patch failed. |
| -1 | compile | 19 | hadoop-ozone in the patch failed. |
| -1 | javac | 26 | hadoop-hdds in the patch failed. |
| -1 | javac | 19 | hadoop-ozone in the patch failed. |
| +1 | checkstyle | 57 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 724 | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 23 | hadoop-hdds in the patch failed. |
| -1 | javadoc | 21 | hadoop-ozone in the patch failed. |
| -1 | findbugs | 33 | hadoop-hdds in the patch failed. |
| -1 | findbugs | 20 | hadoop-ozone in the patch failed. |
||| _ Other Tests _ |
| -1 | unit | 29 | hadoop-hdds in the patch failed. |
| -1 | unit | 26 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
| | | 2378 | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1551/11/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1551 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 21e174faee70 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / a3fe404 |
| Default Java | 1.8.0_222 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1551/11/artifact/out/branch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1551/11/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1551/11/artifact/out/branch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1551/11/artifact/out/branch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1551/11/artifact/out/branch-javadoc-hadoop-hdds.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1551/11/artifact/out/branch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1551/11/artifact/out/branch-findbugs-hadoop-hdds.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1551/11/artifact/out/branch-findbugs-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1551/11/artifact/out/patch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1551/11/artifact/out/patch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1551/11/artifact/out/patch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1551/11/artifact/out/patch-compile-hadoop-ozone.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1551/11/artifact/out/patch-compile-hadoop-hdds.txt |
| javac |
[GitHub] [hadoop] hadoop-yetus commented on issue #1585: HDDS-2230. Invalid entries in ozonesecure-mr config
hadoop-yetus commented on issue #1585: HDDS-2230. Invalid entries in ozonesecure-mr config URL: https://github.com/apache/hadoop/pull/1585#issuecomment-538097009

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 601 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| 0 | yamllint | 0 | yamllint was not available. |
| 0 | shelldocs | 1 | Shelldocs was not available. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| -1 | mvninstall | 65 | hadoop-hdds in trunk failed. |
| -1 | mvninstall | 50 | hadoop-ozone in trunk failed. |
| -1 | compile | 23 | hadoop-hdds in trunk failed. |
| -1 | compile | 19 | hadoop-ozone in trunk failed. |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 960 | branch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 30 | hadoop-hdds in trunk failed. |
| -1 | javadoc | 23 | hadoop-ozone in trunk failed. |
||| _ Patch Compile Tests _ |
| -1 | mvninstall | 43 | hadoop-hdds in the patch failed. |
| -1 | mvninstall | 42 | hadoop-ozone in the patch failed. |
| -1 | compile | 32 | hadoop-hdds in the patch failed. |
| -1 | compile | 22 | hadoop-ozone in the patch failed. |
| -1 | javac | 32 | hadoop-hdds in the patch failed. |
| -1 | javac | 22 | hadoop-ozone in the patch failed. |
| +1 | mvnsite | 0 | the patch passed |
| +1 | shellcheck | 0 | There were no new shellcheck issues. |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 844 | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 25 | hadoop-hdds in the patch failed. |
| -1 | javadoc | 21 | hadoop-ozone in the patch failed. |
||| _ Other Tests _ |
| -1 | unit | 32 | hadoop-hdds in the patch failed. |
| -1 | unit | 29 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
| | | 3090 | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1585/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1585 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient yamllint shellcheck shelldocs |
| uname | Linux acf49af14521 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / a3fe404 |
| Default Java | 1.8.0_222 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1585/1/artifact/out/branch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1585/1/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1585/1/artifact/out/branch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1585/1/artifact/out/branch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1585/1/artifact/out/branch-javadoc-hadoop-hdds.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1585/1/artifact/out/branch-javadoc-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1585/1/artifact/out/patch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1585/1/artifact/out/patch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1585/1/artifact/out/patch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1585/1/artifact/out/patch-compile-hadoop-ozone.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1585/1/artifact/out/patch-compile-hadoop-hdds.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1585/1/artifact/out/patch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1585/1/artifact/out/patch-javadoc-hadoop-hdds.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1585/1/artifact/out/patch-javadoc-hadoop-ozone.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1585/1/artifact/out/patch-unit-hadoop-hdds.txt |
| unit |
[GitHub] [hadoop] ashvina commented on a change in pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage.
ashvina commented on a change in pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage. URL: https://github.com/apache/hadoop/pull/1573#discussion_r331204461

## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfo.java
## @@ -64,6 +67,28 @@ public void testAddStorage() throws Exception {
     Assert.assertEquals(storage, blockInfo.getStorageInfo(0));
   }

+  @Test
+  public void testAddProvidedStorage() throws Exception {
+    BlockInfo blockInfo = new BlockInfoContiguous((short) 3);
+
+    DatanodeStorageInfo storage = mock(DatanodeStorageInfo.class);
+    when(storage.getStorageType()).thenReturn(StorageType.PROVIDED);
+    boolean added = blockInfo.addStorage(storage, blockInfo);
+
+    Assert.assertTrue(added);
+    Assert.assertEquals(storage, blockInfo.getStorageInfo(0));
+    Assert.assertTrue(blockInfo.isProvided());
+

Review comment: Done. Please take a look.