[jira] [Commented] (HADOOP-14671) Upgrade to Apache Yetus 0.8.0
[ https://issues.apache.org/jira/browse/HADOOP-14671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927291#comment-16927291 ] Zhankun Tang commented on HADOOP-14671: --- [~aw], [~aajisaka], one question on this. I saw that this commit generated the missing release notes and changelog files for some releases, but for releases like 2.8.0, 3.1.3, and 3.2.0 these files were committed manually. Which way is preferred? Neither approach is mentioned in the howToRelease guide. > Upgrade to Apache Yetus 0.8.0 > - > > Key: HADOOP-14671 > URL: https://issues.apache.org/jira/browse/HADOOP-14671 > Project: Hadoop Common > Issue Type: Improvement > Components: build, documentation, test >Affects Versions: 3.0.0-beta1 >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer >Priority: Major > Fix For: 3.2.0 > > Attachments: HADOOP-14671.001.patch, HADOOP-14671.02.patch > > > Apache Yetus 0.7.0 was released. Let's upgrade the bundled reference to the > new version. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids
hadoop-yetus commented on issue #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids URL: https://github.com/apache/hadoop/pull/1360#issuecomment-530241185 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 94 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | 0 | shelldocs | 0 | Shelldocs was not available. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 5 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 40 | Maven dependency ordering for branch | | +1 | mvninstall | 655 | trunk passed | | +1 | compile | 415 | trunk passed | | +1 | checkstyle | 85 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 876 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 195 | trunk passed | | 0 | spotbugs | 508 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 767 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 29 | Maven dependency ordering for patch | | +1 | mvninstall | 611 | the patch passed | | +1 | compile | 410 | the patch passed | | +1 | javac | 410 | the patch passed | | +1 | checkstyle | 88 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | shellcheck | 0 | There were no new shellcheck issues. | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 771 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 205 | the patch passed | | +1 | findbugs | 756 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 351 | hadoop-hdds in the patch passed. | | -1 | unit | 2532 | hadoop-ozone in the patch failed. | | +1 | asflicense | 72 | The patch does not generate ASF License warnings. 
| | | | 9230 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion | | | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures | | | hadoop.ozone.om.TestOzoneManagerHA | | | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient | | | hadoop.ozone.scm.TestContainerSmallFile | | | hadoop.ozone.om.snapshot.TestOzoneManagerSnapshotProvider | | | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures | | | hadoop.ozone.client.rpc.TestBlockOutputStream | | | hadoop.ozone.om.TestOMRatisSnapshots | | | hadoop.ozone.TestSecureOzoneCluster | | | hadoop.ozone.scm.TestSCMContainerPlacementPolicyMetrics | | | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption | | | hadoop.ozone.container.TestContainerReplication | | | hadoop.ozone.client.rpc.TestContainerStateMachineFailures | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1360 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle shellcheck shelldocs | | uname | Linux 36ee253cfa4b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / dacc448 | | Default Java | 1.8.0_212 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/3/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/3/testReport/ | | Max. process+thread count | 5296 (vs. 
ulimit of 5500) | | modules | C: hadoop-ozone/common hadoop-ozone/client hadoop-ozone/ozone-manager hadoop-ozone/dist hadoop-ozone/integration-test hadoop-ozone/ozone-recon hadoop-ozone/ozonefs U: hadoop-ozone | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/3/console | | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hadoop] hadoop-yetus commented on issue #1422: HDFS-14839: Use Java Concurrent BlockingQueue instead of Internal Blo…
hadoop-yetus commented on issue #1422: HDFS-14839: Use Java Concurrent BlockingQueue instead of Internal Blo… URL: https://github.com/apache/hadoop/pull/1422#issuecomment-530239012 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 42 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1032 | trunk passed | | +1 | compile | 65 | trunk passed | | +1 | checkstyle | 46 | trunk passed | | +1 | mvnsite | 71 | trunk passed | | +1 | shadedclient | 792 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 58 | trunk passed | | 0 | spotbugs | 162 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 161 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 62 | the patch passed | | +1 | compile | 59 | the patch passed | | +1 | javac | 59 | the patch passed | | +1 | checkstyle | 40 | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 25 unchanged - 6 fixed = 25 total (was 31) | | +1 | mvnsite | 63 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 729 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 53 | the patch passed | | -1 | findbugs | 164 | hadoop-hdfs-project/hadoop-hdfs generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) | ||| _ Other Tests _ | | -1 | unit | 5289 | hadoop-hdfs in the patch failed. | | +1 | asflicense | 41 | The patch does not generate ASF License warnings. 
| | | | 8837 | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs | | | Exceptional return value of java.util.concurrent.BlockingQueue.offer(Object) ignored in org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.addBlockToBeErasureCoded(ExtendedBlock, DatanodeDescriptor[], DatanodeStorageInfo[], byte[], ErasureCodingPolicy) At DatanodeDescriptor.java:ignored in org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.addBlockToBeErasureCoded(ExtendedBlock, DatanodeDescriptor[], DatanodeStorageInfo[], byte[], ErasureCodingPolicy) At DatanodeDescriptor.java:[line 624] | | | Exceptional return value of java.util.concurrent.BlockingQueue.offer(Object) ignored in org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.addBlockToBeRecovered(BlockInfo) At DatanodeDescriptor.java:ignored in org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.addBlockToBeRecovered(BlockInfo) At DatanodeDescriptor.java:[line 638] | | | Exceptional return value of java.util.concurrent.BlockingQueue.offer(Object) ignored in org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.addBlockToBeReplicated(Block, DatanodeStorageInfo[]) At DatanodeDescriptor.java:ignored in org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.addBlockToBeReplicated(Block, DatanodeStorageInfo[]) At DatanodeDescriptor.java:[line 612] | | Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController | | | hadoop.hdfs.TestSafeMode | | | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap | | | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1422/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1422 | | JIRA Issue | HDFS-14839 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | 
| uname | Linux 846f1177826e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / dacc448 | | Default Java | 1.8.0_222 | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1422/1/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1422/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1422/1/testReport/ | | Max. process+thread count | 4400 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/ha
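[Editor's note] The three FindBugs warnings above all flag the same pattern: the boolean result of `BlockingQueue.offer(Object)` is silently dropped. A minimal, hypothetical sketch (the class and method names below are illustrative, not the actual `DatanodeDescriptor` code) of how checking the result addresses the warning and surfaces a full bounded queue:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ReplicationQueue {
    // Hypothetical stand-in for the pending-work queues flagged by FindBugs.
    private final BlockingQueue<String> blocksToReplicate = new LinkedBlockingQueue<>();

    // Checking offer()'s boolean result resolves the
    // RV_RETURN_VALUE_IGNORED_BAD_PRACTICE warning and makes a rejected
    // insertion visible instead of silently losing the element.
    public boolean addBlockToBeReplicated(String block) {
        boolean accepted = blocksToReplicate.offer(block);
        if (!accepted) {
            // For a bounded queue this is where a caller would log or retry.
            System.err.println("Replication queue full, dropping " + block);
        }
        return accepted;
    }
}
```

For an unbounded `LinkedBlockingQueue` the offer always succeeds, so the check costs nothing at runtime; it only matters once a capacity bound is introduced.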
[jira] [Issue Comment Deleted] (HADOOP-16542) Update commons-beanutils version to 1.9.4
[ https://issues.apache.org/jira/browse/HADOOP-16542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] YiSheng Lien updated HADOOP-16542: -- Comment: was deleted (was: 👍) > Update commons-beanutils version to 1.9.4 > - > > Key: HADOOP-16542 > URL: https://issues.apache.org/jira/browse/HADOOP-16542 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.3.0 >Reporter: Wei-Chiu Chuang >Assignee: kevin su >Priority: Major > Labels: release-blocker > Fix For: 3.3.0 > > Attachments: HADOOP-16542.001.patch, HADOOP-16542.002.patch, > HADOOP-16542.003.patch > > > [http://mail-archives.apache.org/mod_mbox/www-announce/201908.mbox/%3cc628798f-315d-4428-8cb1-4ed1ecc95...@apache.org%3e] > {quote} > CVE-2019-10086. Apache Commons Beanutils does not suppresses the class > property in PropertyUtilsBean > by default. > Severity: Medium > Vendor: The Apache Software Foundation > Versions Affected: commons-beanutils-1.9.3 and earlier > Description: A special BeanIntrospector class was added in version 1.9.2. > This can be used to stop attackers from using the class property of > Java objects to get access to the classloader. > However this protection was not enabled by default. > PropertyUtilsBean (and consequently BeanUtilsBean) now disallows class > level property access by default, thus protecting against > CVE-2014-0114. > Mitigation: 1.X users should migrate to 1.9.4. > {quote}
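[Editor's note] For context on the CVE quoted above: standard JavaBean introspection exposes a "class" property on every bean (backed by `Object.getClass()`), and that property is the hook CVE-2019-10086 closes off, since from the `Class` an attacker can walk to the `ClassLoader`. The JDK-only sketch below (not Commons BeanUtils code) demonstrates that the property is present on any bean; BeanUtils 1.9.4 simply suppresses it by default, via what the announcement calls a special `BeanIntrospector`.

```java
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class ClassPropertyDemo {
    // Returns true if JavaBean introspection reports a "class" property
    // on the given bean -- it does for every Java object, which is why
    // BeanUtils must actively suppress it to block the attack path.
    public static boolean exposesClassProperty(Object bean) throws Exception {
        BeanInfo info = Introspector.getBeanInfo(bean.getClass());
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            if ("class".equals(pd.getName())) {
                return true; // reading it yields a Class, then getClassLoader()
            }
        }
        return false;
    }
}
```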
[jira] [Commented] (HADOOP-16543) Cached DNS name resolution error
[ https://issues.apache.org/jira/browse/HADOOP-16543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927272#comment-16927272 ] Hadoop QA commented on HADOOP-16543: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 43s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 1s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 44s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 19s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 1m 2s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 34s{color} | {color:green} trunk passed {color} | | {color:orange}-0{color} | {color:orange} patch {color} | {color:orange} 1m 42s{color} | {color:orange} Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 34s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 29s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 263 new + 214 unchanged - 0 fixed = 477 total (was 214) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 9s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 58s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common generated 1 new + 4190 unchanged - 0 fixed = 4191 total (was 4190) {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 58s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 1s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 4s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 26m 52s{color} | {color:red} hadoop-yarn-client in the patch failed. {col
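[Editor's note] The underlying HADOOP-16543 problem is that a `java.net.InetSocketAddress` caches whatever IP it resolved at construction time, so a client holding a stale address keeps contacting the old IP after DNS changes. A hedged, standalone sketch of the usual remedy (illustrative only, not the patch itself): rebuild the address from its host string so the next use triggers a fresh DNS lookup.

```java
import java.net.InetSocketAddress;

public class AddressRefresher {
    // An InetSocketAddress resolves its host once, in the constructor, and
    // never again. Constructing a new instance from the host string forces a
    // fresh DNS lookup, letting a client recover when the server's IP changes.
    public static InetSocketAddress refresh(InetSocketAddress stale) {
        return new InetSocketAddress(stale.getHostString(), stale.getPort());
    }
}
```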
[GitHub] [hadoop] hadoop-yetus commented on issue #1399: HADOOP-16543: Cached DNS name resolution error
hadoop-yetus commented on issue #1399: HADOOP-16543: Cached DNS name resolution error URL: https://github.com/apache/hadoop/pull/1399#issuecomment-530231383 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 43 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 21 | Maven dependency ordering for branch | | +1 | mvninstall | 1025 | trunk passed | | +1 | compile | 492 | trunk passed | | +1 | checkstyle | 84 | trunk passed | | +1 | mvnsite | 153 | trunk passed | | +1 | shadedclient | 944 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 139 | trunk passed | | 0 | spotbugs | 62 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 274 | trunk passed | | -0 | patch | 102 | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | 0 | mvndep | 18 | Maven dependency ordering for patch | | +1 | mvninstall | 104 | the patch passed | | +1 | compile | 454 | the patch passed | | +1 | javac | 454 | the patch passed | | -0 | checkstyle | 89 | hadoop-yarn-project/hadoop-yarn: The patch generated 263 new + 214 unchanged - 0 fixed = 477 total (was 214) | | +1 | mvnsite | 141 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 1 | The patch has no ill-formed XML file. | | +1 | shadedclient | 729 | patch has no errors when building and testing our client artifacts. 
| | -1 | javadoc | 58 | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common generated 1 new + 4190 unchanged - 0 fixed = 4191 total (was 4190) | | -1 | findbugs | 118 | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | ||| _ Other Tests _ | | +1 | unit | 61 | hadoop-yarn-api in the patch passed. | | +1 | unit | 244 | hadoop-yarn-common in the patch passed. | | -1 | unit | 1612 | hadoop-yarn-client in the patch failed. | | +1 | asflicense | 51 | The patch does not generate ASF License warnings. | | | | 7127 | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common | | | Unchecked/unconfirmed cast from org.apache.hadoop.conf.Configuration to org.apache.hadoop.yarn.conf.YarnConfiguration in org.apache.hadoop.yarn.client.DefaultNoHARMFailoverProxyProvider.init(Configuration, RMProxy, Class) At DefaultNoHARMFailoverProxyProvider.java:org.apache.hadoop.yarn.conf.YarnConfiguration in org.apache.hadoop.yarn.client.DefaultNoHARMFailoverProxyProvider.init(Configuration, RMProxy, Class) At DefaultNoHARMFailoverProxyProvider.java:[line 56] | | Failed junit tests | hadoop.yarn.client.api.impl.TestAMRMClient | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1399 | | JIRA Issue | HADOOP-16543 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 325c577e79fc 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / dacc448 | | Default Java | 1.8.0_222 | | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/2/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/2/artifact/out/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/2/artifact/out/new-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.html | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/2/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/2/testReport/ | | Max. process+thread count | 586 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: hadoop-yarn-project/hadoop-yarn | | Console output | https://builds.apache.org/
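[Editor's note] The new FindBugs hit above ("Unchecked/unconfirmed cast from org.apache.hadoop.conf.Configuration to org.apache.hadoop.yarn.conf.YarnConfiguration") is commonly resolved by testing the runtime type and copy-constructing when the cast would fail. A self-contained sketch with stand-in classes (the real Hadoop types are not reproduced here):

```java
// Hypothetical Base/Derived pair standing in for Configuration and
// YarnConfiguration; in Hadoop, YarnConfiguration(Configuration) exists
// as a copy constructor.
class Configuration { }

class YarnConfiguration extends Configuration {
    YarnConfiguration() { }
    YarnConfiguration(Configuration other) { /* copy settings from other */ }
}

public class SafeCast {
    // Test the runtime type instead of casting blindly; wrap otherwise.
    public static YarnConfiguration asYarn(Configuration conf) {
        return (conf instanceof YarnConfiguration)
            ? (YarnConfiguration) conf
            : new YarnConfiguration(conf); // safe fallback when the cast would fail
    }
}
```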
[GitHub] [hadoop] hadoop-yetus commented on issue #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states
hadoop-yetus commented on issue #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states URL: https://github.com/apache/hadoop/pull/1344#issuecomment-530229574 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 92 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 15 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 24 | Maven dependency ordering for branch | | +1 | mvninstall | 649 | trunk passed | | +1 | compile | 405 | trunk passed | | +1 | checkstyle | 76 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 932 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 182 | trunk passed | | 0 | spotbugs | 499 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 750 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 32 | Maven dependency ordering for patch | | -1 | mvninstall | 299 | hadoop-ozone in the patch failed. | | -1 | compile | 247 | hadoop-ozone in the patch failed. | | -1 | cc | 247 | hadoop-ozone in the patch failed. | | -1 | javac | 247 | hadoop-ozone in the patch failed. | | -0 | checkstyle | 43 | hadoop-ozone: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 739 | patch has no errors when building and testing our client artifacts. | | -1 | javadoc | 75 | hadoop-hdds generated 20 new + 16 unchanged - 0 fixed = 36 total (was 16) | | -1 | findbugs | 411 | hadoop-ozone in the patch failed. | ||| _ Other Tests _ | | +1 | unit | 339 | hadoop-hdds in the patch passed. | | -1 | unit | 466 | hadoop-ozone in the patch failed. 
| | +1 | asflicense | 40 | The patch does not generate ASF License warnings. | | | | 6574 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1344 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit javadoc mvninstall shadedclient findbugs checkstyle | | uname | Linux 2edae08c6f80 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / dacc448 | | Default Java | 1.8.0_212 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/4/artifact/out/patch-mvninstall-hadoop-ozone.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/4/artifact/out/patch-compile-hadoop-ozone.txt | | cc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/4/artifact/out/patch-compile-hadoop-ozone.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/4/artifact/out/patch-compile-hadoop-ozone.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/4/artifact/out/diff-checkstyle-hadoop-ozone.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/4/artifact/out/diff-javadoc-javadoc-hadoop-hdds.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/4/artifact/out/patch-findbugs-hadoop-ozone.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/4/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/4/testReport/ | | Max. process+thread count | 1247 (vs. ulimit of 5500) | | modules | C: hadoop-hdds/common hadoop-hdds/server-scm hadoop-hdds/tools hadoop-ozone/integration-test U: . 
| | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/4/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
[jira] [Assigned] (HADOOP-11842) Common side change for follow-on work for erasure coding phase I
[ https://issues.apache.org/jira/browse/HADOOP-11842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka reassigned HADOOP-11842: -- Assignee: (was: Akira Ajisaka) > Common side change for follow-on work for erasure coding phase I > > > Key: HADOOP-11842 > URL: https://issues.apache.org/jira/browse/HADOOP-11842 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Tsz Wo Nicholas Sze >Priority: Major > > This is the common side work for HDFS-8031.
[GitHub] [hadoop] aajisaka commented on issue #474: fix some alerts raised by LGTM
aajisaka commented on issue #474: fix some alerts raised by LGTM URL: https://github.com/apache/hadoop/pull/474#issuecomment-530226040 Hi @1m2c3t4 Would you create an issue in ASF JIRA before creating a pull request? Contributor guide: https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
[jira] [Commented] (HADOOP-16494) Add SHA-256 or SHA-512 checksum to release artifacts to comply with the release distribution policy
[ https://issues.apache.org/jira/browse/HADOOP-16494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927260#comment-16927260 ] Christopher Tubbs commented on HADOOP-16494: [~rohithsharma] While that isn't a new thing, it is probably still going to be a source of frustration, since it is very unlikely that whoever is verifying the file will be doing so in a path that begins with {{/build/source/target}}. It might therefore be a good idea to address this in a new issue. What do you think? > Add SHA-256 or SHA-512 checksum to release artifacts to comply with the > release distribution policy > --- > > Key: HADOOP-16494 > URL: https://issues.apache.org/jira/browse/HADOOP-16494 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Blocker > Fix For: 2.10.0, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3 > > > Originally reported by [~ctubbsii]: > https://lists.apache.org/thread.html/db2f5d5d8600c405293ebfb3bfc415e200e59f72605c5a920a461c09@%3Cgeneral.hadoop.apache.org%3E > bq. None of the artifacts seem to have valid detached checksum files that are > in compliance with https://www.apache.org/dev/release-distribution There > should be some ".shaXXX" files in there, and not just the (optional) ".mds" > files.
[jira] [Commented] (HADOOP-16494) Add SHA-256 or SHA-512 checksum to release artifacts to comply with the release distribution policy
[ https://issues.apache.org/jira/browse/HADOOP-16494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927241#comment-16927241 ] Rohith Sharma K S commented on HADOOP-16494: I just checked previous releases; the same folder path as above appears in their md5 files, so it looks like this is fine. > Add SHA-256 or SHA-512 checksum to release artifacts to comply with the > release distribution policy > --- > > Key: HADOOP-16494 > URL: https://issues.apache.org/jira/browse/HADOOP-16494 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Blocker > Fix For: 2.10.0, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3 > > > Originally reported by [~ctubbsii]: > https://lists.apache.org/thread.html/db2f5d5d8600c405293ebfb3bfc415e200e59f72605c5a920a461c09@%3Cgeneral.hadoop.apache.org%3E > bq. None of the artifacts seem to have valid detached checksum files that are > in compliance with https://www.apache.org/dev/release-distribution There > should be some ".shaXXX" files in there, and not just the (optional) ".mds" > files.
[jira] [Commented] (HADOOP-16494) Add SHA-256 or SHA-512 checksum to release artifacts to comply with the release distribution policy
[ https://issues.apache.org/jira/browse/HADOOP-16494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927236#comment-16927236 ] Rohith Sharma K S commented on HADOOP-16494: [~aajisaka] I created release artifacts for 3.2.1 and am trying to verify the sha512 checksums. I see the error below: {noformat} rohithsharmaks@ip-172-31-38-26:~/branch-3.2.1/target/artifacts$ sha512sum -c CHANGELOG.md.sha512 sha512sum: /build/source/target/artifacts/CHANGELOG.md: No such file or directory /build/source/target/artifacts/CHANGELOG.md: FAILED open or read sha512sum: WARNING: 1 listed file could not be read {noformat} When the sha512 file is created, the entire folder path is taken into account, because *"${i}"* is the full folder path. {noformat} for i in ${ARTIFACTS_DIR}/*; do ${GPG} --use-agent --armor --output "${i}.asc" --detach-sig "${i}" sha512sum --tag "${i}" > "${i}.sha512" done {noformat} Is this expected, or does this need a fix to use only the file name? How did the earlier md5 generation handle this? > Add SHA-256 or SHA-512 checksum to release artifacts to comply with the > release distribution policy > --- > > Key: HADOOP-16494 > URL: https://issues.apache.org/jira/browse/HADOOP-16494 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Blocker > Fix For: 2.10.0, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3 > > > Originally reported by [~ctubbsii]: > https://lists.apache.org/thread.html/db2f5d5d8600c405293ebfb3bfc415e200e59f72605c5a920a461c09@%3Cgeneral.hadoop.apache.org%3E > bq. None of the artifacts seem to have valid detached checksum files that are > in compliance with https://www.apache.org/dev/release-distribution There > should be some ".shaXXX" files in there, and not just the (optional) ".mds" > files.
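[Editor's note] The failure in the message above comes from `sha512sum --tag` embedding whatever path it is given into the checksum line, so `sha512sum -c` later looks for `/build/source/target/artifacts/CHANGELOG.md`. At the script level the usual remedy is to checksum the base name from inside the artifacts directory. The standalone Java sketch below (illustrative, not the create-release script) shows the same idea: emit a BSD-tag-style line keyed on the base file name only, so verification is path-independent.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ArtifactChecksum {
    // Build a BSD "tag" style line, like sha512sum --tag, but keyed on the
    // base file name so `sha512sum -c` works from any directory.
    public static String sha512Line(Path file)
            throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-512");
        byte[] digest = md.digest(Files.readAllBytes(file));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        // getFileName() drops the directory prefix that caused the failure above
        return "SHA512 (" + file.getFileName() + ") = " + hex;
    }
}
```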
[jira] [Updated] (HADOOP-16514) Current/stable documentation should link to Hadoop 3
[ https://issues.apache.org/jira/browse/HADOOP-16514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-16514: --- Fix Version/s: asf-site > Current/stable documentation should link to Hadoop 3 > > > Key: HADOOP-16514 > URL: https://issues.apache.org/jira/browse/HADOOP-16514 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation, site >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Fix For: asf-site > > > Now both http://hadoop.apache.org/docs/current/ and > http://hadoop.apache.org/docs/stable/ link to Hadoop 2.9.2. Can we move these > to Hadoop 3? -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Reopened] (HADOOP-16514) Current/stable documentation should link to Hadoop 3
[ https://issues.apache.org/jira/browse/HADOOP-16514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka reopened HADOOP-16514: > Current/stable documentation should link to Hadoop 3 > > > Key: HADOOP-16514 > URL: https://issues.apache.org/jira/browse/HADOOP-16514 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation, site >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Fix For: asf-site > > > Now both http://hadoop.apache.org/docs/current/ and > http://hadoop.apache.org/docs/stable/ link to Hadoop 2.9.2. Can we move these > to Hadoop 3? -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-16514) Current/stable documentation should link to Hadoop 3
[ https://issues.apache.org/jira/browse/HADOOP-16514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka resolved HADOOP-16514. Resolution: Fixed > Current/stable documentation should link to Hadoop 3 > > > Key: HADOOP-16514 > URL: https://issues.apache.org/jira/browse/HADOOP-16514 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation, site >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Fix For: asf-site > > > Now both http://hadoop.apache.org/docs/current/ and > http://hadoop.apache.org/docs/stable/ link to Hadoop 2.9.2. Can we move these > to Hadoop 3? -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16514) Current/stable documentation should link to Hadoop 3
[ https://issues.apache.org/jira/browse/HADOOP-16514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-16514: --- Resolution: Done Status: Resolved (was: Patch Available) Merged the PR. Closing. > Current/stable documentation should link to Hadoop 3 > > > Key: HADOOP-16514 > URL: https://issues.apache.org/jira/browse/HADOOP-16514 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation, site >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > > Now both http://hadoop.apache.org/docs/current/ and > http://hadoop.apache.org/docs/stable/ link to Hadoop 2.9.2. Can we move these > to Hadoop 3? -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #1419: HADOOP-15184. Add GitHub pull request template.
hadoop-yetus commented on issue #1419: HADOOP-15184. Add GitHub pull request template. URL: https://github.com/apache/hadoop/pull/1419#issuecomment-530211835

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 76 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 1171 | trunk passed |
| +1 | compile | 1035 | trunk passed |
| +1 | mvnsite | 932 | trunk passed |
| +1 | shadedclient | 3903 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 351 | trunk passed |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 1224 | the patch passed |
| +1 | compile | 995 | the patch passed |
| +1 | javac | 995 | the patch passed |
| +1 | mvnsite | 879 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | xml | 2 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 743 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 356 | the patch passed |
||| _ Other Tests _ |
| -1 | unit | 974 | root in the patch failed. |
| +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
| | | | 9704 | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.security.TestFixKerberosTicketOrder |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=18.09.7 Server=18.09.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1419/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1419 |
| Optional Tests | dupname asflicense mvnsite compile javac javadoc mvninstall unit shadedclient xml |
| uname | Linux 53090f7d33fa 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 524b553 |
| Default Java | 1.8.0_222 |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1419/2/artifact/out/patch-unit-root.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1419/2/testReport/ |
| Max. process+thread count | 1344 (vs. ulimit of 5500) |
| modules | C: . U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1419/2/console |
| versions | git=2.7.4 maven=3.3.9 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16371) Option to disable GCM for SSL connections when running on Java 8
[ https://issues.apache.org/jira/browse/HADOOP-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927227#comment-16927227 ] Hadoop QA commented on HADOOP-16371: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 27s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 1s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 42s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 11s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 1m 4s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 34s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 0s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 34s{color} | {color:orange} root: The patch generated 1 new + 16 unchanged - 0 fixed = 17 total (was 16) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 33s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 34s{color} | {color:red} hadoop-tools_hadoop-aws generated 4 new + 1 unchanged - 0 fixed = 5 total (was 1) {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 10s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 15s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 34s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 23s{color} | {color:green} hadoop-azure in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 46s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}134m 11s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/j
[GitHub] [hadoop] hadoop-yetus commented on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
hadoop-yetus commented on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8 URL: https://github.com/apache/hadoop/pull/970#issuecomment-530208855 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 87 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 26 | Maven dependency ordering for branch | | +1 | mvninstall | 1263 | trunk passed | | +1 | compile | 1211 | trunk passed | | +1 | checkstyle | 155 | trunk passed | | +1 | mvnsite | 172 | trunk passed | | +1 | shadedclient | 1122 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 131 | trunk passed | | 0 | spotbugs | 64 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 274 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 27 | Maven dependency ordering for patch | | +1 | mvninstall | 126 | the patch passed | | +1 | compile | 1140 | the patch passed | | +1 | javac | 1140 | the patch passed | | -0 | checkstyle | 154 | root: The patch generated 1 new + 16 unchanged - 0 fixed = 17 total (was 16) | | +1 | mvnsite | 169 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 3 | The patch has no ill-formed XML file. | | +1 | shadedclient | 753 | patch has no errors when building and testing our client artifacts. | | -1 | javadoc | 34 | hadoop-tools_hadoop-aws generated 4 new + 1 unchanged - 0 fixed = 5 total (was 1) | | +1 | findbugs | 310 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 615 | hadoop-common in the patch passed. | | +1 | unit | 94 | hadoop-aws in the patch passed. | | +1 | unit | 83 | hadoop-azure in the patch passed. 
| | +1 | asflicense | 46 | The patch does not generate ASF License warnings. | | | | 8051 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-970/13/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/970 | | JIRA Issue | HADOOP-16371 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 6153829a62f6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 524b553 | | Default Java | 1.8.0_212 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/13/artifact/out/diff-checkstyle-root.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/13/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/13/testReport/ | | Max. process+thread count | 1348 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/13/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16371) Option to disable GCM for SSL connections when running on Java 8
[ https://issues.apache.org/jira/browse/HADOOP-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927226#comment-16927226 ] Hadoop QA commented on HADOOP-16371: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 40s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 1s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 5s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 38s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 9s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 1m 11s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 32s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 20s{color} | {color:red} hadoop-aws in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 18s{color} | {color:red} hadoop-azure in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 18s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 39s{color} | {color:orange} root: The patch generated 1 new + 16 unchanged - 0 fixed = 17 total (was 16) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 40s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 38s{color} | {color:red} hadoop-tools_hadoop-aws generated 4 new + 1 unchanged - 0 fixed = 5 total (was 1) {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 10s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 25s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 30s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 32s{color} | {color:green} hadoop-azure in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 46s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}135m 40s{color} | {color:bl
[GitHub] [hadoop] hadoop-yetus commented on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
hadoop-yetus commented on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8 URL: https://github.com/apache/hadoop/pull/970#issuecomment-530208728 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 100 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 65 | Maven dependency ordering for branch | | +1 | mvninstall | 1288 | trunk passed | | +1 | compile | 1210 | trunk passed | | +1 | checkstyle | 155 | trunk passed | | +1 | mvnsite | 170 | trunk passed | | +1 | shadedclient | 1118 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 129 | trunk passed | | 0 | spotbugs | 71 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 272 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 25 | Maven dependency ordering for patch | | -1 | mvninstall | 20 | hadoop-aws in the patch failed. | | -1 | mvninstall | 18 | hadoop-azure in the patch failed. | | +1 | compile | 1158 | the patch passed | | +1 | javac | 1158 | the patch passed | | -0 | checkstyle | 159 | root: The patch generated 1 new + 16 unchanged - 0 fixed = 17 total (was 16) | | +1 | mvnsite | 170 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 3 | The patch has no ill-formed XML file. | | +1 | shadedclient | 760 | patch has no errors when building and testing our client artifacts. | | -1 | javadoc | 38 | hadoop-tools_hadoop-aws generated 4 new + 1 unchanged - 0 fixed = 5 total (was 1) | | +1 | findbugs | 310 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 625 | hadoop-common in the patch passed. | | +1 | unit | 90 | hadoop-aws in the patch passed. 
| | +1 | unit | 92 | hadoop-azure in the patch passed. | | +1 | asflicense | 46 | The patch does not generate ASF License warnings. | | | | 8140 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-970/12/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/970 | | JIRA Issue | HADOOP-16371 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 8104ee6f99c6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 524b553 | | Default Java | 1.8.0_212 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/12/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/12/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/12/artifact/out/diff-checkstyle-root.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/12/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/12/testReport/ | | Max. process+thread count | 1348 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-970/12/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [hadoop] hadoop-yetus commented on issue #1424: HDDS-2107. Datanodes should retry forever to connect to SCM in an…
hadoop-yetus commented on issue #1424: HDDS-2107. Datanodes should retry forever to connect to SCM in an… URL: https://github.com/apache/hadoop/pull/1424#issuecomment-530208630 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 41 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 67 | Maven dependency ordering for branch | | +1 | mvninstall | 589 | trunk passed | | +1 | compile | 381 | trunk passed | | +1 | checkstyle | 83 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 868 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 178 | trunk passed | | 0 | spotbugs | 417 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 615 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 41 | Maven dependency ordering for patch | | +1 | mvninstall | 536 | the patch passed | | +1 | compile | 387 | the patch passed | | +1 | javac | 387 | the patch passed | | +1 | checkstyle | 90 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 678 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 175 | the patch passed | | +1 | findbugs | 631 | the patch passed | ||| _ Other Tests _ | | -1 | unit | 280 | hadoop-hdds in the patch failed. | | -1 | unit | 2824 | hadoop-ozone in the patch failed. | | +1 | asflicense | 55 | The patch does not generate ASF License warnings. 
| | | | 8715 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware | | | hadoop.ozone.container.TestContainerReplication | | | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient | | | hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion | | | hadoop.ozone.client.rpc.TestContainerStateMachineFailures | | | hadoop.ozone.client.rpc.Test2WayCommitInRatis | | | hadoop.ozone.TestSecureOzoneCluster | | | hadoop.ozone.scm.TestContainerSmallFile | | | hadoop.ozone.client.rpc.TestBlockOutputStream | | | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures | | | hadoop.ozone.om.TestOzoneManagerHA | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1424/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1424 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 2f98f8163e51 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / f8f8598 | | Default Java | 1.8.0_222 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1424/1/artifact/out/patch-unit-hadoop-hdds.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1424/1/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1424/1/testReport/ | | Max. process+thread count | 5408 (vs. ulimit of 5500) | | modules | C: hadoop-hdds/container-service hadoop-ozone/ozone-manager U: . 
| | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1424/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15184) Add GitHub pull request template
[ https://issues.apache.org/jira/browse/HADOOP-15184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927215#comment-16927215 ] Hudson commented on HADOOP-15184: - FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17273 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17273/]) HADOOP-15184. Add GitHub pull request template. (#1419) (github: rev dacc44821d1fff2971c4d61c97b3edb886d0553f) * (edit) pom.xml * (add) .github/pull_request_template.md > Add GitHub pull request template > > > Key: HADOOP-15184 > URL: https://issues.apache.org/jira/browse/HADOOP-15184 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Fix For: 3.3.0 > > Attachments: HADOOP-15184.001.patch, HADOOP-15184.002.patch > > > There are many GitHub pull requests which do not follow the contribution > guideline (e.g. creating a PR without filing a issue in ASF JIRA). I'd like > to add a GitHub pull request template to avoid such things. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16551) The changelog*.md seems not generated when create-release
[ https://issues.apache.org/jira/browse/HADOOP-16551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhankun Tang updated HADOOP-16551: -- Fix Version/s: 3.1.4 3.1.3 > The changelog*.md seems not generated when create-release > - > > Key: HADOOP-16551 > URL: https://issues.apache.org/jira/browse/HADOOP-16551 > Project: Hadoop Common > Issue Type: Task >Reporter: Zhankun Tang >Priority: Blocker > Fix For: 3.1.3, 3.1.4 > > > Hi, > When creating the Hadoop 3.1.3 release with the "create-release" script, the > mvn site step succeeded, but the script then complained and failed: > {code:java} > dev-support/bin/create-release --asfrelease --docker --dockercache{code} > {code:java} > $ cd /build/source > $ mv /build/source/target/hadoop-site-3.1.3.tar.gz > /build/source/target/artifacts/hadoop-3.1.3-site.tar.gz > $ cp -p > /build/source/hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3/CHANGES*.md > /build/source/target/artifacts/CHANGES.md > cp: cannot stat > '/build/source/hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3/CHANGES*.md': > No such file or directory > {code} > And there's no 3.1.3 release site markdown folder. > {code:java} > [ztang@release-vm hadoop]$ ls > hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3 > ls: cannot access > hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3: No such > file or directory > {code} > I've checked HADOOP-14671 but have no idea why this changelog is missing. 
> *Update:* > Found that CHANGELOG.md and RELEASENOTES.md are generated, but not in the > "3.1.3" directory: > {code:java} > [ztang@release-vm hadoop]$ ls > hadoop-common-project/hadoop-common/src/site/markdown/release/ > 0.1.0 0.15.2 0.19.2 0.23.2 0.7.2 2.0.1-alpha 2.6.3 3.0.0-alpha3 > 0.10.0 0.15.3 0.2.0 0.23.3 0.8.0 2.0.2-alpha 2.6.4 3.0.0-alpha4 > 0.10.1 0.15.4 0.20.0 0.23.4 0.9.0 2.0.3-alpha 2.6.5 3.0.0-beta1 > 0.1.1 0.16.0 0.20.1 0.23.5 0.9.1 2.0.4-alpha 2.6.6 3.0.1 > 0.11.0 0.16.1 0.20.2 0.23.6 0.9.2 2.0.5-alpha 2.7.0 3.0.3 > 0.11.1 0.16.2 0.20.203.0 0.23.7 1.0.0 2.0.6-alpha 2.7.1 3.1.0 > 0.11.2 0.16.3 0.20.203.1 0.23.8 1.0.1 2.1.0-beta 2.7.2 3.1.1 > 0.12.0 0.16.4 0.20.204.0 0.23.9 1.0.2 2.1.1-beta 2.7.3 3.1.2 > 0.12.1 0.17.0 0.20.205.0 0.24.0 1.0.3 2.2.0 2.7.4 CHANGELOG.md > 0.12.2 0.17.1 0.20.3 0.3.0 1.0.4 2.2.1 2.7.5 index.md > 0.12.3 0.17.2 0.2.1 0.3.1 1.1.0 2.3.0 2.8.0 README.md > 0.13.0 0.17.3 0.21.0 0.3.2 1.1.1 2.4.0 2.8.1 RELEASENOTES.md > 0.14.0 0.18.0 0.21.1 0.4.0 1.1.2 2.4.1 2.8.2 > 0.14.1 0.18.1 0.22.0 0.5.0 1.1.3 2.5.0 2.8.3 > 0.14.2 0.18.2 0.22.1 0.6.0 1.2.0 2.5.1 2.9.0 > 0.14.3 0.18.3 0.23.0 0.6.1 1.2.1 2.5.2 2.9.1 > 0.14.4 0.18.4 0.23.1 0.6.2 1.2.2 2.6.0 3.0.0 > 0.15.0 0.19.0 0.23.10 0.7.0 1.3.0 2.6.1 3.0.0-alpha1 > 0.15.1 0.19.1 0.23.11 0.7.1 2.0.0-alpha 2.6.2 3.0.0-alpha2{code}
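The failure mode reported above (the changelog generated directly under release/ instead of release/3.1.3/) can be pictured with a small fallback copy step. This is only a sketch under the assumption that picking the files up one level higher is acceptable; the directory layout mirrors the log output, and the fallback logic itself is illustrative, not the actual create-release code:

```shell
# Simulate the reported layout: CHANGELOG.md generated directly under
# release/ instead of release/3.1.3/ (sketch, not the real script).
work=$(mktemp -d)
rel="$work/src/site/markdown/release"
art="$work/artifacts"
mkdir -p "$rel" "$art"
printf 'changes\n' > "$rel/CHANGELOG.md"

ver=3.1.3
if [ -d "$rel/$ver" ]; then
  cp -p "$rel/$ver"/CHANGELOG*.md "$art/CHANGES.md"   # expected path
else
  cp -p "$rel"/CHANGELOG*.md "$art/CHANGES.md"        # fallback one level up
fi
```

With the 3.1.3 directory missing, the fallback branch still delivers a CHANGES.md into the artifacts directory.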
[jira] [Resolved] (HADOOP-16551) The changelog*.md seems not generated when create-release
[ https://issues.apache.org/jira/browse/HADOOP-16551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhankun Tang resolved HADOOP-16551. --- Resolution: Fixed Closing this since the revert of HADOOP-16061 fixed it.
[jira] [Commented] (HADOOP-16551) The changelog*.md seems not generated when create-release
[ https://issues.apache.org/jira/browse/HADOOP-16551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927211#comment-16927211 ] Zhankun Tang commented on HADOOP-16551: --- [~aajisaka], it works well. Thanks!
[GitHub] [hadoop] arthas-171 commented on issue #1021: huowang
arthas-171 commented on issue #1021: huowang URL: https://github.com/apache/hadoop/pull/1021#issuecomment-530196491 Hi @aajisaka, yes, I have created an issue: HDFS-14604 (https://issues.apache.org/jira/browse/HDFS-14604) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] HeartSaVioR closed pull request #1413: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, options) is called. [BR
HeartSaVioR closed pull request #1413: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, options) is called. [BRANCH-2] URL: https://github.com/apache/hadoop/pull/1413
[GitHub] [hadoop] HeartSaVioR commented on issue #1413: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, options) is called. [BRA
HeartSaVioR commented on issue #1413: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, options) is called. [BRANCH-2] URL: https://github.com/apache/hadoop/pull/1413#issuecomment-530195644 I just attached the patch file to the JIRA issue - looks like Yetus doesn't work for this case. Closing.
[jira] [Commented] (HADOOP-16255) ChecksumFS.Make FileSystem.rename(path, path, options) doesn't rename checksum
[ https://issues.apache.org/jira/browse/HADOOP-16255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927199#comment-16927199 ] Jungtaek Lim commented on HADOOP-16255: --- [~ste...@apache.org] Just attached the patch file for branch-2 as guided. Thanks! > ChecksumFS.Make FileSystem.rename(path, path, options) doesn't rename checksum > -- > > Key: HADOOP-16255 > URL: https://issues.apache.org/jira/browse/HADOOP-16255 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.8.5, 3.1.2 >Reporter: Steve Loughran >Assignee: Jungtaek Lim >Priority: Major > Fix For: 3.2.1 > > Attachments: HADOOP-16255-branch-2-001.patch > > > ChecksumFS doesn't override FilterFS rename/3, so doesn't rename the checksum > with the file. > As a result, if a file is renamed over an existing file using rename(src, > dest, OVERWRITE) the renamed file will be considered to have an invalid > checksum - the old one is picked up instead.
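The bug described in the issue is easy to picture at the file-system level: ChecksumFS stores each file's checksum in a hidden `.<name>.crc` sidecar, and a rename that moves only the data file leaves a stale sidecar behind. A minimal shell sketch of the invariant the patch restores (illustrative only; the actual fix is Java code in ChecksumFs.rename):

```shell
# Illustrate the invariant: a data file and its ".<name>.crc" sidecar
# must move together on rename, or a stale checksum is matched later.
work=$(mktemp -d)
printf 'data' > "$work/src.txt"
printf 'crc'  > "$work/.src.txt.crc"

mv "$work/src.txt" "$work/dst.txt"
# Without this second mv, dst.txt would be verified against the old
# .src.txt.crc contents -- the failure described in the issue.
if [ -f "$work/.src.txt.crc" ]; then
  mv "$work/.src.txt.crc" "$work/.dst.txt.crc"
fi
```

After both moves, the destination file and its sidecar agree again, which is exactly what the OVERWRITE rename path was missing.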
[jira] [Updated] (HADOOP-16255) ChecksumFS.Make FileSystem.rename(path, path, options) doesn't rename checksum
[ https://issues.apache.org/jira/browse/HADOOP-16255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim updated HADOOP-16255: -- Attachment: HADOOP-16255-branch-2-001.patch
[GitHub] [hadoop] aajisaka commented on issue #1021: huowang
aajisaka commented on issue #1021: huowang URL: https://github.com/apache/hadoop/pull/1021#issuecomment-530192026 Hi @arthas-171 Would you create an issue in ASF JIRA before submitting a pull request? This is a guide for contributors: https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
[GitHub] [hadoop] aajisaka closed pull request #363: Branch 2.9.1
aajisaka closed pull request #363: Branch 2.9.1 URL: https://github.com/apache/hadoop/pull/363
[GitHub] [hadoop] aajisaka closed pull request #372: add a pic
aajisaka closed pull request #372: add a pic URL: https://github.com/apache/hadoop/pull/372
[GitHub] [hadoop] aajisaka commented on issue #363: Branch 2.9.1
aajisaka commented on issue #363: Branch 2.9.1 URL: https://github.com/apache/hadoop/pull/363#issuecomment-530191479 Closing this as invalid.
[GitHub] [hadoop] aajisaka commented on issue #358: Create REad.me
aajisaka commented on issue #358: Create REad.me URL: https://github.com/apache/hadoop/pull/358#issuecomment-530191305 README.txt already exists. Closing this.
[GitHub] [hadoop] aajisaka closed pull request #358: Create REad.me
aajisaka closed pull request #358: Create REad.me URL: https://github.com/apache/hadoop/pull/358
[jira] [Updated] (HADOOP-15184) Add GitHub pull request template
[ https://issues.apache.org/jira/browse/HADOOP-15184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-15184: --- Fix Version/s: 3.3.0 Resolution: Fixed Status: Resolved (was: Patch Available) Merged [https://github.com/apache/hadoop/pull/1419] into trunk. Thanks.
[GitHub] [hadoop] aajisaka commented on issue #1419: HADOOP-15184. Add GitHub pull request template.
aajisaka commented on issue #1419: HADOOP-15184. Add GitHub pull request template. URL: https://github.com/apache/hadoop/pull/1419#issuecomment-530189438 @steveloughran @adamantal @aajisaka Thank you for your review!
[GitHub] [hadoop] aajisaka merged pull request #1419: HADOOP-15184. Add GitHub pull request template.
aajisaka merged pull request #1419: HADOOP-15184. Add GitHub pull request template. URL: https://github.com/apache/hadoop/pull/1419
[GitHub] [hadoop] aajisaka commented on issue #267: Branch 2.7.4
aajisaka commented on issue #267: Branch 2.7.4 URL: https://github.com/apache/hadoop/pull/267#issuecomment-530188478 Closing this as stale.
[GitHub] [hadoop] aajisaka closed pull request #267: Branch 2.7.4
aajisaka closed pull request #267: Branch 2.7.4 URL: https://github.com/apache/hadoop/pull/267
[jira] [Commented] (HADOOP-16543) Cached DNS name resolution error
[ https://issues.apache.org/jira/browse/HADOOP-16543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927182#comment-16927182 ] lqjacklee commented on HADOOP-16543: [~roliu] Should [HDDS-1933|https://issues.apache.org/jira/browse/HDDS-1933] see the same issue? > Cached DNS name resolution error > > > Key: HADOOP-16543 > URL: https://issues.apache.org/jira/browse/HADOOP-16543 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.1.2 >Reporter: Roger Liu >Priority: Major > > In Kubernetes, a node may go down and then come back later with a > different IP address. Yarn clients which are already running will be unable > to rediscover the node after it comes back up due to caching the original IP > address. This is problematic for cases such as Spark HA on Kubernetes, as the > node containing the resource manager may go down and come back up, meaning > existing node managers must then also be restarted.
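One JVM-level knob sometimes used against stale name resolutions is the positive DNS cache TTL. Whether it reaches the Hadoop RPC layer's own address caching described in this issue is an open question, so treat the following as a hypothetical mitigation, not the issue's resolution; the system property name is real, but applying it via YARN_OPTS is an assumption:

```shell
# Hypothetical mitigation: bound the JVM's positive DNS cache so a
# restarted node's new address is eventually re-resolved.
YARN_OPTS="${YARN_OPTS:-} -Dsun.net.inetaddr.ttl=30"
export YARN_OPTS
```

The equivalent security-property form is `networkaddress.cache.ttl` in the JVM's java.security file.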
[GitHub] [hadoop] tasanuma merged pull request #1414: HDFS-14835. RBF: Secured Router should not run when it can't initialize DelegationTokenSecretManager
tasanuma merged pull request #1414: HDFS-14835. RBF: Secured Router should not run when it can't initialize DelegationTokenSecretManager URL: https://github.com/apache/hadoop/pull/1414
[GitHub] [hadoop] tasanuma commented on issue #1414: HDFS-14835. RBF: Secured Router should not run when it can't initialize DelegationTokenSecretManager
tasanuma commented on issue #1414: HDFS-14835. RBF: Secured Router should not run when it can't initialize DelegationTokenSecretManager URL: https://github.com/apache/hadoop/pull/1414#issuecomment-530181643 @chittshota Thanks for your confirmation! I will merge it soon.
[GitHub] [hadoop] aajisaka commented on a change in pull request #1419: HADOOP-15184. Add GitHub pull request template.
aajisaka commented on a change in pull request #1419: HADOOP-15184. Add GitHub pull request template. URL: https://github.com/apache/hadoop/pull/1419#discussion_r323026797 ## File path: .github/pull_request_template.md ## @@ -0,0 +1,6 @@ +## NOTICE + +Please create a issue in ASF JIRA before opening a pull request, Review comment: Thanks! Fixed.
[GitHub] [hadoop] vivekratnavel commented on issue #1424: HDDS-2107. Datanodes should retry forever to connect to SCM in an…
vivekratnavel commented on issue #1424: HDDS-2107. Datanodes should retry forever to connect to SCM in an… URL: https://github.com/apache/hadoop/pull/1424#issuecomment-530181099 @xiaoyuyao @hanishakoneru @anuengineer @elek @bharatviswa504 Please review
[GitHub] [hadoop] vivekratnavel commented on issue #1424: HDDS-2107. Datanodes should retry forever to connect to SCM in an…
vivekratnavel commented on issue #1424: HDDS-2107. Datanodes should retry forever to connect to SCM in an… URL: https://github.com/apache/hadoop/pull/1424#issuecomment-530180967 /label ozone
[GitHub] [hadoop] vivekratnavel opened a new pull request #1424: HDDS-2107. Datanodes should retry forever to connect to SCM in an…
vivekratnavel opened a new pull request #1424: HDDS-2107. Datanodes should retry forever to connect to SCM in an… URL: https://github.com/apache/hadoop/pull/1424 … unsecure environment In an unsecure environment, the datanodes retry up to 10 times, waiting 1000 milliseconds between attempts, before throwing this error: ```Unable to communicate to SCM server at scm:9861 for past 0 seconds. java.net.ConnectException: Call From scm:9861 failed on connection exception: java.net.ConnectException: Connection refused;``` This PR fixes that issue by having datanodes retry forever to connect to SCM instead of failing after 10 retries. I have also increased timeouts on a unit test to improve its stability.
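The change the PR describes - retrying indefinitely instead of giving up after 10 attempts - reduces, in shell terms, to an unbounded `until` loop around the connection attempt. In the sketch below, `connect_once` is a stand-in simulation that succeeds on the third call, not the real RPC to SCM:

```shell
# Simulated "retry forever" loop: keep attempting until the endpoint
# answers; connect_once is a stand-in that succeeds on the 3rd call.
attempts=0
connect_once() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]   # return success only from the 3rd attempt on
}
until connect_once; do
  sleep 0   # the real datanode waits ~1000 ms between attempts
done
```

The loop exits as soon as an attempt succeeds, and there is no attempt counter acting as a cap, which is the behavioral difference from the old 10-retry limit.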
[GitHub] [hadoop] HeartSaVioR commented on a change in pull request #1413: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, optio
HeartSaVioR commented on a change in pull request #1413: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, options) is called. [BRANCH-2] URL: https://github.com/apache/hadoop/pull/1413#discussion_r323022263 ## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestChecksumFs.java ## @@ -0,0 +1,135 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs; + +import java.io.IOException; +import java.util.EnumSet; + +import org.junit.After; +import org.junit.Before; +import org.junit.Test; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.permission.FsPermission; +import org.apache.hadoop.test.GenericTestUtils; + +import static org.apache.hadoop.fs.CreateFlag.*; +import static org.junit.Assert.*; + +/** + * This class tests the functionality of ChecksumFs. + */ +public class TestChecksumFs { Review comment: Ah OK. I thought it would be fine as HadoopTestBase doesn't exist. I'll copy the rule here. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [hadoop] HeartSaVioR opened a new pull request #1413: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, options) is calle
HeartSaVioR opened a new pull request #1413: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, options) is called. [BRANCH-2] URL: https://github.com/apache/hadoop/pull/1413 Please refer to https://issues.apache.org/jira/browse/HADOOP-16255 for more details. FYI, FileContext.rename(path, path, options) leaks the crc file for the source of the rename when ChecksumFs or its descendant is used as the underlying filesystem. https://issues.apache.org/jira/browse/SPARK-28025 took a workaround via removing the crc file manually, and we hope to get rid of the workaround eventually. This PR is a ported version of #1388 for branch-2.
[GitHub] [hadoop] HeartSaVioR commented on issue #1413: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, options) is called. [BRA
HeartSaVioR commented on issue #1413: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, options) is called. [BRANCH-2] URL: https://github.com/apache/hadoop/pull/1413#issuecomment-530175447 I just subscribed to common-dev@ and there was recently a discussion regarding the build - it looks like the tooling recognizes the JIRA number, so maybe it requires the JIRA number to be prefixed (though I'm not sure Yetus recognizes the branch). Let me just modify the title and wait for an hour: if Yetus doesn't run, I'll add a patch file to the JIRA issue.
[GitHub] [hadoop] HeartSaVioR closed pull request #1413: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, options) is called. [BR
HeartSaVioR closed pull request #1413: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, options) is called. [BRANCH-2] URL: https://github.com/apache/hadoop/pull/1413
[GitHub] [hadoop] smengcl commented on issue #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids
smengcl commented on issue #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids URL: https://github.com/apache/hadoop/pull/1360#issuecomment-530156703 @bharatviswa504 The previous commit passed all acceptance and unit tests. The latest commit shouldn't cause those failures. I'll trigger a retest to check.
[GitHub] [hadoop] smengcl commented on issue #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids
smengcl commented on issue #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids URL: https://github.com/apache/hadoop/pull/1360#issuecomment-530156719 /retest
[jira] [Commented] (HADOOP-16543) Cached DNS name resolution error
[ https://issues.apache.org/jira/browse/HADOOP-16543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927082#comment-16927082 ] Roger Liu commented on HADOOP-16543: Sure thing > Cached DNS name resolution error > > > Key: HADOOP-16543 > URL: https://issues.apache.org/jira/browse/HADOOP-16543 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.1.2 >Reporter: Roger Liu >Priority: Major > > In Kubernetes, a node may go down and then come back later with a > different IP address. Yarn clients which are already running will be unable > to rediscover the node after it comes back up due to caching the original IP > address. This is problematic for cases such as Spark HA on Kubernetes, as the > node containing the resource manager may go down and come back up, meaning > existing node managers must then also be restarted.
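One general JVM knob relevant to this kind of stale-IP problem (background information, not a fix confirmed in this ticket): the JVM caches successful name lookups according to the `networkaddress.cache.ttl` security property, and with a security manager installed it caches them forever by default. A sketch of bounding the cache so addresses are eventually re-resolved:

```java
import java.security.Security;

public class DnsCacheTtl {
    public static void main(String[] args) {
        // Cache successful lookups for 30s and failed lookups for 10s,
        // so a host that comes back with a new IP is re-resolved eventually.
        // These properties must be set before the first lookup to take effect.
        Security.setProperty("networkaddress.cache.ttl", "30");
        Security.setProperty("networkaddress.cache.negative.ttl", "10");
        System.out.println(Security.getProperty("networkaddress.cache.ttl"));
    }
}
```

This only controls the JVM-level cache; application-level caching of a resolved InetAddress (as described for the Yarn client above) still has to be fixed in the client code itself.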
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1418: HDDS-2089: Add createPipeline CLI.
xiaoyuyao commented on a change in pull request #1418: HDDS-2089: Add createPipeline CLI. URL: https://github.com/apache/hadoop/pull/1418#discussion_r322987593 ## File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java ## @@ -390,10 +390,9 @@ public void notifyObjectStageChange(StorageContainerLocationProtocolProtos public Pipeline createReplicationPipeline(HddsProtos.ReplicationType type, HddsProtos.ReplicationFactor factor, HddsProtos.NodePool nodePool) throws IOException { -// TODO: will be addressed in future patch. -// This is needed only for debugging purposes to make sure cluster is -// working correctly. -return null; +AUDIT.logReadSuccess( Review comment: Should we log this as a write success for pipeline creation?
[GitHub] [hadoop] bharatviswa504 commented on issue #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids
bharatviswa504 commented on issue #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids URL: https://github.com/apache/hadoop/pull/1360#issuecomment-530141859 +1. Can you check whether the failed acceptance tests are related?
[GitHub] [hadoop] xiaoyuyao commented on issue #1194: HDDS-1879. Support multiple excluded scopes when choosing datanodes in NetworkTopology
xiaoyuyao commented on issue #1194: HDDS-1879. Support multiple excluded scopes when choosing datanodes in NetworkTopology URL: https://github.com/apache/hadoop/pull/1194#issuecomment-530141215 Thanks @ChenSammi for updating the PR. The latest change LGTM. Can you fix the checkstyle issue and the unit test failure that seem to be related? +1 after that.
[GitHub] [hadoop] elek opened a new pull request #1423: HDDS-2106. Avoid usage of hadoop projects as parent of hdds/ozone
elek opened a new pull request #1423: HDDS-2106. Avoid usage of hadoop projects as parent of hdds/ozone URL: https://github.com/apache/hadoop/pull/1423 Ozone uses hadoop as a dependency. The dependency is defined at multiple levels: 1. the hadoop artifacts are defined in the sections 2. both the hadoop-ozone and hadoop-hdds projects use "hadoop-project" as the parent As we already have a slightly different assembly process, it could be more resilient to use a dedicated parent project instead of the hadoop one. With this approach it will be easier to upgrade the versions, as we only need to be careful about the dependencies used, not the pom contents. See: https://issues.apache.org/jira/browse/HDDS-2106
[GitHub] [hadoop] smengcl commented on issue #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids
smengcl commented on issue #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids URL: https://github.com/apache/hadoop/pull/1360#issuecomment-530115882 /retest
[GitHub] [hadoop] smengcl commented on a change in pull request #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids
smengcl commented on a change in pull request #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids URL: https://github.com/apache/hadoop/pull/1360#discussion_r322957816 ## File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/ha/OMFailoverProxyProvider.java ## @@ -70,26 +71,46 @@ private final UserGroupInformation ugi; private final Text delegationTokenService; + // TODO: Do we want this to be final? + private String omServiceId; + public OMFailoverProxyProvider(OzoneConfiguration configuration, - UserGroupInformation ugi) throws IOException { + UserGroupInformation ugi, String omServiceId) throws IOException { this.conf = configuration; this.omVersion = RPC.getProtocolVersion(OzoneManagerProtocolPB.class); this.ugi = ugi; -loadOMClientConfigs(conf); +this.omServiceId = omServiceId; +loadOMClientConfigs(conf, this.omServiceId); this.delegationTokenService = computeDelegationTokenService(); currentProxyIndex = 0; currentProxyOMNodeId = omNodeIDList.get(currentProxyIndex); } - private void loadOMClientConfigs(Configuration config) throws IOException { + public OMFailoverProxyProvider(OzoneConfiguration configuration, + UserGroupInformation ugi) throws IOException { +this(configuration, ugi, null); + } + + private void loadOMClientConfigs(Configuration config, String omSvcId) + throws IOException { this.omProxies = new HashMap<>(); this.omProxyInfos = new HashMap<>(); this.omNodeIDList = new ArrayList<>(); -Collection omServiceIds = config.getTrimmedStringCollection( -OZONE_OM_SERVICE_IDS_KEY); +Collection omServiceIds; +if (omSvcId == null) { Review comment: Filed https://issues.apache.org/jira/browse/HDDS-2104
[GitHub] [hadoop] smengcl commented on a change in pull request #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids
smengcl commented on a change in pull request #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids URL: https://github.com/apache/hadoop/pull/1360#discussion_r322955933 ## File path: hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java ## @@ -131,6 +142,13 @@ public void initialize(URI name, Configuration conf) throws IOException { // If port number is not specified, read it from config omPort = OmUtils.getOmRpcPort(conf); } +} else if (OmUtils.isServiceIdsDefined(conf)) { Review comment: Makes sense. Thanks! Fixing this in an upcoming commit.
[GitHub] [hadoop] smengcl commented on a change in pull request #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids
smengcl commented on a change in pull request #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids URL: https://github.com/apache/hadoop/pull/1360#discussion_r322955607 ## File path: hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientFactory.java ## @@ -136,6 +136,31 @@ public static OzoneClient getRpcClient(String omHost, Integer omRpcPort, return getRpcClient(config); } + /** + * Returns an OzoneClient which will use RPC protocol. + * + * @param omServiceId + *Service ID of OzoneManager HA cluster. + * + * @param config + *Configuration to be used for OzoneClient creation + * + * @return OzoneClient + * + * @throws IOException + */ + public static OzoneClient getRpcClient(String omServiceId, Review comment: Filed https://issues.apache.org/jira/browse/HDDS-2105
[GitHub] [hadoop] hadoop-yetus commented on issue #1163: HDDS-1786 : Datanodes takeSnapshot should delete previously created s…
hadoop-yetus commented on issue #1163: HDDS-1786 : Datanodes takeSnapshot should delete previously created s… URL: https://github.com/apache/hadoop/pull/1163#issuecomment-530093799 :broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 38 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 26 | Maven dependency ordering for branch |
| +1 | mvninstall | 573 | trunk passed |
| +1 | compile | 398 | trunk passed |
| +1 | checkstyle | 78 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 874 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 178 | trunk passed |
| 0 | spotbugs | 446 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 653 | trunk passed |
| -0 | patch | 500 | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 40 | Maven dependency ordering for patch |
| +1 | mvninstall | 562 | the patch passed |
| +1 | compile | 400 | the patch passed |
| +1 | javac | 400 | the patch passed |
| +1 | checkstyle | 86 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 686 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 192 | the patch passed |
| +1 | findbugs | 789 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 314 | hadoop-hdds in the patch passed. |
| -1 | unit | 3129 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
| | | | 9252 | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
| | hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerHandler |
| | hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline |
| | hadoop.ozone.client.rpc.TestDeleteWithSlowFollower |
| | hadoop.ozone.TestMiniChaosOzoneCluster |
| | hadoop.ozone.om.TestOzoneManagerHA |
| | hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
| | hadoop.ozone.TestSecureOzoneCluster |
| | hadoop.ozone.client.rpc.TestCommitWatcher |
| | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
| | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
| | hadoop.ozone.container.TestContainerReplication |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/9/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1163 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux f632dd8cccfe 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / dc9abd2 |
| Default Java | 1.8.0_222 |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/9/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/9/testReport/ |
| Max. process+thread count | 4107 (vs. ulimit of 5500) |
| modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/9/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids
bharatviswa504 commented on a change in pull request #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids URL: https://github.com/apache/hadoop/pull/1360#discussion_r322930998 ## File path: hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java ## @@ -131,6 +142,13 @@ public void initialize(URI name, Configuration conf) throws IOException { // If port number is not specified, read it from config omPort = OmUtils.getOmRpcPort(conf); } +} else if (OmUtils.isServiceIdsDefined(conf)) { Review comment: https://github.com/apache/hadoop/blob/trunk/hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java#L154 This is the code line calling creation of BasicOzoneClientAdapterImpl. And below is the code where it checks whether the conf passed in is an instance of OzoneConfiguration; if not, it converts it to an OzoneConfiguration object. https://github.com/apache/hadoop/blob/trunk/hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java#L112
[GitHub] [hadoop] hadoop-yetus commented on issue #1163: HDDS-1786 : Datanodes takeSnapshot should delete previously created s…
hadoop-yetus commented on issue #1163: HDDS-1786 : Datanodes takeSnapshot should delete previously created s… URL: https://github.com/apache/hadoop/pull/1163#issuecomment-530089343 :broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 99 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 76 | Maven dependency ordering for branch |
| +1 | mvninstall | 678 | trunk passed |
| +1 | compile | 375 | trunk passed |
| +1 | checkstyle | 74 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 972 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 184 | trunk passed |
| 0 | spotbugs | 440 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 653 | trunk passed |
| -0 | patch | 483 | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 30 | Maven dependency ordering for patch |
| +1 | mvninstall | 546 | the patch passed |
| +1 | compile | 377 | the patch passed |
| +1 | javac | 377 | the patch passed |
| +1 | checkstyle | 77 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 728 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 169 | the patch passed |
| +1 | findbugs | 657 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 311 | hadoop-hdds in the patch passed. |
| -1 | unit | 2292 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
| | | | 8517 | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
| | hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
| | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
| | hadoop.ozone.scm.node.TestQueryNode |
| | hadoop.ozone.TestSecureOzoneCluster |
| | hadoop.ozone.container.TestContainerReplication |
| | hadoop.ozone.client.rpc.TestBlockOutputStream |
| | hadoop.ozone.client.rpc.TestContainerStateMachine |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.0 Server=19.03.0 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/8/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1163 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 8d855eed86b8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / dc9abd2 |
| Default Java | 1.8.0_222 |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/8/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/8/testReport/ |
| Max. process+thread count | 5342 (vs. ulimit of 5500) |
| modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/8/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] belugabehr opened a new pull request #1422: HDFS-14839: Use Java Concurrent BlockingQueue instead of Internal Blo…
belugabehr opened a new pull request #1422: HDFS-14839: Use Java Concurrent BlockingQueue instead of Internal Blo… URL: https://github.com/apache/hadoop/pull/1422 …ckQueue
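As background for the proposed change (a generic JDK illustration, not the HDFS-14839 patch itself): `java.util.concurrent.BlockingQueue` implementations such as `ArrayBlockingQueue` provide bounded, thread-safe producer/consumer hand-off without a hand-rolled internal queue:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingQueueDemo {
    // Produce 0..n-1 on one thread, consume on the caller, return the sum.
    public static int produceAndConsume(int n) {
        // Bounded queue: put() blocks when full, take() blocks when empty.
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < n; i++) {
                    queue.put(i); // blocks if the consumer lags behind
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        int sum = 0;
        try {
            for (int i = 0; i < n; i++) {
                sum += queue.take(); // blocks until an element is available
            }
            producer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(produceAndConsume(5)); // 0+1+2+3+4 = 10
    }
}
```

The queue handles all locking and signalling internally, which is the usual motivation for replacing a custom blocking queue with the JDK one.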
[GitHub] [hadoop] swagle commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states
swagle commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states URL: https://github.com/apache/hadoop/pull/1344#discussion_r322918560 ## File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/states/NodeStateMap.java ## @@ -309,4 +381,61 @@ private void checkIfNodeExist(UUID uuid) throws NodeNotFoundException { throw new NodeNotFoundException("Node UUID: " + uuid); } } + + /** + * Create a list of datanodeInfo for all nodes matching the passed states. + * Passing null for one of the states acts like a wildcard for that state. + * + * @param opState + * @param health + * @return List of DatanodeInfo objects matching the passed state + */ + private List filterNodes( + NodeOperationalState opState, NodeState health) { +if (opState != null && health != null) { Review comment: Can we write lines 395-440 with one simple stream().filter? Nothing wrong with the code itself; just a thought.
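The suggestion above can be sketched in plain Java; the enums and `Node` holder below are hypothetical stand-ins for the HDDS `NodeOperationalState`/`NodeState` types, with `null` acting as a wildcard inside the filter predicates:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FilterNodes {
    public enum OpState { IN_SERVICE, DECOMMISSIONED }
    public enum Health { HEALTHY, DEAD }

    // Simplified stand-in for DatanodeInfo.
    public static class Node {
        final String id;
        final OpState op;
        final Health health;
        Node(String id, OpState op, Health health) {
            this.id = id;
            this.op = op;
            this.health = health;
        }
    }

    // null for either state acts as a wildcard, so one stream pipeline
    // replaces the null/non-null if/else ladder in the reviewed code.
    public static List<Node> filterNodes(List<Node> nodes, OpState op, Health health) {
        return nodes.stream()
            .filter(n -> op == null || n.op == op)
            .filter(n -> health == null || n.health == health)
            .collect(Collectors.toList());
    }

    public static List<Node> sampleNodes() {
        return Arrays.asList(
            new Node("a", OpState.IN_SERVICE, Health.HEALTHY),
            new Node("b", OpState.DECOMMISSIONED, Health.HEALTHY),
            new Node("c", OpState.IN_SERVICE, Health.DEAD));
    }

    public static void main(String[] args) {
        System.out.println(filterNodes(sampleNodes(), OpState.IN_SERVICE, null).size()); // prints 2
    }
}
```

Each `filter` stage short-circuits on the wildcard, so all four null/non-null combinations collapse into a single pipeline.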
[GitHub] [hadoop] steveloughran commented on issue #1404: HDFS-13660 Copy file till the source file length during distcp
steveloughran commented on issue #1404: HDFS-13660 Copy file till the source file length during distcp URL: https://github.com/apache/hadoop/pull/1404#issuecomment-530081623 One of the warnings on RetriableFileCopyCommand.java:300 is about an existing issue on the line you've edited. You aren't to blame for that, but now is the time to fix it. Sorry.
[GitHub] [hadoop] steveloughran commented on issue #1404: HDFS-13660 Copy file till the source file length during distcp
steveloughran commented on issue #1404: HDFS-13660 Copy file till the source file length during distcp URL: https://github.com/apache/hadoop/pull/1404#issuecomment-530080808 checkstyle: ignore the "More than 7 parameters" warning; for the others, address them *unless* the line length is about 81-82 chars and cutting it down would make readability worse. We're not purists about "must fit on a punched card" but really want lines of a width where side-by-side reviews are straightforward. Thanks.
[GitHub] [hadoop] steveloughran edited a comment on issue #1407: HADOOP-16490. Improve S3Guard handling of FNFEs in copy
steveloughran edited a comment on issue #1407: HADOOP-16490. Improve S3Guard handling of FNFEs in copy URL: https://github.com/apache/hadoop/pull/1407#issuecomment-530077834 Spurious test failure in org.apache.hadoop.util.TestDiskCheckerWithDiskIo. Assumption: the Jenkins disk is the root cause. The hadoop-aws tests were successful.
[GitHub] [hadoop] steveloughran commented on issue #1407: HADOOP-16490. Improve S3Guard handling of FNFEs in copy
steveloughran commented on issue #1407: HADOOP-16490. Improve S3Guard handling of FNFEs in copy URL: https://github.com/apache/hadoop/pull/1407#issuecomment-530077834 Spurious test failure in org.apache.hadoop.util.TestDiskCheckerWithDiskIo, but it's blocking the run of the hadoop-aws suite. Assumption: the Jenkins disk is the root cause.
[GitHub] [hadoop] swagle commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states
swagle commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states URL: https://github.com/apache/hadoop/pull/1344#discussion_r322911485 ## File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java ## @@ -417,9 +451,12 @@ private SCMNodeStat getNodeStatInternal(DatanodeDetails datanodeDetails) { @Override public Map getNodeCount() { +// TODO - This does not consider decom, maint etc. Map nodeCountMap = new HashMap(); Review comment: Why not? It makes it easier to consume for the caller, in my opinion.
[GitHub] [hadoop] steveloughran commented on a change in pull request #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)
steveloughran commented on a change in pull request #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log) URL: https://github.com/apache/hadoop/pull/1208#discussion_r322910443 ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardFsckViolationHandler.java ## @@ -0,0 +1,312 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.s3a.s3guard; + +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.s3a.S3AFileStatus; +import org.apache.hadoop.fs.s3a.S3AFileSystem; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.lang.reflect.InvocationTargetException; +import java.util.Arrays; +import java.util.List; + +/** + * Violation handler for the S3Guard's fsck. 
+ */ +public class S3GuardFsckViolationHandler { + private static final Logger LOG = LoggerFactory.getLogger( + S3GuardFsckViolationHandler.class); + + private S3AFileSystem rawFs; + private DynamoDBMetadataStore metadataStore; + private static String newLine = System.getProperty("line.separator"); + + public S3GuardFsckViolationHandler(S3AFileSystem fs, + DynamoDBMetadataStore ddbms) { + +this.metadataStore = ddbms; +this.rawFs = fs; + } + + public void handle(S3GuardFsck.ComparePair comparePair) { +if (!comparePair.containsViolation()) { + LOG.debug("There is no violation in the compare pair: " + toString()); + return; +} + +StringBuilder sB = new StringBuilder(); +sB.append(newLine) +.append("On path: ").append(comparePair.getPath()).append(newLine); + +// Create a new instance of the handler and use it. +for (S3GuardFsck.Violation violation : comparePair.getViolations()) { + try { +ViolationHandler handler = violation.getHandler() +.getDeclaredConstructor(S3GuardFsck.ComparePair.class) +.newInstance(comparePair); +final String errorStr = handler.getError(); +sB.append(errorStr); + } catch (NoSuchMethodException e) { +LOG.error("Can not find declared constructor for handler: {}", +violation.getHandler()); + } catch (IllegalAccessException | InstantiationException | InvocationTargetException e) { +LOG.error("Can not instantiate handler: {}", +violation.getHandler()); + } + sB.append(newLine); +} +LOG.error(sB.toString()); + } + + /** + * Violation handler abstract class. + * This class should be extended for violation handlers. 
+ */ + public static abstract class ViolationHandler { +private final PathMetadata pathMetadata; +private final S3AFileStatus s3FileStatus; +private final S3AFileStatus msFileStatus; +private final List<FileStatus> s3DirListing; +private final DirListingMetadata msDirListing; + +public ViolationHandler(S3GuardFsck.ComparePair comparePair) { + pathMetadata = comparePair.getMsPathMetadata(); + s3FileStatus = comparePair.getS3FileStatus(); + if (pathMetadata != null) { +msFileStatus = pathMetadata.getFileStatus(); + } else { +msFileStatus = null; + } + s3DirListing = comparePair.getS3DirListing(); + msDirListing = comparePair.getMsDirListing(); +} + +abstract String getError(); + +public PathMetadata getPathMetadata() { + return pathMetadata; +} + +public S3AFileStatus getS3FileStatus() { + return s3FileStatus; +} + +public S3AFileStatus getMsFileStatus() { + return msFileStatus; +} + +public List<FileStatus> getS3DirListing() { + return s3DirListing; +} + +public DirListingMetadata getMsDirListing() { + return msDirListing; +} + } + + /** + * The violation handler when there's no matching metadata entry in the MS. + */ + public static class NoMetadataEntry extends ViolationHandler { + +public NoMetadataEntry(S3GuardFsck.ComparePair comparePair) { + super(comparePair); +} + +@Override +public String getError() { + return "No PathMetadata for this path in the MS."; +} + } + + /** + * The violation handler when there's no parent entry. + */ + public s
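The handler loop quoted above instantiates each violation's handler reflectively via `getDeclaredConstructor(...).newInstance(...)`. A self-contained sketch of that dispatch pattern; `ComparePair`, `Handler`, and `Violation` here are simplified stand-ins, not the real S3Guard classes:

```java
import java.lang.reflect.InvocationTargetException;

public class HandlerDispatchSketch {
    // Stand-in for S3GuardFsck.ComparePair.
    public static class ComparePair {
        final String path;
        public ComparePair(String path) { this.path = path; }
    }

    public abstract static class Handler {
        protected final ComparePair pair;
        public Handler(ComparePair pair) { this.pair = pair; }
        public abstract String getError();
    }

    public static class NoMetadataEntry extends Handler {
        public NoMetadataEntry(ComparePair pair) { super(pair); }
        @Override public String getError() { return "No PathMetadata for " + pair.path; }
    }

    // Each violation carries the class of its handler.
    public enum Violation {
        NO_METADATA_ENTRY(NoMetadataEntry.class);
        final Class<? extends Handler> handlerClass;
        Violation(Class<? extends Handler> c) { this.handlerClass = c; }
    }

    // Reflectively construct the handler and ask it for its error text,
    // mirroring the try/catch structure of the quoted loop.
    public static String describe(Violation v, ComparePair pair) {
        try {
            Handler h = v.handlerClass
                .getDeclaredConstructor(ComparePair.class)
                .newInstance(pair);
            return h.getError();
        } catch (NoSuchMethodException | IllegalAccessException
                | InstantiationException | InvocationTargetException e) {
            return "cannot instantiate " + v.handlerClass.getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(describe(Violation.NO_METADATA_ENTRY, new ComparePair("/a/b")));
        // prints: No PathMetadata for /a/b
    }
}
```

The reflective route keeps the enum free of handler logic at the cost of four checked reflection exceptions; an alternative with no reflection is to give `Violation` an abstract factory method that each constant overrides.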
[GitHub] [hadoop] steveloughran commented on a change in pull request #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)
steveloughran commented on a change in pull request #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log) URL: https://github.com/apache/hadoop/pull/1208#discussion_r322909779 ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardFsckViolationHandler.java ## @@ -0,0 +1,312 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.s3a.s3guard; + +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.s3a.S3AFileStatus; +import org.apache.hadoop.fs.s3a.S3AFileSystem; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.lang.reflect.InvocationTargetException; +import java.util.Arrays; +import java.util.List; + +/** + * Violation handler for the S3Guard's fsck. 
+ */ +public class S3GuardFsckViolationHandler { + private static final Logger LOG = LoggerFactory.getLogger( + S3GuardFsckViolationHandler.class); + + private S3AFileSystem rawFs; + private DynamoDBMetadataStore metadataStore; + private static String newLine = System.getProperty("line.separator"); + + public S3GuardFsckViolationHandler(S3AFileSystem fs, + DynamoDBMetadataStore ddbms) { + +this.metadataStore = ddbms; +this.rawFs = fs; + } + + public void handle(S3GuardFsck.ComparePair comparePair) { +if (!comparePair.containsViolation()) { + LOG.debug("There is no violation in the compare pair: " + toString()); + return; +} + +StringBuilder sB = new StringBuilder(); +sB.append(newLine) +.append("On path: ").append(comparePair.getPath()).append(newLine); + +// Create a new instance of the handler and use it. +for (S3GuardFsck.Violation violation : comparePair.getViolations()) { + try { +ViolationHandler handler = violation.getHandler() +.getDeclaredConstructor(S3GuardFsck.ComparePair.class) +.newInstance(comparePair); +final String errorStr = handler.getError(); +sB.append(errorStr); + } catch (NoSuchMethodException e) { +LOG.error("Can not find declared constructor for handler: {}", +violation.getHandler()); + } catch (IllegalAccessException | InstantiationException | InvocationTargetException e) { +LOG.error("Can not instantiate handler: {}", +violation.getHandler()); + } + sB.append(newLine); +} +LOG.error(sB.toString()); + } + + /** + * Violation handler abstract class. + * This class should be extended for violation handlers. 
+ */ + public static abstract class ViolationHandler { +private final PathMetadata pathMetadata; +private final S3AFileStatus s3FileStatus; +private final S3AFileStatus msFileStatus; +private final List<FileStatus> s3DirListing; +private final DirListingMetadata msDirListing; + +public ViolationHandler(S3GuardFsck.ComparePair comparePair) { + pathMetadata = comparePair.getMsPathMetadata(); + s3FileStatus = comparePair.getS3FileStatus(); + if (pathMetadata != null) { +msFileStatus = pathMetadata.getFileStatus(); + } else { +msFileStatus = null; + } + s3DirListing = comparePair.getS3DirListing(); + msDirListing = comparePair.getMsDirListing(); +} + +abstract String getError(); + +public PathMetadata getPathMetadata() { + return pathMetadata; +} + +public S3AFileStatus getS3FileStatus() { + return s3FileStatus; +} + +public S3AFileStatus getMsFileStatus() { + return msFileStatus; +} + +public List<FileStatus> getS3DirListing() { + return s3DirListing; +} + +public DirListingMetadata getMsDirListing() { + return msDirListing; +} + } + + /** + * The violation handler when there's no matching metadata entry in the MS. + */ + public static class NoMetadataEntry extends ViolationHandler { + +public NoMetadataEntry(S3GuardFsck.ComparePair comparePair) { + super(comparePair); +} + +@Override +public String getError() { + return "No PathMetadata for this path in the MS."; +} + } + + /** + * The violation handler when there's no parent entry. + */ + public s
[GitHub] [hadoop] steveloughran commented on a change in pull request #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)
steveloughran commented on a change in pull request #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log) URL: https://github.com/apache/hadoop/pull/1208#discussion_r322909421 ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java ## @@ -1449,7 +1449,7 @@ public boolean hasMetadataStore() { * is set for this filesystem. */ @VisibleForTesting - boolean hasAuthoritativeMetadataStore() { + public boolean hasAuthoritativeMetadataStore() { Review comment: we're only using this for tests, so I'm not as worried as I was. +1 for this change
[jira] [Commented] (HADOOP-16543) Cached DNS name resolution error
[ https://issues.apache.org/jira/browse/HADOOP-16543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926823#comment-16926823 ] Íñigo Goiri commented on HADOOP-16543: -- [~csun], [~fengnanli], you guys have experience with the HDFS side and DNS right? Can you take a look? [~roliu] do you mind taking care of the Yetus issues? > Cached DNS name resolution error > > > Key: HADOOP-16543 > URL: https://issues.apache.org/jira/browse/HADOOP-16543 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.1.2 >Reporter: Roger Liu >Priority: Major > > In Kubernetes, a node may go down and then come back later with a > different IP address. Yarn clients which are already running will be unable > to rediscover the node after it comes back up due to caching the original IP > address. This is problematic for cases such as Spark HA on Kubernetes, as the > node containing the resource manager may go down and come back up, meaning > existing node managers must then also be restarted.
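One JVM-side detail behind this report: the `InetAddress` cache can hold successful lookups indefinitely (the default when a security manager is installed), so a restarted node's new IP is never re-resolved. A hedged sketch of the standard security properties that bound that caching — the values are illustrative, not recommendations:

```java
import java.security.Security;

public class DnsCacheConfig {
    public static void main(String[] args) {
        // Positive lookups: cache for 30 seconds instead of the
        // security-manager default of "forever" (-1). Must be set before
        // the first name lookup to take effect.
        Security.setProperty("networkaddress.cache.ttl", "30");
        // Negative (failed) lookups: cache for 5 seconds.
        Security.setProperty("networkaddress.cache.negative.ttl", "5");
        System.out.println(Security.getProperty("networkaddress.cache.ttl")); // prints 30
    }
}
```

In a deployment these are more commonly set in the JRE's `java.security` file than in code; the legacy `sun.net.inetaddr.ttl` system property exists too, but it is JDK-implementation-specific.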
[GitHub] [hadoop] hadoop-yetus commented on issue #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids
hadoop-yetus commented on issue #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids URL: https://github.com/apache/hadoop/pull/1360#issuecomment-530060194 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 5746 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | 0 | shelldocs | 0 | Shelldocs was not available. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 5 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 31 | Maven dependency ordering for branch | | +1 | mvninstall | 718 | trunk passed | | +1 | compile | 453 | trunk passed | | +1 | checkstyle | 94 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 936 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 197 | trunk passed | | 0 | spotbugs | 504 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 750 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 37 | Maven dependency ordering for patch | | +1 | mvninstall | 556 | the patch passed | | +1 | compile | 414 | the patch passed | | +1 | javac | 414 | the patch passed | | +1 | checkstyle | 105 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | shellcheck | 0 | There were no new shellcheck issues. | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 702 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 191 | the patch passed | | +1 | findbugs | 666 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 291 | hadoop-hdds in the patch passed. | | -1 | unit | 2517 | hadoop-ozone in the patch failed. | | +1 | asflicense | 66 | The patch does not generate ASF License warnings. 
| | | | 14783 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.om.TestOzoneManagerHA | | | hadoop.ozone.container.TestContainerReplication | | | hadoop.ozone.client.rpc.TestBlockOutputStream | | | hadoop.ozone.om.snapshot.TestOzoneManagerSnapshotProvider | | | hadoop.ozone.om.TestOMRatisSnapshots | | | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider | | | hadoop.ozone.TestSecureOzoneCluster | | | hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion | | | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures | | | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1360 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle shellcheck shelldocs | | uname | Linux e46cf8c849e3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / dc9abd2 | | Default Java | 1.8.0_222 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/2/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/2/testReport/ | | Max. process+thread count | 5403 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/common hadoop-ozone/client hadoop-ozone/ozone-manager hadoop-ozone/dist hadoop-ozone/integration-test hadoop-ozone/ozone-recon hadoop-ozone/ozonefs U: hadoop-ozone | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/2/console | | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. 
[GitHub] [hadoop] chittshota commented on issue #1414: HDFS-14835. RBF: Secured Router should not run when it can't initialize DelegationTokenSecretManager
chittshota commented on issue #1414: HDFS-14835. RBF: Secured Router should not run when it can't initialize DelegationTokenSecretManager URL: https://github.com/apache/hadoop/pull/1414#issuecomment-530058890 @tasanuma That's a good point. I was checking namenode to see if it does throw any exceptions if secret manager could not be successfully initialized. Namenode DOES fail if secret manager is not initialized correctly so yes then router failing should be fine.
```
private void startSecretManager() {
  if (dtSecretManager != null) {
    try {
      dtSecretManager.startThreads();
    } catch (IOException e) {
      // Inability to start secret manager
      // can't be recovered from.
      throw new RuntimeException(e);
    }
  }
}
```
LGTM. +1
[GitHub] [hadoop] mukul1987 commented on a change in pull request #1420: HDDS-2032. Ozone client should retry writes in case of any ratis/stateMachine exceptions.
mukul1987 commented on a change in pull request #1420: HDDS-2032. Ozone client should retry writes in case of any ratis/stateMachine exceptions. URL: https://github.com/apache/hadoop/pull/1420#discussion_r322872820 ## File path: hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyOutputStream.java ## @@ -290,11 +288,12 @@ private void handleException(BlockOutputStreamEntry streamEntry, if (!failedServers.isEmpty()) { excludeList.addDatanodes(failedServers); } -if (closedContainerException) { + +// if the container needs to be excluded , add the container to the +// exclusion list , otherwise add the pipeline to the exclusion list +if (containerExclusionException) { excludeList.addConatinerId(ContainerID.valueof(containerId)); -} else if (retryFailure || t instanceof TimeoutException -|| t instanceof GroupMismatchException -|| t instanceof NotReplicatedException) { +} else { Review comment: So apart from SCE, all exceptions are expected to be related to the pipeline?
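A minimal sketch of the routing the diff above converges on: a container-scoped failure excludes only that container, while every other write failure is attributed to the pipeline. The exception classes below are local stand-ins, not the real Ratis/HDDS exception types:

```java
public class ExclusionRouting {
    // Local stand-in for a container-scoped failure (e.g. a closed container).
    public static class ContainerClosedException extends RuntimeException {}

    public enum Exclusion { CONTAINER, PIPELINE }

    // Container-scoped exceptions exclude only that container; any other
    // failure (timeout, group mismatch, replication lag, ...) is attributed
    // to the pipeline as a whole.
    public static Exclusion classify(Throwable t) {
        if (t instanceof ContainerClosedException) {
            return Exclusion.CONTAINER;
        }
        return Exclusion.PIPELINE;
    }

    public static void main(String[] args) {
        System.out.println(classify(new ContainerClosedException())); // CONTAINER
        System.out.println(classify(new java.util.concurrent.TimeoutException())); // PIPELINE
    }
}
```

Collapsing the old multi-branch `instanceof` chain into a default pipeline branch answers the reviewer's question in the affirmative: everything that is not container-scoped is treated as a pipeline problem.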
[GitHub] [hadoop] hadoop-yetus commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)
hadoop-yetus commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log) URL: https://github.com/apache/hadoop/pull/1208#issuecomment-53004 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 53 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 6 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1040 | trunk passed | | +1 | compile | 35 | trunk passed | | +1 | checkstyle | 26 | trunk passed | | +1 | mvnsite | 40 | trunk passed | | +1 | shadedclient | 753 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 28 | trunk passed | | 0 | spotbugs | 57 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 56 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 33 | the patch passed | | +1 | compile | 28 | the patch passed | | +1 | javac | 28 | the patch passed | | -0 | checkstyle | 20 | hadoop-tools/hadoop-aws: The patch generated 12 new + 25 unchanged - 0 fixed = 37 total (was 25) | | +1 | mvnsite | 32 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 762 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 26 | the patch passed | | +1 | findbugs | 61 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 84 | hadoop-aws in the patch passed. | | +1 | asflicense | 33 | The patch does not generate ASF License warnings. 
| | | | 3214 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/22/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1208 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux bd120e7e39df 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / dc9abd2 | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/22/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/22/testReport/ | | Max. process+thread count | 438 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/22/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1398: HDDS-2064. OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured incorrectly
hadoop-yetus commented on issue #1398: HDDS-2064. OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured incorrectly URL: https://github.com/apache/hadoop/pull/1398#issuecomment-530037869 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 4861 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 33 | Maven dependency ordering for branch | | +1 | mvninstall | 620 | trunk passed | | +1 | compile | 408 | trunk passed | | +1 | checkstyle | 89 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 891 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 194 | trunk passed | | 0 | spotbugs | 515 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 751 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 31 | Maven dependency ordering for patch | | +1 | mvninstall | 615 | the patch passed | | +1 | compile | 424 | the patch passed | | +1 | javac | 424 | the patch passed | | +1 | checkstyle | 92 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 681 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 189 | the patch passed | | +1 | findbugs | 692 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 300 | hadoop-hdds in the patch passed. | | -1 | unit | 199 | hadoop-ozone in the patch failed. | | +1 | asflicense | 48 | The patch does not generate ASF License warnings. 
| | | | 11309 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1398/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1398 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux e3eaa0d1930b 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / dc9abd2 | | Default Java | 1.8.0_222 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1398/1/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1398/1/testReport/ | | Max. process+thread count | 1328 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: hadoop-ozone | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1398/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] avijayanhwx commented on issue #1163: HDDS-1786 : Datanodes takeSnapshot should delete previously created s…
avijayanhwx commented on issue #1163: HDDS-1786 : Datanodes takeSnapshot should delete previously created s… URL: https://github.com/apache/hadoop/pull/1163#issuecomment-530036546 > Thanks @avijayanhwx for working on this. The changes look good. > I think it would be better to move all configs related to RaftServer under the RaftServerConfig group but that's beyond the scope of this jira. > > I would prefer to have a test in Ozone as well to verify the snapshot retention behaviour of Ratis so that, in case there are changes made in Ratis related to this, we should be able to catch this here in ozone. > > A simple unit test where we can change snapshot threshold to 1 entry and verify we have n snapshot files after n transactions in the raft log directory. Added unit test.
[GitHub] [hadoop] hadoop-yetus commented on issue #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states
hadoop-yetus commented on issue #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states URL: https://github.com/apache/hadoop/pull/1344#issuecomment-530027446 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 82 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 67 | Maven dependency ordering for branch | | +1 | mvninstall | 628 | trunk passed | | +1 | compile | 390 | trunk passed | | +1 | checkstyle | 75 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 948 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 175 | trunk passed | | 0 | spotbugs | 459 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 682 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 31 | Maven dependency ordering for patch | | +1 | mvninstall | 576 | the patch passed | | +1 | compile | 388 | the patch passed | | +1 | cc | 388 | the patch passed | | +1 | javac | 388 | the patch passed | | -0 | checkstyle | 37 | hadoop-hdds: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 1 | The patch has no whitespace issues. | | +1 | shadedclient | 736 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 179 | the patch passed | | -1 | findbugs | 215 | hadoop-hdds generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) | ||| _ Other Tests _ | | -1 | unit | 298 | hadoop-hdds in the patch failed. | | -1 | unit | 3499 | hadoop-ozone in the patch failed. | | +1 | asflicense | 61 | The patch does not generate ASF License warnings. 
| | | | 9701 | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-hdds | | | Dead store to nodes in org.apache.hadoop.hdds.scm.node.NodeStateManager.getAllNodes() At NodeStateManager.java:org.apache.hadoop.hdds.scm.node.NodeStateManager.getAllNodes() At NodeStateManager.java:[line 396] | | | org.apache.hadoop.hdds.scm.node.states.NodeStateMap.getNodes(NodeStatus) does not release lock on all exception paths At NodeStateMap.java:on all exception paths At NodeStateMap.java:[line 156] | | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager | | | hadoop.ozone.container.TestContainerReplication | | | hadoop.ozone.TestSecureOzoneCluster | | | hadoop.ozone.scm.node.TestQueryNode | | | hadoop.ozone.client.rpc.TestOzoneRpcClient | | | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis | | | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures | | | hadoop.ozone.scm.TestContainerSmallFile | | | hadoop.ozone.client.rpc.Test2WayCommitInRatis | | | hadoop.ozone.TestMiniChaosOzoneCluster | | | hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion | | | hadoop.ozone.scm.TestGetCommittedBlockLengthAndPutKey | | | hadoop.ozone.client.rpc.TestDeleteWithSlowFollower | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1344 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit javadoc mvninstall shadedclient findbugs checkstyle | | uname | Linux 7a7a07260082 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / dc9abd2 | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/3/artifact/out/diff-checkstyle-hadoop-hdds.txt | | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/3/artifact/out/new-findbugs-hadoop-hdds.html | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/3/artifact/out/patch-unit-hadoop-hdds.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/3/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/3/testReport/ | | Max. process+thread count | 4364 (vs. ulimit of 5500) | | modules | C: hadoop-hdds/common hadoop-hdds/server-scm hadoop-hdds/tools hadoop-ozone/integration-test U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1344/3/console | | ve
[GitHub] [hadoop] hadoop-yetus commented on issue #1418: HDDS-2089: Add createPipeline CLI.
hadoop-yetus commented on issue #1418: HDDS-2089: Add createPipeline CLI. URL: https://github.com/apache/hadoop/pull/1418#issuecomment-530025328 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 158 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ HDDS-1564 Compile Tests _ | | 0 | mvndep | 45 | Maven dependency ordering for branch | | +1 | mvninstall | 828 | HDDS-1564 passed | | +1 | compile | 475 | HDDS-1564 passed | | +1 | checkstyle | 96 | HDDS-1564 passed | | +1 | mvnsite | 0 | HDDS-1564 passed | | +1 | shadedclient | 938 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 189 | HDDS-1564 passed | | 0 | spotbugs | 468 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 688 | HDDS-1564 passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 28 | Maven dependency ordering for patch | | +1 | mvninstall | 579 | the patch passed | | +1 | compile | 411 | the patch passed | | +1 | javac | 411 | the patch passed | | +1 | checkstyle | 93 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 690 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 189 | the patch passed | | +1 | findbugs | 701 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 369 | hadoop-hdds in the patch passed. | | -1 | unit | 2715 | hadoop-ozone in the patch failed. | | +1 | asflicense | 55 | The patch does not generate ASF License warnings. 
| | | | 9437 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.client.rpc.TestContainerStateMachineFailures | | | hadoop.ozone.client.rpc.TestOzoneRpcClientForAclAuditLog | | | hadoop.ozone.client.rpc.Test2WayCommitInRatis | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1418/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1418 | | JIRA Issue | HDDS-2089 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 453a8f9466f1 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | HDDS-1564 / 753fc67 | | Default Java | 1.8.0_222 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1418/1/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1418/1/testReport/ | | Max. process+thread count | 5408 (vs. ulimit of 5500) | | modules | C: hadoop-hdds/common hadoop-hdds/server-scm hadoop-hdds/tools U: hadoop-hdds | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1418/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] bgaborg edited a comment on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)
bgaborg edited a comment on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log) URL: https://github.com/apache/hadoop/pull/1208#issuecomment-529943871
Some things I should do, extracted from @steveloughran's comments:
- [ ] > Are we confident that this command will do a check if there is a file in S3 but tombstoned in MS?
  That's a good point! I should look into that.
- [x] > modtime check skip, or range of accuracy
  We should talk about this. I would say leave it as is for now and create a jira for this.
- [x] > add non-zero exit code when an error is found
  I will do that, and worry about the level of the inconsistency when we have reached that jira.
- [ ] > CLI test is failing if run in parallel (other tests are running in the same bucket!)
- [ ] > add docs
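The "non-zero exit code when an error is found" item above can be sketched roughly as follows. This is only an illustration of the idea, not the actual S3Guard fsck code; the `Severity` enum and method names are hypothetical.

```java
import java.util.Arrays;
import java.util.List;

public class FsckExitSketch {
    // Hypothetical severity levels for fsck findings; not the real S3Guard types.
    enum Severity { INFO, WARNING, ERROR }

    // Map findings to a process exit code: any ERROR-level inconsistency fails
    // the run with a non-zero code; milder findings still exit 0 (the exact
    // severity policy is deferred, as discussed in the comment above).
    static int exitCode(List<Severity> findings) {
        return findings.contains(Severity.ERROR) ? 1 : 0;
    }

    public static void main(String[] args) {
        System.out.println(exitCode(Arrays.asList(Severity.WARNING)));
        System.out.println(exitCode(Arrays.asList(Severity.WARNING, Severity.ERROR)));
    }
}
```

The CLI entry point would then pass this value to `System.exit(...)` so that scripts driving the tool can detect inconsistencies.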
[GitHub] [hadoop] hadoop-yetus commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)
hadoop-yetus commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log) URL: https://github.com/apache/hadoop/pull/1208#issuecomment-530019553 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 136 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 6 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1465 | trunk passed | | +1 | compile | 39 | trunk passed | | +1 | checkstyle | 31 | trunk passed | | +1 | mvnsite | 48 | trunk passed | | +1 | shadedclient | 935 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 31 | trunk passed | | 0 | spotbugs | 71 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 68 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 38 | the patch passed | | +1 | compile | 32 | the patch passed | | +1 | javac | 32 | the patch passed | | -0 | checkstyle | 22 | hadoop-tools/hadoop-aws: The patch generated 12 new + 25 unchanged - 0 fixed = 37 total (was 25) | | +1 | mvnsite | 37 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 1012 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 27 | the patch passed | | +1 | findbugs | 78 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 106 | hadoop-aws in the patch passed. | | +1 | asflicense | 36 | The patch does not generate ASF License warnings. 
| | | | 4255 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=18.09.7 Server=18.09.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/21/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1208 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux cd4e59c7068c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / dc9abd2 | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/21/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/21/testReport/ | | Max. process+thread count | 306 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/21/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1377: HDDS-2057. Incorrect Default OM Port in Ozone FS URI Error Message. Contributed by Supratim Deka
hadoop-yetus commented on issue #1377: HDDS-2057. Incorrect Default OM Port in Ozone FS URI Error Message. Contributed by Supratim Deka URL: https://github.com/apache/hadoop/pull/1377#issuecomment-530015876 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 41 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 595 | trunk passed | | +1 | compile | 383 | trunk passed | | +1 | checkstyle | 82 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 869 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 176 | trunk passed | | 0 | spotbugs | 418 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 615 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 535 | the patch passed | | +1 | compile | 389 | the patch passed | | +1 | javac | 389 | the patch passed | | +1 | checkstyle | 87 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 670 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 175 | the patch passed | | +1 | findbugs | 631 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 294 | hadoop-hdds in the patch passed. | | -1 | unit | 2139 | hadoop-ozone in the patch failed. | | +1 | asflicense | 50 | The patch does not generate ASF License warnings. 
| | | | 7907 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.container.common.statemachine.commandhandler.TestDeleteContainerHandler | | | hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion | | | hadoop.ozone.scm.TestContainerSmallFile | | | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures | | | hadoop.ozone.container.common.impl.TestContainerPersistence | | | hadoop.ozone.om.snapshot.TestOzoneManagerSnapshotProvider | | | hadoop.ozone.container.TestContainerReplication | | | hadoop.ozone.om.TestOzoneManagerRestInterface | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1377/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1377 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux e44ee5bcf24f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / dc9abd2 | | Default Java | 1.8.0_222 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1377/2/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1377/2/testReport/ | | Max. process+thread count | 4960 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/ozonefs U: hadoop-ozone/ozonefs | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1377/2/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HADOOP-16543) Cached DNS name resolution error
[ https://issues.apache.org/jira/browse/HADOOP-16543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926758#comment-16926758 ] Hadoop QA commented on HADOOP-16543: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 49s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 48s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 41s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 17s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 1m 7s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 55s{color} | {color:green} trunk passed {color} | | {color:orange}-0{color} | {color:orange} patch {color} | {color:orange} 1m 52s{color} | {color:orange} Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 24s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 8m 24s{color} | {color:red} hadoop-yarn-project_hadoop-yarn generated 3 new + 126 unchanged - 0 fixed = 129 total (was 126) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 32s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 264 new + 215 unchanged - 0 fixed = 479 total (was 215) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 20s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git apply --whitespace=fix <>. 
Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch 2 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 2s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 2s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common generated 5 new + 4190 unchanged - 0 fixed = 4195 total (was 4190) {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 9s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 4s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 7s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
[GitHub] [hadoop] hadoop-yetus commented on issue #1399: HADOOP-16543: Cached DNS name resolution error
hadoop-yetus commented on issue #1399: HADOOP-16543: Cached DNS name resolution error URL: https://github.com/apache/hadoop/pull/1399#issuecomment-530013234 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 49 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 48 | Maven dependency ordering for branch | | +1 | mvninstall | 1118 | trunk passed | | +1 | compile | 555 | trunk passed | | +1 | checkstyle | 93 | trunk passed | | +1 | mvnsite | 158 | trunk passed | | +1 | shadedclient | 1001 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 137 | trunk passed | | 0 | spotbugs | 67 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 295 | trunk passed | | -0 | patch | 112 | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | 0 | mvndep | 20 | Maven dependency ordering for patch | | +1 | mvninstall | 111 | the patch passed | | +1 | compile | 504 | the patch passed | | -1 | javac | 504 | hadoop-yarn-project_hadoop-yarn generated 3 new + 126 unchanged - 0 fixed = 129 total (was 126) | | -0 | checkstyle | 92 | hadoop-yarn-project/hadoop-yarn: The patch generated 264 new + 215 unchanged - 0 fixed = 479 total (was 215) | | +1 | mvnsite | 140 | the patch passed | | -1 | whitespace | 0 | The patch has 5 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | -1 | whitespace | 0 | The patch 2 line(s) with tabs. | | +1 | shadedclient | 782 | patch has no errors when building and testing our client artifacts. 
| | -1 | javadoc | 62 | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common generated 5 new + 4190 unchanged - 0 fixed = 4195 total (was 4190) | | -1 | findbugs | 129 | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | ||| _ Other Tests _ | | -1 | unit | 64 | hadoop-yarn-api in the patch failed. | | +1 | unit | 247 | hadoop-yarn-common in the patch passed. | | -1 | unit | 1626 | hadoop-yarn-client in the patch failed. | | -1 | asflicense | 55 | The patch generated 1 ASF License warnings. | | | | 7607 | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common | | | Unchecked/unconfirmed cast from org.apache.hadoop.conf.Configuration to org.apache.hadoop.yarn.conf.YarnConfiguration in org.apache.hadoop.yarn.client.DefaultNoHARMFailoverProxyProvider.init(Configuration, RMProxy, Class) At DefaultNoHARMFailoverProxyProvider.java:org.apache.hadoop.yarn.conf.YarnConfiguration in org.apache.hadoop.yarn.client.DefaultNoHARMFailoverProxyProvider.init(Configuration, RMProxy, Class) At DefaultNoHARMFailoverProxyProvider.java:[line 38] | | Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields | | | hadoop.yarn.client.api.impl.TestAMRMClient | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1399 | | JIRA Issue | HADOOP-16543 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 6b2a3529c0e1 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / dc9abd2 | | Default Java | 1.8.0_222 | | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/1/artifact/out/diff-compile-javac-hadoop-yarn-project_hadoop-yarn.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/1/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt | | whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/1/artifact/out/whitespace-eol.txt | | whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/1/artifact/out/whitespace-tabs.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/1/artifact/out/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/1/artifact/out/new-findbugs-hadoop-yarn-project_hadoop-yarn_h
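The new FindBugs warning reported above (an unconfirmed cast from `org.apache.hadoop.conf.Configuration` to `org.apache.hadoop.yarn.conf.YarnConfiguration` in `DefaultNoHARMFailoverProxyProvider.init`) is the classic blind-downcast pattern. A minimal self-contained sketch of the hazard and the usual remedy, using stand-in classes rather than the real Hadoop types:

```java
// Stand-ins for org.apache.hadoop.conf.Configuration and
// org.apache.hadoop.yarn.conf.YarnConfiguration, so the sketch is runnable
// without Hadoop on the classpath.
class Configuration {}

class YarnConfiguration extends Configuration {
    YarnConfiguration() {}
    YarnConfiguration(Configuration base) {} // copy-style constructor
}

public class UncheckedCastSketch {
    // The pattern FindBugs flags (unconfirmed cast): throws ClassCastException
    // whenever the caller hands in a plain Configuration.
    static YarnConfiguration unchecked(Configuration conf) {
        return (YarnConfiguration) conf;
    }

    // Usual remedy: confirm the runtime type first, and wrap otherwise.
    static YarnConfiguration checked(Configuration conf) {
        return conf instanceof YarnConfiguration
            ? (YarnConfiguration) conf
            : new YarnConfiguration(conf);
    }

    public static void main(String[] args) {
        System.out.println(checked(new Configuration()) instanceof YarnConfiguration);
        try {
            unchecked(new Configuration());
        } catch (ClassCastException e) {
            System.out.println("unchecked cast failed as expected");
        }
    }
}
```

The `instanceof`-plus-wrap form silences the warning because the cast is guarded, and it keeps working for callers that pass a base `Configuration`.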
[GitHub] [hadoop] hadoop-yetus commented on issue #1412: Avoiding logging Sasl message
hadoop-yetus commented on issue #1412: Avoiding logging Sasl message URL: https://github.com/apache/hadoop/pull/1412#issuecomment-530001683 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 39 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1058 | trunk passed | | +1 | compile | 988 | trunk passed | | +1 | checkstyle | 53 | trunk passed | | +1 | mvnsite | 85 | trunk passed | | +1 | shadedclient | 832 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 72 | trunk passed | | 0 | spotbugs | 125 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 123 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 48 | the patch passed | | +1 | compile | 936 | the patch passed | | +1 | javac | 936 | the patch passed | | +1 | checkstyle | 52 | the patch passed | | +1 | mvnsite | 81 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 688 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 72 | the patch passed | | +1 | findbugs | 128 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 537 | hadoop-common in the patch passed. | | +1 | asflicense | 53 | The patch does not generate ASF License warnings. 
| | | | 5924 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1412/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1412 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 2f97ce7c78ca 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / dc9abd2 | | Default Java | 1.8.0_222 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1412/1/testReport/ | | Max. process+thread count | 1373 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1412/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] steveloughran commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)
steveloughran commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log) URL: https://github.com/apache/hadoop/pull/1208#issuecomment-52599
Test runs before your last commit: the first was fine; the second, with -Dauth, failed:
```
[ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 25.371 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck
[ERROR] testIAuthoritativeDirectoryContentMismatch(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  Time elapsed: 3.461 s  <<< ERROR!
java.util.NoSuchElementException: No value present
	at java.util.Optional.get(Optional.java:135)
	at org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIAuthoritativeDirectoryContentMismatch(ITestS3GuardFsck.java:403)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748)
```
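The NoSuchElementException in that trace comes from calling `Optional.get()` on an empty Optional (the test apparently expected a recorded violation that was never produced). A minimal illustration of the failure mode, and a variant that fails with a more descriptive message instead; the method names here are illustrative, not taken from ITestS3GuardFsck:

```java
import java.util.NoSuchElementException;
import java.util.Optional;

public class OptionalGetSketch {
    // The failing pattern: get() on an empty Optional throws
    // NoSuchElementException("No value present"), as in the trace above.
    static String unsafe(Optional<String> violation) {
        return violation.get();
    }

    // More diagnosable: turn the empty case into a descriptive test failure.
    static String expectViolation(Optional<String> violation, String path) {
        return violation.orElseThrow(() -> new AssertionError(
            "expected a violation recorded for " + path + " but found none"));
    }

    public static void main(String[] args) {
        System.out.println(expectViolation(Optional.of("CONTENT_MISMATCH"), "s3a://bucket/dir"));
        try {
            unsafe(Optional.empty());
        } catch (NoSuchElementException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

In a JUnit test, asserting `violation.isPresent()` before the `get()` call achieves the same effect: the assertion message points at the real problem rather than an opaque NoSuchElementException.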
[GitHub] [hadoop] hadoop-yetus commented on issue #1403: HADOOP-16548 Made flush operation configurable in ABFS
hadoop-yetus commented on issue #1403: HADOOP-16548 Made flush operation configurable in ABFS
URL: https://github.com/apache/hadoop/pull/1403#issuecomment-529996321

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 1997 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 1104 | trunk passed |
| +1 | compile | 30 | trunk passed |
| +1 | checkstyle | 23 | trunk passed |
| +1 | mvnsite | 33 | trunk passed |
| +1 | shadedclient | 746 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 26 | trunk passed |
| 0 | spotbugs | 53 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 51 | trunk passed |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 29 | the patch passed |
| +1 | compile | 24 | the patch passed |
| +1 | javac | 24 | the patch passed |
| +1 | checkstyle | 16 | the patch passed |
| +1 | mvnsite | 28 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 784 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 22 | the patch passed |
| +1 | findbugs | 54 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 81 | hadoop-azure in the patch passed. |
| +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
| | | | 5175 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1403/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1403 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 82e1d4e70635 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / dc9abd2 |
| Default Java | 1.8.0_222 |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1403/1/testReport/ |
| Max. process+thread count | 412 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1403/1/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1407: HADOOP-16490. Improve S3Guard handling of FNFEs in copy
hadoop-yetus commented on issue #1407: HADOOP-16490. Improve S3Guard handling of FNFEs in copy
URL: https://github.com/apache/hadoop/pull/1407#issuecomment-529994330

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 189 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 1 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 11 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 105 | Maven dependency ordering for branch |
| +1 | mvninstall | 1051 | trunk passed |
| +1 | compile | 1017 | trunk passed |
| +1 | checkstyle | 149 | trunk passed |
| +1 | mvnsite | 131 | trunk passed |
| +1 | shadedclient | 976 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 111 | trunk passed |
| 0 | spotbugs | 69 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 197 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 27 | Maven dependency ordering for patch |
| +1 | mvninstall | 80 | the patch passed |
| +1 | compile | 983 | the patch passed |
| +1 | javac | 983 | the patch passed |
| +1 | checkstyle | 143 | the patch passed |
| +1 | mvnsite | 133 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | xml | 1 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 715 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 117 | the patch passed |
| +1 | findbugs | 206 | the patch passed |
||| _ Other Tests _ |
| -1 | unit | 519 | hadoop-common in the patch failed. |
| +1 | unit | 97 | hadoop-aws in the patch passed. |
| +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
| | | | 7046 | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.util.TestDiskCheckerWithDiskIo |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1407/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1407 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux fd34b8e9e3dd 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 38c1a10 |
| Default Java | 1.8.0_222 |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1407/3/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1407/3/testReport/ |
| Max. process+thread count | 1367 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1407/3/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x
[ https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926732#comment-16926732 ]

Wei-Chiu Chuang commented on HADOOP-16152:
------------------------------------------

{noformat}
[WARNING] /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpRequestLog.java:[27,31] [deprecation] NCSARequestLog in org.eclipse.jetty.server has been deprecated
[WARNING] /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpRequestLog.java:[88,8] [deprecation] NCSARequestLog in org.eclipse.jetty.server has been deprecated
[WARNING] /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpRequestLog.java:[88,40] [deprecation] NCSARequestLog in org.eclipse.jetty.server has been deprecated
[WARNING] /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java:[520,44] [deprecation] SslContextFactory() in SslContextFactory has been deprecated
[WARNING] /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java:[521,23] [deprecation] setNeedClientAuth(boolean) in SslContextFactory has been deprecated
[WARNING] /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpRequestLog.java:[21,31] [deprecation] NCSARequestLog in org.eclipse.jetty.server has been deprecated
[WARNING] /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpRequestLog.java:[45,35] [deprecation] NCSARequestLog in org.eclipse.jetty.server has been deprecated
[WARNING] /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestJettyHelper.java:[108,46] [deprecation] SslContextFactory() in SslContextFactory has been deprecated
[WARNING] /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestJettyHelper.java:[109,25] [deprecation] setNeedClientAuth(boolean) in SslContextFactory has been deprecated
[WARNING] /testptch/hadoop/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java:[99,29] [deprecation] ConcurrentHashSet in org.eclipse.jetty.util has been deprecated
[WARNING] /testptch/hadoop/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java:[349,30] [deprecation] ConcurrentHashSet in org.eclipse.jetty.util has been deprecated
{noformat}

We need to deal with these deprecation warnings. I'm surprised by the shadedclient failure, as the build passed successfully for me.

> Upgrade Eclipse Jetty version to 9.4.x
> --------------------------------------
>
>                 Key: HADOOP-16152
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16152
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 3.2.0
>            Reporter: Yuming Wang
>            Assignee: Yuming Wang
>            Priority: Major
>         Attachments: HADOOP-16152.002.patch, HADOOP-16152.v1.patch
>
> Some big data projects have already upgraded Jetty to 9.4.x, which causes some compatibility issues:
> Spark: https://github.com/apache/spark/blob/02a0cdea13a5eebd27649a60d981de35156ba52c/pom.xml#L146
> Calcite: https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87
> Hive: https://issues.apache.org/jira/browse/HIVE-21211

This message was sent by Atlassian Jira (v8.3.2#803003)
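For context, each of the deprecations flagged above has a direct replacement in Jetty 9.4.x: `NCSARequestLog` is superseded by `CustomRequestLog`, the bare `SslContextFactory()` constructor by the `SslContextFactory.Server` subclass (which still exposes `setNeedClientAuth`), and Jetty's `ConcurrentHashSet` by the JDK's `ConcurrentHashMap.newKeySet()`. The following is a rough migration sketch under those assumptions, not the actual HADOOP-16152 patch; the class and method names here are illustrative only.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.eclipse.jetty.server.CustomRequestLog;
import org.eclipse.jetty.server.RequestLogWriter;
import org.eclipse.jetty.util.ssl.SslContextFactory;

// Illustrative sketch of the Jetty 9.4 replacements for the deprecated
// APIs reported by javac above (not the Hadoop patch itself).
public class Jetty94MigrationSketch {

  // NCSARequestLog is deprecated in 9.4.x; CustomRequestLog with the
  // NCSA format string produces the same access-log output.
  static CustomRequestLog ncsaRequestLog(String logFile) {
    RequestLogWriter writer = new RequestLogWriter(logFile);
    return new CustomRequestLog(writer, CustomRequestLog.NCSA_FORMAT);
  }

  // The bare SslContextFactory() constructor is deprecated; server-side
  // code should use SslContextFactory.Server, which keeps
  // setNeedClientAuth(boolean).
  static SslContextFactory.Server serverSslContextFactory(boolean needClientAuth) {
    SslContextFactory.Server factory = new SslContextFactory.Server();
    factory.setNeedClientAuth(needClientAuth);
    return factory;
  }

  // org.eclipse.jetty.util.ConcurrentHashSet is deprecated; the JDK's
  // ConcurrentHashMap.newKeySet() is a drop-in concurrent Set.
  static <T> Set<T> concurrentSet() {
    return ConcurrentHashMap.newKeySet();
  }
}
```

Switching to `SslContextFactory.Server` in `HttpServer2` would also silence the `setNeedClientAuth` warning, since the deprecation applies to the base class's method, not the server subclass's.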
[GitHub] [hadoop] hadoop-yetus commented on issue #1404: HDFS-13660 Copy file till the source file length during distcp
hadoop-yetus commented on issue #1404: HDFS-13660 Copy file till the source file length during distcp
URL: https://github.com/apache/hadoop/pull/1404#issuecomment-529986874

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 102 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 1 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 5 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 1089 | trunk passed |
| +1 | compile | 28 | trunk passed |
| +1 | checkstyle | 28 | trunk passed |
| +1 | mvnsite | 34 | trunk passed |
| +1 | shadedclient | 772 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 23 | trunk passed |
| 0 | spotbugs | 42 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 40 | trunk passed |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 28 | the patch passed |
| +1 | compile | 21 | the patch passed |
| +1 | javac | 21 | the patch passed |
| -0 | checkstyle | 21 | hadoop-tools/hadoop-distcp: The patch generated 18 new + 308 unchanged - 1 fixed = 326 total (was 309) |
| +1 | mvnsite | 24 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 765 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 19 | the patch passed |
| +1 | findbugs | 48 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 817 | hadoop-distcp in the patch passed. |
| +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
| | | | 3976 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1404/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1404 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux bf01694ede4a 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / dc9abd2 |
| Default Java | 1.8.0_222 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1404/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-distcp.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1404/1/testReport/ |
| Max. process+thread count | 413 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1404/1/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)
hadoop-yetus commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#issuecomment-529985669

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 34 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 6 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 1136 | trunk passed |
| +1 | compile | 31 | trunk passed |
| +1 | checkstyle | 22 | trunk passed |
| +1 | mvnsite | 35 | trunk passed |
| +1 | shadedclient | 782 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 27 | trunk passed |
| 0 | spotbugs | 55 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 54 | trunk passed |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 31 | the patch passed |
| +1 | compile | 25 | the patch passed |
| +1 | javac | 25 | the patch passed |
| -0 | checkstyle | 18 | hadoop-tools/hadoop-aws: The patch generated 22 new + 25 unchanged - 0 fixed = 47 total (was 25) |
| +1 | mvnsite | 31 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 825 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 23 | the patch passed |
| +1 | findbugs | 61 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 81 | hadoop-aws in the patch passed. |
| +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
| | | | 3341 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/20/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1208 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 3205c8e4859c 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / dc9abd2 |
| Default Java | 1.8.0_222 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/20/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/20/testReport/ |
| Max. process+thread count | 339 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/20/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1397: MAPREDUCE-7237. Supports config the shuffle's path cache related parameters
hadoop-yetus commented on issue #1397: MAPREDUCE-7237. Supports config the shuffle's path cache related parameters
URL: https://github.com/apache/hadoop/pull/1397#issuecomment-529981887

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 46 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 1253 | trunk passed |
| +1 | compile | 21 | trunk passed |
| +1 | checkstyle | 19 | trunk passed |
| +1 | mvnsite | 23 | trunk passed |
| +1 | shadedclient | 763 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 20 | trunk passed |
| 0 | spotbugs | 37 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 33 | trunk passed |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 21 | the patch passed |
| +1 | compile | 16 | the patch passed |
| +1 | javac | 16 | the patch passed |
| -0 | checkstyle | 12 | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle: The patch generated 1 new + 60 unchanged - 5 fixed = 61 total (was 65) |
| +1 | mvnsite | 18 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 829 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 18 | the patch passed |
| +1 | findbugs | 42 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 27 | hadoop-mapreduce-client-shuffle in the patch passed. |
| +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
| | | | 3306 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1397/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1397 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 048ac159b3dc 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / dc9abd2 |
| Default Java | 1.8.0_222 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1397/1/artifact/out/diff-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1397/1/testReport/ |
| Max. process+thread count | 340 (vs. ulimit of 5500) |
| modules | C: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle U: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1397/1/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.