[jira] [Commented] (HADOOP-16958) NullPointerException (NPE) when hadoop.security.authorization is enabled but the input PolicyProvider for ZKFCRpcServer is NULL
[ https://issues.apache.org/jira/browse/HADOOP-16958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17081102#comment-17081102 ] Hadoop QA commented on HADOOP-16958: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 44s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 53s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 20m 20s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 46s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 44s{color} | {color:orange} root: The patch generated 4 new + 50 unchanged - 0 fixed = 54 total (was 50) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 20s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 33s{color} | {color:red} hadoop-common in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}130m 19s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 7s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}259m 36s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ha.TestZKFailoverController | | | hadoop.ha.TestZKFailoverControllerStress | | | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks | | | hadoop.hdfs.server.datanode.TestBPOfferService | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.tools.TestDFSZKFailoverController | | | hadoop.hdfs.qjournal.server.TestJournalNodeSync | | | hadoop.hdfs.TestSafeModeWithStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HADOOP-16958 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12999592/HADOOP-16958.002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit
[jira] [Commented] (HADOOP-16967) TestSequenceFile#testRecursiveSeqFileCreate fails to clean up its data, causing failures in subsequent runs
[ https://issues.apache.org/jira/browse/HADOOP-16967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17081062#comment-17081062 ] Hadoop QA commented on HADOOP-16967: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 49s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 20m 22s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 28s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 29s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 2s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}133m 50s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ha.TestZKFailoverController | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HADOOP-16967 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12999593/HADOOP-16967.000.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 1e258029151b 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 275c478 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/16872/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16872/testReport/ | | Max. process+thread count | 3266 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16872/console | | Powered by | Apache Yetus 0.8.0
[jira] [Assigned] (HADOOP-16528) Update document for web authentication kerberos principal configuration
[ https://issues.apache.org/jira/browse/HADOOP-16528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki reassigned HADOOP-16528: - Assignee: Masatake Iwasaki (was: Chen Zhang) > Update document for web authentication kerberos principal configuration > --- > > Key: HADOOP-16528 > URL: https://issues.apache.org/jira/browse/HADOOP-16528 > Project: Hadoop Common > Issue Type: Bug > Components: auth >Reporter: Chen Zhang >Assignee: Masatake Iwasaki >Priority: Major > > The config {{dfs.web.authentication.kerberos.principal}} is no longer used > after HADOOP-16354, but the WebHDFS documentation has not been updated; > hdfs-default.xml should be updated as well. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #1950: HADOOP-16586. ITestS3GuardFsck, others fails when run using a local m…
hadoop-yetus commented on issue #1950: HADOOP-16586. ITestS3GuardFsck, others fails when run using a local m… URL: https://github.com/apache/hadoop/pull/1950#issuecomment-612266601 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 15s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 21m 57s | trunk passed | | +1 :green_heart: | compile | 0m 32s | trunk passed | | +1 :green_heart: | checkstyle | 0m 23s | trunk passed | | +1 :green_heart: | mvnsite | 0m 36s | trunk passed | | +1 :green_heart: | shadedclient | 16m 16s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 24s | trunk passed | | +0 :ok: | spotbugs | 1m 0s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 58s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 31s | the patch passed | | +1 :green_heart: | compile | 0m 26s | the patch passed | | +1 :green_heart: | javac | 0m 27s | the patch passed | | +1 :green_heart: | checkstyle | 0m 17s | the patch passed | | +1 :green_heart: | mvnsite | 0m 30s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 13s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 21s | the patch passed | | +1 :green_heart: | findbugs | 1m 2s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 9s | hadoop-aws in the patch passed. 
| | +1 :green_heart: | asflicense | 0m 28s | The patch does not generate ASF License warnings. | | | | 63m 45s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1950/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1950 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 6cee8978d456 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 275c478 | | Default Java | 1.8.0_242 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1950/2/testReport/ | | Max. process+thread count | 429 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1950/2/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-9851) dfs -chown does not like "+" plus sign in user name
[ https://issues.apache.org/jira/browse/HADOOP-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17081016#comment-17081016 ] Hadoop QA commented on HADOOP-9851: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 55s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 9s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 24m 4s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 31s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 20s{color} | {color:orange} root: The patch generated 1 new + 192 unchanged - 1 fixed = 193 total (was 193) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 47s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 44s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}115m 4s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 50s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}265m 47s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestRollingUpgrade | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HADOOP-9851 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12893275/HADOOP-9851.01.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux e6d37394853f 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 275c478 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | checkstyle |
[GitHub] [hadoop] iwasakims commented on issue #1950: HADOOP-16586. ITestS3GuardFsck, others fails when run using a local m…
iwasakims commented on issue #1950: HADOOP-16586. ITestS3GuardFsck, others fails when run using a local m… URL: https://github.com/apache/hadoop/pull/1950#issuecomment-612243699 I needed to remove the wildcard import of `S3ATestUtils.*` from ITestPartialRenamesDeletes to fix the following error.
```
[ERROR] testPartialEmptyDirDelete[bulk-delete=true](org.apache.hadoop.fs.s3a.impl.ITestPartialRenamesDeletes) Time elapsed: 0.022 s <<< ERROR!
java.lang.NullPointerException: No test bucket
  at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:895)
  at org.apache.hadoop.fs.s3a.S3ATestUtils.getTestBucketName(S3ATestUtils.java:739)
  at org.apache.hadoop.fs.s3a.impl.ITestPartialRenamesDeletes.createConfiguration(ITestPartialRenamesDeletes.java:326)
  at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:186)
  at org.apache.hadoop.fs.s3a.AbstractS3ATestBase.setup(AbstractS3ATestBase.java:59)
  at org.apache.hadoop.fs.s3a.impl.ITestPartialRenamesDeletes.setup(ITestPartialRenamesDeletes.java:244)
  ...
```
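[Editor's note] The "No test bucket" failure above comes from a Guava-style precondition on the configured test bucket. A minimal, self-contained sketch of that pattern (plain JDK `Objects.requireNonNull` standing in for Guava's `Preconditions.checkNotNull`; the method name and argument are illustrative, not the real S3ATestUtils signature):

```java
import java.util.Objects;

public class TestBucketGuard {
    // Hypothetical stand-in for S3ATestUtils.getTestBucketName: the real
    // method derives the bucket from the test filesystem configuration; a
    // null argument here models that configuration being absent.
    static String getTestBucketName(String configuredBucket) {
        // Mirrors Preconditions.checkNotNull(bucket, "No test bucket"):
        // throws NullPointerException with the given message when null.
        return Objects.requireNonNull(configuredBucket, "No test bucket");
    }

    public static void main(String[] args) {
        try {
            getTestBucketName(null);
        } catch (NullPointerException e) {
            System.out.println(e.getMessage()); // prints "No test bucket"
        }
    }
}
```

The point of the guard is that the test fails fast with a readable message the moment the bucket configuration is missing, instead of an opaque NPE deeper in setup.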
[GitHub] [hadoop] hadoop-yetus commented on issue #1952: HDFS-1820. FTPFileSystem attempts to close the outputstream even when it is not initialised.
hadoop-yetus commented on issue #1952: HDFS-1820. FTPFileSystem attempts to close the outputstream even when it is not initialised. URL: https://github.com/apache/hadoop/pull/1952#issuecomment-612242452 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 34s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 19m 22s | trunk passed | | +1 :green_heart: | compile | 16m 50s | trunk passed | | +1 :green_heart: | checkstyle | 0m 48s | trunk passed | | +1 :green_heart: | mvnsite | 1m 28s | trunk passed | | +1 :green_heart: | shadedclient | 16m 37s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 3s | trunk passed | | +0 :ok: | spotbugs | 2m 8s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 2m 6s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 54s | the patch passed | | +1 :green_heart: | compile | 16m 16s | the patch passed | | +1 :green_heart: | javac | 16m 16s | the patch passed | | +1 :green_heart: | checkstyle | 0m 49s | the patch passed | | +1 :green_heart: | mvnsite | 1m 26s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 13m 57s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 1m 3s | the patch passed | | +1 :green_heart: | findbugs | 2m 13s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 9m 20s | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 54s | The patch does not generate ASF License warnings. | | | | 106m 54s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1952/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1952 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 164fa4dd4ebd 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 275c478 | | Default Java | 1.8.0_242 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1952/1/testReport/ | | Max. process+thread count | 1822 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1952/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[jira] [Updated] (HADOOP-16967) TestSequenceFile#testRecursiveSeqFileCreate fails to clean up its data, causing failures in subsequent runs
[ https://issues.apache.org/jira/browse/HADOOP-16967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ctest updated HADOOP-16967: --- Attachment: HADOOP-16967.000.patch Labels: easyfix test (was: ) Status: Patch Available (was: Open) > TestSequenceFile#testRecursiveSeqFileCreate fails to clean up its data, > causing failures in subsequent runs > --- > > Key: HADOOP-16967 > URL: https://issues.apache.org/jira/browse/HADOOP-16967 > Project: Hadoop Common > Issue Type: Bug > Components: common, test >Affects Versions: 3.2.1, 3.4.0 >Reporter: Ctest >Priority: Minor > Labels: easyfix, test > Attachments: HADOOP-16967.000.patch > > > The test expects an IOException when creating a writer for the file > `target/test/data/recursiveCreateDir/file` with `createParent=false`, and it > expects the writer to be created successfully with `createParent=true` > (`createParent` means "create the parent directory if it does not exist"). > The test passes on its first run but fails on the second, because it does > not clean up the parent directory it created. > The parent directory `recursiveCreateDir` was created but never deleted > before the test finished, so a rerun still treats it as non-existent and > expects an IOException from creating a writer with `createParent=false`. > That IOException never comes, because `recursiveCreateDir` already exists > from the first run. 
> {code:java}
> @SuppressWarnings("deprecation")
> @Test
> public void testRecursiveSeqFileCreate() throws IOException {
>   FileSystem fs = FileSystem.getLocal(conf);
>   Path name = new Path(new Path(GenericTestUtils.getTempPath(
>       "recursiveCreateDir")), "file"); // FILE SUCCESSFULLY CREATED HERE
>   boolean createParent = false;
>   try {
>     SequenceFile.createWriter(fs, conf, name, RandomDatum.class,
>         RandomDatum.class, 512, (short) 1, 4096, createParent,
>         CompressionType.NONE, null, new Metadata());
>     fail("Expected an IOException due to missing parent");
>   } catch (IOException ioe) {
>     // Expected
>   }
>   createParent = true;
>   SequenceFile.createWriter(fs, conf, name, RandomDatum.class,
>       RandomDatum.class, 512, (short) 1, 4096, createParent,
>       CompressionType.NONE, null, new Metadata());
>   // should succeed, fails if exception thrown
> }
> {code}
> Suggested patch:
> {code:java}
> diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
> index 044824356ed..1aff2936264 100644
> --- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
> +++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
> @@ -649,8 +649,9 @@ public void testCreateWriterOnExistingFile() throws IOException {
>    @Test
>    public void testRecursiveSeqFileCreate() throws IOException {
>      FileSystem fs = FileSystem.getLocal(conf);
> -    Path name = new Path(new Path(GenericTestUtils.getTempPath(
> -        "recursiveCreateDir")), "file");
> +    Path parentDir = new Path(GenericTestUtils.getTempPath(
> +        "recursiveCreateDir"));
> +    Path name = new Path(parentDir, "file");
>      boolean createParent = false;
>  
>      try {
> @@ -667,6 +668,9 @@ public void testRecursiveSeqFileCreate() throws IOException {
>          RandomDatum.class, 512, (short) 1, 4096, createParent,
>          CompressionType.NONE, null, new Metadata());
>      // should succeed, fails if exception thrown
> +
> +    fs.deleteOnExit(parentDir);
> +    fs.close();
>    }
>  
>    @Test
> {code}
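[Editor's note] The essence of the suggested patch is deleting the test's parent directory so a rerun starts from a clean state. A self-contained sketch of the same idea using plain `java.nio.file` (Hadoop's `FileSystem`/`Path` are not used here; the method names are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RecursiveCreateSketch {
    // Models SequenceFile.createWriter's createParent flag: refuse to create
    // the file when the parent directory is missing and createParent=false.
    static Path createFile(Path parent, boolean createParent) throws IOException {
        if (Files.notExists(parent)) {
            if (!createParent) {
                throw new IOException("Parent does not exist: " + parent);
            }
            Files.createDirectories(parent);
        }
        return Files.createFile(parent.resolve("file"));
    }

    // The cleanup the original test was missing: remove the file and its
    // parent so the next run sees a non-existent parent again.
    static void cleanup(Path parent) throws IOException {
        Files.deleteIfExists(parent.resolve("file"));
        Files.deleteIfExists(parent);
    }
}
```

With this cleanup in place, every run observes the same initial state: the first `createFile(parent, false)` call fails, the `createParent=true` call succeeds, and after `cleanup` a rerun behaves identically.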
[jira] [Created] (HADOOP-16967) TestSequenceFile#testRecursiveSeqFileCreate fails to clean up its data, causing failures in subsequent runs
Ctest created HADOOP-16967: -- Summary: TestSequenceFile#testRecursiveSeqFileCreate fails to clean up its data, causing failures in subsequent runs Key: HADOOP-16967 URL: https://issues.apache.org/jira/browse/HADOOP-16967 Project: Hadoop Common Issue Type: Bug Components: common, test Affects Versions: 3.2.1, 3.4.0 Reporter: Ctest The test expects an IOException when creating a writer for the file `target/test/data/recursiveCreateDir/file` with `createParent=false`, and it expects the writer to be created successfully with `createParent=true` (`createParent` means "create the parent directory if it does not exist"). The test passes on its first run but fails on the second, because it does not clean up the parent directory it created. The parent directory `recursiveCreateDir` was created but never deleted before the test finished, so a rerun still treats it as non-existent and expects an IOException from creating a writer with `createParent=false`. That IOException never comes, because `recursiveCreateDir` already exists from the first run. 
{code:java}
@SuppressWarnings("deprecation")
@Test
public void testRecursiveSeqFileCreate() throws IOException {
  FileSystem fs = FileSystem.getLocal(conf);
  Path name = new Path(new Path(GenericTestUtils.getTempPath(
      "recursiveCreateDir")), "file"); // FILE SUCCESSFULLY CREATED HERE
  boolean createParent = false;

  try {
    SequenceFile.createWriter(fs, conf, name, RandomDatum.class,
        RandomDatum.class, 512, (short) 1, 4096, createParent,
        CompressionType.NONE, null, new Metadata());
    fail("Expected an IOException due to missing parent");
  } catch (IOException ioe) {
    // Expected
  }
  createParent = true;
  SequenceFile.createWriter(fs, conf, name, RandomDatum.class,
      RandomDatum.class, 512, (short) 1, 4096, createParent,
      CompressionType.NONE, null, new Metadata());
  // should succeed, fails if exception thrown

  fs.close();
}
{code}
Suggested patch:
{code:java}
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
index 044824356ed..1aff2936264 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java
@@ -649,8 +649,9 @@ public void testCreateWriterOnExistingFile() throws IOException {
   @Test
   public void testRecursiveSeqFileCreate() throws IOException {
     FileSystem fs = FileSystem.getLocal(conf);
-    Path name = new Path(new Path(GenericTestUtils.getTempPath(
-        "recursiveCreateDir")), "file");
+    Path parentDir = new Path(GenericTestUtils.getTempPath(
+        "recursiveCreateDir"));
+    Path name = new Path(parentDir, "file");
 
     boolean createParent = false;
     try {
@@ -667,6 +668,9 @@ public void testRecursiveSeqFileCreate() throws IOException {
         RandomDatum.class, 512, (short) 1, 4096, createParent,
         CompressionType.NONE, null, new Metadata());
     // should succeed, fails if exception thrown
+
+    fs.deleteOnExit(parentDir);
+
     fs.close();
   }
 
   @Test
{code}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
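The order dependence described above can be reproduced without Hadoop at all. The sketch below uses plain java.io in place of SequenceFile.createWriter (class and method names are illustrative, not from the patch): a parent directory left behind by one "run" silently changes the outcome of the next.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Self-contained model of the leftover-state bug: a create that is
 * expected to fail when the parent directory is missing stops failing
 * once a previous run has created the parent and nobody cleaned it up.
 */
public class StalePollutionDemo {

    /** Mimics createWriter(..., createParent=false): refuse to create
     *  the file unless its parent directory already exists. */
    static boolean createNoParent(File f) throws IOException {
        if (!f.getParentFile().isDirectory()) {
            throw new IOException("parent does not exist: " + f.getParent());
        }
        return f.createNewFile();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("seqfile-demo");
        File file = new File(new File(tmp.toFile(), "recursiveCreateDir"), "file");

        // "First run": parent missing, so the expected exception fires.
        boolean firstRunThrew = false;
        try { createNoParent(file); } catch (IOException e) { firstRunThrew = true; }

        // The createParent=true half of the test leaves the directory behind...
        file.getParentFile().mkdirs();

        // ..."second run": the same call no longer throws.
        boolean secondRunThrew = false;
        try { createNoParent(file); } catch (IOException e) { secondRunThrew = true; }

        System.out.println(firstRunThrew + " " + secondRunThrew); // true false
    }
}
```

This is exactly why the suggested patch registers the parent directory for deletion: the assertion about "parent is absent" is only valid if each run starts from a clean directory.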
[jira] [Updated] (HADOOP-16958) NullPointerException(NPE) when hadoop.security.authorization is enabled but the input PolicyProvider for ZKFCRpcServer is NULL
[ https://issues.apache.org/jira/browse/HADOOP-16958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ctest updated HADOOP-16958:
---------------------------
    Attachment: HADOOP-16958.002.patch

> NullPointerException(NPE) when hadoop.security.authorization is enabled but
> the input PolicyProvider for ZKFCRpcServer is NULL
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-16958
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16958
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: common, ha
>    Affects Versions: 3.2.1
>            Reporter: Ctest
>            Priority: Critical
>         Attachments: HADOOP-16958.000.patch, HADOOP-16958.001.patch, HADOOP-16958.002.patch
>
> During initialization, ZKFCRpcServer refreshes the service authorization ACL
> for the service handled by this server if the config
> hadoop.security.authorization is enabled, by calling refreshServiceAcl with
> the input PolicyProvider and Configuration.
> {code:java}
> ZKFCRpcServer(Configuration conf,
>     InetSocketAddress bindAddr,
>     ZKFailoverController zkfc,
>     PolicyProvider policy) throws IOException {
>   this.server = ...
>
>   // set service-level authorization security policy
>   if (conf.getBoolean(
>       CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION, false)) {
>     server.refreshServiceAcl(conf, policy);
>   }
> }{code}
> refreshServiceAcl calls
> ServiceAuthorizationManager#refreshWithLoadedConfiguration, which directly
> gets services from the provider with provider.getServices(). When the
> provider is NULL, this throws an NPE without an informative message. In
> addition, the default value of the config
> `hadoop.security.authorization.policyprovider` (which controls the
> PolicyProvider here) is NULL, and the only caller of the ZKFCRpcServer
> constructor supplies the provider through an abstract method,
> getPolicyProvider, which does not enforce that the PolicyProvider is
> non-NULL.
> The suggestion here is to add either a guard check or exception handling,
> with an informative log message, in ZKFCRpcServer to handle a NULL input
> PolicyProvider.
>
> I am very happy to provide a patch for it if the issue is confirmed :)
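The guard the description asks for can be modeled in isolation. The class below is an illustrative sketch, not the attached patch, and its names (AclRefresher, refresh) are hypothetical: it shows how a null check via Objects.requireNonNull turns the opaque NPE raised deep inside provider.getServices() into an error that names the misconfiguration.

```java
import java.util.Objects;

/**
 * Minimal model of the proposed guard in ZKFCRpcServer: fail fast with a
 * descriptive message instead of letting refreshServiceAcl() hit an NPE
 * inside ServiceAuthorizationManager. Names here are illustrative.
 */
class AclRefresher {

    /**
     * @param authorizationEnabled value of hadoop.security.authorization
     * @param policyProvider       may be null, which is the bug's trigger
     */
    static String refresh(boolean authorizationEnabled, Object policyProvider) {
        if (!authorizationEnabled) {
            // Matches the existing behavior: no ACL refresh at all.
            return "authorization disabled; nothing to refresh";
        }
        // The guard: surface a message that points at the real cause
        // rather than an NPE with no context.
        Objects.requireNonNull(policyProvider,
            "hadoop.security.authorization is enabled but no PolicyProvider was supplied");
        return "service ACL refreshed";
    }
}
```

With the guard in place, a misconfigured caller sees the configuration key and the missing PolicyProvider named in the exception message, which is the informative failure the issue requests.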
[jira] [Commented] (HADOOP-14846) Wrong shell exit code if the shell process cannot be even started
[ https://issues.apache.org/jira/browse/HADOOP-14846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080969#comment-17080969 ] Hadoop QA commented on HADOOP-14846: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 37m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 1s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 8s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 56s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}155m 0s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HADOOP-14846 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12976414/HADOOP-14846.002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 2a99e6d0e95d 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 275c478 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16857/testReport/ | | Max. process+thread count | 1373 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16857/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Wrong shell exit code if the shell process cannot be even started > - > >
[GitHub] [hadoop] hadoop-yetus commented on issue #1951: HDFS-15270. Account for *env == NULL in hdfsThreadDestructor
hadoop-yetus commented on issue #1951: HDFS-15270. Account for *env == NULL in hdfsThreadDestructor URL: https://github.com/apache/hadoop/pull/1951#issuecomment-612209664 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 24s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 23m 16s | trunk passed | | +1 :green_heart: | compile | 2m 8s | trunk passed | | +1 :green_heart: | mvnsite | 0m 20s | trunk passed | | +1 :green_heart: | shadedclient | 41m 32s | branch has no errors when building and testing our client artifacts. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 15s | the patch passed | | +1 :green_heart: | compile | 1m 46s | the patch passed | | -1 :x: | cc | 1m 46s | hadoop-hdfs-project_hadoop-hdfs-native-client generated 5 new + 14 unchanged - 5 fixed = 19 total (was 19) | | +1 :green_heart: | golang | 1m 46s | the patch passed | | +1 :green_heart: | javac | 1m 46s | the patch passed | | +1 :green_heart: | mvnsite | 0m 16s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 32s | patch has no errors when building and testing our client artifacts. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 7m 3s | hadoop-hdfs-native-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. 
| | | | 70m 51s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1951/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1951 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit golang | | uname | Linux 1526112c2fe1 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 275c478 | | Default Java | 1.8.0_242 | | cc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1951/2/artifact/out/diff-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1951/2/testReport/ | | Max. process+thread count | 308 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1951/2/console | | versions | git=2.7.4 maven=3.3.9 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (HADOOP-15082) add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement the test
[ https://issues.apache.org/jira/browse/HADOOP-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080953#comment-17080953 ] Hadoop QA commented on HADOOP-15082: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 45s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 12s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 21s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 54s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 59s{color} | {color:red} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 34s{color} | {color:green} hadoop-azure in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 44s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}124m 13s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.metrics2.source.TestJvmMetrics | | | hadoop.security.token.delegation.TestZKDelegationTokenSecretManager | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HADOOP-15082 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12946523/HADOOP-15082-003.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux f17dccc2892f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Commented] (HADOOP-14663) Switch to OpenClover
[ https://issues.apache.org/jira/browse/HADOOP-14663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080952#comment-17080952 ] Hadoop QA commented on HADOOP-14663: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 37s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 57s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 75m 29s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 30s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 14s{color} | {color:red} hadoop-maven-plugins in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 1m 14s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 23s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 23s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 23s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 6s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 23m 29s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 24s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}124m 26s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HADOOP-14663 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12935374/HADOOP-14663.06.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux 86bcebcc1a89 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 275c478 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | mvninstall | https://builds.apache.org/job/PreCommit-HADOOP-Build/16844/artifact/out/patch-mvninstall-hadoop-maven-plugins.txt | | mvninstall | https://builds.apache.org/job/PreCommit-HADOOP-Build/16844/artifact/out/patch-mvninstall-root.txt | | compile | https://builds.apache.org/job/PreCommit-HADOOP-Build/16844/artifact/out/patch-compile-root.txt | | javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/16844/artifact/out/patch-compile-root.txt | |
[jira] [Commented] (HADOOP-15386) FileSystemContractBaseTest#testMoveFileUnderParent duplicates testRenameFileToSelf
[ https://issues.apache.org/jira/browse/HADOOP-15386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080945#comment-17080945 ] Hadoop QA commented on HADOOP-15386: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 39s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 8s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 36s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 3s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 51s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}123m 9s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HADOOP-15386 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12919160/HADOOP-15386.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 3f2e02b4467f 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 275c478 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16852/testReport/ | | Max. process+thread count | 1350 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16852/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > FileSystemContractBaseTest#testMoveFileUnderParent duplicates > testRenameFileToSelf >
[jira] [Commented] (HADOOP-15842) add fs.azure.account.oauth2.client.secret to hadoop.security.sensitive-config-keys
[ https://issues.apache.org/jira/browse/HADOOP-15842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080943#comment-17080943 ] Hadoop QA commented on HADOOP-15842: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 43s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 46s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 0s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 2s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 49s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}113m 14s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HADOOP-15842 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12944707/HADOOP-15842-002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 12fd1b52fe88 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 275c478 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16841/testReport/ | | Max. process+thread count | 1837 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output |
[jira] [Commented] (HADOOP-14703) ConsoleSink for metrics2
[ https://issues.apache.org/jira/browse/HADOOP-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080940#comment-17080940 ] Hadoop QA commented on HADOOP-14703: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 41s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 11s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 43s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 16s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}111m 6s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HADOOP-14703 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883278/HADOOP-14703.006.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 7b9b55cc225e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 275c478 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16842/testReport/ | | Max. process+thread count | 1462 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16842/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > ConsoleSink for metrics2 > > > Key: HADOOP-14703 > URL:
[jira] [Commented] (HADOOP-14519) Client$Connection#waitForWork may suffer from spurious wakeups
[ https://issues.apache.org/jira/browse/HADOOP-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080939#comment-17080939 ] Hadoop QA commented on HADOOP-14519: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 53s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 4s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 41s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 99 unchanged - 0 fixed = 100 total (was 99) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 5s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 13s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 44s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}110m 31s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HADOOP-14519 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12872408/HADOOP-14519.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 9513cfb864f4 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 275c478 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/16861/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16861/testReport/ | | Max. process+thread count | 1384 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U:
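The hazard named in the HADOOP-14519 title above is the standard reason `Object.wait()` must sit inside a loop that re-checks its predicate and re-derives the remaining timeout: the JVM may return from `wait(timeout)` spuriously, with the condition still false and time still left. A generic sketch of the safe pattern follows; this is an illustration of the idiom, not the actual `Client$Connection#waitForWork` code, and the class and method names are hypothetical.

```java
public class WaitLoop {
    private final Object lock = new Object();
    private boolean ready = false;

    /** Wait up to timeoutMs for ready; a spurious wakeup just re-enters the loop. */
    public boolean awaitReady(long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        synchronized (lock) {
            while (!ready) {                      // re-check the predicate after every wakeup
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) {
                    return false;                 // a genuine timeout, not a spurious return
                }
                lock.wait(remaining);             // may return early without notify
            }
            return true;
        }
    }

    public void markReady() {
        synchronized (lock) {
            ready = true;
            lock.notifyAll();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        WaitLoop w = new WaitLoop();
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) { }
            w.markReady();
        }).start();
        System.out.println("got ready: " + w.awaitReady(2000));          // expect true
        System.out.println("timed out: " + !new WaitLoop().awaitReady(100)); // expect true
    }
}
```

The buggy form this guards against is a bare `if (!ready) lock.wait(timeout);`, which treats any return from `wait` as either success or timeout.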
[jira] [Commented] (HADOOP-8690) Shell may remove a file without going to trash even if skipTrash is not enabled
[ https://issues.apache.org/jira/browse/HADOOP-8690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080936#comment-17080936 ] Hadoop QA commented on HADOOP-8690: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 45s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 5s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 14s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 0s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}108m 20s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HADOOP-8690 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12879359/HADOOP-8690.01.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 86711433b2e7 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 275c478 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16840/testReport/ | | Max. process+thread count | 3153 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16840/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Shell may remove a file without going to trash even if skipTrash is not > enabled > ---
[jira] [Commented] (HADOOP-14231) Using parentheses is not allowed in auth_to_local regex
[ https://issues.apache.org/jira/browse/HADOOP-14231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080934#comment-17080934 ] Hadoop QA commented on HADOOP-14231: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 7s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 40s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 9s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 25s{color} | {color:orange} hadoop-common-project/hadoop-auth: The patch generated 1 new + 6 unchanged - 0 fixed = 7 total (was 6) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 54s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 33s{color} | {color:green} hadoop-auth in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 43s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}103m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HADOOP-14231 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12860378/HADOOP-14231.01.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ee59a31c497a 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 275c478 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/16853/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-auth.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16853/testReport/ | | Max. process+thread count | 308 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-auth U: hadoop-common-project/hadoop-auth | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16853/console | | Powered by | Apache Yetus 0.8.0
[jira] [Commented] (HADOOP-13344) Add option to exclude Hadoop's SLF4J binding
[ https://issues.apache.org/jira/browse/HADOOP-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080933#comment-17080933 ] Hadoop QA commented on HADOOP-13344: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 8s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 36s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 34s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 32s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 36s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 3m 33s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 37s{color} | {color:green} hadoop-assemblies in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 6s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 54s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}102m 50s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed TAP tests | hadoop_add_common_to_classpath.bats.tap | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HADOOP-13344 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12825353/HADOOP-13344.01.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml shellcheck shelldocs | | uname | Linux 476565e80f7b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 275c478 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | shellcheck | v0.3.7 | | TAP logs |
[jira] [Commented] (HADOOP-13238) pid handling is failing on secure datanode
[ https://issues.apache.org/jira/browse/HADOOP-13238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080931#comment-17080931 ] Hadoop QA commented on HADOOP-13238: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 40m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 27s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. 
{color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 12s{color} | {color:green} The patch generated 0 new + 104 unchanged - 132 fixed = 104 total (was 236) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 29s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 28s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}107m 1s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HADOOP-13238 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/1282/HADOOP-13238.02.patch | | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs | | uname | Linux e93969e80625 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 275c478 | | maven | version: Apache Maven 3.3.9 | | shellcheck | v0.3.7 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16855/testReport/ | | Max. process+thread count | 313 (vs. 
ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16855/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > pid handling is failing on secure datanode > -- > > Key: HADOOP-13238 > URL: https://issues.apache.org/jira/browse/HADOOP-13238 > Project: Hadoop Common > Issue Type: Bug > Components: scripts, security >Reporter: Allen Wittenauer >Assignee: Andras Bokor >Priority: Major > Attachments: HADOOP-13238.01.patch, HADOOP-13238.02.patch > > > {code} > hdfs --daemon stop datanode > cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or > directory > WARNING: pid has changed for datanode, skip deleting pid file > cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or > directory > WARNING: daemon pid has changed for datanode, skip deleting daemon pid file > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional
[GitHub] [hadoop] mpryahin opened a new pull request #1952: HDFS-1820 FTPFileSystem attempts to close the outputstream even when it is not initialised
mpryahin opened a new pull request #1952: HDFS-1820 FTPFileSystem attempts to close the outputstream even when it is not initialised URL: https://github.com/apache/hadoop/pull/1952 - Making sure an underlying outputstream is successfully created by apache-commons FTPClient before wrapping it with FSDataOutputStream. - Gracefully release resources when a destination file can't be created due to lack of permissions. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
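The fix the PR describes boils down to a null check before wrapping: when the commons-net FTPClient cannot create the remote file (for example, due to a permission error), the stream it hands back is null, and wrapping that blindly turns into an NPE later when `close()` runs. A minimal sketch of the guard, with a hypothetical `StreamFactory` standing in for the real FTPClient call so the example is self-contained:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class GuardedCreate {

    /** Stand-in for the FTP client call; may return null when creation fails. */
    interface StreamFactory {
        OutputStream open(String path);
    }

    /** Only hand the stream onward after confirming it was actually created. */
    static OutputStream create(StreamFactory client, String path) throws IOException {
        OutputStream out = client.open(path);
        if (out == null) {
            // Fail fast with a useful message instead of a later NPE on close().
            throw new IOException("Unable to create output stream for " + path);
        }
        return out; // in FTPFileSystem this is where FSDataOutputStream would wrap it
    }

    public static void main(String[] args) throws IOException {
        StreamFactory denying = path -> null;              // simulates "permission denied"
        try {
            create(denying, "/no-access/file");
            System.out.println("BUG: no exception thrown");
        } catch (IOException expected) {
            System.out.println("guard works: " + expected.getMessage());
        }
        StreamFactory granting = path -> new ByteArrayOutputStream();
        System.out.println("created: " + (create(granting, "/tmp/file") != null));
    }
}
```

The "gracefully release resources" half of the PR would additionally disconnect or abort the FTP session in the failure branch; that part depends on the real client API and is omitted here.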
[jira] [Commented] (HADOOP-13730) After 5 connection failures, yarn stops sending metrics graphite until restarted
[ https://issues.apache.org/jira/browse/HADOOP-13730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080924#comment-17080924 ] Hadoop QA commented on HADOOP-13730: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 44s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 20s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 109 unchanged - 14 fixed = 109 total (was 123) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 22s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 56s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 48s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}104m 58s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HADOOP-13730 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833976/0001-Graphite-can-be-unreachable-for-some-time-and-come-b.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux bb4e96eaeae2 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 275c478 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16835/testReport/ | | Max. process+thread count | 1351 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16835/console | |
[jira] [Updated] (HADOOP-14663) Switch to OpenClover
[ https://issues.apache.org/jira/browse/HADOOP-14663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-14663: -- Resolution: Won't Fix Status: Resolved (was: Patch Available) I no longer work on Hadoop. Closing. > Switch to OpenClover > > > Key: HADOOP-14663 > URL: https://issues.apache.org/jira/browse/HADOOP-14663 > Project: Hadoop Common > Issue Type: Improvement > Components: build, test >Affects Versions: 3.0.0-beta1 >Reporter: Allen Wittenauer >Priority: Minor > Attachments: HADOOP-14663.00.patch, HADOOP-14663.01.patch, > HADOOP-14663.02.patch, HADOOP-14663.03.patch, HADOOP-14663.04.patch, > HADOOP-14663.05.patch, HADOOP-14663.06.patch > > > Clover has gone open source. We should switch to its replacement > (OpenClover) so that more people can run code coverage tests. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13632) Daemonization does not check process liveness before renicing
[ https://issues.apache.org/jira/browse/HADOOP-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080919#comment-17080919 ] Hadoop QA commented on HADOOP-13632: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 43m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 8s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. 
{color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 11s{color} | {color:green} The patch generated 0 new + 104 unchanged - 132 fixed = 104 total (was 236) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 43s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 43s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 44s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 82m 26s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HADOOP-13632 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12836699/HADOOP-13632.002.patch | | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs | | uname | Linux f99750600a1d 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 275c478 | | maven | version: Apache Maven 3.3.9 | | shellcheck | v0.3.7 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16836/testReport/ | | Max. process+thread count | 309 (vs. 
ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16836/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Daemonization does not check process liveness before renicing > - > > Key: HADOOP-13632 > URL: https://issues.apache.org/jira/browse/HADOOP-13632 > Project: Hadoop Common > Issue Type: Bug > Components: scripts >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Major > Attachments: HADOOP-13632.001.patch, HADOOP-13632.002.patch > > > If you try to daemonize a process that is incorrectly configured, it will die > quite quickly. However, the daemonization function will still try to renice > it even if it's down, leading to something like this for my namenode: > {noformat} > -> % bin/hdfs --daemon start namenode > ERROR: Cannot set priority of namenode process 12036 > {noformat} > It'd be more user-friendly instead of this
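The liveness check that HADOOP-13632 describes can be sketched as below. This is an illustrative bash sketch, not the actual hadoop-functions.sh code: the function name is hypothetical, and it simply probes the process with `kill -0` before attempting to renice, so a daemon that died during startup produces a clear error instead of a failed priority change.

```shell
#!/usr/bin/env bash
# Hypothetical helper: renice a daemonized process only if it is still alive.
hadoop_renice_if_alive() {
  local pid=$1
  # kill -0 sends no signal; it only checks that the process exists.
  if ! kill -0 "${pid}" 2>/dev/null; then
    echo "ERROR: process ${pid} is not running; skipping renice" >&2
    return 1
  fi
  # Renicing can fail for permission reasons; that is a separate concern here.
  renice -n 0 -p "${pid}" >/dev/null 2>&1 || true
  return 0
}

# Demonstration: the current shell is alive; a reaped child is not.
hadoop_renice_if_alive "$$" && echo "reniced"
( : ) & child=$!
wait "${child}"
hadoop_renice_if_alive "${child}" || echo "daemon already dead"
```

Checking liveness first also keeps the eventual error message accurate: the failure is "process exited", not "cannot set priority".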
[jira] [Commented] (HADOOP-12802) local FileContext does not rename .crc file
[ https://issues.apache.org/jira/browse/HADOOP-12802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080917#comment-17080917 ] Hadoop QA commented on HADOOP-12802: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 37s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 24s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 24s{color} | {color:red} hadoop-common in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 48s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 48s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 34 unchanged - 1 fixed = 34 total (was 35) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 28s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 0m 35s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 25s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 25s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 67m 36s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HADOOP-12802 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12876793/HADOOP-12802.02.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 229d00d550b3 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 275c478 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | mvninstall | https://builds.apache.org/job/PreCommit-HADOOP-Build/16847/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt | | compile | https://builds.apache.org/job/PreCommit-HADOOP-Build/16847/artifact/out/patch-compile-root.txt | | javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/16847/artifact/out/patch-compile-root.txt | | mvnsite |
[jira] [Commented] (HADOOP-15066) Spurious error stopping secure datanode
[ https://issues.apache.org/jira/browse/HADOOP-15066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080916#comment-17080916 ] Hadoop QA commented on HADOOP-15066: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 48s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 50s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 1s{color} | {color:green} There were no new shellcheck issues. 
{color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 12s{color} | {color:green} The patch generated 0 new + 104 unchanged - 132 fixed = 104 total (was 236) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 30s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 24s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 67m 54s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HADOOP-15066 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12898940/HADOOP-15066.01.patch | | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs | | uname | Linux 2986520054f8 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 275c478 | | maven | version: Apache Maven 3.3.9 | | shellcheck | v0.3.7 | | whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/16850/artifact/out/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16850/testReport/ | | Max. process+thread count | 309 (vs. 
ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16850/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Spurious error stopping secure datanode > --- > > Key: HADOOP-15066 > URL: https://issues.apache.org/jira/browse/HADOOP-15066 > Project: Hadoop Common > Issue Type: Bug > Components: scripts >Affects Versions: 3.0.0 >Reporter: Arpit Agarwal >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HADOOP-15066.00.patch, HADOOP-15066.01.patch > > > There is a spurious error when stopping a secure datanode. > {code} > # hdfs --daemon stop datanode > cat: /var/run/hadoop/hdfs//hadoop-hdfs-root-datanode.pid: No such file or > directory > WARNING: pid has changed for datanode, skip deleting pid file
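The guard that removes the spurious `cat` error in HADOOP-15066 can be sketched as follows. The function name and warning text are illustrative assumptions, not the real hadoop-functions.sh API; the point is to test for the pid file before reading it, so stopping an already-stopped daemon prints one clean warning instead of a raw "No such file or directory" error.

```shell
#!/usr/bin/env bash
# Hypothetical helper: read a daemon pid file defensively.
read_daemon_pid() {
  local pidfile=$1
  if [[ ! -f "${pidfile}" ]]; then
    # Missing pid file usually means the daemon is already down.
    echo "WARNING: ${pidfile} not found; daemon may already be stopped" >&2
    return 1
  fi
  cat "${pidfile}"
}

# Demonstration with a temporary pid file.
pidfile=$(mktemp)
echo 12345 > "${pidfile}"
read_daemon_pid "${pidfile}"        # prints 12345
rm -f "${pidfile}"
read_daemon_pid "${pidfile}" || echo "nothing to stop"
```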
[GitHub] [hadoop] sahilTakiar commented on issue #1948: HADOOP-16855. wildfly classpath/loading issues
sahilTakiar commented on issue #1948: HADOOP-16855. wildfly classpath/loading issues URL: https://github.com/apache/hadoop/pull/1948#issuecomment-612185456 Hmm, I guess the only third-party dependency that hadoop-aws has is the AWS SDK, so I can see the hesitation in adding another dependency. On the other hand, the wildfly jar is just a runtime dependency, not a compile-time one. The main concern I have is with the behavior change to the `openssl` option. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly
[ https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080898#comment-17080898 ] Hadoop QA commented on HADOOP-14498: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 34s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 7s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. 
{color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 12s{color} | {color:green} The patch generated 0 new + 104 unchanged - 132 fixed = 104 total (was 236) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 8s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 53s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 63m 1s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HADOOP-14498 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12880256/HADOOP-14498.003.patch | | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs | | uname | Linux 8ef08cccf7b3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 275c478 | | maven | version: Apache Maven 3.3.9 | | shellcheck | v0.3.7 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16865/testReport/ | | Max. process+thread count | 454 (vs. 
ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16865/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > HADOOP_OPTIONAL_TOOLS not parsed correctly > -- > > Key: HADOOP-14498 > URL: https://issues.apache.org/jira/browse/HADOOP-14498 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha1 >Reporter: Mingliang Liu >Assignee: Sean Mackrory >Priority: Major > Attachments: HADOOP-14498.001.patch, HADOOP-14498.002.patch, > HADOOP-14498.003.patch > > > # This will make hadoop-azure not show up in the hadoop classpath, though > both hadoop-aws and hadoop-azure-datalake are in the > classpath.{code:title=hadoop-env.sh} > export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws,hadoop-azure-datalake" > {code} > # And if we put only hadoop-azure and hadoop-aws, both of them are shown in > the classpath. > {code:title=hadoop-env.sh} > export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws" > {code} > This makes me guess that, while parsing the {{HADOOP_OPTIONAL_TOOLS}}, we > make
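One robust way to handle the comma-separated HADOOP_OPTIONAL_TOOLS list described in HADOOP-14498 is to split it into an array and process each entry individually, so no module is silently dropped no matter how many are listed. This loop is purely illustrative; the real parsing lives in hadoop-functions.sh, and the bug's actual cause is not shown in the truncated description above.

```shell
#!/usr/bin/env bash
# Split the comma-separated list into a bash array, one element per module.
HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws,hadoop-azure-datalake"
IFS=',' read -r -a tools <<< "${HADOOP_OPTIONAL_TOOLS}"

# Each entry is handled on its own, so all three modules are enabled.
for tool in "${tools[@]}"; do
  echo "enabling ${tool}"
done
```

With this approach the three-entry value from the report enables all three modules, matching the behavior the reporter expected.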
[jira] [Commented] (HADOOP-13869) using HADOOP_USER_CLASSPATH_FIRST inconsistently
[ https://issues.apache.org/jira/browse/HADOOP-13869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080896#comment-17080896 ] Hadoop QA commented on HADOOP-13869: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 36s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 5s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. 
{color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 13s{color} | {color:green} The patch generated 0 new + 104 unchanged - 132 fixed = 104 total (was 236) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 56s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 38s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 61m 36s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HADOOP-13869 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12842118/HADOOP-13869.001.patch | | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs | | uname | Linux 07122e9e7c09 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 275c478 | | maven | version: Apache Maven 3.3.9 | | shellcheck | v0.3.7 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16846/testReport/ | | Max. process+thread count | 456 (vs. 
ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16846/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > using HADOOP_USER_CLASSPATH_FIRST inconsistently > > > Key: HADOOP-13869 > URL: https://issues.apache.org/jira/browse/HADOOP-13869 > Project: Hadoop Common > Issue Type: Bug > Components: scripts >Affects Versions: 3.0.0-alpha2 >Reporter: Fei Hui >Assignee: Fei Hui >Priority: Major > Attachments: HADOOP-13869.001.patch > > > I find HADOOP_USER_CLASSPATH_FIRST is used inconsistently. In some places it is > set to true, in others to yes. > I know it doesn't matter, because it affects the classpath whenever > HADOOP_USER_CLASSPATH_FIRST is non-empty, > BUT maybe it would be better to use HADOOP_USER_CLASSPATH_FIRST uniformly
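The reason the inconsistent "true"/"yes" values in HADOOP-13869 still behave identically can be shown with a small sketch: the scripts only test whether the variable is non-empty. The helper below is illustrative, not the actual hadoop-functions.sh logic.

```shell
#!/usr/bin/env bash
# Hypothetical check mirroring the "non-empty means user-first" behavior.
classpath_order() {
  if [[ -n "${HADOOP_USER_CLASSPATH_FIRST}" ]]; then
    echo "user-first"
  else
    echo "hadoop-first"
  fi
}

# Any non-empty value, "true" or "yes", takes the user-first branch.
HADOOP_USER_CLASSPATH_FIRST="true" classpath_order   # user-first
HADOOP_USER_CLASSPATH_FIRST="yes"  classpath_order   # user-first
HADOOP_USER_CLASSPATH_FIRST=""     classpath_order   # hadoop-first
```

Since only emptiness is tested, standardizing on one value is a readability fix rather than a behavior change, which matches the reporter's assessment.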
[jira] [Commented] (HADOOP-11631) securemode documentation should refer to the http auth doc
[ https://issues.apache.org/jira/browse/HADOOP-11631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080895#comment-17080895 ] Hadoop QA commented on HADOOP-11631: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 41m 11s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 5s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 57m 34s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:e6455cc864d | | JIRA Issue | HADOOP-11631 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12852200/HADOOP-11631.001.patch | | Optional Tests | dupname asflicense mvnsite | | uname | Linux ac7cd3cd72fb 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 275c478 | | maven | version: Apache Maven 3.3.9 | | Max. process+thread count | 423 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16870/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > securemode documentation should refer to the http auth doc > -- > > Key: HADOOP-11631 > URL: https://issues.apache.org/jira/browse/HADOOP-11631 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Affects Versions: 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Attila Bukor >Priority: Major > Labels: beginner > Attachments: HADOOP-11631.001.patch > > > SecureMode.md should point folks to the HTTP Auth doc for securing the > user-facing web interfaces.
[GitHub] [hadoop] hadoop-yetus commented on issue #1951: HDFS-15270. Account for *env == NULL in hdfsThreadDestructor
hadoop-yetus commented on issue #1951: HDFS-15270. Account for *env == NULL in hdfsThreadDestructor URL: https://github.com/apache/hadoop/pull/1951#issuecomment-612173615 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 38s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 19m 31s | trunk passed | | +1 :green_heart: | compile | 1m 50s | trunk passed | | +1 :green_heart: | mvnsite | 0m 28s | trunk passed | | +1 :green_heart: | shadedclient | 36m 34s | branch has no errors when building and testing our client artifacts. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 16s | the patch passed | | +1 :green_heart: | compile | 1m 45s | the patch passed | | -1 :x: | cc | 1m 45s | hadoop-hdfs-project_hadoop-hdfs-native-client generated 5 new + 14 unchanged - 5 fixed = 19 total (was 19) | | +1 :green_heart: | golang | 1m 45s | the patch passed | | +1 :green_heart: | javac | 1m 45s | the patch passed | | +1 :green_heart: | mvnsite | 0m 15s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 13m 51s | patch has no errors when building and testing our client artifacts. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 7m 17s | hadoop-hdfs-native-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 30s | The patch does not generate ASF License warnings. 
| | | | 63m 46s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1951/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1951 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit golang | | uname | Linux 1644a965a10f 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 275c478 | | Default Java | 1.8.0_242 | | cc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1951/1/artifact/out/diff-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1951/1/testReport/ | | Max. process+thread count | 414 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1951/1/console | | versions | git=2.7.4 maven=3.3.9 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16144) Create a Hadoop RPC based KMS client
[ https://issues.apache.org/jira/browse/HADOOP-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080846#comment-17080846 ] Hadoop QA commented on HADOOP-16144: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 9s{color} | {color:red} HADOOP-16144 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-16144 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12964948/HADOOP-16144.001.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16869/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Create a Hadoop RPC based KMS client > > > Key: HADOOP-16144 > URL: https://issues.apache.org/jira/browse/HADOOP-16144 > Project: Hadoop Common > Issue Type: Sub-task > Components: kms >Reporter: Wei-Chiu Chuang >Assignee: Anu Engineer >Priority: Major > Attachments: HADOOP-16144.001.patch, KMS.RPC.patch > > > Create a new KMS client implementation that speaks Hadoop RPC. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-5943) IOUtils#copyBytes methods should not close streams that are passed in as parameters
[ https://issues.apache.org/jira/browse/HADOOP-5943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080844#comment-17080844 ] Hadoop QA commented on HADOOP-5943: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 8s{color} | {color:red} HADOOP-5943 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-5943 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892089/HADOOP-5943.03.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16868/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > IOUtils#copyBytes methods should not close streams that are passed in as > parameters > --- > > Key: HADOOP-5943 > URL: https://issues.apache.org/jira/browse/HADOOP-5943 > Project: Hadoop Common > Issue Type: Bug > Components: io >Reporter: Hairong Kuang >Assignee: Andras Bokor >Priority: Major > Attachments: HADOOP-5943.01.patch, HADOOP-5943.02.patch, > HADOOP-5943.03.patch > > > The following methods in IOUtils close the streams that are passed in as > parameters. Calling these methods can easily trigger findbug OBL: Method may > fail to clean up stream or resource (OBL_UNSATISFIED_OBLIGATION). A good > practice should be to close a stream in the same method where the stream is > opened. > public static void copyBytes(InputStream in, OutputStream out, int buffSize, > boolean close) > public static void copyBytes(InputStream in, OutputStream out, Configuration > conf, boolean close) > These methods should be deprecated. 
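The ownership pattern the issue recommends — close a stream in the same method where it was opened — can be sketched as follows. This is a plain java.io illustration, not the actual Hadoop IOUtils source; the helper mirrors the semantics of calling {{copyBytes(in, out, buffSize, close=false)}}, where the caller keeps responsibility for both stream lifetimes:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyNoClose {
    // Copies all bytes but deliberately does NOT close either stream;
    // the caller owns both lifetimes (the close=false behavior).
    static void copyBytes(InputStream in, OutputStream out, int buffSize)
            throws IOException {
        byte[] buf = new byte[buffSize];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        // try-with-resources closes the input here, in the method that
        // opened it, which keeps findbugs' OBL checker satisfied.
        try (InputStream in = new ByteArrayInputStream("hello".getBytes())) {
            copyBytes(in, sink, 4096);
        }
        System.out.println(sink.toString());
    }
}
```

With this split, the utility method never surprises the caller by closing a stream that is still needed.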
[jira] [Commented] (HADOOP-15302) Enable DataNode/NameNode service plugins with Service Provider interface
[ https://issues.apache.org/jira/browse/HADOOP-15302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080834#comment-17080834 ] Hadoop QA commented on HADOOP-15302: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s{color} | {color:red} HADOOP-15302 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-15302 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12913885/HADOOP-15302.002.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16867/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Enable DataNode/NameNode service plugins with Service Provider interface > > > Key: HADOOP-15302 > URL: https://issues.apache.org/jira/browse/HADOOP-15302 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Marton Elek >Assignee: Marton Elek >Priority: Major > Attachments: HADOOP-15302.001.patch, HADOOP-15302.002.patch > > > HADOOP-5257 introduced ServicePlugin capabilities for the NameNode/DataNode. As > of now, plugins can only be activated via configuration values. > I propose to activate plugins with the Service Provider Interface instead. If a > jar contains the appropriate service descriptor file, adding that jar to the > classpath would be enough to activate the plugin. This would make it possible to > add optional components to the NameNode/DataNode simply by adjusting the classpath. > This is the same API that can be used in Java 9 to consume defined > services.
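The Service Provider Interface activation proposed above can be sketched with java.util.ServiceLoader. The ServicePlugin interface below is a hypothetical stand-in for Hadoop's org.apache.hadoop.util.ServicePlugin, not the real class; discovery reads META-INF/services provider files from the classpath instead of a configuration key:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

public class PluginLoaderSketch {
    // Hypothetical stand-in for org.apache.hadoop.util.ServicePlugin.
    public interface ServicePlugin {
        void start(Object service);
    }

    // Discovers every implementation declared in a
    // META-INF/services/PluginLoaderSketch$ServicePlugin file on the
    // classpath; dropping a jar with such a file activates its plugins.
    static List<ServicePlugin> discoverPlugins() {
        List<ServicePlugin> plugins = new ArrayList<>();
        for (ServicePlugin p : ServiceLoader.load(ServicePlugin.class)) {
            plugins.add(p);
        }
        return plugins;
    }

    public static void main(String[] args) {
        // With no provider file on the classpath, discovery yields nothing.
        System.out.println("discovered " + discoverPlugins().size() + " plugin(s)");
    }
}
```

A NameNode/DataNode could merge this discovered list with the plugins named in configuration, so existing deployments keep working unchanged.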
[jira] [Commented] (HADOOP-15897) Port range binding fails due to socket bind race condition
[ https://issues.apache.org/jira/browse/HADOOP-15897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080791#comment-17080791 ] Hadoop QA commented on HADOOP-15897: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s{color} | {color:red} HADOOP-15897 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-15897 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12946726/HADOOP-15897.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16863/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Port range binding fails due to socket bind race condition > -- > > Key: HADOOP-15897 > URL: https://issues.apache.org/jira/browse/HADOOP-15897 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Affects Versions: 2.0.2-alpha >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Major > Attachments: HADOOP-15897.patch > > > Java's {{ServerSocket#bind}} does both a bind and listen. At a system level, > multiple processes may bind to the same port but only one may listen. Java > sockets are left in an unrecoverable state when a process loses the race to > listen first. > Servers that compete over a listening port range (ex. App Master) will fail > the entire range after a collision. The IPC layer should make a better > effort to recover from failed binds. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
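One way for the IPC layer to make "a better effort to recover from failed binds", sketched below. This is an illustrative approach, not the attached patch: because a ServerSocket that lost the bind/listen race is left in an unrecoverable state, a fresh socket is created for every attempt and the failed one is discarded. The port numbers are arbitrary examples:

```java
import java.io.IOException;
import java.net.BindException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class PortRangeBind {
    // Tries each port in [lo, hi]. A new ServerSocket is created per
    // attempt because a socket whose bind/listen failed is not reusable.
    static ServerSocket bindInRange(int lo, int hi) throws IOException {
        for (int port = lo; port <= hi; port++) {
            ServerSocket ss = new ServerSocket();
            try {
                ss.bind(new InetSocketAddress("127.0.0.1", port));
                return ss;                     // bound and listening
            } catch (BindException e) {
                ss.close();                    // discard, advance to next port
            }
        }
        throw new BindException("no free port in range " + lo + "-" + hi);
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket a = bindInRange(50000, 50010);
             ServerSocket b = bindInRange(50000, 50010)) {
            // the second call skips whatever port the first one took
            System.out.println(a.getLocalPort() + " " + b.getLocalPort());
        }
    }
}
```

Retrying the same port once or twice before advancing would additionally tolerate the transient collision the issue describes, where another process briefly bound but never listened.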
[jira] [Commented] (HADOOP-15009) hadoop-resourceestimator's shell scripts are a mess
[ https://issues.apache.org/jira/browse/HADOOP-15009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080792#comment-17080792 ] Hadoop QA commented on HADOOP-15009: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 8s{color} | {color:red} HADOOP-15009 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-15009 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12901745/HADOOP-15009.001.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16856/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > hadoop-resourceestimator's shell scripts are a mess > --- > > Key: HADOOP-15009 > URL: https://issues.apache.org/jira/browse/HADOOP-15009 > Project: Hadoop Common > Issue Type: Bug > Components: scripts, tools >Affects Versions: 3.1.0 >Reporter: Allen Wittenauer >Assignee: Ajay Kumar >Priority: Major > Attachments: HADOOP-15009.001.patch, Screen Shot 2017-12-12 at > 11.16.23 AM.png > > > #1: > There's no reason for estimator.sh to exist. Just make it a subcommand under > yarn or whatever. > #2: > In its current form, it's missing a BUNCH of boilerplate that makes certain > functionality completely fail. > #3: > start/stop-estimator.sh is full of copypasta that doesn't actually do > anything/work correctly. Additionally, if estimator.sh doesn't exist, > neither does this, since yarn --daemon start/stop will do everything as > necessary.
[jira] [Commented] (HADOOP-15112) create-release didn't sign artifacts
[ https://issues.apache.org/jira/browse/HADOOP-15112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080795#comment-17080795 ] Hadoop QA commented on HADOOP-15112: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s{color} | {color:red} HADOOP-15112 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-15112 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12910275/HADOOP-15112.01.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16864/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > create-release didn't sign artifacts > > > Key: HADOOP-15112 > URL: https://issues.apache.org/jira/browse/HADOOP-15112 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Andrew Wang >Assignee: Lei (Eddy) Xu >Priority: Major > Attachments: HADOOP-15112.01.patch > > > While building the 3.0.0 RC1, I had to re-invoke Maven because the > create-release script didn't deploy signatures to Nexus. Looking at the repo > (and my artifacts), it seems like "sign" didn't run properly. > I lost my create-release output, but I noticed that it will log and continue > rather than abort in some error conditions. This might have caused my lack of > signatures. IMO it'd be better to explicitly fail in these situations. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14389) Exception handling is incorrect in KerberosName.java
[ https://issues.apache.org/jira/browse/HADOOP-14389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080817#comment-17080817 ] Hadoop QA commented on HADOOP-14389: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s{color} | {color:red} HADOOP-14389 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-14389 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12897769/HADOOP-14389.03.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16866/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Exception handling is incorrect in KerberosName.java > > > Key: HADOOP-14389 > URL: https://issues.apache.org/jira/browse/HADOOP-14389 > Project: Hadoop Common > Issue Type: Bug >Reporter: Andras Bokor >Assignee: Andras Bokor >Priority: Major > Labels: supportability > Attachments: HADOOP-14389.01.patch, HADOOP-14389.02.patch, > HADOOP-14389.03.patch > > > I found multiple inconsistency: > Rule: {{RULE:\[2:$1/$2\@$3\](.\*)s/.\*/hdfs/}} > Principal: {{nn/host.dom...@realm.tld}} > Expected exception: {{BadStringFormat: ...3 is out of range...}} > Actual exception: {{ArrayIndexOutOfBoundsException: 3}} > > Rule: {{RULE:\[:$1/$2\@$0](.\*)s/.\*/hdfs/}} (Missing num of components) > Expected: {{IllegalArgumentException}} > Actual: {{java.lang.NumberFormatException: For input string: ""}} > > Rule: {{RULE:\[2:$-1/$2\@$3\](.\*)s/.\*/hdfs/}} > Expected {{BadStringFormat: -1 is outside of valid range...}} > Actual: {{java.lang.NumberFormatException: For input string: ""}} > > Rule: 
{{RULE:\[2:$one/$2\@$3\](.\*)s/.\*/hdfs/}} > Expected {{java.lang.NumberFormatException: For input string: "one"}} > Actual {{java.lang.NumberFormatException: For input string: ""}} > > In addition: > {code}[^\\]]{code} > does not really make sense in {{ruleParser}}. Most probably it was needed > because we parse the whole rule string and remove each parsed rule from the > beginning of the string ({{KerberosName#parseRules}}); without it, the regex > engine parsed incorrectly. > In addition: > Some corner cases are not covered by the tests.
[jira] [Commented] (HADOOP-14877) Trunk compilation fails in windows
[ https://issues.apache.org/jira/browse/HADOOP-14877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080790#comment-17080790 ] Hadoop QA commented on HADOOP-14877: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s{color} | {color:red} HADOOP-14877 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-14877 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12903076/HADOOP-14877-001.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16862/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Trunk compilation fails in windows > -- > > Key: HADOOP-14877 > URL: https://issues.apache.org/jira/browse/HADOOP-14877 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.1.0 > Environment: windows >Reporter: Brahma Reddy Battula >Priority: Major > Attachments: HADOOP-14877-001.patch > > > {noformat} > [INFO] Dependencies classpath: > D:\trunk\hadoop\hadoop-client-modules\hadoop-client-runtime\target\hadoop-client-runtime-3.1.0-SNAPSHOT.jar;D:\trunk\had > oop\hadoop-client-modules\hadoop-client-api\target\hadoop-client-api-3.1.0-SNAPSHOT.jar > [INFO] > [INFO] --- exec-maven-plugin:1.3.1:exec (check-jar-contents) @ > hadoop-client-check-invariants --- > java.io.FileNotFoundException: D (The system cannot find the file specified) > at java.util.zip.ZipFile.open(Native Method) > at java.util.zip.ZipFile.(ZipFile.java:219) > at java.util.zip.ZipFile.(ZipFile.java:149) > at java.util.zip.ZipFile.(ZipFile.java:120) > at 
sun.tools.jar.Main.list(Main.java:1115) > at sun.tools.jar.Main.run(Main.java:293) > at sun.tools.jar.Main.main(Main.java:1288) > java.io.FileNotFoundException: > \trunk\hadoop\hadoop-client-modules\hadoop-client-runtime\target\hadoop-client-runtime-3. > 1.0-SNAPSHOT.jar;D (The system cannot find the file specified) > at java.util.zip.ZipFile.open(Native Method) > at java.util.zip.ZipFile.(ZipFile.java:219) > at java.util.zip.ZipFile.(ZipFile.java:149) > at java.util.zip.ZipFile.(ZipFile.java:120) > at sun.tools.jar.Main.list(Main.java:1115) > at sun.tools.jar.Main.run(Main.java:293) > at sun.tools.jar.Main.main(Main.java:1288) > [INFO] Artifact looks correct: 'D' > [INFO] Artifact looks correct: 'hadoop-client-runtime-3.1.0-SNAPSHOT.jar;D' > [ERROR] Found artifact with unexpected contents: > '\trunk\hadoop\hadoop-client-modules\hadoop-client-api\target\hadoop-cl > ient-api-3.1.0-SNAPSHOT.jar' > Please check the following and either correct the build or update > the allowed list with reasoning. > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14808) Hadoop keychain
[ https://issues.apache.org/jira/browse/HADOOP-14808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080787#comment-17080787 ] Hadoop QA commented on HADOOP-14808: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 8s{color} | {color:red} HADOOP-14808 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-14808 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12885725/HADOOP-14808.003.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16860/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Hadoop keychain > --- > > Key: HADOOP-14808 > URL: https://issues.apache.org/jira/browse/HADOOP-14808 > Project: Hadoop Common > Issue Type: New Feature > Components: security >Affects Versions: 2.7.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Major > Attachments: HADOOP-14808.001.patch, HADOOP-14808.002.patch, > HADOOP-14808.003.patch > > > Extend the idea from HADOOP-6520 "UGI should load tokens from the > environment" to a generic lightweight "keychain" design. Load keys (secrets) > into a keychain in UGI (secret map) at startup. YARN will distribute them > securely into each container. The Hadoop code running in the container can > then retrieve the credentials from UGI. > The use case is Bring Your Own Key (BYOK) credentials for cloud connectors > (adl, wasb, s3a, etc.), while Hadoop authentication is still Kerberos. No > configuration change, no admin involved. It will support YARN applications > initially, e.g., DistCp, Tera Suite, Spark-on-Yarn, etc. 
> Implementation is surprisingly simple because almost all pieces are in place: > * Retrieve secrets from UGI using {{conf.getPassword}} backed by the existing > Credential Provider class {{UserProvider}} > * Reuse Credential Provider classes and interface to define local permanent > or transient credential store, e.g., {{LocalJavaKeyStoreProvider}} > * New: create a new transient Credential Provider that logs into AAD with > username/password or device code, and then put the Client ID and Refresh > Token into the keychain > * New: create a new permanent Credential Provider based on Hadoop > configuration XML, for dev/testing purpose. > Links > * HADOOP-11766 Generic token authentication support for Hadoop > * HADOOP-11744 Support OAuth2 in Hadoop > * HADOOP-10959 A Kerberos based token authentication approach > * HADOOP-9392 Token based authentication and Single Sign On -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14731) Update gitignore to exclude output of site build
[ https://issues.apache.org/jira/browse/HADOOP-14731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080786#comment-17080786 ] Hadoop QA commented on HADOOP-14731: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s{color} | {color:red} HADOOP-14731 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-14731 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12880312/HADOOP-14731.001.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16858/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Update gitignore to exclude output of site build > > > Key: HADOOP-14731 > URL: https://issues.apache.org/jira/browse/HADOOP-14731 > Project: Hadoop Common > Issue Type: Improvement > Components: build, site >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Major > Attachments: HADOOP-14731.001.patch > > > Site build generates a bunch of files that aren't caught by gitignore, let's > update. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080788#comment-17080788 ] Hadoop QA commented on HADOOP-1: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 10s{color} | {color:red} HADOOP-1 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-1 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12967316/HADOOP-1.19.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16843/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > New implementation of ftp and sftp filesystems > -- > > Key: HADOOP-1 > URL: https://issues.apache.org/jira/browse/HADOOP-1 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 2.8.0 >Reporter: Lukas Waldmann >Assignee: Lukas Waldmann >Priority: Major > Attachments: HADOOP-1.10.patch, HADOOP-1.11.patch, > HADOOP-1.12.patch, HADOOP-1.13.patch, HADOOP-1.14.patch, > HADOOP-1.15.patch, HADOOP-1.16.patch, HADOOP-1.17.patch, > HADOOP-1.18.patch, HADOOP-1.18.patch, HADOOP-1.19.patch, > HADOOP-1.2.patch, HADOOP-1.3.patch, HADOOP-1.4.patch, > HADOOP-1.5.patch, HADOOP-1.6.patch, HADOOP-1.7.patch, > HADOOP-1.8.patch, HADOOP-1.9.patch, HADOOP-1.patch > > > The current implementations of the FTP and SFTP filesystems have severe limitations > and performance issues when dealing with a high number of files. My patch > solves those issues and integrates both filesystems in such a way that most of the > core functionality is common to both, thereby simplifying > maintainability.
> The core features: > * Support for HTTP/SOCKS proxies > * Support for passive FTP > * Support for explicit FTPS (SSL/TLS) > * Support of connection pooling - new connection is not created for every > single command but reused from the pool. > For huge number of files it shows order of magnitude performance improvement > over not pooled connections. > * Caching of directory trees. For ftp you always need to list whole > directory whenever you ask information about particular file. > Again for huge number of files it shows order of magnitude performance > improvement over not cached connections. > * Support of keep alive (NOOP) messages to avoid connection drops > * Support for Unix style or regexp wildcard glob - useful for listing a > particular files across whole directory tree > * Support for reestablishing broken ftp data transfers - can happen > surprisingly often > * Support for sftp private keys (including pass phrase) > * Support for keeping passwords, private keys and pass phrase in the jceks > key stores -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-11436) HarFileSystem does not preserve permission, users and groups
[ https://issues.apache.org/jira/browse/HADOOP-11436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080785#comment-17080785 ] Hadoop QA commented on HADOOP-11436: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 8s{color} | {color:red} HADOOP-11436 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-11436 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12868137/HADOOP-11436.2.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16838/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > HarFileSystem does not preserve permission, users and groups > > > Key: HADOOP-11436 > URL: https://issues.apache.org/jira/browse/HADOOP-11436 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.0.0-alpha2 >Reporter: John George >Assignee: Sarah Victor >Priority: Minor > Labels: BB2015-05-TBR > Attachments: HADOOP-11436.1.patch, HADOOP-11436.2.patch > > > HARFileSystem does not preserve permission, users or groups. The archive > itself has these stored, but the HarFileSystem ignores these. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16517) Allow optional mutual TLS in HttpServer2
[ https://issues.apache.org/jira/browse/HADOOP-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080783#comment-17080783 ] Hadoop QA commented on HADOOP-16517: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 9s{color} | {color:red} HADOOP-16517 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-16517 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12978219/HADOOP-16517.1.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16851/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Allow optional mutual TLS in HttpServer2 > > > Key: HADOOP-16517 > URL: https://issues.apache.org/jira/browse/HADOOP-16517 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Kihwal Lee >Assignee: Kihwal Lee >Priority: Major > Attachments: HADOOP-16517.1.patch, HADOOP-16517.patch > > > Currently the webservice can enforce mTLS by setting > "dfs.client.https.need-auth" on the server side. (The config name is > misleading, as it is actually server-side config. It has been deprecated from > the client config) A hadoop client can talk to mTLS enforced web service by > setting "hadoop.ssl.require.client.cert" with proper ssl config. > We have seen use case where mTLS needs to be enabled optionally for only > those clients who supplies their cert. In a mixed environment like this, > individual services may still enforce mTLS for a subset of endpoints by > checking the existence of x509 cert in the request. 
[jira] [Commented] (HADOOP-8022) Deprecate checkTGTAndReloginFromKeytab()
[ https://issues.apache.org/jira/browse/HADOOP-8022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080784#comment-17080784 ] Hadoop QA commented on HADOOP-8022: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s{color} | {color:red} HADOOP-8022 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-8022 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12907712/HADOOP-8022.01.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16849/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Deprecate checkTGTAndReloginFromKeytab() > > > Key: HADOOP-8022 > URL: https://issues.apache.org/jira/browse/HADOOP-8022 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 0.24.0 >Reporter: Robert Joseph Evans >Assignee: Andras Bokor >Priority: Major > Attachments: HADOOP-8022.01.patch, HADOOP-8022.01.patch > > > checkTGTAndReloginFromKeytab() does a small check and then calls > reloginFromKeytab() which has been updated to do the same check. > checkTGTAndReloginFromKeytab() is redundant, and should be deprecated if it > is publicly visible, or just removed if it is not publicly visible. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
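A minimal sketch of the deprecation proposed above, assuming the semantics described in the issue; the class and the body of `shouldRelogin()` are illustrative stand-ins, not the actual `UserGroupInformation` code:

```java
// Illustrative sketch of the proposed change: the redundant check method
// simply delegates and is marked deprecated. Names and bodies are stand-ins,
// not the real UserGroupInformation implementation.
class UgiSketch {
    // Stand-in for the TGT expiry check that used to live in the caller.
    private boolean shouldRelogin() {
        return true;
    }

    /** @deprecated the check now happens inside reloginFromKeytab() itself. */
    @Deprecated
    void checkTGTAndReloginFromKeytab() {
        reloginFromKeytab(); // plain delegation: the old method is redundant
    }

    void reloginFromKeytab() {
        if (!shouldRelogin()) {
            return; // the same check formerly duplicated in the caller
        }
        // ... perform the actual keytab re-login ...
    }
}
```

Callers keep compiling against the deprecated method while the compiler nudges them toward `reloginFromKeytab()`.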
[jira] [Commented] (HADOOP-15864) Job submitter / executor fail when SBN domain name can not resolved
[ https://issues.apache.org/jira/browse/HADOOP-15864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080781#comment-17080781 ] Brahma Reddy Battula commented on HADOOP-15864: --- Removed the fix version, as this Jira was reverted. > Job submitter / executor fail when SBN domain name can not resolved > --- > > Key: HADOOP-15864 > URL: https://issues.apache.org/jira/browse/HADOOP-15864 > Project: Hadoop Common > Issue Type: Bug >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Critical > Attachments: HADOOP-15864-branch.2.7.001.patch, > HADOOP-15864-branch.2.7.002.patch, HADOOP-15864.003.patch, > HADOOP-15864.004.patch, HADOOP-15864.005.patch, > HADOOP-15864.branch.2.7.004.patch > > > Job submission and task execution fail if the Standby NameNode domain name > cannot be resolved on HDFS HA with the DelegationToken feature. > This issue is triggered when creating a {{ConfiguredFailoverProxyProvider}} > instance, which invokes {{HAUtil.cloneDelegationTokenForLogicalUri}} in HA mode > with security. Since in HDFS HA mode the UGI needs to include a separate token for each > NameNode in order to deal with Active-Standby switches, the two tokens' > content is of course the same. > However, #setTokenService in {{HAUtil.cloneDelegationTokenForLogicalUri}} > checks whether the address of the NameNode has been resolved; if not, > it throws an #IllegalArgumentException, and the job submitter / task executor fails. > HDFS-8068 and HADOOP-12125 tried to fix this, but I don't think the two tickets > resolve it completely. > Another question is why the NameNode domain name cannot be > resolved. There are many scenarios, for instance replacing a node after > a fault, or a DNS refresh. In any case, a Standby NameNode failure > should not impact Hadoop cluster stability in my opinion. > a. 
code ref: org.apache.hadoop.security.SecurityUtil line373-386 > {code:java} > public static Text buildTokenService(InetSocketAddress addr) { > String host = null; > if (useIpForTokenService) { > if (addr.isUnresolved()) { // host has no ip address > throw new IllegalArgumentException( > new UnknownHostException(addr.getHostName()) > ); > } > host = addr.getAddress().getHostAddress(); > } else { > host = StringUtils.toLowerCase(addr.getHostName()); > } > return new Text(host + ":" + addr.getPort()); > } > {code} > b.exception log ref: > {code:xml} > at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.io.IOException: Couldn't create proxy provider class > org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider > at > org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:515) > at > org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:170) > at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:761) > at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:691) > at > org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:150) > at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713) > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93) > at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747) > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729) > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385) > at > org.apache.hadoop.fs.viewfs.ChRootedFileSystem.(ChRootedFileSystem.java:106) > at > org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:178) > at > 
org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:172) > at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:303) > at org.apache.hadoop.fs.viewfs.InodeTree.(InodeTree.java:377) > at > org.apache.hadoop.fs.viewfs.ViewFileSystem$1.(ViewFileSystem.java:172) > at > org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:172) > at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713) > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93) > at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747) > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729) > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385) > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:176) > at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:665) > ... 35 more > Caused by:
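The failure mode quoted above can be reproduced with plain JDK classes; the sketch below mirrors the {{buildTokenService}} logic shown in the code ref (simplified, not the actual Hadoop source):

```java
import java.net.InetSocketAddress;
import java.net.UnknownHostException;
import java.util.Locale;

// Simplified mirror of SecurityUtil.buildTokenService: when IP-based token
// services are in use, an unresolved address raises IllegalArgumentException.
public class TokenServiceCheck {
    static String buildTokenService(InetSocketAddress addr, boolean useIpForTokenService) {
        String host;
        if (useIpForTokenService) {
            if (addr.isUnresolved()) { // host has no ip address
                throw new IllegalArgumentException(
                    new UnknownHostException(addr.getHostName()));
            }
            host = addr.getAddress().getHostAddress();
        } else {
            host = addr.getHostName().toLowerCase(Locale.ROOT);
        }
        return host + ":" + addr.getPort();
    }

    public static void main(String[] args) {
        // createUnresolved skips the DNS lookup, mimicking a standby NameNode
        // whose domain name can no longer be resolved.
        InetSocketAddress sbn = InetSocketAddress.createUnresolved("sbn.example.invalid", 8020);
        try {
            buildTokenService(sbn, true);
        } catch (IllegalArgumentException e) {
            System.out.println("fails as reported: " + e.getCause());
        }
        // Host-name-based token services tolerate the unresolved address.
        System.out.println(buildTokenService(sbn, false)); // sbn.example.invalid:8020
    }
}
```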
[jira] [Updated] (HADOOP-15864) Job submitter / executor fail when SBN domain name can not resolved
[ https://issues.apache.org/jira/browse/HADOOP-15864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-15864: -- Fix Version/s: (was: 3.1.2) (was: 3.3.0) (was: 3.0.4) > Job submitter / executor fail when SBN domain name can not resolved > --- > > Key: HADOOP-15864 > URL: https://issues.apache.org/jira/browse/HADOOP-15864 > Project: Hadoop Common > Issue Type: Bug >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Critical > Attachments: HADOOP-15864-branch.2.7.001.patch, > HADOOP-15864-branch.2.7.002.patch, HADOOP-15864.003.patch, > HADOOP-15864.004.patch, HADOOP-15864.005.patch, > HADOOP-15864.branch.2.7.004.patch > > > Job submission and task execution fail if the Standby NameNode domain name > cannot be resolved on HDFS HA with the DelegationToken feature. > This issue is triggered when creating a {{ConfiguredFailoverProxyProvider}} > instance, which invokes {{HAUtil.cloneDelegationTokenForLogicalUri}} in HA mode > with security. Since in HDFS HA mode the UGI needs to include a separate token for each > NameNode in order to deal with Active-Standby switches, the two tokens' > content is of course the same. > However, #setTokenService in {{HAUtil.cloneDelegationTokenForLogicalUri}} > checks whether the address of the NameNode has been resolved; if not, > it throws an #IllegalArgumentException, and the job submitter / task executor fails. > HDFS-8068 and HADOOP-12125 tried to fix this, but I don't think the two tickets > resolve it completely. > Another question is why the NameNode domain name cannot be > resolved. There are many scenarios, for instance replacing a node after > a fault, or a DNS refresh. In any case, a Standby NameNode failure > should not impact Hadoop cluster stability in my opinion. > a. 
code ref: org.apache.hadoop.security.SecurityUtil line373-386 > {code:java} > public static Text buildTokenService(InetSocketAddress addr) { > String host = null; > if (useIpForTokenService) { > if (addr.isUnresolved()) { // host has no ip address > throw new IllegalArgumentException( > new UnknownHostException(addr.getHostName()) > ); > } > host = addr.getAddress().getHostAddress(); > } else { > host = StringUtils.toLowerCase(addr.getHostName()); > } > return new Text(host + ":" + addr.getPort()); > } > {code} > b.exception log ref: > {code:xml} > at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.io.IOException: Couldn't create proxy provider class > org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider > at > org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:515) > at > org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:170) > at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:761) > at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:691) > at > org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:150) > at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713) > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93) > at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747) > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729) > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385) > at > org.apache.hadoop.fs.viewfs.ChRootedFileSystem.(ChRootedFileSystem.java:106) > at > org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:178) > at > 
org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:172) > at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:303) > at org.apache.hadoop.fs.viewfs.InodeTree.(InodeTree.java:377) > at > org.apache.hadoop.fs.viewfs.ViewFileSystem$1.(ViewFileSystem.java:172) > at > org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:172) > at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713) > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93) > at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747) > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729) > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385) > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:176) > at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:665) > ... 35 more > Caused
[jira] [Commented] (HADOOP-16524) Automatic keystore reloading for HttpServer2
[ https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080778#comment-17080778 ] Hadoop QA commented on HADOOP-16524: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 12s{color} | {color:red} HADOOP-16524 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-16524 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12978221/HADOOP-16524.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16837/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Automatic keystore reloading for HttpServer2 > > > Key: HADOOP-16524 > URL: https://issues.apache.org/jira/browse/HADOOP-16524 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Kihwal Lee >Assignee: Kihwal Lee >Priority: Major > Attachments: HADOOP-16524.patch > > > Jetty 9 simplified reloading of keystore. This allows hadoop daemon's SSL > cert to be updated in place without having to restart the service. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12889) Make kdiag something services can use directly on startup
[ https://issues.apache.org/jira/browse/HADOOP-12889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080777#comment-17080777 ] Hadoop QA commented on HADOOP-12889: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 13s{color} | {color:red} HADOOP-12889 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-12889 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12799532/HADOOP-12289-002.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16839/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Make kdiag something services can use directly on startup > - > > Key: HADOOP-12889 > URL: https://issues.apache.org/jira/browse/HADOOP-12889 > Project: Hadoop Common > Issue Type: Sub-task > Components: security >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Attachments: HADOOP-12289-002.patch, HADOOP-12889-001.patch > > > I want the ability to start kdiag as a service launches, without doing > anything with side-effects other than usual UGI Init (that is: no keytab > login), and hook this up so that services can start it. Then add an option > for the YARN and HDFS services to do this on launch (Default: off) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos
[ https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-15870: -- Fix Version/s: (was: 3.3.0) > S3AInputStream.remainingInFile should use nextReadPos > - > > Key: HADOOP-15870 > URL: https://issues.apache.org/jira/browse/HADOOP-15870 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.4, 3.1.1 >Reporter: Shixiong Zhu >Assignee: lqjacklee >Priority: Major > Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, > HADOOP-15870-004.patch, HADOOP-15870-005.patch, HADOOP-15870-006.patch > > > Otherwise `remainingInFile` will not change after `seek`. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16818) ABFS: Combine append+flush calls for blockblob & appendblob
[ https://issues.apache.org/jira/browse/HADOOP-16818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-16818: -- Fix Version/s: (was: 3.3.0) > ABFS: Combine append+flush calls for blockblob & appendblob > > > Key: HADOOP-16818 > URL: https://issues.apache.org/jira/browse/HADOOP-16818 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Bilahari T H >Assignee: Ishani >Priority: Minor > > Combine append+flush calls for blockblob & appendblob -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16393) S3Guard init command uses global settings, not those of target bucket
[ https://issues.apache.org/jira/browse/HADOOP-16393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-16393: -- Fix Version/s: (was: 3.3.0) > S3Guard init command uses global settings, not those of target bucket > - > > Key: HADOOP-16393 > URL: https://issues.apache.org/jira/browse/HADOOP-16393 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > If you call {{s3guard init s3a://name/}} then the custom bucket options of > fs.s3a.bucket.name are not picked up, instead the global value is used. > Fix: take the name of the bucket and use that to eval properties and patch > the config used for the init command. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
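The fix described above, take the bucket name, evaluate the per-bucket properties, and patch them over the config the init command uses, can be sketched in outline; `BucketOptionPatcher` and the plain `Map` are illustrative stand-ins for Hadoop's `Configuration`:

```java
import java.util.HashMap;
import java.util.Map;

// Generic sketch of the fix idea: per-bucket options
// ("fs.s3a.bucket.<name>.<suffix>") are copied over the global
// "fs.s3a.<suffix>" keys before the init command reads the config.
// Names are illustrative; the real code works on a Hadoop Configuration.
class BucketOptionPatcher {
    static Map<String, String> patchForBucket(Map<String, String> conf, String bucket) {
        Map<String, String> patched = new HashMap<>(conf);
        String prefix = "fs.s3a.bucket." + bucket + ".";
        for (Map.Entry<String, String> e : conf.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                String globalKey = "fs.s3a." + e.getKey().substring(prefix.length());
                patched.put(globalKey, e.getValue()); // bucket-specific value wins
            }
        }
        return patched;
    }
}
```

With this, `s3guard init s3a://name/` would see `fs.s3a.bucket.name.*` values instead of only the globals.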
[jira] [Updated] (HADOOP-16909) Typo in distcp counters
[ https://issues.apache.org/jira/browse/HADOOP-16909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-16909: -- Fix Version/s: (was: 3.3.0) > Typo in distcp counters > --- > > Key: HADOOP-16909 > URL: https://issues.apache.org/jira/browse/HADOOP-16909 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Reporter: Sebastian Nagel >Assignee: Sebastian Nagel >Priority: Trivial > > The logging of distcp job counters includes a typo ("btyes" instead of > "bytes"): > {noformat} > DistCp Counters > Bandwidth in Btyes=1077528522 > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16794) S3A reverts KMS encryption to the bucket's default KMS key in rename/copy
[ https://issues.apache.org/jira/browse/HADOOP-16794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-16794: -- Fix Version/s: (was: 3.3.0) > S3A reverts KMS encryption to the bucket's default KMS key in rename/copy > - > > Key: HADOOP-16794 > URL: https://issues.apache.org/jira/browse/HADOOP-16794 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.1 >Reporter: Mukund Thakur >Assignee: Mukund Thakur >Priority: Major > > When using (bucket-level) S3 Default Encryption with SSE-KMS and a CMK, all > files uploaded via the HDFS {{FileSystem}} {{s3a://}} scheme receive the > wrong encryption key, always falling back to the region-specific AWS-managed > KMS key for S3, instead of retaining the custom CMK. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15834) Improve throttling on S3Guard DDB batch retries
[ https://issues.apache.org/jira/browse/HADOOP-15834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-15834: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Improve throttling on S3Guard DDB batch retries > --- > > Key: HADOOP-15834 > URL: https://issues.apache.org/jira/browse/HADOOP-15834 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Priority: Minor > > The batch throttling may fail too fast: > if there's a batch update of 25 writes but the default retry count is nine > attempts, only nine attempts at the batch may be made... even if each > attempt actually writes some data successfully. > In contrast, a single write of a piece of data gets the same no. of attempts, > so 25 individual writes can handle a lot more throttling than a bulk write. > Proposed: make the retry logic more forgiving of batch writes, for example by not > counting a batch call in which at least one data item was written as a > failure -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
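The proposed policy, a batch attempt that writes at least one item counts as progress rather than a failure, can be sketched generically; all names here are invented, and the simulated attempt stands in for the real DynamoDB batch-write call:

```java
import java.util.Deque;

// Generic sketch of the retry idea: only attempts that make *no* progress
// consume the retry budget, so throttled-but-progressing bulk writes are
// not abandoned after a fixed number of attempts. Names are illustrative.
class BatchRetrySketch {
    interface Attempt<T> {
        // Tries to write some of the pending items, removing the ones it
        // wrote from the deque; returns how many it wrote (0 if throttled).
        int write(Deque<T> pending);
    }

    static <T> boolean writeAll(Deque<T> pending, Attempt<T> attempt, int retryBudget) {
        int consecutiveFailures = 0;
        while (!pending.isEmpty()) {
            if (attempt.write(pending) > 0) {
                consecutiveFailures = 0; // progress made: reset the budget
            } else if (++consecutiveFailures > retryBudget) {
                return false; // genuinely stuck: give up
            }
        }
        return true;
    }
}
```

Under this policy a heavily throttled batch that trickles out one item per attempt still completes, while a call that never writes anything fails after `retryBudget` attempts, matching the "single write" behaviour described above.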
[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-14444: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > New implementation of ftp and sftp filesystems > -- > > Key: HADOOP-14444 > URL: https://issues.apache.org/jira/browse/HADOOP-14444 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 2.8.0 >Reporter: Lukas Waldmann >Assignee: Lukas Waldmann >Priority: Major > Attachments: HADOOP-14444.10.patch, HADOOP-14444.11.patch, > HADOOP-14444.12.patch, HADOOP-14444.13.patch, HADOOP-14444.14.patch, > HADOOP-14444.15.patch, HADOOP-14444.16.patch, HADOOP-14444.17.patch, > HADOOP-14444.18.patch, HADOOP-14444.18.patch, HADOOP-14444.19.patch, > HADOOP-14444.2.patch, HADOOP-14444.3.patch, HADOOP-14444.4.patch, > HADOOP-14444.5.patch, HADOOP-14444.6.patch, HADOOP-14444.7.patch, > HADOOP-14444.8.patch, HADOOP-14444.9.patch, HADOOP-14444.patch > > > The current implementations of the FTP and SFTP filesystems have severe limitations > and performance issues when dealing with a high number of files. My patch > solves those issues and integrates both filesystems in such a way that most of the > core functionality is common to both, thereby simplifying > maintainability. > The core features: > * Support for HTTP/SOCKS proxies > * Support for passive FTP > * Support for explicit FTPS (SSL/TLS) > * Support for connection pooling - a new connection is not created for every > single command but reused from the pool. > For a huge number of files this shows an order-of-magnitude performance improvement > over unpooled connections. > * Caching of directory trees. For FTP you always need to list the whole > directory whenever you ask for information about a particular file. > Again, for a huge number of files this shows an order-of-magnitude performance > improvement over uncached connections. 
> * Support for keep-alive (NOOP) messages to avoid connection drops > * Support for Unix-style or regexp wildcard globs - useful for listing > particular files across the whole directory tree > * Support for re-establishing broken FTP data transfers - which can happen > surprisingly often > * Support for SFTP private keys (including pass phrases) > * Support for keeping passwords, private keys and pass phrases in jceks > key stores -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
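The connection-pooling feature listed above can be sketched generically; this is not the patch's implementation, just the reuse pattern it describes (idle connections are handed back out instead of opening a new one per command):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Generic sketch of the pooling idea from the feature list: reuse idle
// connections instead of creating a new one for every single command.
// Names are illustrative and not taken from the HADOOP-14444 patch.
class ConnectionPool<C> {
    private final Deque<C> idle = new ArrayDeque<>();
    private final Supplier<C> factory; // opens a fresh connection when the pool is empty

    ConnectionPool(Supplier<C> factory) {
        this.factory = factory;
    }

    synchronized C borrow() {
        return idle.isEmpty() ? factory.get() : idle.pop();
    }

    synchronized void release(C conn) {
        idle.push(conn); // keep the live connection for the next command
    }
}
```

With many small commands, the cost of connection setup (TCP + FTP/SFTP handshake) is paid once per pooled connection rather than once per command, which is where the order-of-magnitude improvement for large file counts comes from.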
[jira] [Updated] (HADOOP-13862) AbstractWadlGeneratorGrammarGenerator couldn't find grammar element for class java.util.Map
[ https://issues.apache.org/jira/browse/HADOOP-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-13862: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > AbstractWadlGeneratorGrammarGenerator couldn't find grammar element for class > java.util.Map > --- > > Key: HADOOP-13862 > URL: https://issues.apache.org/jira/browse/HADOOP-13862 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 3.0.0-alpha2 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Minor > > Annoying messages in kms.log: > {noformat} > 2016-12-02 22:23:33,580 INFO AbstractWadlGeneratorGrammarGenerator - > Couldn't find grammar element for class javax.ws.rs.core.Response > 2016-12-02 22:23:33,580 INFO AbstractWadlGeneratorGrammarGenerator - > Couldn't find grammar element for class java.util.Map > 2016-12-02 22:23:33,580 INFO AbstractWadlGeneratorGrammarGenerator - > Couldn't find grammar element for class java.util.Map > 2016-12-02 22:23:33,580 INFO AbstractWadlGeneratorGrammarGenerator - > Couldn't find grammar element for class java.util.Map > 2016-12-02 22:23:33,581 INFO AbstractWadlGeneratorGrammarGenerator - > Couldn't find grammar element for class javax.ws.rs.core.Response > 2016-12-02 22:23:33,581 INFO AbstractWadlGeneratorGrammarGenerator - > Couldn't find grammar element for class javax.ws.rs.core.Response > 2016-12-02 22:23:33,581 INFO AbstractWadlGeneratorGrammarGenerator - > Couldn't find grammar element for class java.util.Map > {noformat} > http://stackoverflow.com/questions/15767973/jersey-what-does-couldnt-find-grammar-element-mean. > Tried disabling WADL, but KMS didn't work: {{hadoop key list}} > authentication failed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14794) Standalone MiniKdc server
[ https://issues.apache.org/jira/browse/HADOOP-14794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-14794: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Standalone MiniKdc server > - > > Key: HADOOP-14794 > URL: https://issues.apache.org/jira/browse/HADOOP-14794 > Project: Hadoop Common > Issue Type: New Feature > Components: security, test >Affects Versions: 2.7.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Major > Attachments: HADOOP-14794.001.patch, HADOOP-14794.002.patch, > HADOOP-14794.003.patch > > > Add a new subcommand {{hadoop minikdc}} to start a standalone MiniKdc server. > This will make it easier to test Kerberos in pseudo-distributed mode without > an external KDC server. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16144) Create a Hadoop RPC based KMS client
[ https://issues.apache.org/jira/browse/HADOOP-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-16144: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Create a Hadoop RPC based KMS client > > > Key: HADOOP-16144 > URL: https://issues.apache.org/jira/browse/HADOOP-16144 > Project: Hadoop Common > Issue Type: Sub-task > Components: kms >Reporter: Wei-Chiu Chuang >Assignee: Anu Engineer >Priority: Major > Attachments: HADOOP-16144.001.patch, KMS.RPC.patch > > > Create a new KMS client implementation that speaks Hadoop RPC. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos
[ https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-15870: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > S3AInputStream.remainingInFile should use nextReadPos > - > > Key: HADOOP-15870 > URL: https://issues.apache.org/jira/browse/HADOOP-15870 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.4, 3.1.1 >Reporter: Shixiong Zhu >Assignee: lqjacklee >Priority: Major > Fix For: 3.3.0 > > Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, > HADOOP-15870-004.patch, HADOOP-15870-005.patch, HADOOP-15870-006.patch > > > Otherwise `remainingInFile` will not change after `seek`. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14231) Using parentheses is not allowed in auth_to_local regex
[ https://issues.apache.org/jira/browse/HADOOP-14231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-14231: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Using parentheses is not allowed in auth_to_local regex > --- > > Key: HADOOP-14231 > URL: https://issues.apache.org/jira/browse/HADOOP-14231 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Andras Bokor >Assignee: Andras Bokor >Priority: Minor > Attachments: HADOOP-14231.01.patch > > > I tried to set the following property for auth_to_local property: > {code}"RULE:[2:$1]((n|d)n)s/.*/hdfs//{code} > but I got the following exception: > {code}Exception in thread "main" java.util.regex.PatternSyntaxException: > Unclosed group near index 9 > (nn|dn|jn{code} > I found that this occurs because {{ruleParser}} in > {{org.apache.hadoop.security.authentication.util.KerberosName}} excludes > closing parentheses. > I do not really see the value of excluding parentheses (do I miss something?) > so I would remove this restriction to be able to use more regex > functionalities. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
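The quoted exception can be reproduced directly with `java.util.regex`, confirming that a stripped closing parenthesis is what breaks the pattern:

```java
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

// Demonstrates the failure quoted above: the rule parser leaves the group
// unclosed, and Pattern.compile rejects it with "Unclosed group".
public class UnclosedGroupDemo {
    public static void main(String[] args) {
        try {
            // Truncated pattern as it appears in the reported exception.
            Pattern.compile("(nn|dn|jn");
        } catch (PatternSyntaxException e) {
            System.out.println(e.getDescription()); // Unclosed group
        }
        // With the closing parenthesis retained, the same alternation is valid
        // and matches as expected.
        System.out.println(Pattern.compile("(nn|dn)n").matcher("nnn").matches()); // true
    }
}
```

This supports the issue's point: there is no regex-level reason to exclude parentheses from `auth_to_local` rules, only the parser's restriction.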
[jira] [Updated] (HADOOP-15112) create-release didn't sign artifacts
[ https://issues.apache.org/jira/browse/HADOOP-15112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-15112: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > create-release didn't sign artifacts > > > Key: HADOOP-15112 > URL: https://issues.apache.org/jira/browse/HADOOP-15112 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Andrew Wang >Assignee: Lei (Eddy) Xu >Priority: Major > Attachments: HADOOP-15112.01.patch > > > While building the 3.0.0 RC1, I had to re-invoke Maven because the > create-release script didn't deploy signatures to Nexus. Looking at the repo > (and my artifacts), it seems like "sign" didn't run properly. > I lost my create-release output, but I noticed that it will log and continue > rather than abort in some error conditions. This might have caused my lack of > signatures. IMO it'd be better to explicitly fail in these situations. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14663) Switch to OpenClover
[ https://issues.apache.org/jira/browse/HADOOP-14663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-14663: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Switch to OpenClover > > > Key: HADOOP-14663 > URL: https://issues.apache.org/jira/browse/HADOOP-14663 > Project: Hadoop Common > Issue Type: Improvement > Components: build, test >Affects Versions: 3.0.0-beta1 >Reporter: Allen Wittenauer >Priority: Minor > Attachments: HADOOP-14663.00.patch, HADOOP-14663.01.patch, > HADOOP-14663.02.patch, HADOOP-14663.03.patch, HADOOP-14663.04.patch, > HADOOP-14663.05.patch, HADOOP-14663.06.patch > > > Clover has gone open source. We should switch to it's replacement > (OpenClover) so that more people can run code coverage tests. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13730) After 5 connection failures, yarn stops sending metrics to graphite until restarted
[ https://issues.apache.org/jira/browse/HADOOP-13730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-13730: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > After 5 connection failures, yarn stops sending metrics graphite until > restarted > > > Key: HADOOP-13730 > URL: https://issues.apache.org/jira/browse/HADOOP-13730 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.7.2 >Reporter: Sean Young >Priority: Minor > Attachments: > 0001-Graphite-can-be-unreachable-for-some-time-and-come-b.patch > > > We've had issues in production where metrics stopped. We found the following > in the log files: > 2016-09-02 21:44:32,493 WARN org.apache.hadoop.metrics2.sink.GraphiteSink: > Error sending metrics to Graphite > java.net.SocketException: Broken pipe > at java.net.SocketOutputStream.socketWrite0(Native Method) > at > java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:120) > at java.net.SocketOutputStream.write(SocketOutputStream.java:164) > at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:233) > at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:294) > at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:137) > at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:147) > at java.io.OutputStreamWriter.write(OutputStreamWriter.java:270) > at java.io.Writer.write(Writer.java:154) > at > org.apache.hadoop.metrics2.sink.GraphiteSink$Graphite.write(GraphiteSink.java:170) > at > org.apache.hadoop.metrics2.sink.GraphiteSink.putMetrics(GraphiteSink.java:98) > at > org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:186) > at > org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43) > at > org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87) > at > 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:134) > at > org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88) > 2016-09-03 00:03:04,335 WARN org.apache.hadoop.metrics2.sink.GraphiteSink: > Error sending metrics to Graphite > java.net.SocketException: Broken pipe > at java.net.SocketOutputStream.socketWrite0(Native Method) > at > java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:120) > at java.net.SocketOutputStream.write(SocketOutputStream.java:164) > at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:233) > at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:294) > at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:137) > at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:147) > at java.io.OutputStreamWriter.write(OutputStreamWriter.java:270) > at java.io.Writer.write(Writer.java:154) > at > org.apache.hadoop.metrics2.sink.GraphiteSink$Graphite.write(GraphiteSink.java:170) > at > org.apache.hadoop.metrics2.sink.GraphiteSink.putMetrics(GraphiteSink.java:98) > at > org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:186) > at > org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43) > at > org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87) > at > org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:134) > at > org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88) > 2016-09-03 00:20:35,436 WARN org.apache.hadoop.metrics2.sink.GraphiteSink: > Error sending metrics to Graphite > java.net.SocketException: Connection timed out > at java.net.SocketOutputStream.socketWrite0(Native Method) > at > java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:120) > at java.net.SocketOutputStream.write(SocketOutputStream.java:164) > at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:233) > at 
sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:294) > at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:137) > at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:147) > at java.io.OutputStreamWriter.write(OutputStreamWriter.java:270) > at java.io.Writer.write(Writer.java:154) > at > org.apache.hadoop.metrics2.sink.GraphiteSink$Graphite.write(GraphiteSink.java:170) > at >
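The direction of the attached patch — treat Graphite as something that "can be unreachable for some time and come back" — can be sketched generically. This is not Hadoop's actual `GraphiteSink` API; the `Conn` interface and all names are illustrative. The idea is that a failed write drops the broken connection and lets the next put reconnect, rather than counting failures toward a permanent stop:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class ReconnectingSinkSketch {
    // Stand-in for a socket writer; names are hypothetical.
    interface Conn { void write(String s) throws IOException; }

    private Conn conn;
    private final Supplier<Conn> connector;

    ReconnectingSinkSketch(Supplier<Conn> connector) {
        this.connector = connector;
    }

    void putMetrics(String line) {
        try {
            if (conn == null) conn = connector.get(); // lazy (re)connect
            conn.write(line);
        } catch (IOException e) {
            conn = null; // drop the broken connection; the next put reconnects
        }
    }

    public static void main(String[] args) {
        List<String> delivered = new ArrayList<>();
        final int[] writes = {0};
        ReconnectingSinkSketch sink = new ReconnectingSinkSketch(() -> s -> {
            if (writes[0]++ == 0) throw new IOException("Broken pipe"); // first socket dies
            delivered.add(s);
        });
        sink.putMetrics("m1"); // lost: write fails, connection is discarded
        sink.putMetrics("m2"); // reconnects and succeeds
        sink.putMetrics("m3");
        System.out.println(delivered); // [m2, m3]
    }
}
```

The metric that hit the dead socket is still lost; the point is only that delivery resumes without a daemon restart.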
[jira] [Updated] (HADOOP-13632) Daemonization does not check process liveness before renicing
[ https://issues.apache.org/jira/browse/HADOOP-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-13632: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Daemonization does not check process liveness before renicing > - > > Key: HADOOP-13632 > URL: https://issues.apache.org/jira/browse/HADOOP-13632 > Project: Hadoop Common > Issue Type: Bug > Components: scripts >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Major > Attachments: HADOOP-13632.001.patch, HADOOP-13632.002.patch > > > If you try to daemonize a process that is incorrectly configured, it will die > quite quickly. However, the daemonization function will still try to renice > it even if it's down, leading to something like this for my namenode: > {noformat} > -> % bin/hdfs --daemon start namenode > ERROR: Cannot set priority of namenode process 12036 > {noformat} > It'd be more user-friendly instead of this renice error, we said that the > process couldn't be started. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15897) Port range binding fails due to socket bind race condition
[ https://issues.apache.org/jira/browse/HADOOP-15897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-15897: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Port range binding fails due to socket bind race condition > -- > > Key: HADOOP-15897 > URL: https://issues.apache.org/jira/browse/HADOOP-15897 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Affects Versions: 2.0.2-alpha >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Major > Attachments: HADOOP-15897.patch > > > Java's {{ServerSocket#bind}} does both a bind and listen. At a system level, > multiple processes may bind to the same port but only one may listen. Java > sockets are left in an unrecoverable state when a process loses the race to > listen first. > Servers that compete over a listening port range (ex. App Master) will fail > the entire range after a collision. The IPC layer should make a better > effort to recover from failed binds. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
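Because `ServerSocket#bind` performs both the bind and the listen, and a socket that loses the listen race is left unusable, a port-range scan has to discard the failed socket and construct a fresh one for the next attempt. A hedged sketch of that recovery pattern (not the actual Hadoop IPC code; names and the port range are illustrative):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class PortRangeBind {
    // Try each port in [low, high]; on failure, close and recreate the socket
    // rather than reusing one left in an unrecoverable state.
    static ServerSocket bindInRange(int low, int high) throws IOException {
        for (int port = low; port <= high; port++) {
            ServerSocket s = new ServerSocket(); // unbound socket
            try {
                s.bind(new InetSocketAddress("127.0.0.1", port)); // bind + listen in one call
                return s;
            } catch (IOException e) {
                s.close(); // discard the failed socket entirely and move on
            }
        }
        throw new IOException("no free port in range " + low + "-" + high);
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket a = bindInRange(50100, 50110);
             ServerSocket b = bindInRange(50100, 50110)) {
            // second caller skips the port the first one holds
            System.out.println(a.getLocalPort() != b.getLocalPort());
        }
    }
}
```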
[jira] [Updated] (HADOOP-11436) HarFileSystem does not preserve permission, users and groups
[ https://issues.apache.org/jira/browse/HADOOP-11436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-11436: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > HarFileSystem does not preserve permission, users and groups > > > Key: HADOOP-11436 > URL: https://issues.apache.org/jira/browse/HADOOP-11436 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.0.0-alpha2 >Reporter: John George >Assignee: Sarah Victor >Priority: Minor > Labels: BB2015-05-TBR > Attachments: HADOOP-11436.1.patch, HADOOP-11436.2.patch > > > HARFileSystem does not preserve permission, users or groups. The archive > itself has these stored, but the HarFileSystem ignores these. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16672) missing null check for UserGroupInformation during IOStream setup
[ https://issues.apache.org/jira/browse/HADOOP-16672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-16672: -- Target Version/s: 3.4.0 Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > missing null check for UserGroupInformation while during IOSteam setup > -- > > Key: HADOOP-16672 > URL: https://issues.apache.org/jira/browse/HADOOP-16672 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.3.0 >Reporter: Viraj Jasani >Priority: Major > Fix For: 3.3.0 > > > While setting up IOStreams, we might end up with NPE if UserGroupInformation > is null resulting from getTicket() call. Similar to other operations, we > should add null check for ticket.doAs() call. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13007) cherry pick s3 enhancements from PrestoS3FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-13007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-13007: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > cherry pick s3 ehancements from PrestoS3FileSystem > -- > > Key: HADOOP-13007 > URL: https://issues.apache.org/jira/browse/HADOOP-13007 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Priority: Minor > > Looking at > [https://github.com/prestodb/presto/blob/master/presto-hive/src/main/java/com/facebook/presto/hive/s3/PrestoS3FileSystem.java], > they've done some interesting things: configurable connection timeouts and, > retry options, statistics to count exceptions caught/re-opened, and more > review them, if there is good stuff there, add it to S3a -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15009) hadoop-resourceestimator's shell scripts are a mess
[ https://issues.apache.org/jira/browse/HADOOP-15009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-15009: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > hadoop-resourceestimator's shell scripts are a mess > --- > > Key: HADOOP-15009 > URL: https://issues.apache.org/jira/browse/HADOOP-15009 > Project: Hadoop Common > Issue Type: Bug > Components: scripts, tools >Affects Versions: 3.1.0 >Reporter: Allen Wittenauer >Assignee: Ajay Kumar >Priority: Major > Attachments: HADOOP-15009.001.patch, Screen Shot 2017-12-12 at > 11.16.23 AM.png > > > #1: > There's no reason for estimator.sh to exist. Just make it a subcommand under > yarn or whatever. > #2: > In it's current form, it's missing a BUNCH of boilerplate that makes certain > functionality completely fail. > #3 > start/stop-estimator.sh is full of copypasta that doesn't actually do > anything/work correctly. Additionally, if estimator.sh doesn't exist, > neither does this since yarn --daemon start/stop will do everything as > necessary. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14846) Wrong shell exit code if the shell process cannot be even started
[ https://issues.apache.org/jira/browse/HADOOP-14846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-14846: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Wrong shell exit code if the shell process cannot be even started > - > > Key: HADOOP-14846 > URL: https://issues.apache.org/jira/browse/HADOOP-14846 > Project: Hadoop Common > Issue Type: Bug > Components: util >Affects Versions: 2.7.1 >Reporter: Yuqi Wang >Assignee: Yuqi Wang >Priority: Major > Labels: shell > Fix For: 3.0.0-alpha4 > > Attachments: HADOOP-14846.001.patch, HADOOP-14846.002.patch, > HADOOP-14846.002.patch > > > *Hadoop may hide shell failures (including start container and fs operation > failures), such as:* > Container exit diagnostics (note the container exit code is 0): > {code:java} > Exception from container-launch. Container id: > container_e5620_1503888150197_2979_01_003320 > Exit code: 0 > Exception message: > Cannot run program "D:\data\hadoop.latest\bin\winutils.exe" (in directory > "\data\yarnnm\local\usercache\hadoop\appcache\application_1503888150197_2979\container_e5620_1503888150197_2979_01_003320"): > > CreateProcess error=2, The system cannot find the file specified > Stack trace: java.io.IOException: Cannot run program > "D:\data\hadoop.latest\bin\winutils.exe" (in directory > "\data\yarnnm\local\usercache\hadoop\appcache\application_1503888150197_2979\container_e5620_1503888150197_2979_01_003320"): > CreateProcess error=2, The system cannot find the file specified > at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048) > at org.apache.hadoop.util.Shell.runCommand(Shell.java:517) > at org.apache.hadoop.util.Shell.run(Shell.java:490) > at > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:756) > at > org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212) > 
at > org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:329) > at > org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:86) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-9851) dfs -chown does not like "+" plus sign in user name
[ https://issues.apache.org/jira/browse/HADOOP-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-9851: - Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > dfs -chown does not like "+" plus sign in user name > --- > > Key: HADOOP-9851 > URL: https://issues.apache.org/jira/browse/HADOOP-9851 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.0.5-alpha >Reporter: Marc Villacorta >Assignee: Andras Bokor >Priority: Minor > Attachments: HADOOP-9851.01.patch > > > I intend to set user and group: > *User:* _MYCOMPANY+marc.villacorta_ > *Group:* hadoop > where _'+'_ is what we use as a winbind separator. > And this is what I get: > {code:none} > sudo -u hdfs hadoop fs -touchz /tmp/test.txt > sudo -u hdfs hadoop fs -chown MYCOMPANY+marc.villacorta:hadoop /tmp/test.txt > -chown: 'MYCOMPANY+marc.villacorta:hadoop' does not match expected pattern > for [owner][:group]. > Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH... > {code} > I am using version: 2.0.0-cdh4.3.0 > Quote > [source|http://h30097.www3.hp.com/docs/iass/OSIS_62/MAN/MAN8/0044.HTM]: > {quote} > winbind separator >The winbind separator option allows you to specify how NT domain names >and user names are combined into unix user names when presented to >users. By default, winbindd will use the traditional '\' separator so >that the unix user names look like DOMAIN\username. In some cases this >separator character may cause problems as the '\' character has >special meaning in unix shells. In that case you can use the winbind >separator option to specify an alternative separator character. Good >alternatives may be '/' (although that conflicts with the unix >directory separator) or a '+ 'character. 
The '+' character appears to >be the best choice for 100% compatibility with existing unix >utilities, but may be an aesthetically bad choice depending on your >taste. >Default: winbind separator = \ >Example: winbind separator = + > {quote} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
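The rejection described above comes down to a character class in the owner/group validation pattern. The patterns below are hypothetical, built only to illustrate the report — they are not the actual regex in Hadoop's chown handler — but they show how adding `+` to the allowed character set accepts the winbind-style name while a stricter class rejects it:

```java
import java.util.regex.Pattern;

public class ChownPatternDemo {
    // Illustrative patterns, not Hadoop's real validation regex:
    // STRICT mirrors a class that rejects '+'; RELAXED adds it.
    static final Pattern STRICT  = Pattern.compile("^[A-Za-z_][A-Za-z0-9._-]*$");
    static final Pattern RELAXED = Pattern.compile("^[A-Za-z_][A-Za-z0-9._+-]*$");

    public static void main(String[] args) {
        String user = "MYCOMPANY+marc.villacorta"; // winbind separator '+'
        System.out.println(STRICT.matcher(user).matches());  // false: '+' not allowed
        System.out.println(RELAXED.matcher(user).matches()); // true
    }
}
```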
[jira] [Updated] (HADOOP-15082) add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement the test
[ https://issues.apache.org/jira/browse/HADOOP-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-15082: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement > the test > --- > > Key: HADOOP-15082 > URL: https://issues.apache.org/jira/browse/HADOOP-15082 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs, fs/azure, test >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-15082-001.patch, HADOOP-15082-002.patch, > HADOOP-15082-003.patch > > > I managed to get a stack trace on an older version of WASB with some coding > doing a mkdir(new Path("/"))some of the ranger parentage checks didn't > handle that specific case. > # Add a new root Fs contract test for this operation > # Have WASB implement the test suite as an integration test. > # if the test fails shows a problem fix -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15842) add fs.azure.account.oauth2.client.secret to hadoop.security.sensitive-config-keys
[ https://issues.apache.org/jira/browse/HADOOP-15842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-15842: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > add fs.azure.account.oauth2.client.secret to > hadoop.security.sensitive-config-keys > -- > > Key: HADOOP-15842 > URL: https://issues.apache.org/jira/browse/HADOOP-15842 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-15842-001.patch, HADOOP-15842-002.patch > > > in HADOOP-15839 I left out "fs.azure.account.oauth2.client.secret". Fix by > adding it -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13327) Add OutputStream + Syncable to the Filesystem Specification
[ https://issues.apache.org/jira/browse/HADOOP-13327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-13327: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Add OutputStream + Syncable to the Filesystem Specification > --- > > Key: HADOOP-13327 > URL: https://issues.apache.org/jira/browse/HADOOP-13327 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Attachments: HADOOP-13327-002.patch, HADOOP-13327-003.patch, > HADOOP-13327-branch-2-001.patch > > > Write down what a Filesystem output stream should do. While core the API is > defined in Java, that doesn't say what's expected about visibility, > durability, etc —and Hadoop Syncable interface is entirely ours to define. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13144) Enhancing IPC client throughput via multiple connections per user
[ https://issues.apache.org/jira/browse/HADOOP-13144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-13144: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Enhancing IPC client throughput via multiple connections per user > - > > Key: HADOOP-13144 > URL: https://issues.apache.org/jira/browse/HADOOP-13144 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc >Reporter: Jason Kace >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HADOOP-13144-performance.patch, HADOOP-13144.000.patch, > HADOOP-13144.001.patch, HADOOP-13144.002.patch, HADOOP-13144.003.patch > > > The generic IPC client ({{org.apache.hadoop.ipc.Client}}) utilizes a single > connection thread for each {{ConnectionId}}. The {{ConnectionId}} is unique > to the connection's remote address, ticket and protocol. Each ConnectionId > is 1:1 mapped to a connection thread by the client via a map cache. > The result is to serialize all IPC read/write activity through a single > thread for a each user/ticket + address. If a single user makes repeated > calls (1k-100k/sec) to the same destination, the IPC client becomes a > bottleneck. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16393) S3Guard init command uses global settings, not those of target bucket
[ https://issues.apache.org/jira/browse/HADOOP-16393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-16393: -- Target Version/s: 3.4.0 Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > S3Guard init command uses global settings, not those of target bucket > - > > Key: HADOOP-16393 > URL: https://issues.apache.org/jira/browse/HADOOP-16393 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Fix For: 3.3.0 > > > If you call {{s3guard init s3a://name/}} then the custom bucket options of > fs.s3a.bucket.name are not picked up, instead the global value is used. > Fix: take the name of the bucket and use that to eval properties and patch > the config used for the init command. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-10584) ActiveStandbyElector goes down if ZK quorum become unavailable
[ https://issues.apache.org/jira/browse/HADOOP-10584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-10584: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > ActiveStandbyElector goes down if ZK quorum become unavailable > -- > > Key: HADOOP-10584 > URL: https://issues.apache.org/jira/browse/HADOOP-10584 > Project: Hadoop Common > Issue Type: Bug > Components: ha >Affects Versions: 2.4.0 >Reporter: Karthik Kambatla >Assignee: Daniel Templeton >Priority: Major > Attachments: HADOOP-10584.prelim.patch, hadoop-10584-prelim.patch, > rm.log > > > ActiveStandbyElector retries operations for a few times. If the ZK quorum > itself is down, it goes down and the daemons will have to be brought up > again. > Instead, it should log the fact that it is unable to talk to ZK, call > becomeStandby on its client, and continue to attempt connecting to ZK. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16818) ABFS: Combine append+flush calls for blockblob & appendblob
[ https://issues.apache.org/jira/browse/HADOOP-16818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-16818: -- Target Version/s: 3.4.0 Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > ABFS: Combine append+flush calls for blockblob & appendblob > > > Key: HADOOP-16818 > URL: https://issues.apache.org/jira/browse/HADOOP-16818 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Bilahari T H >Assignee: Ishani >Priority: Minor > Fix For: 3.3.0 > > > Combine append+flush calls for blockblob & appendblob -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16577) Build fails as can't retrieve websocket-servlet
[ https://issues.apache.org/jira/browse/HADOOP-16577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-16577: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Build fails as can't retrieve websocket-servlet > --- > > Key: HADOOP-16577 > URL: https://issues.apache.org/jira/browse/HADOOP-16577 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.3.0 >Reporter: Erkin Alp Güney >Priority: Major > Labels: build, dependencies > > I encountered this error when building Hadoop: > Downloading: > https://repository.apache.org/content/repositories/snapshots/org/eclipse/jetty/websocket/websocket-server/9.3.27.v20190418/websocket-server-9.3.27.v20190418.jar > Sep 15, 2019 7:54:39 AM > org.apache.maven.wagon.providers.http.httpclient.impl.execchain.RetryExec > execute > INFO: I/O exception > (org.apache.maven.wagon.providers.http.httpclient.NoHttpResponseException) > caught when processing request to {s}->https://repository.apache.org:443: The > target server failed to respond > Sep 15, 2019 7:54:39 AM > org.apache.maven.wagon.providers.http.httpclient.impl.execchain.RetryExec > execute -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16878) Copy command in FileUtil to throw an exception if the source and destination is the same
[ https://issues.apache.org/jira/browse/HADOOP-16878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-16878: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Copy command in FileUtil to throw an exception if the source and destination > is the same > > > Key: HADOOP-16878 > URL: https://issues.apache.org/jira/browse/HADOOP-16878 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.3.0 >Reporter: Gabor Bota >Assignee: Gabor Bota >Priority: Major > Attachments: hdfsTest.patch > > > We encountered an error during a test in our QE when the file destination and > source path were the same. This happened during an ADLS test, and there were > no meaningful error messages, so it was hard to find the root cause of the > failure. > The error we saw was that file size has changed during the copy operation. > The new file creation in the destination - which is the same as the source - > creates a file and sets the file length to zero. After this, getting the > source file will fail because the sile size changed during the operation. > I propose a solution to at least log in error level in the {{FileUtil}} if > the source and destination of the copy operation is the same, so debugging > issues like this will be easier in the future. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16873) Upgrade to Apache ZooKeeper 3.5.7
[ https://issues.apache.org/jira/browse/HADOOP-16873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-16873: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Upgrade to Apache ZooKeeper 3.5.7 > - > > Key: HADOOP-16873 > URL: https://issues.apache.org/jira/browse/HADOOP-16873 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Norbert Kalmár >Assignee: Norbert Kalmár >Priority: Major > > Apache ZooKeeper 3.5.7 has been released, which contains some important fixes > including third party CVE, possible split brain and data loss in some very > rare but plausible scenarios etc. > the release has been tested by the curator team to be compatible with 4.2.0 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16507) S3Guard fsck: Add option to configure severity (level) for the scan
[ https://issues.apache.org/jira/browse/HADOOP-16507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-16507: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > S3Guard fsck: Add option to configure severity (level) for the scan > --- > > Key: HADOOP-16507 > URL: https://issues.apache.org/jira/browse/HADOOP-16507 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Gabor Bota >Assignee: Mukund Thakur >Priority: Major > > There's the severity of Violation (inconsistency) defined in > {{org.apache.hadoop.fs.s3a.s3guard.S3GuardFsck.Violation}}. > This flag is only for defining the severity of the Violation, but not used to > filter the scan for issue severity. > The task to do: Use the severity level to define which issue should be logged > and/or fixed during the scan. > Note: the best way to avoid possible code duplication would be to not even > add the consistency violation pair to the list of violations during the scan. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14731) Update gitignore to exclude output of site build
[ https://issues.apache.org/jira/browse/HADOOP-14731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-14731: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Update gitignore to exclude output of site build > > > Key: HADOOP-14731 > URL: https://issues.apache.org/jira/browse/HADOOP-14731 > Project: Hadoop Common > Issue Type: Improvement > Components: build, site >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Major > Attachments: HADOOP-14731.001.patch > > > Site build generates a bunch of files that aren't caught by gitignore, let's > update.
[jira] [Updated] (HADOOP-5943) IOUtils#copyBytes methods should not close streams that are passed in as parameters
[ https://issues.apache.org/jira/browse/HADOOP-5943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-5943: - Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > IOUtils#copyBytes methods should not close streams that are passed in as > parameters > --- > > Key: HADOOP-5943 > URL: https://issues.apache.org/jira/browse/HADOOP-5943 > Project: Hadoop Common > Issue Type: Bug > Components: io >Reporter: Hairong Kuang >Assignee: Andras Bokor >Priority: Major > Attachments: HADOOP-5943.01.patch, HADOOP-5943.02.patch, > HADOOP-5943.03.patch > > > The following methods in IOUtils close the streams that are passed in as > parameters. Calling these methods can easily trigger findbug OBL: Method may > fail to clean up stream or resource (OBL_UNSATISFIED_OBLIGATION). A good > practice should be to close a stream in the same method where the stream is > opened. > public static void copyBytes(InputStream in, OutputStream out, int buffSize, > boolean close) > public static void copyBytes(InputStream in, OutputStream out, Configuration > conf, boolean close) > These methods should be deprecated.
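The practice the report advocates (close a stream in the same method that opened it) can be sketched with plain java.io. The copyBytes below is a simplified stand-in for the Hadoop method, not its actual implementation; the point is that the helper leaves both streams open and the caller's try-with-resources owns their lifecycle.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyBytesSketch {
    // Copies all bytes but deliberately leaves both streams open:
    // the code that opened them remains responsible for closing them.
    static void copyBytes(InputStream in, OutputStream out, int buffSize)
            throws IOException {
        byte[] buf = new byte[buffSize];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
    }

    public static void main(String[] args) throws IOException {
        // try-with-resources closes the streams in the method that opened
        // them, so no "close" flag is needed on the copy helper.
        try (InputStream in = new ByteArrayInputStream("hello".getBytes());
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            copyBytes(in, out, 4096);
            System.out.println(out.toString());
        }
    }
}
```

With this split of responsibilities, a static-analysis obligation checker sees the open and the close in the same scope, which is exactly what the OBL findbugs pattern asks for.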
[jira] [Updated] (HADOOP-16517) Allow optional mutual TLS in HttpServer2
[ https://issues.apache.org/jira/browse/HADOOP-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-16517: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Allow optional mutual TLS in HttpServer2 > > > Key: HADOOP-16517 > URL: https://issues.apache.org/jira/browse/HADOOP-16517 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Kihwal Lee >Assignee: Kihwal Lee >Priority: Major > Attachments: HADOOP-16517.1.patch, HADOOP-16517.patch > > > Currently the webservice can enforce mTLS by setting > "dfs.client.https.need-auth" on the server side. (The config name is > misleading, as it is actually server-side config. It has been deprecated from > the client config) A hadoop client can talk to mTLS enforced web service by > setting "hadoop.ssl.require.client.cert" with proper ssl config. > We have seen use case where mTLS needs to be enabled optionally for only > those clients who supplies their cert. In a mixed environment like this, > individual services may still enforce mTLS for a subset of endpoints by > checking the existence of x509 cert in the request. >
[jira] [Updated] (HADOOP-15864) Job submitter / executor fail when SBN domain name can not resolved
[ https://issues.apache.org/jira/browse/HADOOP-15864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-15864: -- Target Version/s: 3.4.0 Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Job submitter / executor fail when SBN domain name can not resolved > --- > > Key: HADOOP-15864 > URL: https://issues.apache.org/jira/browse/HADOOP-15864 > Project: Hadoop Common > Issue Type: Bug >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Critical > Fix For: 3.0.4, 3.3.0, 3.1.2 > > Attachments: HADOOP-15864-branch.2.7.001.patch, > HADOOP-15864-branch.2.7.002.patch, HADOOP-15864.003.patch, > HADOOP-15864.004.patch, HADOOP-15864.005.patch, > HADOOP-15864.branch.2.7.004.patch > > > Job submission and task execution fail if the Standby NameNode domain name > cannot be resolved on HDFS HA with the DelegationToken feature. > The issue is triggered when creating a {{ConfiguredFailoverProxyProvider}} > instance, which invokes {{HAUtil.cloneDelegationTokenForLogicalUri}} in HA mode > with security. In HDFS HA mode the UGI needs to include a separate token for each > NameNode in order to deal with Active-Standby switches; the two tokens' > contents are of course the same. > However, #setTokenService in {{HAUtil.cloneDelegationTokenForLogicalUri}} > checks whether the address of the NameNode has been resolved; if not, it > throws an #IllegalArgumentException, and the job submitter / task executor fails. > HDFS-8068 and HADOOP-12125 tried to fix this, but I don't think the two tickets > resolve it completely. > Another question is why a NameNode domain name would fail to > resolve. There are many scenarios, for instance replacing a node after a > fault, or a DNS refresh. In any case, a Standby NameNode failure > should not impact Hadoop cluster stability in my opinion. > a.
code ref: org.apache.hadoop.security.SecurityUtil line373-386 > {code:java} > public static Text buildTokenService(InetSocketAddress addr) { > String host = null; > if (useIpForTokenService) { > if (addr.isUnresolved()) { // host has no ip address > throw new IllegalArgumentException( > new UnknownHostException(addr.getHostName()) > ); > } > host = addr.getAddress().getHostAddress(); > } else { > host = StringUtils.toLowerCase(addr.getHostName()); > } > return new Text(host + ":" + addr.getPort()); > } > {code} > b.exception log ref: > {code:xml} > at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.io.IOException: Couldn't create proxy provider class > org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider > at > org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:515) > at > org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:170) > at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:761) > at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:691) > at > org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:150) > at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713) > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93) > at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747) > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729) > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385) > at > org.apache.hadoop.fs.viewfs.ChRootedFileSystem.(ChRootedFileSystem.java:106) > at > org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:178) > at > 
org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:172) > at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:303) > at org.apache.hadoop.fs.viewfs.InodeTree.(InodeTree.java:377) > at > org.apache.hadoop.fs.viewfs.ViewFileSystem$1.(ViewFileSystem.java:172) > at > org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:172) > at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713) > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93) > at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747) > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729) > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385) > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:176) > at
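The failure mode quoted above can be reproduced in miniature with the JDK alone: an InetSocketAddress built for an unresolvable host reports isUnresolved(), which is exactly the condition buildTokenService turns into an IllegalArgumentException. The class below is an illustrative stand-in mirroring the quoted SecurityUtil logic, not Hadoop code itself.

```java
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

public class TokenServiceSketch {
    // Mirrors the check in SecurityUtil#buildTokenService: when the token
    // service is keyed by IP, an unresolved address is fatal.
    static String buildTokenService(InetSocketAddress addr, boolean useIp) {
        String host;
        if (useIp) {
            if (addr.isUnresolved()) { // host has no IP address
                throw new IllegalArgumentException(
                        new UnknownHostException(addr.getHostName()));
            }
            host = addr.getAddress().getHostAddress();
        } else {
            host = addr.getHostName().toLowerCase();
        }
        return host + ":" + addr.getPort();
    }

    public static void main(String[] args) {
        // createUnresolved skips DNS lookup entirely, simulating a standby
        // NameNode whose domain name no longer resolves.
        InetSocketAddress sbn =
                InetSocketAddress.createUnresolved("standby-nn.example.com", 8020);
        System.out.println(buildTokenService(sbn, false)); // hostname path works
        try {
            buildTokenService(sbn, true); // IP path fails, as in the report
        } catch (IllegalArgumentException expected) {
            System.out.println("unresolved host rejected");
        }
    }
}
```

This is why the reporter argues an unreachable standby should degrade gracefully: the hostname-keyed path tolerates the missing DNS entry, while the IP-keyed path aborts client construction.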
[jira] [Updated] (HADOOP-15066) Spurious error stopping secure datanode
[ https://issues.apache.org/jira/browse/HADOOP-15066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-15066: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Spurious error stopping secure datanode > --- > > Key: HADOOP-15066 > URL: https://issues.apache.org/jira/browse/HADOOP-15066 > Project: Hadoop Common > Issue Type: Bug > Components: scripts >Affects Versions: 3.0.0 >Reporter: Arpit Agarwal >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HADOOP-15066.00.patch, HADOOP-15066.01.patch > > > There is a spurious error when stopping a secure datanode. > {code} > # hdfs --daemon stop datanode > cat: /var/run/hadoop/hdfs//hadoop-hdfs-root-datanode.pid: No such file or > directory > WARNING: pid has changed for datanode, skip deleting pid file > cat: /var/run/hadoop/hdfs//hadoop-hdfs-root-datanode.pid: No such file or > directory > WARNING: daemon pid has changed for datanode, skip deleting daemon pid file > {code} > The error appears benign. The service was stopped correctly.
[jira] [Updated] (HADOOP-13551) hook up AwsSdkMetrics to hadoop metrics
[ https://issues.apache.org/jira/browse/HADOOP-13551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-13551: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > hook up AwsSdkMetrics to hadoop metrics > --- > > Key: HADOOP-13551 > URL: https://issues.apache.org/jira/browse/HADOOP-13551 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > There's an API in {{com.amazonaws.metrics.AwsSdkMetrics}} to give access to > the internal metrics of the AWS libraries. We might want to get at those
[jira] [Updated] (HADOOP-15887) Add an option to avoid writing data locally in Distcp
[ https://issues.apache.org/jira/browse/HADOOP-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-15887: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Add an option to avoid writing data locally in Distcp > - > > Key: HADOOP-15887 > URL: https://issues.apache.org/jira/browse/HADOOP-15887 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 2.8.2, 3.0.0 >Reporter: Tao Jie >Assignee: Tao Jie >Priority: Major > Attachments: HADOOP-15887.001.patch, HADOOP-15887.002.patch, > HADOOP-15887.003.patch, HADOOP-15887.004.patch, HADOOP-15887.005.patch > > > When copying a large amount of data from one cluster to another via Distcp, and > the Distcp jobs run in the target cluster, datanode disk usage becomes > imbalanced, because the default placement policy chooses the local node to > store the first replica. > In https://issues.apache.org/jira/browse/HDFS-3702 we added a flag in DFSClient > to avoid replicating to the local datanode. We can make use of this flag in > Distcp.
[jira] [Updated] (HADOOP-12889) Make kdiag something services can use directly on startup
[ https://issues.apache.org/jira/browse/HADOOP-12889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-12889: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Make kdiag something services can use directly on startup > - > > Key: HADOOP-12889 > URL: https://issues.apache.org/jira/browse/HADOOP-12889 > Project: Hadoop Common > Issue Type: Sub-task > Components: security >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Attachments: HADOOP-12289-002.patch, HADOOP-12889-001.patch > > > I want the ability to start kdiag as a service launches, without doing > anything with side-effects other than usual UGI Init (that is: no keytab > login), and hook this up so that services can start it. Then add an option > for the YARN and HDFS services to do this on launch (Default: off)
[jira] [Updated] (HADOOP-16909) Typo in distcp counters
[ https://issues.apache.org/jira/browse/HADOOP-16909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-16909: -- Target Version/s: 3.4.0 Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Typo in distcp counters > --- > > Key: HADOOP-16909 > URL: https://issues.apache.org/jira/browse/HADOOP-16909 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Reporter: Sebastian Nagel >Assignee: Sebastian Nagel >Priority: Trivial > Fix For: 3.3.0 > > > The logging of distcp job counters includes a typo ("btyes" instead of > "bytes"): > {noformat} > DistCp Counters > Bandwidth in Btyes=1077528522 > {noformat}
[jira] [Updated] (HADOOP-14703) ConsoleSink for metrics2
[ https://issues.apache.org/jira/browse/HADOOP-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-14703: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > ConsoleSink for metrics2 > > > Key: HADOOP-14703 > URL: https://issues.apache.org/jira/browse/HADOOP-14703 > Project: Hadoop Common > Issue Type: Improvement > Components: common, metrics >Affects Versions: 3.0.0-beta1 >Reporter: Ronald Macmaster >Assignee: Ronald Macmaster >Priority: Major > Labels: newbie > Attachments: > 0001-HADOOP-14703.-ConsoleSink-for-simple-metrics-printin.patch, > HADOOP-14703.001.patch, HADOOP-14703.002.patch, HADOOP-14703.003.patch, > HADOOP-14703.004.patch, HADOOP-14703.006.patch > > Original Estimate: 6h > Remaining Estimate: 6h > > The ConsoleSink will provide a simple solution to dump metrics to the console > through std.out. > Quick access to metrics through the console will simplify the development, > testing, and debugging process.
[jira] [Updated] (HADOOP-14877) Trunk compilation fails in windows
[ https://issues.apache.org/jira/browse/HADOOP-14877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-14877: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Trunk compilation fails in windows > -- > > Key: HADOOP-14877 > URL: https://issues.apache.org/jira/browse/HADOOP-14877 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.1.0 > Environment: windows >Reporter: Brahma Reddy Battula >Priority: Major > Attachments: HADOOP-14877-001.patch > > > {noformat} > [INFO] Dependencies classpath: > D:\trunk\hadoop\hadoop-client-modules\hadoop-client-runtime\target\hadoop-client-runtime-3.1.0-SNAPSHOT.jar;D:\trunk\had > oop\hadoop-client-modules\hadoop-client-api\target\hadoop-client-api-3.1.0-SNAPSHOT.jar > [INFO] > [INFO] --- exec-maven-plugin:1.3.1:exec (check-jar-contents) @ > hadoop-client-check-invariants --- > java.io.FileNotFoundException: D (The system cannot find the file specified) > at java.util.zip.ZipFile.open(Native Method) > at java.util.zip.ZipFile.(ZipFile.java:219) > at java.util.zip.ZipFile.(ZipFile.java:149) > at java.util.zip.ZipFile.(ZipFile.java:120) > at sun.tools.jar.Main.list(Main.java:1115) > at sun.tools.jar.Main.run(Main.java:293) > at sun.tools.jar.Main.main(Main.java:1288) > java.io.FileNotFoundException: > \trunk\hadoop\hadoop-client-modules\hadoop-client-runtime\target\hadoop-client-runtime-3. 
> 1.0-SNAPSHOT.jar;D (The system cannot find the file specified) > at java.util.zip.ZipFile.open(Native Method) > at java.util.zip.ZipFile.(ZipFile.java:219) > at java.util.zip.ZipFile.(ZipFile.java:149) > at java.util.zip.ZipFile.(ZipFile.java:120) > at sun.tools.jar.Main.list(Main.java:1115) > at sun.tools.jar.Main.run(Main.java:293) > at sun.tools.jar.Main.main(Main.java:1288) > [INFO] Artifact looks correct: 'D' > [INFO] Artifact looks correct: 'hadoop-client-runtime-3.1.0-SNAPSHOT.jar;D' > [ERROR] Found artifact with unexpected contents: > '\trunk\hadoop\hadoop-client-modules\hadoop-client-api\target\hadoop-cl > ient-api-3.1.0-SNAPSHOT.jar' > Please check the following and either correct the build or update > the allowed list with reasoning. > {noformat}
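The mangled artifact names in the log above ('D', '...jar;D') suggest the classpath string is being split on a hard-coded ':' rather than the platform path separator, which breaks Windows drive-letter paths. A minimal illustration (a hypothetical helper, not the actual exec-maven-plugin or build-script code):

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class ClasspathSplitSketch {
    // Splits a classpath on the given separator; Pattern.quote keeps the
    // separator literal so String.split does not treat it as a regex.
    static List<String> split(String classpath, String separator) {
        return Arrays.asList(classpath.split(Pattern.quote(separator)));
    }

    public static void main(String[] args) {
        String windowsCp = "D:\\trunk\\a.jar;D:\\trunk\\b.jar";
        // Wrong: splitting on ':' yields "D", "\trunk\a.jar;D",
        // "\trunk\b.jar" - the same fragments reported in the log.
        System.out.println(split(windowsCp, ":"));
        // Right: the platform separator (';' on Windows, available as
        // File.pathSeparator) yields whole paths.
        System.out.println(split(windowsCp, ";"));
    }
}
```

Using java.io.File.pathSeparator instead of a literal ':' would make the split correct on both Unix and Windows.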
[jira] [Updated] (HADOOP-13344) Add option to exclude Hadoop's SLF4J binding
[ https://issues.apache.org/jira/browse/HADOOP-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-13344: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Add option to exclude Hadoop's SLF4J binding > > > Key: HADOOP-13344 > URL: https://issues.apache.org/jira/browse/HADOOP-13344 > Project: Hadoop Common > Issue Type: New Feature > Components: bin, scripts >Affects Versions: 2.8.0, 2.7.2 >Reporter: Thomas Poepping >Assignee: Thomas Poepping >Priority: Major > Labels: patch > Attachments: HADOOP-13344.01.patch, HADOOP-13344.patch > > > If another application that uses the Hadoop classpath brings in its own SLF4J > binding for logging, and that jar is not the exact same as the one brought in > by Hadoop, then there will be a conflict between logging jars between the two > classpaths. This patch introduces an optional setting to remove Hadoop's > SLF4J binding from the classpath, to get rid of this problem. > This patch should be applied to 2.8.0, as bin/ and hadoop-config.sh structure > has been changed in 3.0.0.
[jira] [Updated] (HADOOP-14347) Make KMS and HttpFS Jetty accept queue size configurable
[ https://issues.apache.org/jira/browse/HADOOP-14347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-14347: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > Make KMS and HttpFS Jetty accept queue size configurable > > > Key: HADOOP-14347 > URL: https://issues.apache.org/jira/browse/HADOOP-14347 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Affects Versions: 3.0.0-alpha2 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Major > Attachments: HADOOP-14347.001.patch, HADOOP-14347.002.patch, > HADOOP-14347.003.patch > > > HADOOP-14003 enabled the customization of Tomcat attribute {{protocol}}, > {{acceptCount}}, and {{acceptorThreadCount}} for KMS in branch-2. See > https://tomcat.apache.org/tomcat-6.0-doc/config/http.html. > KMS switched from Tomcat to Jetty in trunk. Only {{acceptCount}} has a > counterpart in Jetty, {{acceptQueueSize}}. See > http://www.eclipse.org/jetty/documentation/9.3.x/configuring-connectors.html.
[jira] [Updated] (HADOOP-14234) ADLS to implement FileSystemContractBaseTest
[ https://issues.apache.org/jira/browse/HADOOP-14234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-14234: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > ADLS to implement FileSystemContractBaseTest > > > Key: HADOOP-14234 > URL: https://issues.apache.org/jira/browse/HADOOP-14234 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/adl, test >Affects Versions: 2.8.0 >Reporter: John Zhuge >Priority: Minor > > HADOOP-14180 switches FileSystem contract tests to JUnit4 and makes various > enhancements. Improve ADLS FileSystem contract tests based on that.
[jira] [Updated] (HADOOP-13238) pid handling is failing on secure datanode
[ https://issues.apache.org/jira/browse/HADOOP-13238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-13238: -- Target Version/s: 3.4.0 (was: 3.3.0) Bulk update: moved all 3.3.0 non-blocker issues, please move back if it is a blocker. > pid handling is failing on secure datanode > -- > > Key: HADOOP-13238 > URL: https://issues.apache.org/jira/browse/HADOOP-13238 > Project: Hadoop Common > Issue Type: Bug > Components: scripts, security >Reporter: Allen Wittenauer >Assignee: Andras Bokor >Priority: Major > Attachments: HADOOP-13238.01.patch, HADOOP-13238.02.patch > > > {code} > hdfs --daemon stop datanode > cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or > directory > WARNING: pid has changed for datanode, skip deleting pid file > cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or > directory > WARNING: daemon pid has changed for datanode, skip deleting daemon pid file > {code}